That's it!
I'm not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.
I'm almost certain I've seen EY catch shit on Twitter (from actual ML researchers, no less) for insinuating something very similar.
To have a dead simple UI where you, a person with no technical expertise, can ask in plain language for the data you want, presented the way you want it, along with some basic analysis you can tell it to dress up to sound important. Then you tell it to turn that into an email in the style of your previous emails, send it, and take a 50-minute coffee break. All this allegedly with no overhead besides paying a subscription and telling your IT people to point the thing at the thing.
I mean, it would be quite something if transformers could do all that, instead of raising global temperatures to synthesize convincing-looking but highly suspect messaging at best, while being prone to delirium at worst.
Google pivoting to selling shovels for the AI gold rush in the form of data tools should be pretty viable if they commit to it; I hadn't thought of it that way.
It's a sad fate that sometimes befalls engineers who are good at talking to audiences and who work for a company big enough to afford having that be their primary role.
edit: I love that he's chief evangelist though, like he has a bunch of little google cloud clerics running around doing chores for him.
Well done.
There's a bit in the beginning where he talks about how actors handling and drinking from obviously weightless empty cups ruins suspension of disbelief, so I'm assuming it's a callback.
I kinda want to replay subnautica now.
"Manifest is open minded about eugenics and securing the existence of our people and a future for high IQ children."
Great quote from the article on why prediction markets and scientific racism currently appear to be at one degree of separation:
Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”
Before we accidentally make an AI capable of posing an existential risk to humanity, perhaps we should find out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.
It hasn't worked 'well' for computers since like the Pentium, what are you talking about?
The premise was pretty dumb too, as in: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume we're still at the low-hanging-fruit stage of R&D and that it'll stabilize as the field matures, instead of proudly proclaiming that it will surely approach infinity and break reality.
There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.
So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rapture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.