AGI
Define AGI
OpenAI engineer @jbetker defines it in terms of three components (a toy sketch of how they might fit together follows this list):
- A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
- A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with a reasonable accuracy. World models in humans are generally referred to as “intuition”, “fast thinking” or “system 1 thinking”.
- A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways – it is “reasoning”, “slow thinking” or “system 2 thinking”.
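To make the three components concrete, here is a minimal sketch of an agent loop built around them. This is purely my own illustration, not anything from @jbetker's post: every class, method, and threshold below is a hypothetical placeholder, standing in for embodiment, a fast "system 1" world model, and a slow "system 2" reasoner.

```python
# Toy sketch of the three components above; every class and value here is a
# hypothetical placeholder, not a real system.
import random

class Environment:
    """Embodiment: something the agent can observe and act upon."""
    def observe(self) -> float:
        return random.random()
    def act(self, action: str) -> None:
        print(f"acting: {action}")

class WorldModel:
    """Fast, intuitive 'system 1': cheap prediction plus a confidence score."""
    def predict(self, observation: float) -> tuple[str, float]:
        # The observation value doubles as a stand-in for "familiarity".
        return ("reflex_action", observation)

class Reasoner:
    """Slow, deliberate 'system 2': invoked only when intuition is unsure."""
    def deliberate(self, observation: float, world_model: WorldModel) -> str:
        return "carefully_considered_action"

def agent_step(env: Environment, wm: WorldModel, reasoner: Reasoner,
               threshold: float = 0.9) -> None:
    obs = env.observe()
    action, confidence = wm.predict(obs)       # system 1: quick guess
    if confidence < threshold:
        action = reasoner.deliberate(obs, wm)  # system 2: think it through
    env.act(action)

if __name__ == "__main__":
    agent_step(Environment(), WorldModel(), Reasoner())
```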
AGI is Already Here
Well-known AI researchers Blaise Agüera y Arcas and Peter Norvig argue in NOEMA that Artificial General Intelligence Is Already Here, mainly because state-of-the-art GPT systems already behave in ways that appear to go beyond their training and show signs of genuinely learning new tasks.
Geoffrey Hinton, one of the godfathers of AI, said: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future.”
Another AI great, Yoshua Bengio, said: “The recent advances suggest that even the future where we know how to build superintelligent AIs (smarter than humans across the board) is closer than most people expected just a year ago.”
AGI is a Long Way From Now
But Gary Marcus responds that Reports of the birth of AGI are greatly exaggerated because, despite the recent progress, machines still fail at tasks a five-year-old can handle, as well as at basic arithmetic such as five-digit multiplication.
In March 2022 he bet that AGI won’t be here by 2029, and he doubled down in June 2024.
Mind Prison summarizes the arguments in What if AGI isn’t coming, pointing out that the core ideas behind neural networks haven’t changed in 50 years, and that much of the impressive progress we see now comes from massive scaling that is hitting diminishing returns. As evidence, note that GPT-4 uses 50x the resources of GPT-3.5, but it’s not anywhere near 50x better.
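The diminishing-returns point can be made concrete with a back-of-the-envelope calculation. The sketch below is my own illustration, not Mind Prison’s analysis: it assumes loss falls as a power law in compute with an illustrative exponent of 0.05 (roughly the order of magnitude reported in LLM scaling-law papers, not a measured figure for GPT-4), and shows why 50x more compute buys only a modest improvement.

```python
# Toy illustration of diminishing returns under an assumed power-law scaling.
# alpha = 0.05 is an illustrative exponent, not a measured value for GPT-4.

def relative_loss(compute_multiplier: float, alpha: float = 0.05) -> float:
    """Loss relative to baseline, assuming loss ~ compute ** -alpha."""
    return compute_multiplier ** -alpha

if __name__ == "__main__":
    for mult in (1, 10, 50):
        print(f"{mult:>3}x compute -> loss falls to "
              f"{relative_loss(mult):.1%} of baseline")
    # With alpha = 0.05, 50x more compute lowers loss by only ~18%,
    # i.e. nowhere near a 50x improvement.
```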
Who Cares
Subprime Intelligence by Ed Zitron
Generative AI’s greatest threat is that it is capable of creating a certain kind of bland, generic content very quickly and cheaply. As I discussed in my last newsletter, media entities are increasingly normalizing their content to please search engine algorithms, and the jobs that involve pooling affiliate links and answering where you can watch the Super Bowl are very much at risk. The normalization of journalism — the consistent point to which many outlets decide to write about the exact same thing — is a weak point that makes every outlet “exploring AI” that bit more scary, but the inevitable outcome is that these models are not reliable enough to actually replace anyone, and those that have experimented with doing so have found themselves deeply embarrassed.
See Also
Nick Bostrom’s new book asks what becomes possible if everything about AI goes right. Wired Interview
My Thoughts
A key problem for the AGI argument is that nobody can define “intelligence” in a way that agrees with our (unspoken) intuition that it only applies to humans.
Watch for the sleight of hand
People switch back and forth between the words “AI” and “AGI” at their convenience.
Nick Bostrom’s Superintelligence presented several technologies that could lead to AGI, but – importantly – he didn’t mention LLMs, a technology that didn’t exist at the time. He simply assumed that every path posed the same risk. His “risk” was co-opted by the doomers after the technology was invented. In other words, no matter what comes along, Nick Bostrom will have been right – exactly the opposite of science.
“If they’re smart enough to be scary, why aren’t they smart enough to be wise?” The system is supposedly smart enough and resourceful enough to make paperclips out of anything imaginable, yet not smart enough to realize it’s destroying everything?