Steven Pinker Doubts AI Will Take Over
A June 2022 “debate” between Steven Pinker and Scott Aaronson about AI Scaling.
Pinker is skeptical, even dismissive, of the extreme claims of hardcore AI safety people, especially the “paper clip” theories. His main points:
- The concept of “general intelligence” is meaningless.
- Intuitively, by “intelligence” we really mean “the ability to use information to attain a goal in an environment”, so specifying the goal is critical to any evaluation of intelligence.
- Goals are exogenous; it follows that there will be many different, specialized goals, depending on who sets them:
There will be no omnipotent super-intelligence or wonder algorithm (or singularity or AGI or existential threat or foom), just better and better gadgets.
In response, Aaronson hypothesizes a “sped-up Einstein”: imagine an AI that can do everything Einstein did, only faster. But Pinker replies that Einstein, for all his genius at physics, also held some dumb ideas about other subjects; even his intelligence was domain-specific rather than general.
Pinker concludes that the real dangers of AI are better understood:
> if intelligence is a mechanism rather than a superpower, the real dangers of AI come into sharper focus. An AI system designed to replace workers may cause mass unemployment; a system designed to use data to sort people may sort them in ways we find invidious; a system designed to fool people may be exploited to fool them in nefarious ways; and as many other hazards as there are AI systems. These dangers are not conjectural, and I suspect each will have to be mitigated by a different combination of policies and patches, just like other safety challenges such as falls, fires, and drownings.
Update: Scott Aaronson “sets the record straight” that he assigns only a 2% risk to the “paper clip” scenario.