Wolfram on Machine Learning
In a lengthy post, Stephen Wolfram explains that machine learning works by “piggybacking” on computational irreducibility, i.e. the fact that for many problems there is no short-cut: the only way to find out what a computation does is to run it step by step to the end.
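Wolfram’s long-standing example of an irreducible computation is the Rule 30 cellular automaton. Here is a minimal sketch (my own, not code from the post): as far as is known, the only way to learn what row t of Rule 30 looks like is to compute all t rows, one step at a time.

```python
def step(cells, rule=30):
    """Apply one step of an elementary cellular automaton
    (periodic boundary). Each new cell is looked up from the
    3-cell neighborhood above it; the 8 possible neighborhoods
    index into the bits of `rule` (Wolfram's rule numbering)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell and just run it, step by step.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```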
The phenomenon of computational irreducibility leads to a fundamental tradeoff, of particular importance in thinking about things like AI. If we want to be able to know in advance—and broadly guarantee—what a system is going to do or be able to do, we have to set the system up to be computationally reducible. But if we want the system to be able to make the richest use of computation, it’ll inevitably be capable of computationally irreducible behavior. And it’s the same story with machine learning. If we want machine learning to be able to do the best it can, and perhaps give us the impression of “achieving magic”, then we have to allow it to show computational irreducibility. And if we want machine learning to be “understandable” it has to be computationally reducible, and not able to access the full power of computation.
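To make the reducible side of the tradeoff concrete (my own illustration, not Wolfram’s): Rule 90 is computationally reducible. Grown from a single cell, it reproduces Pascal’s triangle mod 2, so any cell at step t can be predicted in closed form as a binomial coefficient mod 2, without simulating the intervening steps. No comparable short-cut is known for Rule 30. The check below reuses the `step` function from the sketch above.

```python
from math import comb

def rule90_cell(t, d):
    """Closed-form short-cut for Rule 90 grown from a single seed:
    the cell at offset d from the seed, after t steps, equals
    C(t, (t + d) / 2) mod 2, and is 0 whenever t + d is odd.
    No step-by-step simulation is needed."""
    if (t + d) % 2:
        return 0
    return comb(t, (t + d) // 2) % 2

# Cross-check the short-cut against step-by-step simulation,
# reusing `step` from the Rule 30 sketch with rule=90.
c = 15
row = [0] * 31
row[c] = 1
for t in range(15):
    assert row == [rule90_cell(t, i - c) for i in range(31)]
    row = step(row, rule=90)
print("closed form matches simulation for 15 steps")
```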
Consequences for Biology
Biological organisms owe their complexity to computational irreducibility, but some of their configurations are “characteristically special”: pockets of rule-space where short-cuts are possible. In the future we may discover these as general laws of biology, analogous to the laws we have in physics.