posted by Abby Nussey
Once in a while I like to “mine” for particularly interesting posts from the NKS Forum (by the way, if you haven’t registered for the NKS Forum yet, do it now!) — there are truly some buried gems that I think merit more serious reflection and discussion. I came across this post during my first mining trip, and thought it particularly meritorious.
Original post: The Mozart Problem (by Jason Cawley)
The Mozart Problem
Popper imagined a deterministic theory sufficient to predict all the actions of a genius of the caliber of Mozart, composing a symphony. He thought it so implausible he considered it evidence against determinism. Kurzweil wants Chopin preludes instead. Before he has them, he remains unimpressed by computational universality. The practical problems and potential capabilities of algorithmic music aside, there is a philosophical issue here, which I think has been misdiagnosed.
Those involved in the argument seem to think that it turns on questions of prior metaphysical principle (in the case of Popper) or of some missing additional trick needed to create strong AI. I’ll stick to Popper for now. My idea is to grant all the positions of NKS that seem to run against Popper’s attitude, and then to show that his judgment remains sound. Just not for the reasons he thought.
So we stipulate that there is a complete deterministic theory of humans (brains, or more if more turns out to matter), taking human specifications and initial states, and returning as output all the controls needed to drive a human emulator. I imagine it as something like a Mathematica function, parallel to CellularAutomaton.
Its input format is HumanSim[human, initialstate, lengthoftime]. Its output is the whole behavior expression, a parameterized function of those arguments. The first argument is a bare number that enumerates possible lists of rules, each sufficient to specify the entire range of behavior of one human being. The initial state may, without loss of generality, include interaction information (FoldList fashion) specifying the external environment, and, if there is anything pseudorandom in the rules involved, a random seed. The external specification might be fantastically difficult, but we assume we can encode anything that actually matters for the evolution of any human system. Notice, however, that it must be encoded outward from its effects on human rule tables, and you would have to deduce anything it implies about externals. For simplicity’s sake, imagine the function can return data from some person’s life as movies to a viewer, or that its raw instructions to behavior subsystems can be queried in machine-readable form. If the first two arguments are fixed and the third is left to run, you get the forward deterministic movie of that person’s life.
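As a purely illustrative sketch (Python rather than Mathematica, and with a meaningless placeholder update standing in for the stipulated science of human behavior), the calling convention might look like:

```python
# Hypothetical stub of the essay's HumanSim function. Nothing here
# simulates a human; the body is a placeholder that only fixes the
# interface: deterministic output from (rule number, initial state,
# length of time), parallel to CellularAutomaton.

def human_sim(human, initial_state, length_of_time):
    """human: integer enumerating candidate rule tables.
    initial_state: internal state plus (FoldList-style) encoded
    external interactions and any pseudorandom seed.
    Returns the full behavior trajectory as a list of states."""
    state = (human, initial_state)
    trajectory = [state]
    for _ in range(length_of_time):
        # Placeholder deterministic update; the real rule table is,
        # by stipulation, the complete science of one human being.
        state = (human, hash(state) % 10**6)
        trajectory.append(state)
    return trajectory

run = human_sim(42, 0, 5)   # six states: the initial plus five steps
```

The point of the stub is only that, given the same three arguments, the same life-movie comes back every time.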
Notice, we have therefore assumed far more than strong AI. We have a complete enumeration of possible humans and possible trajectories for human behaviors. We can scan the inputs programmatically or by any other method we can specify. We can query the outputs. We can tweak only the external environment and see what it does to the time-forward behavior, and so ask questions of our virtual humans, if we find that useful or efficient.
All of this is simply stipulated: the deterministic simple programs view is true by mere hypothesis and we have already succeeded in constructing this function. Now our mission is to find Wolfgang Amadeus Mozart and ask him about his next symphony. Cherchez la femme! My claim is that we have not solved the problem. That we have not solved the problem “in principle”. That we have at best removed a few distracting issues subject to frequent philosophical contention, but that the real problem remains before us.
Reality is measure epsilon in possibility space.
What resources are allowed us to address the problem? I do not grant one countable infinity. I do not grant infinite running time on a slow Turing machine. I give a lot, but strictly finite resources. You have the efforts of 1 million human beings – a scale set by the largest corporations. You have computers 1000 times faster than any that exist today, as many as the people. And you have 1000 years. And you have all the copies of Mathematica you want running the above function, trying initials or searching through outputs. Your people can interview virtual humans to ascertain whether they are Wolfgang Amadeus Mozart. You can set one computer simulating an interviewer asking others whether they are. You can specify any effective procedures or programs you please, to search within the space of all possible humans in all possible environments from all possible prior states.
And I claim you can’t begin to find him. You might purely theoretically find a man of the same name and circumstances, creating wonderfully involved pastries in his bakery. You might find a great classical composer, who sounds like Chopin. You will certainly find scads of people who never existed and are nothing like him. But the mission is to find the real Wolfgang Amadeus Mozart, with his actual experiences and memories, and ask him about his next symphony. The target is not a point – a range of initials was part of that real man’s life, and minor changes to his environment would not change who he was. But the target set is modest, and the space is beyond astronomical.
Reality is measure epsilon in possibility space.
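To make the resource gap concrete: the essay leaves the arithmetic implicit, but under some assumed round numbers (today's fastest machine taken as roughly 10^18 operations per second, and a human rule table crudely bounded below by one bit per synapse at roughly 10^15 synapses; both figures are illustrative assumptions of mine, not claims from the original post), the comparison runs:

```python
import math

# Total compute budget under the stipulated resources, with assumed
# figures: ~10^18 ops/sec for a top machine today, so 1000x faster
# gives 10^21 ops/sec per machine.
machines = 10**6
ops_per_machine_per_sec = 10**21
seconds = 1000 * 365 * 24 * 3600        # 1000 years, ignoring leap days
total_ops = machines * ops_per_machine_per_sec * seconds

log10_budget = math.log10(total_ops)    # about 37.5

# A deliberately crude lower bound on the search space: one bit per
# synapse, ~10^15 bits, gives 2**(10**15) candidate rule tables.
log10_candidates = (10**15) * math.log10(2)   # about 3 x 10^14
```

Even if examining one candidate cost a single operation, the whole thousand-year budget covers about 10^37.5 candidates, against a space whose size is a number with roughly 3 × 10^14 digits. The budget does not dent the exponent.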
Solving strong AI is easy compared to this problem – so I claim. Writing algorithmic music software so good it makes Mozart quality symphonies is easy compared to this problem. Writing the HumanSim function may be incredibly difficult, but is child’s play compared to using the thing once it is written, to solve a problem this hard. So I claim. Anyone who thinks differently is welcome to give a solution within the stipulated resources, with the above assumptions.
Suppose we start at the other end. Having realized that our aiming at all possible humans has dramatically expanded the search space, beyond a level we can deal with for the problem at hand, we might instead hope to arrive at the tiny outcome box by getting the right initial much farther back. We therefore further stipulate that we have a deterministic, computational, fundamental theory of physics, along the lines suggested by NKS. We have found the rule of the universe (we do not need to consider 10^800 string theories), and it starts from a simple initial condition, so we do not have to search over an enormous space of those. If we run it forward long enough, the truth and determinism of this rule (given by hypothesis) must eventuate in the historical Wolfgang Amadeus Mozart, in all his glory and all his mortality.
But we can’t run it forward far enough. We can’t make it to the epoch when the universe becomes transparent to light. It is exceedingly doubtful we can run it for a day of real time when the universe is at its smallest. OK, but perhaps we can shortcut portions of its evolution. After all, at a coarser level it has simpler descriptions.
But as soon as you go up to those instead of the full underlying theory, you will throw out detailed configuration information, and your transitions may become multivalued. If this were not a problem you would not have needed the underlying generator in the first place, because you would have had an exhaustive deterministic description at the higher level. We know by the time we reach the overlying emergent level of QM, we have already gone multivalued. Some of the differing details necessary for single valued deterministic evolution have been lumped into the same bins, by then.
If you throw out details of the configuration, you have left the path forward from the posited whole universe simple initial. There is no longer one world-path to follow, but a branching possibility space. If you keep the details because they matter, you must calculate every detail, and you cannot. If you discard them even though they can matter, you must scan a space of possibilities.
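The multivaluedness is easy to exhibit in a toy deterministic system. Here is a small Python sketch (elementary rule 110 and 2-cell block sums are my own arbitrary choices for illustration, not anything from the original post): two fine-grained states that look identical after coarse-graining can have successors that look different, so the coarse-level "rule" is no longer a function.

```python
from itertools import product

# Elementary cellular automaton rule 110 on a cyclic lattice.
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    n = len(cells)
    return [RULE110[(cells[i-1], cells[i], cells[(i+1) % n])]
            for i in range(n)]

def coarse(cells):
    # Coarse-grain by lumping each 2-cell block into its sum (0, 1, or 2).
    return tuple(cells[i] + cells[i+1] for i in range(0, len(cells), 2))

# Scan all 8-cell fine states for two that coarse-grain identically
# but whose successors coarse-grain differently.
successor_of = {}
multivalued = False
for bits in product([0, 1], repeat=8):
    c, c_next = coarse(bits), coarse(step(list(bits)))
    if c in successor_of and successor_of[c] != c_next:
        multivalued = True   # same coarse state, two coarse successors
        break
    successor_of.setdefault(c, c_next)
```

The deterministic fine rule induces no deterministic coarse rule: the lumping throws away exactly the details that decide which coarse successor occurs.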
Either way, the historical uniqueness of definite individuals remains. It is not accessible horizontally from an origin because the entire universe cannot be calculated. It is not accessible vertically because entire possibility spaces, for anything complex enough, cannot be calculated. Because all complex systems require real computational work, and in general we cannot shortcut that work, the massive calculation that is history adds something fundamental that we cannot replace formally.
What we might actually do instead is constrain the search with empirical data, slicing off great swaths from abstract possibility space with a blade built from observation, over and over again. That way we exploit prior calculation that history has already performed for us. For a space as large as HumanSim, however, that would still be almost arbitrarily hard. Or we might write a function less general than HumanSim – say SymphonyMaker – that does not try to find Wolfgang Amadeus Mozart, but merely to make pleasant music.
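As a toy instance of that last idea (SymphonyMaker is the essay's hypothetical name; the first-order Markov chain over scale degrees below is my own arbitrary illustration, not a claim about good music or about the original post's design):

```python
import random

# Assumed transition table: from each scale degree (0-6), prefer
# stepwise motion. Purely illustrative numbers.
TRANSITIONS = {
    0: [0, 1, 2], 1: [0, 2, 3], 2: [1, 3, 4],
    3: [2, 4, 5], 4: [3, 5, 6], 5: [4, 6, 0], 6: [5, 0, 4],
}

def symphony_maker(length, seed=0):
    # Deterministic given (length, seed), in keeping with the essay's
    # stipulations: a seeded generator walks the chain.
    rng = random.Random(seed)
    degree, melody = 0, [0]
    for _ in range(length - 1):
        degree = rng.choice(TRANSITIONS[degree])
        melody.append(degree)
    return melody
```

The possibility space here (3^15 paths for a 16-note melody, a few million) is one we can actually span or sample, which is the essay's closing point about aiming formalism at tractable spaces.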
Dreams of the power of formalism need to be tempered by consulting external reality, because reality is measure epsilon in possibility space. There is information in reality that is – pragmatically but strictly – not recoverable by formalism. Even if there are purely formal underlying causes for all of reality, even if they are simple and deterministic, and even if we can in principle find them.
Formalism is often much more useful practically, when it aims at possibility spaces we can actually span.
One quote — repeated several times — is “reality is measure epsilon in possibility space.” Though we might disagree as to countability, I think Jason makes a valid point, one that deserves further thought. What implications do this quote and its explanatory essay have for trying to find instances of particular complex structures (here, a Mozartic emergence)? Should we give up searching for Mozart, or for his symphonies, or for the fundamental theory of physics, because “reality is measure epsilon in possibility space”?
Is accepting determinacy complacent, or is that question itself absurd (as one must accept determinacy, if it is true)? A fellow participant at the NKS Summer School balked at the idea of “sitting back” and letting himself travel down whatever path had already been computed. At bottom, the Mozart Problem to him was a problem of free will. A Mozartic emergence occurred in our measure-epsilon reality, a historical accident whose reemergence would call upon computational resources so vast we’d have to relive history exactly the way it played out, detail by detail, to ever again watch Mozart emerge.
A Taj Mahal could print out, in every detail, in the evolution of some cellular automaton. And it would be very interesting to see this, certainly! But to do so, we’d effectively have to retrace history, detail by detail, from the laying of the first stone to the last. Does this mean we might never see the Taj Mahal cellular automaton? Surely we might. But the possibility is measure epsilon.
Thanks, Jason, for a great essay. Any comments (from anyone!) are welcome.