On page 628 of A New Kind of Science, Stephen Wolfram states:
There has in the past been a great tendency to assume that given all its apparent complexity, human thinking must somehow be an altogether fundamentally complex process, not amenable at any level to simple explanation or meaningful theory.
But from the discoveries in this book we now know that highly complex behavior can in fact arise even from very simple basic rules. And from this it immediately becomes conceivable that there could in reality be quite simple mechanisms that underlie human thinking.
Certainly there are many complicated details to the construction of the brain, and no doubt there are specific aspects of human thinking that depend on some of these details. But I strongly suspect that there is a definite core to the phenomenon of human thinking that is largely independent of such details—and that will in the end turn out to be based on rules that are rather simple.
Any student of science fiction knows the dilemma of which Wolfram speaks: the brain is generally considered so complex a thing to understand that only the greatest minds, toiling far into the future, would ever have even a glimmer of hope of puzzling out a working model.
And truly, when it comes to the Holy Grail of replicating a human-like intelligence coupled with artificial life, research has thus far fallen short.
However, Wolfram’s conjecture deserves closer study. Could it be that we have been over-reaching, and that artificial life coupled with a human-like artificial intelligence will first emerge from a seemingly simpler system and set of rules? Universal computers can be built from just such simple rules and initial conditions. Whatever the Universe might be, if we grant that the human brain is a kind of computer, does this imply that reproducing a human-like artificial intelligence could be more within reach than we thought?
Perhaps we’ll see intelligence emerge from simplicity, and perhaps not. The main point I believe Wolfram was making is that, when modeling intelligence, it is very useful to look at simple systems like elementary cellular automata (ECAs), or systems only a few levels more complicated. There is much yet to discover in this very promising field. Who knows what might emerge?
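To make the "simple rules, complex behavior" claim concrete, here is a minimal sketch of an ECA in Python. The function names and layout are my own, but the update rule itself is the standard one: each cell's next state is read off from the bits of the rule number, indexed by the three-cell neighborhood. Rule 110, used as the default here, was proved Turing-complete by Matthew Cook, so even this tiny program is, in principle, a universal computer.

```python
def eca_step(cells, rule=110):
    """Advance one generation of an elementary cellular automaton.

    cells is a list of 0/1 values; edges wrap around. The neighborhood
    (left, center, right) forms a 3-bit index into the rule number,
    whose bits are the update table.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=20, rule=110):
    """Print successive generations from a single seed cell."""
    cells = [0] * width
    cells[width // 2] = 1  # one live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = eca_step(cells, rule)

if __name__ == "__main__":
    run()
```

Running this produces the characteristic irregular triangles of rule 110 from a single cell, which is exactly the point: nothing in the eight-entry update table hints at the structure that unfolds.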