Eliezer S. Yudkowsky of the Singularity Institute for Artificial Intelligence has released a new paper that will be included in the upcoming book Real AI: New Approaches to Artificial General Intelligence (Goertzel and Pennachin, eds., forthcoming):
Levels of Organization in General Intelligence.
Here's a printable HTML version, in case you want to print out the whole thing.
Posted by Lisa at April 24, 2002 07:19 AM

Where the human line developed from very complex non-general intelligence into very complex general intelligence, a successful AI project is more likely to develop from a primitive general intelligence into a complex general intelligence. Note that primitive does not mean architecturally simple. The right set of subsystems, even in a primitive and simplified state, may be able to function together as a complete but imbecilic mind which then provides a framework for further development. This does not imply that AI can be reduced to a single algorithm containing the "essence of intelligence". A cognitive supersystem may be "primitive" relative to a human and still require a tremendous amount of functional complexity.
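The paper doesn't give code, but a toy sketch may make the "complete but imbecilic mind" idea concrete. Everything below is a hypothetical illustration, not the paper's architecture: the Perception, Memory, Decision, and Supersystem classes and their methods are invented for this sketch. Each subsystem is trivially primitive, yet together they close a full perceive-recall-decide loop that further development could deepen without changing the overall frame.

```python
# Purely illustrative sketch (not from the paper): subsystems that are each
# primitive stubs, yet together form a complete, if imbecilic, cognitive loop.

class Perception:
    def sense(self, world):
        # Primitive stub: pass raw observations through unchanged.
        return world

class Memory:
    def __init__(self):
        self.episodes = []

    def store(self, percept):
        self.episodes.append(percept)

    def recall(self):
        # Primitive stub: remember only the most recent percept.
        return self.episodes[-1] if self.episodes else None

class Decision:
    def choose(self, percept, recalled):
        # Imbecilic policy: repeat on a familiar percept, else explore.
        return "repeat" if recalled == percept else "explore"

class Supersystem:
    """Complete but primitive: every stage exists, none is deep."""

    def __init__(self):
        self.perception = Perception()
        self.memory = Memory()
        self.decision = Decision()

    def step(self, world):
        percept = self.perception.sense(world)
        action = self.decision.choose(percept, self.memory.recall())
        self.memory.store(percept)
        return action

mind = Supersystem()
print(mind.step("red light"))   # explore (nothing remembered yet)
print(mind.step("red light"))   # repeat  (percept matches memory)
```

The point of the sketch is that improving any one class, say, giving Memory real episodic structure, upgrades the whole mind without redesigning the loop; the framework is complete even while every part is primitive.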
I am admittedly biased against the search for a single essence of intelligence; I believe that this search lies at the center of AI's previous failures. Simplicity is the grail of physics, not AI. Physicists win Nobel Prizes when they discover a previously unknown underlying layer and explain its behaviors. We already know what the ultimate bottom layer of an Artificial Intelligence looks like: it looks like ones and zeroes. Our job is to build something interesting out of those ones and zeroes. The Turing formalism does not solve this problem any more than quantum electrodynamics tells us how to build a bicycle; knowing the abstract fact that a bicycle is built from atoms doesn't tell you how to build one, which atoms to use, or where to put them. Similarly, the abstract knowledge that biological neurons implement human intelligence does not explain human intelligence. The classical hype of early neural networks, that they used "the same parallel architecture as the human brain", should, at most, have been a claim of using the same parallel architecture as an earthworm's brain. (And given the complexity of biological neurons, the claim would still have been wrong.)