Moravec’s Paradox says that, in AI, the easy things are hard and the hard things are relatively easy. That is, it’s pretty straightforward to make a computer program that is better than the best human at things we think of as “higher reasoning”, such as playing chess or proving theorems. It’s the stuff that comes naturally to us—recognizing faces, moving around in space—that really gives the computers trouble.
The theory as to why this might be the case is that the basic stuff has had way longer to evolve and be perfected than this newfangled “reasoning” system has. Things have been moving around and seeing stuff for at least a good 500 million years. Abstract thought, Moravec writes, is maybe only a hundred thousand years old.
As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
—Steven Pinker
This theory jibes well with Kahneman’s conception of Systems 1 & 2, with the intuitive, quick System 1 making up the bulk of our minds, and the slow, clunky, deliberate higher reasoning of System 2 only showing up when it’s absolutely needed. I like how Julia Galef put it: your mind is like a monkey riding an elephant. The monkey thinks it’s in charge, but mostly the elephant goes where it likes.
All of which is a long-winded preface to what I actually wanted to write about this morning: what do people mean when they talk about an “authentic self” or a “deep inner truth”? I think the “true self” is no more or less true than any other self, and that the language of authenticity and truth is just a way to legitimize what it really represents—whim and desire. It’s a shame that those things need to be wrapped in the language of Science for others to take them seriously. But these are thoughts for another post.