The Potential and Limits of Artificial Intelligence
Is the human brain essentially a computer? And is a sufficiently advanced computer, with all the right inputs and outputs, essentially a human brain?
These questions form the crux of dozens of controversies surrounding artificial intelligence and the possibility of machine sentience. Hundreds of books, movies, and video games explore their implications, from the murderous supercomputer HAL 9000 of 2001: A Space Odyssey to the artificially intelligent operating system of the blockbuster Her.
One hypothesis within the vast subject of computer cognition is “Strong AI,” which argues that an appropriately programmed computer would have a mind in the same way that human beings do. For many years, optimism within computer science led many experts to believe that strong AI was indeed possible, and many still believe that human-like computers are not far off.
Curbing this optimism, however, are several arguments that claim strong AI simply isn’t possible. One of the most commonly endorsed arguments is John Searle’s “Chinese room” thought experiment. The experiment goes something like this: Imagine that you’re trapped inside a room, and that you don’t know any Chinese, either written or spoken. Imagine that the people outside the room slip questions under the door in Chinese. Imagine also, however, that you have with you a set of rules that allow you to correlate certain Chinese symbols with others in a way that allows you to respond to these questions—all without actually understanding a single word of the conversation. This, Searle argues, is what advanced AI would be like. Though it might seem to carry out an intelligent conversation or possess human characteristics, a computer does not possess a genuine understanding of its own programmed behavior.
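The rule-following procedure Searle describes can be sketched as a simple lookup table. The symbols and responses below are hypothetical placeholders invented for illustration; the point is only that the program matches patterns without any grasp of what they mean.

```python
# A minimal sketch of Searle's "Chinese room": a purely syntactic
# rule table maps incoming symbol strings to outgoing ones.
# The specific rules here are made up for illustration.

RULES = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am fine."
    "你叫什么名字？": "我叫王。",  # "What is your name?" -> "My name is Wang."
}

def room_reply(question: str) -> str:
    """Produce a reply by pattern matching alone; no meaning is involved."""
    # Fallback rule: "Please say that again."
    return RULES.get(question, "请再说一遍。")

print(room_reply("你好吗？"))  # a fluent answer, yet nothing is "understood"
```

From the outside, the exchange looks like conversation; inside, there is only symbol shuffling, which is exactly Searle's point about programmed behavior.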
One neuroscience argument in favor of the “Chinese room” thought experiment is that computers, unlike human brains, are fundamentally dualistic entities: they have a hardware-versus-software distinction that the brain simply does not. The mind (or “consciousness”) and the brain (the neural networks between your ears) cannot be separated; if a change is observed in the mind, it will surface in the brain as well. On the other hand, some claim that the mind and brain are indeed separate. Addiction, for instance, may stem from certain neurological processes, but it also has an element of cognitive control and behavior.
Furthermore, if a computer were built to mimic the human brain perfectly, with simulated neurons and nerve firings rather than the traditional setup of coded scripts, it would be just like a brain and would therefore process all the same information in the same way. This, proponents argue, would vindicate strong AI, since the computer would be functioning exactly like a brain.
Of course, if strong AI were indeed possible, this invites ethical controversy about what rights these computers would have, how powerful they might become (the aforementioned fear of world-conquering computers), and the prospect of transhumanism, a movement to use technology to improve upon the human condition and eventually overcome human mortality.
If all this sounds ridiculous or futuristic, perhaps that is indeed the case. Yet some computers are already able to recognize feelings solely from a person’s stride, and others are being built to mimic human emotion. Just last year, a computer program called Eugene Goostman was claimed to have passed the Turing test, in which judges try to tell whether they are conversing with a human or a machine. While this supposed AI victory is certainly a matter of controversy, the stories of Blade Runner and I, Robot do seem just a little bit closer with every new advancement in this increasingly computerized world.
And, of course, it raises the timeless existential question: what does it really mean to be human?