The problem of defining consciousness seems to be much trickier than that of defining intelligence, so let's start slow. Before I talk about the many theories that try to explain where consciousness comes from, it would be nice to have some agreement on what we mean by the word "consciousness". That question is also too hard, so let's take another step back. What are some of the properties of consciousness? We obviously mean something when we speak that word; it has certain connotations, so let's try to get a handle on some of the concepts that seem to be related to consciousness. There are quite a few.
Consciousness seems to be a property that some objects possess. Humans are conscious, and tables are not. Some scientists disagree even with such a basic statement, but at least we can agree that humans clearly have some strange property that tables do not seem to possess. We have feelings; we make decisions; we strive to understand ourselves. There is no evidence that tables do these things, so let's call that strange property that humans possess
consciousness. That is not a definition, but it is a step towards understanding what it is we are trying to define.
Are dogs conscious? Probably, yes. There isn't much difference between a dog and a human. Their DNA is almost the same as ours, and the biggest thing dogs seem to lack is a large neocortex; theirs is much smaller than ours. Most people would agree that dogs are conscious. What about pigeons? Probably, yes. Cockroaches? Why not? They make decisions. Even an amoeba decides which direction to swim next, and those decisions are not random. So if we are to believe that amoebae are conscious, then there seems to be a connection between consciousness and the ability to make decisions, or
free will. In fact, it seems that the concepts of consciousness and free will are closely related, if not identical. Maybe the definition of a conscious being is a being that has free will - a being that is able to make its own decisions. Is this a step forward, or are we simply replacing one puzzling concept by another?
Let's explore this connection, but first, consider the following question. Is there a test for consciousness? Can we observe an object and determine whether it is conscious or not?
Alan Turing, considered by many to be the father of computer science, devised a
test for artificial intelligence. It is a simple test - write a chat program that can talk to people, say on
IRC, and if the program can fool a human into believing that it, too, is human, then that program will have passed the Turing test. The problem is that some programs have already come
very close to passing the Turing test. Just look at some of the winners of the
Loebner prize. The test also seems to clump together intelligence and consciousness. And after a bit of thought, would you really be convinced that a machine that passes the Turing test is "like a human"?
The Turing test is a fascinating subject in itself, but it is clearly not a good test for consciousness. If consciousness is related to free will and decision making, then what about the following test, proposed by Alex Naverniouk? Create a small robotic cockroach that, to the naked eye, is indistinguishable from a real cockroach. Let a human observe its behaviour for as long as the human wishes. Then ask the human this question: "Is the cockroach making its own decisions, or is it being controlled remotely by someone else?" If the answer is uncertain, then the cockroach must be declared conscious. Note that this test is not about the
presence of consciousness, but about its
location. Is the cockroach controlled from within or from without? If it is controlled from within, then it is making its own decisions and is conscious. If not, then it is a zombie, a robot. What if we replace the cockroach with a cat and it still passes the test? Would we be more convinced that the robotic cat is conscious? What about a
Mars rover? Those things are very intelligent. They make a lot of their own decisions and receive only high-level commands from the scientists on Earth. If an alien were to see a Mars rover in action, would the alien find the rover conscious?
Speaking of aliens, there is a very cool thought experiment I heard about from Dr. Steven Sevush on the
Consciousness DVDs. He was using it to illustrate a different point, but the idea is useful here. Take a look at New York City from outer space. Its streets are full of people during the day; it lights up at night. There are periods of outflow when people leave on vacation, and there are great gatherings at sporting events. After a tragedy like
September 11, the city reacts by cleaning up and slowly healing the wound. What would an alien think after having observed New York City from space? Is the city alive? Well, most people would agree that it is not alive. But is it conscious? Does the city make decisions? Or is it the individual human inhabitants of the city who are conscious? How could the alien observer determine that? The distinction, once again, seems to be in the
location of consciousness - is it within the city itself or within each individual person? Sevush goes on to ask whether consciousness is inside a single human or inside each of the neurons in that human's brain. We will return to Dr. Sevush and his
single-neuron theory of consciousness later. For now, I would just like to point out that relating consciousness to decision making seems to lead us to ask, "Where is consciousness?" instead of, "What is consciousness?"
The last motivating thought experiment I will describe is another one of Alex Naverniouk's. Consider a card dealer in the game of
Blackjack. There are strict rules that the dealer must follow (hit until 17, then stand, unless it's a "soft 17"). So is the Blackjack dealer conscious? Of course the dealer is human, so she is conscious. But what if we think about that human purely in her role as a dealer? She makes no decisions of her own - her whole game follows strict rules. A computer could do it. I would say that the dealer, purely in her capacity as a dealer and not as a human being, is unconscious.
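To make that point concrete, here is a minimal sketch of the dealer's entire "decision process" in Python, assuming the "dealer hits soft 17" variant implied by the rule quoted above (casinos differ on this detail); the card encoding, with aces as 1 and face cards as 10, is my own choice for illustration.

    # Every "decision" a Blackjack dealer makes, written as deterministic code.
    # Cards are represented by their point values: ace = 1, face cards = 10.

    def hand_value(cards):
        """Return (total, is_soft) for a hand. One ace may count as 11
        if that does not bust the hand; such a hand is called "soft"."""
        total = sum(cards)
        if 1 in cards and total + 10 <= 21:
            return total + 10, True
        return total, False

    def dealer_should_hit(cards):
        """Hit below 17; also hit a soft 17, per the rule quoted above.
        Stand on everything else."""
        total, is_soft = hand_value(cards)
        return total < 17 or (total == 17 and is_soft)

    print(dealer_should_hit([10, 5]))  # True: 15, must hit
    print(dealer_should_hit([10, 7]))  # False: hard 17, must stand
    print(dealer_should_hit([1, 6]))   # True: soft 17, must hit again

If a handful of lines like these capture everything the dealer does at the table, it is hard to see where free will would enter into her role.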
We now come to the main point I would like to make about equating consciousness with free will or decision making. Assume for the moment that an amoeba is conscious. If that is unacceptable to you, you may replace the amoeba with the simplest possible organism that you would consider conscious. Suppose that a few centuries from now our scientific knowledge is so advanced that we can finally understand how an amoeba works. We know the function and purpose of every protein, every molecule and every atom of the amoeba. In fact, we are so advanced that we can simulate an amoeba and calculate precisely what its next action is going to be. We understand how the amoeba makes its decisions and can predict its next decision. Would that mean that the amoeba is not conscious? If we know exactly how an organism operates, does that make the organism a robot, a mindless zombie? Once we fully understand the mechanisms responsible for making the amoeba's decisions, it seems to stop being conscious! What about that alien who is observing the Mars rover? At first, he thinks that the rover is conscious, but then he lands on Mars, examines the rover and finds out that it's just a robot - it is unconscious because the alien knows exactly how the rover operates. Is consciousness a property only of organisms that we do not fully understand? If you say yes, then we clearly have a problem. That would mean that the definition of consciousness is "the ability to make unpredictable decisions," and that is a very bad definition.
A definition like that is completely useless and unproductive. It is saying that studying consciousness is pointless. As soon as we understand what it is, it ceases to be consciousness. It is elusive and unknowable by definition. Fortunately, there are some holes in my reasoning above. First of all, some people believe that we will never fully understand how an amoeba operates. Look at
Heisenberg's uncertainty principle. It is a fundamental, testable law of physics that one can never know both the exact position and the exact momentum of an electron. It's impossible. Therefore, we will never be able to understand everything there is to know about any single atom, much less an amoeba. This is our first encounter with
quantum mechanics in the context of consciousness, and it is only the beginning. We will get to other fascinating connections soon.
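(For concreteness, Heisenberg's principle can be stated in one line: the uncertainty Δx in a particle's position and the uncertainty Δp in its momentum always satisfy Δx · Δp ≥ ħ/2, where ħ is the reduced Planck constant. The bound is imposed by the physics itself, not by the quality of our instruments, so no future technology can get around it.)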
To summarise, if we assume that consciousness is the same as free will (the ability to make independent decisions), then first of all, we constantly find ourselves asking the question, "Where is consciousness located?" Second, once we localise consciousness to a physical body, like a human brain, or a cockroach, or an amoeba, we must immediately conclude that the process by which that body makes decisions is unknowable to us. Otherwise, our definition of consciousness makes no sense at all - we are forced to define "conscious" as "unpredictable", and that just seems wrong. We, as humans, are not conscious merely because we are unpredictable. So (at least) one of our assumptions must be false. I would like to believe that, eventually, the world can be understood by a human mind. The thought that some things are fundamentally incomprehensible bothers me as a scientist. Hence, I would rather believe that defining consciousness in terms of free will is not the right way to go, and that we should be looking for another definition.