Sunday, January 08, 2006

Part 3: Consciousness and free will

The problem of defining consciousness seems to be much trickier than that of defining intelligence, so let's start slow. Before I talk about the many theories that try to explain where consciousness comes from, it would be nice to have some agreement on what we mean by the word "consciousness". That question is also too hard, so let's take another step back. What are some of the properties of consciousness? We obviously mean something when we speak that word; it has certain connotations. So let's try to get a handle on some of the concepts that seem to be related to consciousness. There are quite a few.

Consciousness seems to be a property that some objects possess. Humans are conscious, and tables are not. Some scientists disagree even with such a basic statement, but at least we can agree that humans clearly have some strange property that tables do not seem to possess. We have feelings; we make decisions; we strive to understand ourselves. There is no evidence that tables do these things, so let's call that strange property that humans possess consciousness. That is not a definition, but it is a step towards understanding what it is we are trying to define.

Are dogs conscious? Probably, yes. There isn't much difference between a dog and a human. Their DNA is almost the same as ours, and the biggest feature dogs seem to lack is a large neocortex. Theirs is smaller. Most people would agree that dogs are conscious. What about pigeons? Probably, yes. Cockroaches? Why not? They make decisions. Even an amoeba makes a decision which direction to swim next, and those decisions are not random. So if we are to believe that amoebae are conscious, then there seems to be a connection between consciousness and the ability to make decisions, or free will. In fact, it seems that the concepts of consciousness and free will are closely related, if not identical. Maybe the definition of a conscious being is a being that has free will - a being that is able to make its own decisions. Is this a step forward, or are we simply replacing one puzzling concept by another?

Let's explore this connection, but first, consider the following question. Is there a test for consciousness? Can we observe an object and determine whether it is conscious or not? Alan Turing, considered by many to be the father of computer science, devised a test for artificial intelligence. It is a simple test - write a chat program that can talk to people, say on IRC, and if the program can fool a human into believing that it is itself human, then that program will have passed the Turing test. The problem is that some programs have already come very close to passing the Turing test. Just look at some of the winners of the Loebner prize. The test also seems to clump together intelligence and consciousness, and after a bit of thought, would you really be convinced that a machine that passes the Turing test is "like a human"?

The Turing test is a fascinating subject in itself, but it is clearly not a good test for consciousness. If consciousness is related to free will and decision making, then what about the following test, proposed by Alex Naverniouk? Create a small robotic cockroach that, to the naked eye, is indistinguishable from a real cockroach. Let a human observe its behaviour for as long as the human wishes. Then ask the human this question: "Is the cockroach making its own decisions, or is it being controlled remotely by someone else?" If the answer is uncertain, then the cockroach must be declared conscious. Note that this test is not about the presence of consciousness, but about its location. Is the cockroach controlled from within or from without? If it is controlled from within, then it is making its own decisions and is conscious. If not, then it is a zombie, a robot. What if we replace the cockroach with a cat and it still passes the test? Would we be more convinced that the robotic cat is conscious? What about a Mars rover? Those things are very intelligent. They make a lot of their own decisions and receive only high-level commands from the scientists on Earth. If an alien were to see a Mars rover in action, would the alien find the rover conscious?

Speaking of aliens, there is a very cool thought experiment I heard about from Dr. Steven Sevush on the Consciousness DVDs. He was using it to illustrate a different point, but the idea is useful here. Take a look at New York City from outer space. Its streets are full of people during the day; it lights up at night. There are periods of outflow when people leave on vacation and there are great gatherings at sporting events. After a tragedy like September 11, the city reacts by cleaning up and slowly healing the wound. What would an alien think after having observed New York City from space? Is the city alive? Well, most people would agree that it is not alive. But is it conscious? Does the city make decisions? Or is it the individual human inhabitants of the city who are conscious? How could the alien observer determine that? The distinction, once again, seems to be in the location of consciousness - is it within the city itself or within each of the persons? Sevush goes on to ask whether consciousness is inside a single human or inside each of the neurons in that human's brain. We will return to Dr. Sevush and his single-neuron theory of consciousness later. For now, I would like to think about the idea that relating consciousness to decision-making seems to lead us to ask the question, "Where is consciousness?" instead of, "What is consciousness?"

The last motivating thought experiment I will describe is another one of Alex Naverniouk's. Consider a card dealer in the game of Blackjack. There are strict rules that the dealer must follow (hit until 17, then stand, unless it's a "soft 17"). So is the Blackjack dealer conscious? Of course the dealer is human, so she is conscious. But what if we think about that human purely in her role as a dealer? She makes no decisions of her own - her whole game follows strict rules. A computer could do it. I would say that the dealer, purely in her capacity as a dealer and not as a human being, is unconscious.
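The dealer's "game" really is mechanical. Here is a minimal sketch in Python of the fixed house rule (I'm assuming the common "hit on soft 17" variant; casinos differ, and the card representation below - aces as 1 - is just for illustration):

```python
def dealer_hits(cards):
    """Decide whether a Blackjack dealer must take another card.

    cards: list of card values, with every ace given as 1.
    Follows the fixed house rule: hit until 17, then stand,
    except keep hitting on a "soft 17" (an ace counted as 11).
    """
    total = sum(cards)              # count every ace as 1 first
    aces = cards.count(1)
    soft = False
    if aces and total + 10 <= 21:   # one ace may count as 11 instead
        total += 10
        soft = True
    if total < 17:
        return True
    return total == 17 and soft     # hit a soft 17, stand otherwise

# The dealer's whole decision process is this one deterministic rule:
print(dealer_hits([10, 6]))   # hard 16: must hit  -> True
print(dealer_hits([1, 6]))    # soft 17: must hit  -> True
print(dealer_hits([10, 7]))   # hard 17: stand     -> False
```

A computer could indeed do it: there is no branch in this function where anything like a choice happens.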

We now come to the main point I would like to make about equating consciousness to free will or decision making. Assume for the moment that an amoeba is conscious. If that is unacceptable to you, you may replace the amoeba by the simplest possible organism that you would consider conscious. Suppose that a few centuries from now our scientific knowledge is so advanced that we can finally understand how an amoeba works. We know the function and purpose of every protein, every molecule and every atom of the amoeba. In fact, we are so advanced that we can simulate an amoeba and calculate precisely what its next action is going to be. We understand how the amoeba makes its decisions and can predict its next decision. Would that mean that the amoeba is not conscious? If we know exactly how an organism operates, does it make that organism into a robot, a mindless zombie? Once we fully understand the mechanisms responsible for making the amoeba's decisions, it seems to stop being conscious! What about that alien who is observing the Mars rover? At first, he thinks that the rover is conscious, but then he lands on Mars, examines the rover and finds out that it's just a robot - it is unconscious because the alien knows exactly how the rover operates. Is consciousness a property only of organisms that we do not fully understand? If you say yes, then we clearly have a problem. That would mean that the definition of consciousness is "the ability to make unpredictable decisions," and that is a very bad definition.

A definition like that is completely useless and unproductive. It is saying that studying consciousness is pointless. As soon as we understand what it is, it ceases being consciousness. It is elusive and unknowable by definition. Fortunately, there are some holes in my reasoning above. First of all, some people believe that we will never fully understand how an amoeba operates. Look at Heisenberg's uncertainty principle. It is a fundamental, testable law of physics that one is never able to know both the exact position and the exact momentum of an electron. It's impossible. Therefore, we will never be able to understand everything there is to know about any single atom, much less an amoeba. This is our first encounter with quantum mechanics in the context of consciousness, and it is only the beginning. We will get to other fascinating connections soon.

To summarise, if we assume that consciousness is the same as free will (the ability to make independent decisions), then first of all, we constantly find ourselves asking the question, "Where is consciousness located?" Once we localise consciousness to a physical body, like a human brain, or a cockroach, or an amoeba, we must immediately conclude that the process by which that body makes decisions is unknowable to us. Otherwise, our definition of consciousness makes no sense at all - we are forced to define "conscious" as "unpredictable", and that just seems wrong. We, as humans, are not conscious just because we are unpredictable. So (at least) one of our assumptions must be false. I would like to believe that eventually, the world can be understood by a human mind. The thought that some things are fundamentally incomprehensible bothers me as a scientist. Hence, I would rather believe that defining consciousness in terms of free will is not the right way to go, and we should be looking for another definition.


At 11:12 PM, Blogger Alex Lopez-Ortiz said...

Igor: quite a few of these "are computers intelligent?" discussions seem rather unscientific. Participants start from the belief that computers are/aren't intelligent and then try to define terms to match their preconceived notion. This amounts to biased research.

Consciousness can be defined either as "that which separates humans from computers" which leaves open the possibility that consciousness is the empty set, or, alternatively, it is defined on its own, and once the definition has been put forth we evaluate to see if computers meet it or not.

It seems to me that in the previous discussion you move back and forth between the two, which is not consistent.

I'd also like to pick on the categorical statement that computers "almost certainly" aren't conscious. You see, if we were to look at a couple of cells under a microscope, we would say that they almost certainly aren't a tree. However, unbeknownst to us, it turns out that they are elm tree cells, and a few billion of them put together are a tree.

So yes, computers today certainly do not seem conscious, but perhaps this is only a matter of more CPU power and a few more AI breakthroughs, just like the computer chess champion eventually came about after enough CPU power and enough theoretical advances had become known in the world of computer chess. Perhaps then we will have a program that fulfills all the practical definitions of self-consciousness.

At 1:19 AM, Blogger Abednego said...

Thanks for your comment, Alex. Let me try and reply to each of your points.

It is a great challenge to find the right words to start a scientific discussion about intelligence or consciousness. So far, I have not mentioned any of the scientific theories people have proposed to study these terms. I have always had a problem with these theories because they try to study something that they don't attempt to define. That is why I am striving to find any definition at all that would be acceptable. Yes, some of these approaches are biased, but my hope is to throw out some ideas, ask enough questions and collect enough facts so that we can finally come up with an acceptable definition of consciousness. Perhaps you are right that the way I am approaching these terms is biased and wrong, and that is why they evade definition.

I want to keep an open mind, but I disagree with your example of elm tree cells. I think that if you take a billion cells and put them together, you will not get a tree. Here is a better example. I think that if you take a billion human cells of various types and carefully construct a human body out of them, you will get a dead human body. Also, if you take carbon, nitrogen, oxygen, water and other molecules and build an amoeba out of them, you will get a dead amoeba. I am not certain, but I just have a gut feeling about that.

You are correct - there is a theory that says that any system, if complicated enough, will acquire consciousness, and that includes computers. There is also a theory that says that consciousness is an illusion. I have no idea which is correct; no one does. The reason I claimed that computers are not conscious is the fact that there seems to be a fundamental difference in the way computers and humans think, and I would like to try and define what that is. As you said, it could turn out to be an empty set of differences.

Also, suppose that a computer that is complicated enough can be declared conscious. We do not seem to call today's computers conscious. Why not? What will be that breaking point?

At 10:26 AM, Anonymous Anonymous said...

Hi Igor,

I found my way over here via Anthony's blog. I don't have anything to add to the current discussion, but in the spirit of de-lurking week, I thought I'd say hello. Plus your blog has a certain aesthetic quality, it just kind of makes me feel at home.

Anyway, I also enjoy an interest in consciousness, and hope to be able to play devil's advocate with you at some point. In the meantime, I need to check out that Consciousness DVD set, it looks very interesting.

At 11:17 AM, Blogger Abednego said...

Thanks, kurt. I checked out your blog. Looks pretty cool. I'm trying to prove that your P vs. NP proof idea will lead nowhere. ;-) Fortunately, I've been unsuccessful so far.

At 3:18 PM, Blogger Alex Lopez-Ortiz said...

so that we can finally come up with an acceptable definition of consciousness.

And I applaud you for your efforts. Let's just try to keep the bias out of the equation. Say, assuming that computers are not conscious can be helpful to narrow your search for a definition of consciousness, but it shouldn't be the driving consideration, as this might lead you to reject an otherwise sensible definition of consciousness for the mere reason that it might be jarring to your preconceived notions. A good example of this is the definition of continuity in calculus, which allows some weird functions to be continuous.

Also, suppose that a computer that is complicated enough can be declared conscious. We do not seem to call today's computers conscious. Why not? What will be that breaking point?

Twice in my life I've seen the hint of consciousness in computer programs. What I mean by this is that these computers performed a specific, highly intelligent action that, were they to repeat it on a consistent basis, would certainly give one the illusion of consciousness.

Would this be real consciousness (at least as far as a functional definition is concerned) or simply an illusion of consciousness a la Eliza or state-of-the-art chat-bots? I do not know.

Lastly, perhaps it is too early to define consciousness. Say, imagine trying to pin down the definition of disease before pathogens were discovered. This would have been a fruitless task (and it was). For one, we were trying to isolate a single definition for what in reality are at least two (and perhaps as many as six) different and independent phenomena, that is

(i) a disease caused by an external agent (virus, bacteria, prion, chemical poison) and

(ii) a mutation causing sickle-cell anemia, cancer or some other such internal disease.

Indeed, the search for AI seems to already have produced at least one such split, which you accurately pointed out: consciousness and intelligence. Artificial (functional) intelligence has been achieved at least in restricted domains (e.g. Deep Blue); consciousness, with its ability to adapt and react to reality, has not.

At 12:06 AM, Blogger Evgueni Naverniouk said...

Right on. Someone to talk to about this.

Firstly, I think you absolutely must bring back the idea of decision-making when defining consciousness. Alex's example (or yours, I forget whose) of, instead of just looking at an object, choosing to look in a particular direction, or choosing not to look at all. Something that can't be programmed by any known computer, because it isn't random and it isn't not random. It just is. That definitely needs to be brought up, because that for me is the breaking point where computers simply can't ever be conscious (at least not in the way they work today).

Secondly, I still really dislike how you completely dismiss the idea of consciousness being simply "making intelligent decisions". So what if that means that when we figure out how the amoeba fully works, it is no longer "conscious"? We still haven't been able to do that, and that's just an amoeba. Think of how hard it would be to try and figure that out for a human. To me, this definition of consciousness really doesn't have any flaws. I mean, the only problem that you see with it is that it means that consciousness is not stationary. Consciousness is something we made up to describe an organism that seems to operate in a particular "lively" way. I think that this particular "way" can be explained and calculated for organisms, eventually even for organisms as large as humans.

At 2:32 AM, Blogger Abednego said...

Thanks again, Alex. Yes, I agree with your criticism about bias. Thinking back on it, I accepted as "almost certain" the claim that computers are unconscious because of frustration of not being able to find a single uncontroversial property that consciousness must have. So I chose something that, in my mind, would cause the least disagreement. That didn't work. Now I'm back to the uncomfortable state where I know this word: "consciousness", but I have no idea how to even begin to define it. Any attempt seems to immediately run into a wall of counterexamples.

Perhaps you are right, and we are just not ready to define it yet. On the other hand, we should certainly try. There were two reasons I wanted to start this blog. Firstly, I wanted to summarise the various theories of consciousness (which I will get to in Part 4) in one place for myself. Secondly, I felt frustrated that all of these theories tried to explain the mechanics of consciousness without defining it. Now I see why.

I really like the parallel with the word "disease". It brings hope. ;-)

At 2:41 AM, Blogger Abednego said...

Jenia, I see your point. The reason I disagree is that you propose a sort of a "floating" definition for the word "consciousness". When (if) we completely understand the amoeba, your definition will change. When we understand the cockroach, it will change again, and so on. The whole point of defining it is to try and understand what it is. If the definition is not fixed, then it doesn't seem to help in understanding. At least, that's my feeling of it.

I also see a contradiction between your two paragraphs. In the first one, you say that there is a "breaking point", and on one side are the conscious humans and on the other side are the unconscious computers. Then in the second paragraph, you say that the mechanics of consciousness can eventually be explained for all organisms. (I'm simplifying here, but I hope that I understand your points correctly.) That, to me, seems to imply that there is no fundamental difference between the things on the two sides of the "breaking point". We just need to wait long enough for science to "catch up". Do you think that is a contradiction?

At 10:35 AM, Blogger Evgueni Naverniouk said...

I don't think it's a contradiction at all. I actually believe that tables probably do have a "consciousness". Consciousness is directly linked to intelligence. The more intelligence you have, the more complex the object's consciousness. Probably something with a huge relationship like x = y^999. So because an amoeba's intelligent capacity is so much smaller than a human's, we will surely be able to understand it much sooner than we can understand a human's. The reason for that is that consciousness is decision-making, and decision-making depends on your intelligence. The more intelligent you are, the better choices you can make, since you are aware of more possibilities.

When I speak of the "breaking point", that is actually the place where we pass the barrier between observing an object behave and being able to fully understand it. For example, we can build a table, and it will function exactly like any other table. We can't build a working amoeba yet. Just like we can't build a computer that's intelligent in everything we want it to do. Surely we can make a chess-playing computer that can rival Kasparov, but can that same computer beat someone at backgammon too? No, that would require doubling everything in the computer's programming to fulfill the new game's requirements. Now that's just two games. How many things can a human do? You can clearly see that trying to compute intelligence will eventually lead us to that particular kind of decision-making that is responsible for conscious behavior, and that is extremely hard.

At 10:49 AM, Blogger Evgueni Naverniouk said...

Consciousness is definitely just an illusion. Just like your free will. You said we shouldn't be afraid if there is no free will; well, you shouldn't be afraid if there is no consciousness. ;)

At 1:30 PM, Blogger Abednego said...

Jenia, I think that you will like Part 4. I try to make many of the same points there.

At 2:01 PM, Blogger Alex Lopez-Ortiz said...

Perhaps you are right, and we are just not ready to define it yet. On the other hand, we should certainly try.

I'm not convinced that we ought to try to define it just yet. Imagine if NASA had set up a committee to define lunar dust in 1961. This would have been a foolish task. The best time to define lunar dust was after the lunar landing, when samples were available.

This does not mean we shouldn't study consciousness and perhaps even theorize about its nature (NASA, after all, theorized about the nature of lunar dust to ensure the safety of the landing area).

But overall I think we would be better off if we start from the assumption that we are nowhere near ready to give a final definition. Rather, we are barely beginning the process of suggesting possibilities with the intention of narrowing down the field. We would be further ahead in this scientific endeavour if we focus on finding things that consciousness clearly isn't and devising scientific tests to confirm those statements. Once we have narrowed the field sufficiently (a decade or two from now) we can then study what is left, understand it, and only then define it.

At 2:16 PM, Blogger Abednego said...

Ok. I think we have only a minor difference of opinion here. My argument would be that attempting to define what consciousness isn't is the same as trying to define what it is. It's just a slower, more careful way of doing the same thing.

In the same sense, we are probably still nowhere near being able to define what a disease is. We have some very good ideas and examples, but who knows what nasty things await us if we were to land on some distant planet. And for something closer to home, there is great disagreement on whether addictions are diseases or not.

Basically, I agree with you. I just think of defining and understanding as very similar concepts.

At 2:25 AM, Blogger JeremyHussell said...

My own working definition of "consciousness" has a lot to do with internal models of the world. By my definition, an amoeba or even a cockroach would not be conscious, because they simply react in hard-wired ways to their surroundings. Animals with brains complex enough to contain internal models of the world, however simple they may seem to us, can sometimes reason about their situation and react appropriately to situations they have never encountered before. The more complex the internal model, and the better the reasoning applied to it, the more conscious the animal or person.

Incidentally, under this definition, you can speak of things as being "conscious of the world" or "conscious of itself" (i.e. self aware). If they have an internal model of themselves about which they can reason, then they're self aware.

One should attempt to define these slippery words like "intelligence" and "consciousness" in terms of things that are as concrete as possible. We'll likely err, but at least we'll have a working definition that can be improved as we discover more about learning, reasoning, decision making, and any other processes that go on inside the brain.

At 2:47 AM, Blogger JeremyHussell said...

The Turing Test actually talks about a complete simulation of a human being. He asserts, in effect, that anything we cannot distinguish from a human being, we must by default treat as an (intelligent) human being. The chat interface is a modern simplification.

Are cockroaches or the Mars Rover intelligent? To a degree, yes. They both react to their environment in intelligent ways. Are they conscious? Well, I would argue that cockroaches aren't. They're mostly a bundle of hard-wired reflexes. They'll react in the same way to the same stimulus, even when they're in a situation where the reflex harms them. They're poor at learning.

There's a researcher at the MIT robotics lab who has had some success at creating robot insects by moving away from the model used by his peers, where the robots had a complex internal model of the world, to a system where every component of the robot reacted to its own sensors independently, with coordination handled by sensors that told one component about the state of another. For example, each leg had its own position and pressure sensors, and the motors in the joints were controlled by circuits that made their decisions based only on data from those sensors and some higher level information about the position of the neighboring legs.

There's some evidence that real cockroaches are wired in a similar way.

The Mars Rover I'm not sure about. It's possible it's conscious, in a very low-level way. On the other hand, it is also similar to a tool. It relays information back to Earth, where humans analyze it and send back commands that control the Rover's actions. In that sense, the Mars Rover is merely an extension of the human consciousnesses on the other end of the communications link.

The puzzle of whether cities should be considered alive, or even intelligent or conscious, is a good one. There are some people who view hives of social insects, e.g. a bee hive, termite mound, or ant hill, as a meta-organism, an organism made of smaller organisms. In this view, ants that are specialized to perform certain functions for the colony, and could not survive independently, are much like our cells, which are specialized to serve us to the point that they could not survive independently, despite the fact that there are single-celled generalists out there that can survive just fine by themselves.

You could extend this idea to think of human cities, or possibly societies, as meta-organisms. Most people perform such specialized jobs that they couldn't really survive without the rest of society. I certainly wouldn't know how to grow my own food or make my own clothes if civilization collapsed.

At 3:01 AM, Blogger JeremyHussell said...

It might be helpful to think of things this way: there are two fundamental things in the universe, space/time, and matter/energy. You can explain a lot of phenomena in terms of these. However, life, intelligence, and consciousness are most easily explained as patterns in the matter and energy spread through space and time. They are not inherent properties of the building blocks of the universe, but instead arise from complex patterns in the arrangements of the building blocks.

Perhaps that's too abstract, but I've found it useful. Thinking of life and intelligence in terms of patterns of information makes it obvious that there's nothing really mysterious about them. There's certainly a lot we don't know yet, but there's no fundamental barrier that will inevitably prevent us from understanding these things.

At 3:14 AM, Blogger Abednego said...

I think you are contradicting yourself. (I was waiting for this to happen. ;-))

First, you said that consciousness was not a binary property. Now you seem to be saying that amoebae and cockroaches are not conscious, while humans are. And you support this by saying that cockroaches do not build models of the world around them.

First of all, I think they do. They can learn to run into the same crack in the wall whenever I turn on the light. Second of all, how do you know that humans are not "hard-wired" the same way you believe amoebae to be? Maybe it only looks as if our brains are building some models, when in fact, it's all an illusion, and all of our thinking follows a well-defined deterministic algorithm. A part of that algorithm could be the process of remembering past experiences in the form of generalizations or models.

I think that the terms "conscious of the world" and "self-conscious" are misleading. They refer to intelligence rather than consciousness. Humans reason about the world around them using a set of higher-order abstractions. That is what we mean when we say that humans are more intelligent than cockroaches, who reason about the world using very low-level concepts (for example, "run towards the dark spot").

I think that having a model of the world and a model of self is an attribute of intelligent beings, not conscious ones. For example, an anti-virus program works by building a model of the computer. In very simple terms, it learns what normal-functioning programs look like. Then if it meets a program that exhibits signs of infection (one that conflicts with its internal model of a healthy program), it disables and disinfects that program. The anti-virus not only maintains a model of a healthy computer; it actively uses that model to detect and react to irregularities. In that sense, I would call an anti-virus intelligent because it uses a high-level abstraction (a model) to search for patterns and to make decisions.
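The anti-virus analogy can be caricatured in a few lines of Python. This is a deliberately toy sketch, not how real scanners work: the "model" of a healthy computer is nothing more than a table of known-good checksums, and the file names and contents are made up.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """The scanner's 'model' of a program: its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

# Build the model of a healthy system from known-good program images.
healthy_model = {
    "editor.exe": fingerprint(b"original editor code"),
    "browser.exe": fingerprint(b"original browser code"),
}

def scan(name: str, data: bytes) -> str:
    """Compare a program against the internal model and react."""
    if name not in healthy_model:
        return "unknown"
    if fingerprint(data) != healthy_model[name]:
        return "infected"   # the program deviates from the model
    return "healthy"

print(scan("editor.exe", b"original editor code"))            # healthy
print(scan("editor.exe", b"original editor code + payload"))  # infected
```

The program uses an abstraction of the world (the model) to make its decisions, which is the sense of "intelligent" meant above - yet nothing here tempts us to call it conscious.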

Instead of saying "self-aware", I would rather use the term "intelligent enough to have the concept of self". And instead of "conscious of the world", I would rather say "intelligent enough to be able to operate with high-level concepts of real-world objects".

I think intelligence is all about the concepts and abstractions that a being uses to make decisions, and consciousness is about the big Why - why do we make any decisions at all? Why do we decide to say one thing and not another? Why does the cockroach run back into the crack in the wall? Is it a hard-wired instinct of running to a dark place, or is there a little conscious demon in its head saying, "I think it's a good idea to run over there now."

It's no different with humans. Somehow, I decide to go to school in the morning. I'm intelligent enough to evaluate several options of how to get there (subway, bus, bike, cab, walking) and pick one that, according to my model of the world, will minimize my travel time and cost. But what is it in my brain that made the decision to go to school in the first place? Is it a hard-wired instinct of following the daily routine, or is it some "demon" making a decision? Do I have free will to decide whether I go to school tomorrow morning, or am I a machine following instructions? That is what the question of consciousness is. Whether I have a simple or a complicated model of the world (or no model at all), I think, is irrelevant.

At 4:32 AM, Blogger JeremyHussell said...

Crud. This is why I sometimes wish we'd just abandon these fuzzy words. They make it difficult to talk about the world as it really is. They promote misthoughts.

I don't believe consciousness is a binary distinction, despite my difficulty in writing about it that way.

As for decisions, who says they're anything special? What if there's just a threshold, and when you evaluate a function determining cost, if you stay under the threshold you do one thing, and if you go over, you do the other?

That looks like a decision to me, and, in your own words, "no one is making any decisions."
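The threshold mechanism described above can be sketched in a couple of lines. The cost numbers, threshold, and action names below are invented purely for illustration:

```python
# A "decision" as nothing more than a cost function and a threshold,
# as described above: stay under the threshold and you do one thing;
# go over it and you do the other. All values here are illustrative.

def decide(cost, threshold=10.0):
    return "walk" if cost < threshold else "take the subway"

print(decide(3.5))   # under the threshold
print(decide(42.0))  # over the threshold
```

Nothing in this function looks like a "chooser", yet its output is indistinguishable from a choice.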

(Ok, so you were describing the materialist theory, and you were careful to say that you don't necessarily agree with that theory. I do agree with it, for now.)

At 5:12 AM, Blogger James said...

> Is consciousness a property only of organisms that we do not fully understand?
I'm really tempted to say yes!

> If you say yes, then we clearly have a problem.
Sorry about that :-)

> It is saying that studying consciousness is pointless.
Or maybe it is saying that consciousness is just an illusion, same for free will.

I'm curious to read an alternative definition of consciousness!

In the comments there is a discussion about whether a computer is conscious or not. I remember that when I started using them, it seemed like magic to me. That was before I started programming... Now I know it is just a bunch of if...else statements and I don't see the magic anymore, but my mom does! If you combine enough if...else statements together, you obtain sophisticated reasoning. And to a human being who doesn't understand how it works, the machine seems to be intelligent. But we know that all of it is artificial, as the reasoning is hard-coded by a human. Therefore, the human impressed by the intelligence of the computer is in reality seeing the intelligence of the human who coded the program. As the person experiencing the computer magic is not aware of that, to him, the computer appears to be intelligent.

For me there are two conclusions from this story:
1) the computer is not intelligent (as a programmer I can assure you that MY computer is not intelligent :-)
2) complex reasoning appears to us to be the manifestation of intelligence (or consciousness). I define complex reasoning as reasoning where we don't know how the entity (PC, human, etc.) reached its conclusion.
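The if...else point above can be made concrete with a toy sketch (all the rules below are invented for illustration): every branch was written in advance by a programmer, yet to a user who cannot see the source, the combined behavior can look like reasoning.

```python
# A toy chain of hard-coded if...else statements. To someone who
# cannot see this code, the "advice" may look like reasoning; to the
# programmer, it is obviously just pre-written branches.

def advise(temp_c, raining):
    if raining:
        if temp_c < 5:
            return "stay home"
        return "take an umbrella"
    if temp_c > 25:
        return "wear sunscreen"
    return "enjoy the walk"

print(advise(20, raining=True))
print(advise(30, raining=False))
```

Whatever "intelligence" the user perceives here belongs to whoever wrote the branches, not to the machine executing them.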

Curiously enough, I'm reaching the point where you ask the question "Is consciousness a property only of organisms that we do not fully understand?" again. But now I think that the problem lies not with the answer (yes) but with our limited capacity for understanding what we are. Historically, human beings have seen magic everywhere: in animals, plants, other humans, the sun... In fact, something is magic until we find out how it works. Once we find the trick, the magic disappears (at least for the ones who know the trick).

However, ask yourself the following. Does knowing the trick change anything? Is it less wonderful because we know how it works? Do you value the 'how it works' or the output more? Let me tell you that you have no choice. You have to value the output; otherwise you will not be able to continue the search for consciousness. Because what we're trying to do is put consciousness in a box. In other words, we're trying to understand the 'how it works', we're trying to kill the magic, and by doing this we're reducing the human being to a bunch of if...else statements.

That's why I value the output: I don't want to consider myself a 'robot' even if I am one. That's also why so many human beings will be too afraid to lead such a search. If one day we find an answer to our question, the magic may be lost for some of us, but not for the other human beings. In addition, we will be able to bring even more magic to the lives of those other human beings, as we would have the ability to put intelligence where today there is none.

> Do I have free will to decide whether I go to school tomorrow morning, or am I a machine following instructions?
For me, we are machines following instructions and living in a context that greatly influences our behavior. Many people won't agree, because it is not so obvious. The instructions are a little bit different for every one of us. We are all a little bit different (biologically), so with the same input we won't produce the same output. Our personal history is also different, and that influences the output again. Finally, the context, the societies we live in, are all a bit different. But none of this changes the fact that we are machines (albeit reconfigurable to some extent) and that there are instructions. In the end, though, this is not the fundamental part. The fundamental part is the output we are able to produce. The output has value, no matter how it has been produced.

At 12:05 AM, Blogger mattlesnake said...

Your thought experiment about NYC jogged a thought. What if consciousness is simply the intersection between mood/emotion and instinct, and that intersection is most profoundly demonstrated in human beings?

Starting with a human and extrapolating upwards to family, kinship groups, societies and nations, as you progress upwards you see a lot less of what we might call directed action or thought, and a lot more structural elements that are based on mood. Cities and nations are often described as 'optimistic', 'traditional', 'depressed' (even 'angry'). The alien might have described post-9/11 NYC as 'in shock'.

Extrapolating downwards, snails may have some choice-based actions, but it's likely that a lot of their actions are based on instinct. As you go further into the microscopic, cells, bacteria and such likely operate almost entirely on genetic programming.

Human beings have both emotion and instinct. From the former come choices; from the latter, drives. When they merge, you get consciousness.

At 2:05 AM, Blogger James said...

Hi mattlesnake,

you said some interesting things, but I don't agree with them. I would put emotion and instinct in the same bag:
- both drive you.
- you don't choose your emotions or your instincts.

In the decision-making process, the consciousness evaluates several options and chooses one. Anything that interferes with this process (like emotions) removes options from the table and thus makes you less free to choose.

Ironically, our capacity to act logically, like a robot, is what makes us a little bit free. That said, freedom to choose is not absolute. We are freer in some situations than in others.

At 2:17 AM, Blogger Abednego said...

I wouldn't be so quick to dismiss mattlesnake's insight. I think that emotions and instincts are different.

Instinct is a mechanism for making decisions quickly. If I'm walking down a narrow street and there is a person walking towards me, I will instinctively keep to the right to avoid a collision, without giving it much thought. (And if I were in England or Japan, awkwardness would follow.) There is no emotion involved. I don't "feel" like keeping to the right. I don't do it because I "like" it. My brain subconsciously makes the decision to keep right instead of left. That's instinct.

Emotion is another word for feeling. It's hormones. Love, fear and curiosity are emotions. I think of them as inputs to the decision-making brain. I believe that I can control my emotions - I don't let them make any decisions for me. (Perhaps, I'm being naive here.)

At 10:52 AM, Blogger mattlesnake said...

James I have to disagree. Instinct is automatic, sort of like breathing. You don't think about it. Emotion and choice are the same thing. Human beings don't ever make 'rational, unemotional choices.' They make choices for emotional reasons and then use logic and reason to support them.

I believe that the emotional aspect is what separates us from lower animals and is very different from pure instinct (i.e., I don't feel like having a heartbeat today; I just do), although negative emotions, like depression, can eventually affect instinctive actions, like a heartbeat.

At 4:02 AM, Blogger James said...

You know, mattlesnake, what I find interesting? That we manage to disagree on something I consider a simple and clear concept :-) Obviously I must be wrong and it must not be so simple; otherwise we would not be discussing this.

When you say that emotion and choice are the same thing, I would ask you: who is choosing? Since you don't create your emotions voluntarily (they just arise), the choices they lead you to make are not an expression of free will. Of course, some individuals can control their emotions to a certain extent. But when I say that they can control their emotions, I mean only after the emotion is perceived by their consciousness. By then the emotion is already there, and the consciousness can sometimes act to restrain or enhance it. But this is not possible for everyone, every time.

When you say that human beings "make choices for emotional reasons and then use logic and reason to support them", you are right. This is a limitation of our species. For many human beings, many decisions are made like that. However, what you describe is the absolute opposite of free will (they have not freely chosen the emotions that trigger the choice).

Lastly, when you say that "the emotional aspect is what separates us from lower animals", I think it is safe to say that this is wrong. Not every animal may have a consciousness comparable to ours, but many animals (if not all mammals) have emotions.

If you want to find a difference between us and them, you would do better to point to our logic, analytical skills, complex language, creativity, or what we call our consciousness, in my opinion. The main difference between us and them is our collective capacity to build a large database of knowledge and to pass it on to future generations (standing on the shoulders of giants).

At 2:44 PM, Blogger mattlesnake said...

Hi James,
I agree with most of what you said, but I don't agree that there isn't choice. We do make choices for emotional reasons, and then use logic and reason to justify them, but that doesn't mean we don't have competing emotions. An analogy would be when you are torn between two choices. You have multiple emotions, but there is choice in picking which one to listen to and act on.

Ultimately I think this whole consciousness conversation is really silly. The Buddhists have it right. The essence of the human being/soul/consciousness/whatever is simply that which observes and chooses what to observe. Those billions of decisions throughout a lifetime create all of the conditioning and programming that we want to call higher thought, including emotions. And you can choose your emotions, not on the spur of the moment, but through conditioning. Spend time around a bunch of negative people, or work in a failed state, and you'll see what that negative influence does to your emotions. You can condition emotions, mostly through what you choose to observe.

I also realize that other animals have emotions (my dog smiles all the time when happy, as do other mammals), but as you descend the chain of complexity, I think that lower life forms don't have emotion but instead operate on instinct (insects and such). Emotion is a higher-order feature.

So humans are higher-order emoters with instinct. I do agree there are many other things that set us apart, including the things you list.

For me the real question is more about the core...the essence of us all. The observer/chooser. That is the 'conscious' part of us. What is observing?

For me the best explanation is that it is a mechanism the universe has evolved for examining itself.

NPR had a good segment on something related to this idea: Why is the Universe Right for Life?

At 2:54 AM, Blogger James said...

Hi mattlesnake,

I like the Buddhist thing: "which observes and chooses what to observe". I didn't know about it. I like the absence of purpose of this answer.

The "what is observing" question is a little bit like asking "Why are you living?" or "What is the purpose of life?". Your answer is 'it is a mechanism the universe has evolved for examining itself.' Although I find it interesting, I think that it doesn't really make sense, because it assumes that the universe wants to examine itself. How could the universe want anything if it is not conscious? And if it were conscious, it could do the job all by itself...

What if the answer to this question is that things just are? I think that is an unsatisfactory answer for a human brain, but that doesn't make the statement false.

