Thursday, January 12, 2006

Part 4: The "no magic" theory

I will start my survey of the theories of consciousness with materialism - the "no magic" theory. Materialism is the child of scepticism and a scientific principle called Occam's Razor, which says that of all the plausible theories, the simplest one is probably correct. "Simplest" here refers to the number of assumptions made by the theory. In the context of consciousness, materialism is often referred to as eliminative materialism, a term associated with Paul and Patricia Churchland.

A true materialist believes that since over the course of millennia, we have found no evidence for the existence of "God", or "souls", or "supreme beings" of any kind, then those things do not exist. The only things that do exist are material - that is, made of matter. Tables are made of matter, and humans are made of matter. There is no reason to assume that consciousness (whatever it is) is not made of matter also. Note that matter in this sense includes energy. In fact, let's use the word "matter" to denote anything that can be detected and measured by physical instruments.

If consciousness is material, then it is located in the brain. That is because you can replace almost every other part of your body without affecting your consciousness. Well, what are the things we have in our brains? There is the neocortex, which is responsible for building abstractions and interacting with our senses, speech and motor functions. There is the "old brain" - everything beneath the neocortex - which is responsible for things like our instincts. And then there is this strange thing called consciousness that seems to be responsible for making all of our decisions. We cannot detect a "soul"; therefore, it is not there.

According to materialists, such phenomena as belief, intention and love can be explained in terms of physical and chemical processes that occur in the brain. Given enough time, scientific advances will let us understand everything about the basic functions of the brain. Long before that, we will understand completely how simpler organisms work. Take, for example, a virus. There is debate among scientists about whether a virus is a living organism. Why is that? It is probably because we already understand a great deal about how a virus operates. A hepatitis C virus is a tiny ball built from about ten kinds of proteins, wrapped around a single RNA molecule. We know how it reproduces by infecting living cells. We know most of the chemical reactions that are involved. In some sense, adding a virus to a living cell is no different from adding vinegar to baking soda - there will be a chemical reaction, and we know its outcome. There is nothing special about it. A materialist would say that eventually we will be able to understand humans as well as we understand viruses.

The main consequence of this theory is the absence of free will. In short, we are all pieces in a game whose rules are the rules of nature, and our whole life is predetermined from birth to death. All of the decisions that we make have already been made, and our choices in life are merely an illusion. Consciousness is an illusion. Although there are some scientists who try to remain materialistic while side-stepping the issue of free will, it is very difficult to avoid this conclusion.

Consequently, the main argument against materialism attacks the absence of free will. First of all, people find depressing the suggestion that they have no control over their own lives. If that is so, then why should we try? Why not quit our jobs, relax and have fun? It's all predetermined anyway. I think that this line of reasoning is a mistake. So what if free will is an illusion? Since we cannot predict the future - since there is no way to read our story's ending - the fact that the whole story has already been written is inconsequential. As far as I can tell, my decisions have consequences, and that is what matters: I'm not jumping out of the window because I know that the outcome would be unfavourable.

So what is the mechanism behind decision-making? Here is a thought. As I have mentioned above, the brain has consciousness (for making decisions), intelligence (for allowing us to operate with abstract concepts) and instincts. What are instincts? First of all, we have the survival instinct. Why do we have it? Simple. Evolution. Any species that didn't have the survival instinct didn't survive very long. Next, we have sexual instincts. Same deal - evolution. I claim (and I'm surely not the first to claim this) that every desire you have is the result of some instinct. We have instincts that drive us to try to understand the world around us, instincts to be a good person and make the lives of those around us better, etc. Some of these are the result of evolution, some are imposed by society and family, and some just seem random (different people can have diametrically opposite instincts).

So in the spirit of Occam's razor, here is a dead-simple theory of consciousness. Our large set of instincts defines an extremely complicated "objective function", and every decision that we make in our life is meant to optimise this function. The cortex plays a big role in it because, first of all, it allows us to operate with very vaguely defined instincts in terms of highly abstract concepts. And second of all, it allows us to evaluate the objective function. Remember Jeff Hawkins' memory-prediction framework, where sensory inputs flow up the cortex hierarchy and predictions flow down? Now suppose that you are faced with a decision. Then the cortex can predict what would happen as a result of deciding A and what would happen as a result of deciding B. In both cases, your brain would evaluate the effect on the objective function and choose the better option.

For the mathematically inclined, what I am claiming is that consciousness is the process of optimising an objective function using a brain, where the objective function is given as a complicated weighted combination of our instincts, and the optimisation is done using the forward Euler method performed in the neocortex. Obviously, the no-free-will objection is still valid, but the main argument for this theory is that it is extremely simple and does not rely on voodoo concepts like "elementary particles of consciousness" and "microtubular bridges between the quantum world and the classical world". Of course, those concepts are fascinating, and I will get to them shortly.
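As a toy illustration of the claim, here is a minimal sketch in Python. Every instinct, weight and predicted outcome below is invented purely for illustration; this is not a model of a real brain, just the shape of the theory: instincts score predicted outcomes, a fixed weighting combines the scores, and the action with the best combined score wins.

```python
# Toy sketch of "decision-making as optimising a weighted objective".
# All instincts, weights and outcomes are invented for illustration.

# Each instinct maps a predicted outcome to a score in [0, 1].
INSTINCTS = {
    "survival":  lambda outcome: 0.0 if outcome["fatal"] else 1.0,
    "comfort":   lambda outcome: 1.0 - outcome["harm"],
    "curiosity": lambda outcome: outcome["novelty"],
}

# The "objective function": a fixed weighted combination of instincts.
WEIGHTS = {"survival": 100.0, "comfort": 10.0, "curiosity": 1.0}

def objective(outcome):
    return sum(WEIGHTS[name] * score(outcome)
               for name, score in INSTINCTS.items())

def decide(predicted_outcomes):
    """Pick the action whose predicted outcome maximises the objective."""
    return max(predicted_outcomes,
               key=lambda action: objective(predicted_outcomes[action]))

# In the theory, supplying these predictions is the cortex's job
# (the "down" direction of the memory-prediction framework).
predictions = {
    "jump out the window": {"fatal": True,  "harm": 1.0, "novelty": 0.9},
    "stay at my desk":     {"fatal": False, "harm": 0.0, "novelty": 0.1},
}
print(decide(predictions))  # -> stay at my desk
```

The point of the sketch is only that nothing in the loop requires a decision-maker: given the predictions and the weights, the "choice" is a mechanical argmax.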


At 2:54 PM, Blogger Evgueni Naverniouk said...

Err... I still disagree. I think that the "no magic" theory still has a lot of problems that may need quantum mechanics or some biological explanation. All the materialistic theory does is define the properties of consciousness; it does not explain how consciousness works. Which brings us to the new, important question: "What is making those decisions?" If consciousness is decision-making, and there is no free will and everything is predetermined, who or what is making those decisions, and who is predetermining our actions? That is where Hammeroff's theories can come into play. But as you said in Part 1, what Hammeroff didn't do is define what consciousness is. I think the "no magic" theory defines it.

What the "no magic" theory does is tell us that consciousness must sit somewhere in the brain, since removing the brain removes decision-making (a table, for example, makes no decisions). So all those whacked-out theories of consciousness existing outside of the being, where all we do is tap into it, are completely contradicted.

At 10:11 PM, Blogger Abednego said...

No, you misunderstand. Materialism says that no one is making any decisions. It is like asking, "When I mix vinegar and baking soda, who is making the decision that there be a chemical reaction?" The laws of thermodynamics are making that decision - it's just chemistry, systems try to move to a lower energy state.

Along the same lines, I am proposing that all of the decisions that we make follow the "law of instinct" - every decision must satisfy our instincts. Basically, you calculate which of the two outcomes will make you happier, and you go with that one. By "happier", I mean that complicated function.

For example, if you are deciding between A and B, where A causes you to die and B doesn't, you will choose B. If neither causes you to die, you see which one causes you less harm. If there is still no difference, you see which one seems more morally correct to you, etc. Of course, it doesn't happen like that, as a sequence of questions. All of those considerations are combined, and you make a decision based on the combination of "will I die", "will harm come to me", "will harm come to others I love", "will it taste good", "will I become famous", etc. Once you have evaluated all of those things, you can decide whether A or B is better, and you go with that decision.
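One way to see why the "sequence of questions" and the "combined calculation" amount to the same thing: a strict priority order is just a weighted sum whose weights are separated widely enough that a clear difference at one level outweighs everything below it. A hedged sketch, with all considerations and numbers invented for illustration:

```python
# Sketch: a priority order of considerations ("will I die" first, then
# harm, etc.) expressed as a single weighted sum. Numbers are invented.

# Considerations for an outcome, highest priority first, each in [0, 1].
def considerations(outcome):
    return [outcome["i_survive"], outcome["no_harm_to_me"],
            outcome["no_harm_to_loved_ones"], outcome["tastes_good"]]

def combined_score(outcome, base=10.0):
    # Weight level i by base**-i: with scores in [0, 1] and a large
    # enough base, a clear difference at one level dominates every
    # level below it, so the single sum reproduces the priority order.
    return sum(c / base**i for i, c in enumerate(considerations(outcome)))

a = {"i_survive": 1.0, "no_harm_to_me": 0.2,
     "no_harm_to_loved_ones": 1.0, "tastes_good": 1.0}
b = {"i_survive": 1.0, "no_harm_to_me": 0.9,
     "no_harm_to_loved_ones": 0.0, "tastes_good": 0.0}

# Survival ties, so the next consideration ("no harm to me") decides.
best = max([("A", a), ("B", b)], key=lambda p: combined_score(p[1]))[0]
print(best)  # -> B
```

So the brain never needs to ask the questions one at a time; evaluating one combined quantity gives the same answer.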

There are a great many instincts to consider, but the brain is huge - it can do lots of computations simultaneously, so computing time is not a problem.

So to summarise, no one is making any decisions. Everything you do is based on what you feel is the right thing to do, and that is just chemistry and electrical signals in the brain - completely predictable. If I had the right instruments and could study your brain for long enough, I could predict every decision you would ever make.

Just a theory...

At 10:30 PM, Blogger Evgueni Naverniouk said...

Oh, ok. Yah, I like that. I always thought there still had to be some "demon child" sitting inside the brain making those decisions. But it makes a lot of sense if it's just instincts. Yah, I think I like that theory very much.

At 4:33 PM, Blogger twidjaja said...

That makes sense - but why can't we think of "free will" as some sort of matter that human beings can never understand? Just like consciousness ...

On a related note, people used to assert that the defining property of matter is that we are able to measure it with a physical device. This definition is rather unsatisfactory, as it may change through time. For example, before the advent of MRI, we were unable to detect or measure thoughts - now, as you previously mentioned, each concept is associated with a neuron in our brain, which is a cool concept.

At 10:04 PM, Blogger Abednego said...

To your first question, I would reply by saying that a materialist would be unwilling to accept that something material cannot be understood. The theory says that everything (including "free will" and "consciousness") can be explained in terms of physical, measurable phenomena. We need to agree on what it means to understand something. I would say that to understand is to be able to predict.

To your second point, I don't think that defining matter as something that can be measured is unsatisfactory. So far, this definition has been working, and there is no sign that it will need to change in the future. Also, if something cannot be measured with a physical device, then it cannot possibly have an effect on the material world (as a materialist would claim). Hence, phenomena that are not measurable are of no consequence and are thus non-existent, as far as we are concerned.

At 3:21 AM, Blogger JeremyHussell said...

This same argument, materialism vs the idea that there's something special and unexplainable in the universe (sometimes called vitalism), has been going on for a very long time. Vitalism once held that pretty much everything was unexplainable. Over the centuries, however, vitalism has been in continuous retreat in the face of materialism. Where once vitalism asserted that living matter was fundamentally different from inorganic matter, now we routinely study and synthesize organic molecules. Where once vitalism claimed that although our bodies are mostly chemical machines, nerves communicate through purely non-physical and unexplainable means, we now understand how nerves communicate electrically along their length and chemically from one nerve to another.

I think the idea that there are things we can't know is just a way of dealing with the uncomfortable fact that we don't know some things and the possibility that it may be impossible for us to know them in our lifetimes. Unfortunately, the coping strategy gets in the way quite a bit when the time comes to actually figure these things out.

At 3:28 AM, Blogger JeremyHussell said...

Viruses are a perfect example of how most of the concepts we've been discussing aren't binary, yes/no things. Are viruses alive? They don't grow, and they can only reproduce themselves by hijacking the machinery of a cell. On its own, a virus is practically inert, which is a pretty unalive kind of property. On the other hand, viruses are also quite chemically complicated, and when they hit the right environment, they do get copied.

Sometimes I despair at all this playing with the definitions of words, and conclude that we should ignore words, which are obviously just artifacts of our language, with their incomplete definitions and arbitrary divisions, and concentrate on trying to understand the world as a complete, extremely complicated system.

At 12:44 PM, Blogger James said...

I think I like that materialism theory. It makes sense to me. We are made of matter so our material architecture is producing these effects that we call intelligence, consciousness, etc…

One great way to see that we are machines is when we have an unfortunate accident like a cerebral hemorrhage (like my brother just did). When you see a person not ‘functioning’ correctly you realize that we are just machines. When everything is working correctly it is harder to see ourselves as just a [hyper sophisticated] machine.

About instincts, I’m wondering if anything controls them or if they are ‘hard coded’ or something close to that. I understand that they are important for a biological entity, but I don’t think that an artificial entity would necessarily need them. As a matter of fact, there are many senses that we possess that are probably not necessary to create an artificial consciousness. Does an artificial entity need to see, hear, smell, feel variations in temperature and pressure? I think that it is not necessary. If it can decode text inputs and communicate via textual output, that would be enough for the moment. The rest can always be added later. An artificial entity doesn’t have the same set of constraints as a biological entity.

At 5:12 PM, Blogger Leo said...

Hi! I found this blog yesterday, and I think it's very interesting.
I like to think that there is a partial materialism: all decisions are the outcome of a chemical reaction, a cortex calculation, or something else, but not everything is predetermined. If I make decision A, then I am changing something in my brain, something in reality, that will influence my behaviour and my future decisions, so my life will be different than if I had chosen option B.
I also like to think that materialism and a god can coexist. What if "the creation" was only a recipe (or a description) of how the chemicals or particles would operate? :D
Pure materialism makes consciousness and intelligence the same thing: a complex calculation.
I would not like to be a machine, so I'll stay with partial materialism.
Sorry for my English!
Leo from Argentina (CS student)

At 10:45 AM, Blogger santi said...

After reading almost all the posts in this blog (for which I have to congratulate you, since they are really well written and very inspiring), I have to say that I still don't understand what exactly you mean by "consciousness". In this post you say:

"As I have mentioned above, the brain has consciousness (for making decisions), intelligence (for allowing us to operate with abstract concepts) and instincts."

But I think that decision making is all about intelligence, not consciousness. Correct me if I'm wrong, but I have the impression that you associate "consciousness" with a "function", i.e. "decision making". But I've always thought of consciousness more as a "phenomenon", as "being aware of the decisions". It's hard to put into words, but I think of it more as "feeling that you are taking those decisions" than the ability to take them in the first place (which seems to me to be intelligence).

So, the way I see it, consciousness does not have any function at all. To me, consciousness is more about the feeling of "awareness" of the self - of being aware that you are taking decisions. In that sense I like the theory that someone posted in the comments to another post about "having a model of yourself in your mind". Although it's not entirely satisfying, since we can easily build machines with that capability, which "probably are not conscious".

But anyway, I might be wrong, but that's the way I've always seen it.

At 12:53 PM, Blogger Abednego said...

Yes, santi, I am trying to define consciousness in terms of behaviour rather than a feeling or a sensation. It is very difficult to measure and compare feelings, which makes it practically impossible to study consciousness scientifically. How can we possibly know whether your feeling of awareness is the same as mine?

Tying it to behaviour offers a starting point for scientific theories of consciousness and for reproducible experiments. It is much easier to compare and analyse behaviours than feelings.

Also, I think that consciousness is not a yes-or-no kind of property. It's a sliding scale -- some organisms are more conscious than others. People can also be more or less conscious at different times. If you don't think of consciousness this way, then you will be forced to draw a hard line between conscious things and unconscious things, and that's very difficult to do because you will always keep running into exceptions. (Are rocks conscious? Are cockroaches? Plants? Venus flytraps? Drunk people?)

At 6:36 AM, Blogger santi said...

Yeah, I also see it as a continuous thing. Defining it as a binary property makes no sense, since, as you say, where would you draw the line?

But trying to define something which is not a behavior in terms of behaviors is like looking in your bedroom for the keys you dropped in the closet, just because there is no light in the closet.

If we accept your definition of consciousness (decision making), then we can easily define tests to see whether certain objects have "decision making" capabilities. But my intuition tells me that that is not correlated with consciousness at all. And I think that's part of the problem.

I think consciousness is hardly definable in terms of behavior or other measurable properties. I don't have even the most remote idea of how to set up an experiment that will test for consciousness. But don't get me wrong: I don't believe in souls or supernatural things. I'm a strong physicalist myself.

At 1:05 PM, Blogger Abednego said...

So what do you propose? If we can't even decide on the terms to use for defining "consciousness", what hope do we have of studying it, or discovering any of its properties?

I'm not trying to define consciousness as decision making. I think it's something else (concrete and measurable), and maybe I will write a post on that.

My only point here is that if we can't figure out what it is, maybe we should start by looking at what it does first. This is a scientific approach to studying a phenomenon. If we can't yet figure out what makes consciousness exist, let's start by looking at the effects of consciousness and study those. How does a conscious being behave differently from an unconscious one? Maybe by starting there, we can discover something useful about the nature of consciousness.

At 3:56 AM, Blogger santi said...

I don't have any particular proposal. I just wanted to clarify what you mean when you talk in this blog about "artificial consciousness", since most of the things you mention sound like "intelligence" to me rather than "consciousness".

Under the typical definition of consciousness ("awareness by the mind of itself and the world"), consciousness is just a subjective experience and has no external manifestation. The natural way to study subjective phenomena seems to me to be through introspection.

One idea I've always had is that consciousness also has to do with "perception", i.e. in order to "be aware of yourself", you need to "perceive yourself". Thus, I don't think that an intelligent being without self-perception could ever be self-conscious. So it's as if the combination of intelligence + self-perception gives rise to consciousness.

But I don't really know if I'm talking nonsense. This is a tough subject! I'm an AI researcher myself, and I'm used to studying aspects of intelligence that are very easily measurable. So consciousness completely escapes the reach of my usual reasoning toolbox :)

At 4:11 AM, Blogger Abednego said...

If consciousness is entirely internal, then how can I know whether your consciousness is similar to mine? It seems even more difficult to talk about the consciousness of other, non-human organisms. How could we possibly know what their consciousness is like, if there is no way to detect it from outside?

Also, if consciousness does not manifest itself externally through behaviour, then why does it matter at all? If conscious organisms behave in exactly the same way as unconscious ones do, then Occam's Razor says that we might as well assume that consciousness doesn't exist. It's an inconsequential concept.

At 10:32 AM, Blogger santi said...

Concerning your question "how can I know whether your consciousness is similar to mine?": some years ago (not too many :)), when I was a PhD student discussing AI with my classmates, I heard that question lots of times. Or even "how do I know you are conscious at all?" But I've never heard an answer to that question...

Some years ago I attended a conference which happened to be co-located with a symposium on "AI and consciousness". I promised myself I'd read the papers once I got back home, but I never found the time to do that. Maybe now is the proper time to do so. This was it:

AAAI keeps pdf copies available of all the papers published in its conferences and symposiums. So, they are all easy to find through google scholar.

