Friday, January 06, 2006

Part 1: The Basics, or lack thereof

Before we can have a sane discussion about "smart computers," we need to define a few things. Unfortunately, that is where most of the troubles begin. How do you define "intelligence"? It gets worse - how do you tell whether something is intelligent? Is a dolphin intelligent? How about a dog? A crab? Most scientists have no idea how to define "intelligence", "consciousness", "awareness", "thought", "mind", "soul", "free will" or anything that has to do with that "stuff" that seems to make humans fundamentally different from trees.

On the bright side, we seem to be settling in on a few conventions, so things are not that grim. The first important idea that is not at all obvious is that there is a fundamental difference between intelligence and consciousness. Several speakers on the DVD set agree on this. Here is the argument they give. Intelligence is not an absolute quantity; it is relative. Instead of calling objects either "intelligent" or "unintelligent", we should say that humans are more intelligent than dogs, and monkeys are more intelligent than birds. Consciousness, on the other hand, seems to be a binary property - humans are conscious and tables are not. (Although there is disagreement on that point, too.)

But what about intelligent computers? The argument is that many computers are intelligent already. Take, for example, Deep Blue - a computer that plays on par with the best human chess players in the world. In the world of chess, Deep Blue makes intelligent decisions. If intelligent is the opposite of stupid, then it is hard to argue against that. This is where most people say, "Wait. That's not what we are talking about. Computers play chess using brute-force calculations. There is nothing intelligent about that. It's just memorization." But how could you tell? If the only thing you could see was the chessboard of the match between Deep Blue and Garry Kasparov, could you say right away who was playing which side? The point is that Deep Blue is making intelligent decisions.

Of course, there is something fundamentally different about the way Kasparov and Deep Blue think. That is the difference between intelligence and consciousness. The argument is that we should stop looking for Artificial Intelligence (AI) and start looking for Artificial Consciousness. AI is already here, and there is nothing special about it. It is clear that Kasparov is conscious, and Deep Blue is not. Now, if intelligence is the ability to make intelligent decisions, then what is consciousness? Humans are conscious. Dogs are conscious. Even bacteria seem to be conscious! If you look at some bacteria under a microscope, they wiggle. They move through water in ways that seem random. Some bacteria have the ability to swim towards salty water. If you have a water tank that is more salty on one side, these bacteria will find a way to get there. The bacteria are making a decision. This decision has been programmed into their DNA by evolution. Are these bacteria intelligent? Probably not. Are they conscious? Probably yes. Are computers conscious? Almost certainly not.
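The bacteria's salt-seeking "decision" can be pictured as a biased random walk, roughly like real bacterial run-and-tumble chemotaxis. Here is a minimal one-dimensional sketch; the gradient and all numbers are invented for illustration:

```python
import random

def salinity(x: float) -> float:
    # Hypothetical tank: the water gets saltier toward larger x.
    return x

def swim(steps: int = 500) -> float:
    """Run-and-tumble, crudely: keep going while salinity improves,
    tumble (pick a random new direction) when it gets worse."""
    rng = random.Random(42)  # fixed seed so the run is repeatable
    x, direction = 0.0, rng.choice([-1.0, 1.0])
    for _ in range(steps):
        before = salinity(x)
        x += direction
        if salinity(x) < before:  # things got worse: tumble
            direction = rng.choice([-1.0, 1.0])
    return x

print(swim())  # the walker ends up far toward the salty side
```

No memory, no model of the world, just a hard-coded rule - yet from the outside it looks like purposeful behavior.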

So what is consciousness? That is probably the most difficult question. Scientists call it the "hard problem", a term coined by David Chalmers. It turns out that defining intelligence is much easier than defining consciousness.

Here is my personal view on the situation. This view is likely to change. I think that "life", "consciousness" and "intelligence" are all different properties. Intelligence is the ability to make decisions based on sensory inputs and past experiences, an ability to learn. In that sense, computers are intelligent. Life is that which is studied by biology. Several definitions of life have been proposed over the centuries, and they are basically right. Things that are alive grow, reproduce and die. Life is what distinguishes a tree from a rock. Consciousness is what distinguishes animals from plants. Bacteria are conscious. Flowers are not.

Many scientists disagree with this view. There also seem to be gray areas on the boundary between alive and dead. Crystals grow and reduce entropy; are they alive? Worker bees and infertile people do not reproduce; are they alive? Viruses do not grow; are they alive? What about the possibility of silicon-based life forms? The boundary between plants and animals is not very clear either. What about mushrooms, or flowers that eat flies, or viruses again? Maybe our current definition of life is completely wrong. Finding new life forms on other planets would certainly help us clarify things and zoom in on the properties that are unique to life. There is also the possibility that it is all an illusion, and there is really no fundamental difference between a tree and a rock, but that argument sounds like giving up to me. There is clearly life on Earth. And there is clearly no life on the Moon. What's up with that?

Let's leave the concept of life to another discussion and move on to something that should be more interesting to humans. We, as humans, are not just alive. We are also conscious and intelligent. Before I talk about the "hard problem" of defining consciousness, let's get the simpler concept out of the way first - intelligence.

19 Comments:

At 1:56 AM, Blogger JeremyHussell said...

I disagree with your characterization of consciousness as a binary, on or off property. I think all of these concepts, life, intelligence, consciousness, and many others (e.g. complexity), are quite fuzzy concepts that shade imperceptibly into their opposites. There is no one point at which something becomes obviously conscious.

Indeed, some of these things, like intelligence, are not even linear scales that proceed smoothly from not intelligent to intelligent. Intelligence is much more complicated than that. One can be very intelligent in one area, while quite stupid in others. Deep Blue, for example, is very intelligent in the domain of chess, while quite stupid at just about everything else. Monkeys are dumber than I am at using cell phones, but smarter than I am at leaping from branch to branch and finding fresh fruit.

On the biological front, I disagree with the idea that consciousness is what distinguishes animals from plants. (Plus, bacteria are neither plants nor animals to a biologist.) Plants react quite intelligently to their environment. They just react more slowly than we can perceive (time-lapse films of plants growing or sunflowers turning to face the sun are really eye-opening) or on a biochemical level. Among biologists, plants are well known for being the world's experts at biochemical warfare against both insects and each other.

 
At 4:20 AM, Blogger Harrison said...

It seems that you define intelligence from the output that a program produces. I began to disagree with that view after I read about the Chinese Room argument. I believe that intelligence involves how a person solves a problem and the thought processes that take place; but this leads to another question: can intelligence be measured in definite units?

For people that haven't heard about the Chinese Room argument, it is a scenario: a person who does not know Chinese is locked inside a small room with a huge book. Someone from outside slips a message written in Chinese under the door. The person in the room looks in his book, which lists every possible Chinese message there is, and an appropriate response for each one. The person looks up the message he received, copies the reply onto a sheet of paper, and slips it back under the door.

The person outside the room has no indication that the person inside does not know Chinese. All the responses to his messages are perfect. Is the person inside the room intelligent as far as Chinese is concerned? Arguably, the answer is no.

But we are evaluating not the book, but rather the entity (the person). I do not think a chat bot such as ALICE, whose every response is preprogrammed for a particular input, should be considered smart. ALICE has convinced people that they were talking to a real person (the Turing test). Yet I do not believe that ALICE has true intelligence.
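Harrison's point about preprogrammed responses can be made concrete with a toy lookup-table chatbot. The rules below are invented for illustration, not ALICE's actual data (the real ALICE uses AIML pattern matching, a more flexible version of the same idea):

```python
# A toy preprogrammed responder: every reply is looked up, nothing is
# "understood". The rule table is hypothetical.
RULES = {
    "hello": "Hi there! How are you today?",
    "what is your name": "My name is Alice.",
    "how are you": "I am functioning within normal parameters.",
}

def respond(message: str) -> str:
    # Normalize the input, then consult the "book"; fall back to a
    # canned deflection when no rule matches.
    key = message.strip().lower().rstrip("?!. ")
    return RULES.get(key, "That is interesting. Tell me more.")

print(respond("Hello"))          # Hi there! How are you today?
print(respond("Do you dream?"))  # That is interesting. Tell me more.
```

Like the Chinese room, the table maps inputs to outputs with nothing in between; the question is whether any amount of such mapping would count as intelligence.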

Harrison

 
At 9:01 AM, Blogger Abednego said...

The Chinese room argument is intentionally misleading. First of all, I claim that the language selection is unimportant, so let's do the same thought experiment with English. The fact that Chinese is involved is only meant to take us away from the familiar and mislead us. Consider a room with a person who does not speak English inside. Do you honestly believe that it is possible to write a book with every possible response to every possible question in English? I can easily come up with a sentence that no one else has written before me, and any intelligent person will understand me and be able to reply intelligently. Yet the room will fail.

Take this one for example. "If twelve purple polka dot monkeys can eat thirteen empty cartons of milk, then how many empty cartons of milk will thirteen purple polka dot monkeys be able to eat in the same amount of time, Mr. Jean-Claude van Damme?" This question has a correct answer that any intelligent person can find immediately.

The number of possible questions is infinite. Not just very large. Infinite. Therefore, it's impossible to write every possible question in any book.
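Even the bounded version of the book is hopeless, which is easy to quantify. Assuming a modest vocabulary of 10,000 words (a made-up figure for illustration):

```python
# Back-of-the-envelope count of distinct questions, assuming a
# (hypothetical) vocabulary of 10,000 words.
VOCAB = 10_000

# Capping questions at just 20 words gives 10^80 possible word
# sequences - about the number of atoms in the observable universe.
sequences = VOCAB ** 20
print(sequences == 10 ** 80)  # True
```

Most of those sequences are gibberish, but even if only one in a trillion trillion were a meaningful question, the book would still be unimaginably large.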

 
At 12:53 AM, Blogger Harrison said...

Yes, I agree with your point that it should not matter what language is used in the Chinese room argument. I believe that the person who proposed it used Chinese only because it is a language so different from English that it would require obvious understanding to be able to converse in it. But in any case, it could be any language.

The book with every possible message in the world (though it is impossible) is just a representation of CBR, or Case-Based Reasoning. That basically means an output for every input. The guy doesn't know the foreign language. He uses his "book", or in this case programming, to determine an output. To the guy outside, he is exhibiting intelligent behavior.

Yet there is a clear difference between a person knowing whatever language, and a person or robot using a fixed, preprogrammed input/output system.

Harrison

 
At 1:24 AM, Blogger Abednego said...

Are you sure that there is a difference? Could you define precisely what that difference is? If you say that replying in Chinese is different from understanding Chinese, can you explain exactly what that difference is, in terms of some measurable physical or chemical process?

I agree that it feels like there should be a difference, but perhaps it's all an illusion caused by our arrogant belief that understanding is an amazing, unique ability that humans possess. Suppose that we create the Chinese room with such a gigantic book that it contains the answer to every possible question of at most 100,000 words. That's a finite number of questions, so it's theoretically possible. No one is going to ask a question longer than that, so no human could possibly tell the difference between the Chinese room and a person who understands Chinese. So here is my question: is there a difference? I would say no. In that case, the room would truly understand Chinese.

 
At 6:07 AM, Blogger Harrison said...

I see exactly what you are getting at, and I do recognize the logic in that. Right now, at least in terms of concept, the problem is what we define as intelligence. Can we judge intelligence by the produced result alone?

I'm not necessarily taking any side on this. It really does seem like there is a difference between someone who knows a language and the previous scenario. However, if we accept that intelligence is not measured by the resulting actions alone, then there will probably never be a good way to measure or even compare it. On the other hand, what makes us different from "intelligent" computer programs? This takes us into consciousness, which I'll post about on your other thread. I am interested to know what your opinion is, from the result, in practical terms.

For example, ALICE is considered to be intelligent. However, everything she says has been predicted beforehand and paired with a response (weak AI). Soon, when we need humanoid robots to do everyday tasks, this system becomes much less practical. I've seen some of the robotics projects in our school's other department, and the robots are not nearly as "intelligent" as we would like.

Strong AI is the theory of an AI which can learn and evolve. A scenario which I theorized about in my MS dissertation is a robot which initially has the ability to reason logically about its input, and which could learn to speak on its own. A structured diagram shows how the robot itself can start to draw connections between commonly used words and, with its knowledge of basic concepts, start to learn nouns, adjectives, etc. Currently, the Genesis project is based on this previous work.

You said on the previous thread that you believe research should be aimed now at Artificial Consciousness. It is an interesting theory. Can I ask you:

1. I won't ask you to define consciousness, since that's very abstract, but do you believe dogs are conscious? How about worms?

2. Do you believe we need Strong AI if Weak AI can do the same job in many areas, such as IM chatting and correspondence - based on your idea that their 'intelligence' would be equal?

3. How do you believe that consciousness within robots can be achieved? You supported Dennett's theory about consciousness being an attribute of intelligence. Do you believe that, with either strong or weak AI, if the program's intelligence gets to a certain point, it will become conscious?

 
At 6:51 AM, Blogger Harrison said...

By the way, the URL for ALICE is http://alice.pandorabots.com/. Very interesting chatbot.

 
At 9:02 AM, Blogger Abednego said...

1. Dogs: yes. Worms: probably not. This is a good question, and I have a good answer to it, but I haven't had time to write a proper article about it. It's coming...

2. I don't think there is any fundamental difference between Weak AI and Strong AI. We already have programs that evolve and learn, and they don't seem any more conscious than the ones based on hard-coded decisions. You said that Weak AI is predictable, and that makes it boring. But I can write a chat program that uses (pseudo-)random numbers, and it will be unpredictable. Yet, I imagine it will not be any more satisfying as a chat partner. I don't think it's about predictability. It's more about original ideas (whatever we define those to be). I want to have a conversation with a robot and have that robot come up with an original thought that no one has thought of before. I think that will soon be possible.

3. I don't think simply having enough intelligence will cause consciousness to emerge. We have computer systems that are ridiculously complicated. You could say they are very intelligent because they consider millions of parameters and are capable of producing billions of different results based on those parameters. No human could ever come close to being that intelligent. Yet they aren't conscious. So it's not about how complex a system is, but I do think that a certain level of complexity is required for consciousness to appear. That ties into the dog-worm question above, and I will answer it in more detail soon.

 
At 10:16 AM, Blogger KiLVaiDeN said...

The Chinese room is not to be taken literally. Of course there is no book with every answer in the world; otherwise, we would be reading it every day :) or not? ;)

I find this blog really interesting, because I myself wonder a lot about these questions of intelligence.

In my opinion, it's crucial to note that being "conscious" is different from being "conscious of being conscious".

Besides, being "intelligent" is also different from being "intelligent enough to analyze something new and understand it".

Just my 2 cents :) I'll be reading your blog when I have more free time :)

Cheers
K

 
At 12:56 PM, Blogger Abednego said...

KiLVaiDeN, could you explain what you think the difference between "conscious" and "conscious of being conscious" is? And also the other distinction you drew? I don't understand why you think there is a difference.

 
At 4:28 AM, Blogger James said...

Intelligence: don't forget the substrate. For a human, it is the ability to create connections between neurons.
In other words, intelligence is the physical ability to relate two or more concepts or things to build meaning or a better understanding of the world.
Intelligence is directly related to knowledge and the ability to use it effectively.
There is no intelligence without knowledge.
Who builds the knowledge? No one. It is a random process. However, as we are conscious, we can recognize when we see something of value, and we remember it. We also share it with others. That's how we collectively create knowledge and thus intelligence.

When I read what I've written, I see that I jump from one concept to another in the hope of making sense of it all. I think that what I've done is to put consciousness at the center of everything.

 
At 11:22 AM, Blogger James said...

Just thinking out loud:

consciousness builds intelligence(s).

I'm not intelligent in a vacuum. I know I'm intelligent in certain situations because I see that I react appropriately to a given situation. Therefore to be intelligent in a given situation is to exhibit an appropriate behavior.


So:
consciousness builds specific programs (we call them behaviors) to respond to certain situations. The response to the situation can be more or less appropriate (meaning it is more or less intelligent).

So intelligence per se doesn't exist. It is the result of a match between a situation and a behavior. If the match is good, we are intelligent. If it is not good, we are dull.

Myself I am intelligent in certain situations and stupid in others.


So I am:
a consciousness that, with the help of a database (we call it memory), builds and enhances programs to give a hopefully appropriate answer to different situations.

How many programs do I have in my head? I don't know, but I would guess many, many.

So the consciousness is my programmer.


The question now is how to build that consciousness. How to build it in such a way that it can create different programs for different situations... with the smallest database possible. Why? For testability and speed. The important thing at this first stage is to build a great 'programmer', not necessarily to exhibit complex behavior. Both the situations and the database should contain as few elements as possible, while still allowing the creation of many different situations.

The consciousness (aka the programmer) is general in scope. There is only one. It is not like Deep Blue, which only plays chess. It is more abstract. It has no specific purpose. Its purpose is to create programs that respond to situations.

 
At 1:29 PM, Blogger Abednego said...

Hi James.

What you are saying makes some sense, but I wouldn't attribute this much to consciousness. I think that consciousness is much simpler than that. Most of the perceived complexity of the human brain is intelligence. The fact that we can react in intelligent ways to complex situations is not the effect of being conscious. I would argue that mice are conscious.

Instead of calling consciousness the programmer, I would call it instinct. Consciousness by itself is very primitive. If we start thinking of consciousness as something complicated and mysterious, then we are just pushing the "hard problem" further away.

 
At 3:02 AM, Blogger James said...

Hi Abednego,

> Most of the perceived complexity of the human brain is intelligence.
I think I agree. Using my metaphor, intelligence is a collection of little (or big) programs that deal with certain situations.

> The fact that we can react in intelligent ways to complex situations is not the effect of being conscious.
I mostly agree. Using my metaphor, something has to choose the right program to handle the task. For example, cooking and solving a math problem require different abilities. You might have a program that deals with math and another one that deals with cooking. The fact that you use one ability over the other is decided somewhere. I say that consciousness is also there to pick the right program for the task. If there is no program available for the situation, it tries to build one.

> I would argue that mice are conscious.
I have no problem with that.

> Instead of calling consciousness the programmer, I would call it instinct.
The fact that consciousness builds programs is one thing. The fact that instincts/emotions are what drive consciousness to build programs in a certain way is another thing.

I guess you've noticed that I'm trying to answer the question: what does consciousness do? I think that is a little bit different from asking: what is consciousness? It is a little bit like a black hole. You know black holes are there, but you can't see them directly; you can still see the effect they have on their surroundings.

I think that if consciousness exists it is because it has one or several tasks to do.

I understand your objections. Consciousness still remains a black box. However, it is a black box with inputs and outputs. I think that describing those helps us understand the scope of what the black box does.

As input I propose: emotions, instincts and stimuli of the senses.

As output I propose: behaviors (including the creation and modification of these programs) and the selection of a behavior.

Do you see different inputs/outputs?

 
At 3:51 AM, Blogger James said...

I would like to add a clarification.

When I say that the output is a behavior I have oversimplified my thought a bit.

The output can be a behavior or a program that can lead to some kind of behavior. I'm also using a slightly peculiar definition of behavior. The behavior I'm talking about can be physical (manner of acting or controlling yourself) but also intellectual (calculating a Math problem and writing the answer).

An example of a program might make things clearer still. The perfect example is riding a bicycle or driving a car. I remember that these things were hard to do. When first confronted with such a task, I consciously tried to handle the situation. I remember being overwhelmed by the number of movements to coordinate and the number of things to monitor, all at the same time. With time, I have built a specific program for each of these activities. Now these activities are no challenge for me. I do them automatically, without conscious effort.

So, my consciousness has built a program that leads to an appropriate behavior for the situation.

I've given an example for driving, which is an activity that deals with the world around me (outside my head). However, the same would apply to an activity that occurs inside my head, like reading or solving a math problem. These things were hard at one point in time, until a specific program had been built to handle such tasks.

 
At 11:48 AM, Blogger Abednego said...

Why do you make a distinction between what you call a "program" and what you call "selecting a program among a set of programs"? Why is the act of selecting the right program not just another program? For that matter, what's wrong with thinking that the human brain is just a machine executing a single program? Its inputs are the senses and its outputs are the behaviours.

 
At 2:04 PM, Blogger James said...

> Why do you make a distinction between what you call a "program" and what you call "selecting a program among a set of programs"? Why is the act of selecting the right program not just another program?

Just to recap the situation: I've described consciousness as a black box which produces programs and selects an appropriate program from a collection of programs. Consciousness can also directly produce a behavior (the output of a program) if no program is up to the task.

I think that I answer both questions if I say that, for me, consciousness is the one responsible for selecting a program. Why is it not just another program? That's a good and hard question. I think that we are mostly single-threaded machines. I don't know if you agree, but if I take myself as an example, I see that I'm able to do only one thing at a time. The activity I'm doing can be interrupted and I can switch to another task, but I'm doing one task at a time. So this links to the concepts of focus and attention. You focus on a task and ignore other signals. I think that consciousness is doing that job. I'm putting consciousness at the center of most higher-level activities (I'm excluding non-conscious tasks, like essential biological survival tasks, which are handled elsewhere). So for me, consciousness decides what to focus on. If consciousness decides which task to do, it makes sense that it also selects the right program for the task. Consciousness centralizes the inputs, decides which one to focus on, and then looks in its directory of programs for the one suited to the task. If nothing is found, it handles the task 'manually'.
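This metaphor maps naturally onto a dispatcher with a fallback. The sketch below is purely illustrative; all the "programs" and situation names are stand-ins, not any real system:

```python
# Toy model of the metaphor: "consciousness" selects a specialized
# program for the situation, or handles it "manually" when none fits.
def chess_program(situation: str) -> str:
    return "play the best chess move I know"

def cooking_program(situation: str) -> str:
    return "follow the recipe from memory"

PROGRAMS = {"chess": chess_program, "cooking": cooking_program}

def consciousness(situation: str) -> str:
    program = PROGRAMS.get(situation)
    if program is not None:
        return program(situation)          # fast, specialized behavior
    # No program available: slow, general-purpose "manual" handling.
    return f"reason about '{situation}' from scratch"

print(consciousness("chess"))        # play the best chess move I know
print(consciousness("ice skating"))  # reason about 'ice skating' from scratch
```

In this picture, "learning to drive" would mean the fallback branch eventually adding a new entry to PROGRAMS, so that the slow path is no longer needed.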


> For that matter, what's wrong with thinking that the human brain is just a machine executing a single program?
There is nothing wrong with it, but it is biologically false, and it would not be a very scalable architecture. I think it is false because if the brain were a single program, then damage to the brain should break the whole program. But the medical literature shows that people who have had some areas of the brain damaged lose some abilities, not all of them. So the brain has areas dedicated to particular tasks. To me, that sounds like the brain has different programs.
Anyway, in a computer system, you can write one big program that handles hundreds of tasks, or a little program that uses hundreds of other little programs to achieve a task. The second approach gives better results when the number of tasks is huge.

Lastly, let me clarify the difference between a program and the consciousness (which can also do the same things a program does). The difference is optimization. A program (which is built by the consciousness) can be executed faster and consumes fewer resources than the consciousness would. The consciousness would take more time and more resources to achieve the same results. However, the consciousness can do anything, while a program is specific in scope (i.e., mostly hard-coded).

 
