Friday, January 06, 2006

Part 2: Defining intelligence

Most of the following ideas come from Jeff Hawkins' book, On Intelligence. Although Hawkins seems to confuse intelligence and consciousness in a few places, he presents a very believable theory of what he calls "true intelligence".

It's all about the neocortex, or cortex for short. The cortex is that wrinkly outer part of the brain. First of all, the most important physiological difference between humans and chimps seems to be the size of our cortex. Sure, we also have less hair, lack a tail and aren't as good at grabbing tree branches with our feet, but the size of our cortex is what seems to be the reason why humans are smarter than chimps. Second of all, humans do not have the biggest brain of all animals, but we do have the biggest cortex, by far.

So what is the cortex made of? It is built out of billions of cells called neurons. Neurons can be active or inactive - they switch on and off all of the time. When a neuron is active, it sends electro-chemical signals to its neighbouring neurons across neuron-neuron connections called synapses. As a result of signals being sent, the synapses may get stronger or weaker, allowing for stronger or weaker signals to pass through them. This is an extremely simplified view of a neural network in the human brain. In fact, there are other types of connections between some neurons, and there are different types of neurons that perform different functions, but we will keep things simple here.
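The on/off neurons and strengthening synapses described above can be sketched as a toy model. This is a loose illustration, not real neurophysiology: the 0.5 threshold, the 0.05/0.01 update steps and the class itself are all invented for the sketch.

```python
class Neuron:
    """A toy neuron: a binary state plus weighted synapses to its neighbours."""
    FIRING_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

    def __init__(self, name):
        self.name = name
        self.active = False
        self.synapses = {}  # target Neuron -> connection strength in [0, 1]

    def connect(self, target, weight=0.5):
        self.synapses[target] = weight

    def fire(self):
        """Activate, then signal each neighbour across the synapses.

        A strong enough synapse excites its target; a Hebbian-style rule
        then strengthens synapses between co-active neurons and slightly
        weakens the rest ("cells that fire together wire together").
        """
        self.active = True
        for target, weight in self.synapses.items():
            if weight > self.FIRING_THRESHOLD:
                target.active = True
            if target.active:
                self.synapses[target] = min(1.0, weight + 0.05)
            else:
                self.synapses[target] = max(0.0, weight - 0.01)

a, b = Neuron("a"), Neuron("b")
a.connect(b, weight=0.6)
a.fire()
print(b.active, round(a.synapses[b], 2))  # → True 0.65
```

A firing neuron with a strong synapse excites its neighbour, and the shared activity makes that synapse a little stronger, which is the whole "stronger or weaker signals" story in miniature.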

All of the neurons are arranged into six layers. That's not just a random number thrown in to make the theory look fancy; we can actually see the six layers through a microscope. The bottom layer has connections to our senses. Our optic nerves carry signals from the retina to the bottom layer. That is also where we receive signals from the auditory nerve and the touch, taste and smell sensors. When the bottom-layer neurons receive signals, they get activated, or excited. That causes some neurons on the next layer to get excited, too. Eventually, the signals propagate to higher and higher layers. And that is how we recognize images and objects. At least, that was the state of our knowledge until recently.

Here is an interesting experiment conducted by Rodrigo Quiroga and his colleagues to test an idea proposed in 1967 by the neuroscientist Jerry Lettvin. It turns out that in your brain you have a Bill Clinton neuron - a single neuron that gets excited whenever you see the face of Bill Clinton, or hear the voice of Bill Clinton, or read the words "Bill Clinton". Well, actually there are several such neurons, duplicated just to be safe. That's no joke. Such neurons that are responsible for particular concepts are called "grandmother cells" or "gnostic units". This suggests that when we see an image of Bill Clinton, the cells in the retinas of our eyes somehow send signals down the optic nerve, and those signals are received by the bottom layer of the cortical hierarchy; then, the signals propagate to higher layers until, in the top layer, the Bill Clinton neuron gets excited, and we recognize the face.

That is the simple view held by many neuroscientists. In fact, things seem to be a lot more interesting. Our eyes never take just one look at a face. Instead, they rapidly jump from one facial feature to another - left eye, right eye, nose, left eye, mouth, right eye, etc. What our brain is getting is a sequence of snapshots of small features of a person's face. In turn, the brain is telling the eyes what to look at next. This happens on every layer - information from the sensors flows up the hierarchy, and the brain's commands, or signals, flow down to the muscles.

Now here is the cool part. The signals that flow down the hierarchy are predictions that our brain makes of the future. Here is what happens when we see a face. Our eyes catch a glimpse of a nose. The brain tells the eyes, "Look down a little bit, you should see a mouth." The eyes look below and see the mouth. The brain says, "Now look up and to the left, and you should see the right cheek". Our eyes jump to the right cheek. At this point, the brain starts recognizing the face of Bill Clinton and says, "Now look at the right eye; it should look like Bill Clinton's right eye." This process continues, as the brain gets more and more certain that we are indeed looking at the face of Bill Clinton. At every moment, each neuron is receiving sensory inputs from below and predictions from above and matching the two together.
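The prediction-and-match loop above can be caricatured in a few lines of code. Everything here is a made-up toy: "faces" are just ordered lists of feature strings, and recognition is a filter that keeps only the candidates whose top-down prediction matches each new bottom-up observation.

```python
# Toy knowledge base: each known face predicts a fixed sequence of
# features the eyes should see next. The entries are invented examples.
KNOWN_FACES = {
    "Bill Clinton": ["nose", "mouth", "right cheek", "right eye"],
    "Grandmother":  ["nose", "glasses", "grey hair", "smile"],
}

def recognize(observed_features):
    """Match incoming features against top-down predictions.

    At each step, a candidate face survives only if the feature it
    predicted for this step matches what was actually observed.
    Returns the face that explains the whole sequence, or None.
    """
    candidates = dict(KNOWN_FACES)
    for step, feature in enumerate(observed_features):
        candidates = {name: seq for name, seq in candidates.items()
                      if step < len(seq) and seq[step] == feature}
        if not candidates:
            return None  # every prediction failed: something new to learn
    return next(iter(candidates))

print(recognize(["nose", "mouth", "right cheek"]))  # → Bill Clinton
print(recognize(["nose", "beard"]))                 # → None
```

The mismatch case (seeing "beard" where "mouth" or "glasses" was predicted) is exactly the learning trigger discussed in the next paragraph: prediction failure is what makes the signal propagate upward.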

It gets better. Ask yourself, "What is learning?" When do we learn new things? The answer is: precisely when our predictions do not match what we are perceiving. Suppose that you are talking to somebody you have never seen before. Your eyes are trying to recognize the person's face, but where you have been expecting to see a chin, you notice a beard instead. To your facial recognition neurons, this is unexpected, so they get excited and start sending signals to the layer above. Eventually, these signals cause you to recognize that there is a relationship between the name of the person that you are talking to and the notion of a beard. From now on, whenever the neuron representing that person in your brain gets excited, the "beard neuron" will also get excited. In fact, if you are ever asked to describe that person, you might start by saying something like, "A tall guy with a beard." I'm speculating here and hiding many details, but the general idea makes sense.

There are fascinating questions about the way in which we build new memories. An area of the brain called the hippocampus seems to play a major role in it. People who suffer damage to the hippocampus lose the ability to remember new events (see Memento), although their older memories seem unaffected. It is likely that the only reason we require sleep is to transfer new memories acquired during the day to more permanent memory and free up space in the hippocampus for new information. I know for a fact that when I am deprived of sleep, my short-term memory suffers; I start forgetting appointments and losing my keys.

In short, the human neocortex is an extremely powerful tool that serves two purposes - (1) translating enormous amounts of information received by our senses into a language of abstract concepts and (2) translating commands given by the brain from the language of abstractions into sequences of nerve signals that go off to the various muscles that control the way in which we affect the world around us. That is intelligence - the fact that we are able to reason about and interact with the world in terms of high-level abstract concepts. Jeff Hawkins claims that that is all we are, and that soon we will be able to build computers that simulate the functions of the neocortex and behave intelligently - like humans do.

The book left me with the feeling that I should sit down and start programming something right now! However, after thinking about it a little bit, I found that there are two huge holes in Hawkins' theory. Alright, I can build a six-layer network of a million neurons, have them get excited and send signals to each other. But wait: what does a single neuron do? There is a lot of evidence that all neurons are pretty much the same in terms of what they do, but what is it... that they do? They learn to recognize patterns of signals. How? Does one neuron have a very simple function or a very complicated one? That is the first problem you run into when you ambitiously open a text editor to write some brain-simulating code.

The second problem is the transition between perception and prediction. Alright, I can organize a hierarchy of neurons that will pass messages up and down. On the bottom, I will connect a bunch of sensors - vision, hearing, etc. Signals will propagate up to the top layer. But what do I connect at the top? Who is that little demon that sits there, observes the Bill Clinton, grandmother and beard neurons light up, and decides which muscle to move? That's consciousness! I am calling it that for lack of a better term. When you are talking to somebody, a lot of thoughts rush through your mind, but somehow out of all that mess you build a sentence and speak it. We know how your brain builds, from the sounds that you hear, a complicated concept of what the speaker is saying - that's the cortex. We know how your ideas get transformed into hundreds of complicated nerve signals that move your lips and tongue to produce speech - that's the cortex, too. But what happens between the time you understand what the other person has said and the time you have decided on an appropriate reply? Who makes that decision? Is there a decision being made at all? This is precisely the "hard problem" of consciousness again. In a sense, the cortex is no different from the hand - it is an organ that humans have evolved to ease the task of interacting with the world. The cortex is the organ of intelligence.

Most scientists believe that these two problems are actually the same problem. The place where the little demon sits is inside the neurons. But that is where the question of intelligence ends and the question of consciousness begins, and if you thought the cortex was weird, from now on things get really hairy.


At 9:17 PM, Blogger twidjaja said...

Interesting post! I presume that here you are trying to define the meaning of human intelligence. So, then, what is your definition of "intelligence" in general? Is it the ability to make sound decisions? That is, can we decide whether an object is intelligent or not by merely observing the outcome of its decisions, i.e., without really looking into what the object is made of?

At 12:28 AM, Blogger Abednego said...

I think intelligence is simply the ability to reason in abstract terms. In his book, Hawkins makes a great argument to show that intelligence is not intelligent behaviour. Imagine sitting in a dark room thinking about a math problem. You are not moving a single muscle, your eyes are closed and the room is quiet. Then, suddenly, the solution occurs to you. You are exhibiting intelligence, yet there is no "behaviour" to speak of - you are not doing anything.

On the other hand, to answer your question, yes, I think we can decide whether an object is intelligent by observing its decisions. That is, for example, why I would agree with the claim that Deep Blue is intelligent.

Hawkins does not talk about it, but I personally think it is easier to define intelligence in relative terms, as in "A is more intelligent than B". Then it is easier to measure - you just have to ask whether A can operate with higher abstractions than B. For example, dolphins are more intelligent than cats because a dolphin can recognise itself in the mirror and a cat cannot. Therefore, a dolphin has the concept of itself as an individual of the dolphin species. A cat, on the other hand, is just looking at another cat in the mirror.

At 11:14 AM, Blogger twidjaja said...

What about dogs or chimps? Can they recognize themselves in the mirror?

At 3:47 PM, Blogger Abednego said...

The only two animals known to pass the test are chimps and dolphins. Dogs - no.

The test is pretty ingenious. Researchers paint half of the animal's face black and put it in front of a mirror. Chimps try to wipe the paint off of their own face, and dogs try to wipe it off the face in the mirror.

At 6:34 AM, Blogger L. Venkata Subramaniam said...

I think machines will eventually be able to chat "intelligently." What would you define intelligence to be? Turing suggested in 1950 that if, in a conversation between a machine and a human, the machine is able to fool the human into thinking it is also human, then it has to be intelligent. I think we will soon get there. Turing thought we would get there by 2000, but we are still not there yet.

At 5:30 PM, Blogger Abednego said...

Turing never gave a time limit on the test. If you only had 1 second to make that decision, then we already have programs that can pass the Turing test. You just have to make your program print "Hello." If, on the other hand, the time limit is 5 years, then we will not have computers passing the Turing test for many more decades. All you would need to do is make an obscure sarcastic remark and see its reaction. I will be more generous and offer a prize of $500 to anyone who writes a chat program that can trick me into believing it is human after a 30-minute conversation.

At 3:00 AM, Blogger James said...

It is interesting to know how intelligence in biological entities (like humans) works, but I don't think that it will help us build intelligent software. The reason is the number of neurons and the connections between them. Even if we knew exactly how to code each neuron, no computer on Earth for the next thousand years will have the horsepower to simulate the human brain.

There are certainly many things to learn from the brain, but I don't think that we should try to imitate the way it works. Our brain would not be the same with only one million or only one billion neurons. And in the brain there is more than just neurons influencing the whole process...

I think that nature has come up with a design that produces intelligent thought and behavior. However, nature has to obey a set of constraints. The set of constraints a computer has to obey is completely different. That's why the design for a computer brain must also be different, in my opinion.

I don't know what consciousness is, why it is there, or how it works. If we knew, we could probably code it, and the rest (intelligent thought and behavior) would follow.

At 4:32 AM, Blogger Abednego said...

Hi James,

I don't think the number of neurons is nearly as large as you think. If we assume that each neuron can only have 2 states - excited or inactive - then there already exist computers that have enough memory to store the state of a human brain. Of course, the neurons are probably a lot more complicated than that.
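A back-of-the-envelope calculation under that (admittedly crude) one-bit-per-neuron assumption:

```python
# Quick check of the memory claim above, assuming one bit per
# two-state neuron and the commonly quoted count of ~10^11 neurons.
NEURONS = 100_000_000_000      # ~100 billion neurons
bytes_needed = NEURONS // 8    # 8 neuron states packed per byte
print(f"{bytes_needed:,} bytes")          # 12,500,000,000 bytes
print(f"{bytes_needed / 2**30:.1f} GiB")  # 11.6 GiB
```

About 12.5 GB of state - large for a single 2006-era machine, but trivial for a cluster, which is the point being made.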

What's even more interesting is that there are much simpler animals with incredibly simple brains that do surprisingly well in this world, much better than the best computers. Cockroaches can navigate in complex environments with great precision and speed. Fruit flies are very good at getting away from me when I try to catch one. Their brains are tiny, much less complex than your average computer, yet we have no real idea how they work.

So I don't think it's a matter of complexity. It's not that the brain is so big that we can't simulate it. We just don't know what it is we should be simulating. We don't have a problem rigging up an enormous network of trillions of artificial neurons and letting them talk to each other. The problem is that no one knows for sure what a single neuron does.

At 3:16 AM, Blogger James said...

Hi Abednego,

> I don't think the number of neurons is nearly as large as you think.

You are right; it is larger. Typically, people say that there are 100 billion neurons.

It might be true that a neuron has only 2 states, but you have to remember that a neuron is connected to other neurons. The number of connections between neurons is what makes the brain much more powerful (without taking into account the fact that this network can reconfigure itself continuously).

I don't know the number of synapses per neuron, but it is a lot too. The page below claims that there are 10^15 synapses and that the brain can do 10^16 synapse operations per second.

If those numbers are true, it means that you would need to maintain in RAM 100 billion neuron objects PLUS 10^15 synapse objects. This is without taking into account that your program will have to perform an astonishing number of operations per second (10^16). Good luck!

Unfortunately, it does not end here. The brain is not only made of neurons. Scientists are discovering the key role played by glial cells. They influence communication at the synapses (inhibiting or enhancing the signal), among other more mundane tasks not interesting to this discussion.

So any computer model of the brain must take into account the role of these cells to understand how our brain works. By the way, there are about 10 glial cells for each neuron.

Good luck putting all that into one or several computers!

Here I'm talking about the human brain, of course, because I'm interested in the superior ability that we call consciousness in this blog. All this brain complexity is probably necessary for consciousness to emerge in a biological entity. However, if we could pinpoint what consciousness is, we might be able to reproduce it in a computer in a more efficient way than nature (let me hope, please).

At 3:41 AM, Blogger Abednego said...


Thanks for the numbers. I work at Google, so numbers like 100 billion don't really scare me. It's not a problem to store the state of 100 billion boolean variables even on 1 computer. As for the connections, we can arrange the artificial "neurons" in a 3D structure of some sort and use distance to define connections. Say, each "neuron" is connected to the nearest 1,000,000 neurons. That requires no extra memory. Of course, it's not very flexible, and there is still the problem of processing these connections.
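The distance-defined connectivity idea can be sketched at a toy scale. The grid side and radius below are tiny stand-ins for the real numbers; the point is only that neighbours are computed from coordinates on demand, so no synapse list needs to be stored.

```python
import itertools

# Place "neurons" on a 3D grid and define connectivity purely by
# distance: two neurons are connected iff their squared Euclidean
# distance is at most RADIUS_SQ. No per-edge storage is needed.
SIDE = 10        # 10 x 10 x 10 = 1,000 toy neurons
RADIUS_SQ = 2    # tiny illustrative radius, not a real number

def neighbours(x, y, z):
    """Yield coordinates of every neuron within the connection radius."""
    for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
        if (dx, dy, dz) == (0, 0, 0):
            continue  # a neuron is not connected to itself
        if dx*dx + dy*dy + dz*dz > RADIUS_SQ:
            continue  # outside the connection radius
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < SIDE and 0 <= ny < SIDE and 0 <= nz < SIDE:
            yield nx, ny, nz

# An interior neuron has 18 neighbours at squared distance <= 2;
# a corner neuron has fewer because the grid is clipped at the edges.
print(len(list(neighbours(5, 5, 5))))  # → 18
print(len(list(neighbours(0, 0, 0))))  # → 6
```

The trade-off noted above shows up clearly: memory cost is zero beyond the coordinates, but the wiring is rigid - every neuron gets the same fixed neighbourhood, which is exactly the inflexibility being conceded.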

So simulating something this big isn't hard. The hard part is figuring out what 1 neuron does.

You also completely ignored my comments about cockroaches and fruit flies. Their brains are tiny, and yet very powerful.

So my conclusion is that size is not what we should be trying to study. It's not the brain's size or its complexity that makes it so interesting. Even very uncomplicated brains can perform very complicated tasks like navigation and pattern recognition. To me, that's fascinating, and I'd like an answer as to how they do it.

At 3:54 AM, Blogger James said...

I ignored your comment about insects because they do not exhibit the behaviors I'm interested in (I would like to communicate with that thing).

However, you are right: bigger does not equal better. I've read or heard somewhere that a bird's brain, albeit smaller, could be more efficient than ours. Unfortunately, I don't recall the details. What I remember is that the brain's architecture is sometimes more important than its size.

Nature has created a few architectures, but that doesn't mean you have to do the same thing. We have to understand the concept. What is nature trying to achieve? The brain being such a resource hog, I think that nature is trying to produce something robust because it is reconfigurable. What does a newborn (or even a fetus still in its mother's body) know? How does it start to make sense of the world? I don't know if you agree, but I think that we start with zero concepts in our head but with the ability to identify, classify and filter things. It probably starts with "I like / I don't like" (classification by emotions, thanks to our primitive mind). What I call the primitive mind is probably very important even in adults. Some humans are visibly more emotional than others. I think that this emotional mind helps build our capacity to reason, to be intelligent. As the neocortex's ability grows, it takes more and more control of our behavior. It is as if there were a fight between the neocortex and our emotional mind over which one will have more influence on our behavior. In the end, an equilibrium is reached, and it is different for every individual.

Emotions were built into us right from the start. A computer doesn't have such a driver. Should we build one in? Are emotions really necessary to reach consciousness? Emotions are necessary for a biological entity like us - they are a product of our evolution - but does an artificial entity really need them? I have the intuition that it is not necessary to mimic all of our emotions exactly. However, I think that it is necessary to hard-code some values, some gut feelings, so that when faced with a situation the artificial entity can have a preference for one option over another.

At 12:45 PM, Blogger santi said...

Hi James,

With respect to your sentence "I think that we start with zero concept in our head but with the ability to identify and classify and filter things."

It is well accepted that this is actually false, and that in fact the human brain is born with lots and lots of predetermined concepts and behaviors. I can't remember the citation, but I remember a conference presentation I attended some years ago presenting empirical evidence of remnants of lower-mammal preprogramming in human brains. Humans learn to overcome and suppress most of that preprogramming, but it is there nonetheless when we are born.

So the human brain is by no means a tabula rasa; it starts with a complete set of biases which, through evolution, have been fine-tuned to allow for the proper development of intelligent behavior.

This means that, in addition to the problem of finding out what a neuron does, there is a different problem: the initial configuration of the neural net matters! A network of 100 billion neurons, each just connected to the closest million, would not work. For instance, the visual neurons (among the best understood in the human brain) are very carefully wired in almost exactly the same way in every human.

