Friday, January 06, 2006


What is this about? After reading Jeff Hawkins' On Intelligence and watching a series of DVDs entitled "Consciousness", I discovered that I have a lot of questions and ideas, so I decided to write them all down in an effort to make some sense of it all. As for the title of this blog, every other title was taken. Well, not quite. The reason I got interested in consciousness, intelligence and all of this wishy-washy, voodoo stuff was that I wanted to know whether it was possible to code an intelligent computer program. As it turns out, the answer is yes, although the result is quite unsatisfying. In fact, people have been writing intelligent programs for quite some time, and it turns out that what we should really be looking for is a conscious computer program.

But I'm getting ahead of myself. This is just an introduction, and as is customary, let me introduce myself. My name is Igor and I am a Ph.D. student in Computer Science at the University of Toronto. I am a pretty good computer programmer, and I don't believe in Artificial Intelligence, as in HAL, Data or the Three Laws of Robotics. At least I started out that way before I listened to Stuart Hameroff and his friends (and enemies). Now I'm just confused.


At 8:48 PM, Blogger twidjaja said...

Cool blog, Igor. Look forward to participating in the discussion.

At 2:40 AM, Blogger Harrison said...

Can I ask you what you define as an "intelligent" computer program?

True, there are many examples of programs behaving in seemingly intelligent ways. Some examples in the field of linguistic robots are Hal6 and ALICE.

However, many experts that I have heard have stated their belief that the theoretical "strong AI" has not yet been achieved.

The reason for this is that almost all cases of artificial intelligence use Case Based Reasoning (CBR). Every expected input to the robot is matched with an output response. My argument is that there is no actual thinking going on, as the program is limited to the inputs it has been programmed to receive.

Do you think that it is possible for a program to be self-aware, self-learning, and self-evolving?

At 10:01 AM, Blogger Abednego said...

Have a look at Parts 1 and 2. I make the case there that intelligence and consciousness are very different concepts. An intelligent computer program is one that is able to make decisions based on high-level abstract concepts.

I believe there is thinking going on inside some computers because thinking, as far as we know, is just the process of combining perceptions and past experiences to produce behaviour. Computers combine their inputs with information stored in their memory and produce outputs. How is this not thinking?

On the other hand, it is not conscious thinking. I think it is possible for a program to be "self-aware" - simply program it to have a variable that represents the state of the program itself. Then, use that variable in computing the results. That's it - now the program "knows" about its own existence and can "think" using this information. Of course, this is not a very satisfying case of self-awareness, but that is because the program is not consciously thinking.
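A minimal sketch of the kind of "self-aware" program described above: it keeps a variable representing its own state and uses that variable when computing results. All names and numbers here are illustrative, not from the post.

```python
class SelfReferentialProgram:
    """A program that 'knows' about itself via an explicit state variable."""

    def __init__(self):
        # A variable that represents the state of the program itself.
        self.self_state = {"calls_handled": 0, "last_input": None}

    def respond(self, x):
        # Update the self-model, then use it in computing the result.
        self.self_state["calls_handled"] += 1
        self.self_state["last_input"] = x
        # The output depends on the program's own state, not just on x.
        return x + self.self_state["calls_handled"]

p = SelfReferentialProgram()
print(p.respond(10))  # 11: the self-model says one call has been handled
print(p.respond(10))  # 12: same input, different output via the self-model
```

As the comment says, this is not a satisfying kind of self-awareness, but it does meet the letter of the definition: the program's behaviour depends on information about the program itself.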

As for learning and self-evolution, these programs already exist. There is a whole branch of computer science called Machine Learning, which is all about making programs that learn from their inputs. Genetic algorithms are programs that evolve over time to make better and better predictions or solutions. All of these are fairly intelligent programs. Some are more intelligent than humans because they are able to make much better predictions and decisions than any human could.
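A toy genetic algorithm of the kind mentioned above: a population of candidate solutions "evolves" over generations, with the fitter half surviving and reproducing with small mutations. The fitness function and parameters are illustrative, not from the post.

```python
import random

def fitness(x):
    # Maximize a simple function whose peak is at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations=100, pop_size=20):
    random.seed(0)  # fixed seed for a reproducible run
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with mutation: perturb each survivor slightly.
        children = [s + random.gauss(0, 0.1) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 1))  # converges close to the optimum at 3.0
```

No individual is ever told where the peak is; the population improves purely through selection pressure, which is the sense in which such programs "evolve over time to make better and better solutions".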

The question is: are any of these programs conscious? The answer is probably no, which is why we don't have "strong AI". But this is difficult to debate because no one has yet given an acceptable definition of what it means to be conscious. There is no test that we could use that would definitively say whether a computer program, a cockroach or a human is conscious or not.

At 7:06 PM, Blogger xheavenlyx said...

I am so lucky to find this blog! I have been 'into' Robotics from a very young age. Until recently, that is. I don't know what changed my thoughts. Maybe my 4 years of Instrumentation BE, or me playing Deus Ex.

I have stopped believing in AI. I now think it's not 'possible' to construct a conscious AI. I have moved into the field of Human Augmentation (in simple terms, to augment our perception and use OUR brain as a processor with the help of intelligent computers).

However (!), I think there might be some chance of consciousness, or better yet intuition, if we introduce a purely random variable. I would not like to elaborate on this here, but just as food for thought: intuition/consciousness is something we cannot see or explain; maybe it comes from a 'universal source', which could be randomness within thought. Chaos in structured probability.

I would love to talk to you more about this... (I have subscribed to this comment)

