How close are computers to becoming Self Aware?

The_Gremlin

This thread:
http://forumz.tomshardware.com/Human-Brain-Fastest-Cpu-ftopict222985.html
got me thinking about learning computer systems ... you know, like the computer that became self-aware in the Terminator movies, or the computers in The Matrix movies that turned humans into batteries.

Brief summary:
We have software that can self-modify its own code based on branched decision trees, or code that optimizes itself (self-optimizing code).
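
To illustrate what I mean by self-modifying, here's a toy Python sketch I made up (not from any real system): the program builds new source text for one of its own functions and swaps it in at runtime.

[code]
def respond(x):
    return x * 2

# The program writes a new definition of `respond` as source text
# and executes it, rebinding the name; no programmer involved.
new_source = "def respond(x):\n    return x * 2 + 1\n"
exec(new_source)

print(respond(10))  # 21: the behaviour changed because the code itself changed
[/code]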

We have other software that claims to "learn" from trends in user input data, adjusting its output accordingly (expert systems, for example). The output, however, remains limited by the software's code, which the software itself cannot alter.
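
And here's that second case from the other side, a toy "learner" of my own invention: the data it accumulates changes with input history, so the output shifts, but the instructions themselves never change.

[code]
from collections import Counter

history = Counter()  # the only thing that ever changes

def observe(user_input):
    """Record one piece of user input."""
    history[user_input] += 1

def predict():
    """Output the most frequently seen input, i.e. the 'learned' response."""
    return history.most_common(1)[0][0] if history else None

for word in ["spam", "ham", "spam"]:
    observe(word)

print(predict())  # "spam": learned from the data, with the code untouched
[/code]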

Do we have any software yet that combines these two, so that it adjusts its output according to input history AND is able to self-modify its code, adjusting its output beyond what the programmer originally envisioned? Can a system really be considered a "learning system" if it's not able to self-modify its code?
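
The nearest real technique I'm aware of is genetic programming, where the candidate programs themselves are mutated and selected, so the "code" genuinely changes in ways the programmer didn't write by hand. Here's a stripped-down toy sketch of the idea in Python (my own simplification: mutation only, no crossover):

[code]
import random

# Evolve a one-line Python expression in x so its output matches a
# hidden target (x*x + 1). The expression string IS the program, and
# mutation rewrites it: code change driven by input/output history.

OPS = ["+", "-", "*"]
TERMS = ["x", "1", "2", "3"]

def random_expr(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def fitness(expr, samples):
    # Total error against the target; lower is better.
    return sum(abs(eval(expr, {"x": x}) - (x * x + 1)) for x in samples)

def mutate(expr):
    # Crude mutation: splice a fresh random subexpression onto the program.
    return f"({expr} {random.choice(OPS)} {random_expr(1)})"

samples = range(-5, 6)
population = [random_expr() for _ in range(50)]

for generation in range(30):
    population.sort(key=lambda e: fitness(e, samples))
    survivors = population[:10]  # selection: keep the best programs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=lambda e: fitness(e, samples))
print(best, "error:", fitness(best, samples))
[/code]

Whether that counts as learning "beyond what the programmer envisioned" is exactly the question, of course, since the mutation operators themselves are still fixed code.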

I don't know if I'm making much sense, so if I'm not, I can just ask:

How close are we to computers becoming self-aware?
Are there any autonomous learning systems in existence?
[edit]
What is the current state of "learning" computers?
How do people define a "learning" computer?
[/edit]
 

jamiepotter

In a sense, you could look at human beings as limited to their 'code'. So there's no reason why we couldn't regard a sufficiently complex mechanism as self-aware.

So, for instance, one could develop a programme that remembered its responses to prior questions and referred to them in appropriate circumstances, and we'd be inclined to say that this demonstrated 'self-awareness'.
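
A toy Python sketch of that kind of programme (entirely made up, just to fix ideas):

[code]
# A responder that consults its own past answers.
memory = {}  # maps a question to what it answered last time

def answer(question):
    if question in memory:
        return "As I said before: " + memory[question]
    reply = "My first guess about: " + question
    memory[question] = reply
    return reply

print(answer("What is self-awareness?"))
print(answer("What is self-awareness?"))  # now it refers back to its own prior response
[/code]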

But it always pays to remember that there are many, many instances of 'self-awareness' that lie way beyond our technological capabilities. Emotional self-awareness is a case in point. So a fully 'self-aware' machine would have to be one that we would be prepared to call 'happy', 'jealous', etc.

And now we're so far removed from our current view of computers that it's all incredibly speculative to the point of meaninglessness.

Of course, one could delimit 'self-awareness' to some very specific idea of what 'self-awareness' entails. But one has no right to call that 'self-awareness'. One would simply be hijacking the word.
 

The_Gremlin

--snip--
Of course, one could delimit 'self-awareness' to some very specific idea of what 'self-awareness' entails. But one has no right to call that 'self-awareness'. One would simply be hijacking the word.
My use of the phrasing "Self Aware" was two-fold:
1) Many people have seen the Terminator movies, where that phrase was used and to some degree popularized, and I thought it would be catchy.
2) Also in the Terminator movies, the phrase marked the point when the learning computer hit some critical state of learning and started acting on its own behalf.

My use of the term may not have been clear, and I don't want that misunderstanding to detract from the discussion I'm seeking.

I'm asking for insight into the current state of "learning" computers as well as some feedback on how people define a learning computer.

[OP has been edited to reflect this.]
 

The_Gremlin

In a sense, you could look at human beings as limited to their 'code'.
--snip--
Care to elaborate on what "sense" you were referring to? Because I fail to see it that way.

Humans are able to change (modify) their personal tastes and preferences. We can even change our underlying personal beliefs that govern our actions and behaviours.

Our subconscious is also incredibly powerful, often making changes unconsciously, such as repressing memories or creating multiple personalities. (Note that not all changes are considered "good" from a societal perspective :eek: )

IMHO, what makes us humans so unique and powerful is that we aren't limited to some fixed instruction set.

Although, we still haven't figured out how to break the laws of physics ... yet! :)
 

jamiepotter

Are you saying the subconscious, or the modification of tastes/preferences etc. (which I don't deny for one minute), somehow transcends the limitations of being a particular biological entity of a certain size, shape, complexity, etc.?

I scare-quoted 'code' because we are not actually coded (coding requires a coder). So yes, we're not limited by 'instructions', as you rightly say. Or better: we don't have a clear understanding of what kind of constraints are in place. We do not, however, somehow transcend causality and physical law.

You can see the point: if our self-awareness is itself a particular family of neural events, then there is no reason to think that one could _never_ build something functionally similar such that we'd be happy to call it 'self-aware'.
 

The_Gremlin

OK, I think I get what you're saying.

--snip--
if our self-awareness is itself a particular family of neural events, then there is no reason to think that one could _never_ build something functionally similar such that we'd be happy to call it 'self-aware'
So, the follow up question is:
Would we have to build it to be self-aware, or would it reach that state by itself, through whatever "learning" process would be required?

Are there any projects / systems / research / etc. in existence capable of (or working towards) this type of learning?
 

jamiepotter

I'm afraid I just don't know enough to be able to answer you. I was weighing in on some of the conceptual aspects (because I'm a philosopher... sorry!), but those questions look pretty empirical to me. I'd be fascinated to know the answers myself.

I do know that _an awful lot_ has to be in place before anything we'd be inclined to call 'learning' can take place. Chomsky made this point about language, for instance: without some innate predisposition to recognise syntax, one could never 'cotton on' to language. One has to be built to recognise certain structural features in what would otherwise be a series of noises (if that). So, to build a machine that could learn a language would be immensely complicated, and frankly we don't have much of a clue at the moment.