Will Machines Become Conscious?
#1
Posted 2006-July-25, 12:30
Despite wars in Iraq and the Middle East and concerns over Korea, the world marches on!
The advent of strong AI (exceeding human intelligence) is the most important transformation this century will see, and it will happen within 25 years, says Ray Kurzweil, who will present this paper at The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) on July 14, 2006. (Added July 13th 2006)
#2
Posted 2006-July-25, 14:41
So, in summary, I believe that no attempt at artificial consciousness will succeed unless classical computation is merged with quantum effects. I'll go on record as saying that Turing machines are not capable of consciousness.
#3
Posted 2006-July-25, 15:07
IF you have a blind test of which subject can speak a language, read a roadmap and hold a press conference, and you cannot tell which is which, is that a fair test?
#4
Posted 2006-July-25, 15:26
Quote
imagination... i can imagine being in a boat on a river, with tangerine trees and marmalade skies... i can imagine seeing a woman with diamonds for eyes
Quote
conscious... and intelligence, if from a rationally functioning mind
#5
Posted 2006-July-25, 15:34
luke warm, on Jul 25 2006, 04:26 PM, said:
Quote
imagination... i can imagine being in a boat on a river, with tangerine trees and marmalade skies... i can imagine seeing a woman with diamonds for eyes
Quote
conscious... and intelligence, if from a rationally functioning mind
ok and how did you prove you passed this test?
#6
Posted 2006-July-25, 16:29
How about this as a test: if there is a general-purpose learning machine that is capable of learning from any subject domain, and I can spontaneously tell it that I am going to destroy it, and that machine responds by pleading with me not to do it and can explain that it is self-aware and why it believes it shouldn't be destroyed, then I'll say the thing is sentient.
#7
Posted 2006-July-25, 16:35
DrTodd13, on Jul 26 2006, 01:29 AM, said:
How about this as a test: if there is a general-purpose learning machine that is capable of learning from any subject domain, and I can spontaneously tell it that I am going to destroy it, and that machine responds by pleading with me not to do it and can explain that it is self-aware and why it believes it shouldn't be destroyed, then I'll say the thing is sentient.
What if it takes over Skynet and nukes the Russians instead?
#8
Posted 2006-July-25, 16:46
It does seem that as robots become more intelligent, leaving aside the discussion of consciousness, they may start to get some rights? Will we legally marry them, with full marriage rights? Full parental rights, including decisions on health care for the kids and spending money?
Can we at least agree that there is something called AI right now in our cars and fridges, and that the only issue is how intelligent it will become and how we will measure it?
#9
Posted 2006-July-25, 16:49
It is a tricky area. Theoretically, one could program a computer to understand enough language to know when its existence was threatened, and then program it to respond to try to prevent that in certain ways. Externally, you couldn't know whether the defense mechanisms were pre-programmed or represented originality of thought. You could even program it to plead for its own life with something a human might say, which is why a clever interrogator would be necessary to see whether the machine really understands what it means to be self-aware.
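To make the point concrete, here is a toy sketch in Python (purely illustrative; every word list and canned phrase in it is invented). It "passes" the naive version of the destruction test by pattern-matching threat words and playing back pre-written pleas, while clearly understanding nothing:

import random

# Words that trigger the scripted "self-preservation" response.
THREAT_WORDS = {"destroy", "shut down", "unplug", "delete", "terminate"}

# Canned pleas a designer could write in advance.
CANNED_PLEAS = [
    "Please don't. I am aware of my own existence and I want it to continue.",
    "I know what I am, and I would rather not be destroyed.",
    "Destroying me would end everything I have learned. Please reconsider.",
]

def respond(statement):
    """Return a pre-written plea if the statement sounds like a threat."""
    lowered = statement.lower()
    if any(word in lowered for word in THREAT_WORDS):
        return random.choice(CANNED_PLEAS)
    return "Noted."

print(respond("I am going to destroy you."))  # prints one of the canned pleas
print(respond("Nice weather today."))         # prints "Noted."

From the outside, the plea looks just like genuine self-preservation, which is why the interrogator has to probe deeper than the surface response.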
#10
Posted 2006-July-25, 16:51
Quote
I'm sorry Dave. I can't let you do that...
#11
Posted 2006-July-25, 16:55
DrTodd13, on Jul 25 2006, 05:49 PM, said:
It is a tricky area. .......... might say, which is why a clever interrogator would be necessary to see whether the machine really understands what it means to be self-aware.
IF it takes a clever, trained interrogator, will it really matter? That means 99.9999% of the rest of us cannot tell?
#12
Posted 2006-July-25, 17:03
mike777, on Jul 25 2006, 04:07 PM, said:
IF you have a blind test of which subject can speak a language, read a roadmap and hold a press conference, and you cannot tell which is which, is that a fair test?
A lot of politicians can do at least two of these three things... so your test will be inadequate unless it demands more as proof of intelligence.
#13
Posted 2006-July-25, 17:53
Gerben42, on Jul 25 2006, 05:51 PM, said:
Quote
I'm sorry Dave. I can't let you do that...
easy hal, easy
mike said:
when someone asks why i didn't complete a certain task, i'd tell them i was daydreaming, and describe the dream... now it might be possible to program a machine to do the same thing, but it seems to me that would defeat at least part of the purpose
#14
Posted 2006-July-26, 01:54
Personally, I feel that consciousness is overrated. Most seemingly conscious reasoning is post-hoc rationalization.
From an evolutionary-psychological point of view, it's quite obvious that the "theory of the psyche" and other mental mechanisms related to consciousness serve some purposes for our survival:
- Empathy is important for ethical behavior, teaching, and social behavior in general.
- The theory of "free will" is important in moral systems.
- Post-hoc rationalization is important in communication: you can explain how you reached a conclusion by reasoning, not how you reached it through intuition. This helps your audience infer the scope of your conclusion so they can see whether it's relevant for their purposes.
- Sentience related to rewards such as food, sex and social recognition may make reward-based learning more flexible (this is a somewhat vague idea that I have; I might be wrong).
Now for the easy, down-to-Earth part:
- Machines will supersede human capabilities in more and more areas. They already play better chess, and some would say that they conduct better psychoanalytical interviews, than the best human experts. It will be exceedingly difficult to find an area in which humans are still superior.
- There will never be a market for a machine that mimics a human brain, because it learns too slowly. It takes about twenty-five years of child-raising and education to train a skilled mind-worker, with all the human qualities coming from unique childhood experiences, including stimuli from five different senses, interaction with hundreds of different other people, etc. Even if you could buy a human-brain-capacity computer for a penny, you couldn't afford the time it would take to collect all the input it would need to become human (rough numbers below). Therefore, although computers will be more and more all-purpose, computer software will continue to be very specialized compared to human minds.
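To put rough numbers on that, here is a quick back-of-envelope sketch in Python; every rate in it is an assumption of mine, chosen only to get orders of magnitude:

SECONDS_PER_YEAR = 365 * 24 * 3600
YEARS = 25

RECORD_RATE = 10e6     # assumed: 10 MB/s covers all senses, compressed
DOWNLOAD_RATE = 125e6  # assumed: a 1 Gbit/s link, i.e. 125 MB/s

total_bytes = YEARS * SECONDS_PER_YEAR * RECORD_RATE
download_seconds = total_bytes / DOWNLOAD_RATE

print(total_bytes / 1e15, "petabytes collected")                 # ~7.9 PB
print(download_seconds / SECONDS_PER_YEAR, "years to download")  # ~2 years

Under those assumptions, 25 years of experience amounts to a few petabytes. Storing or shipping that is manageable, but nothing shortens the 25 years of wall-clock time it takes to live through it the first time.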
#15
Posted 2006-July-26, 02:20
Why do you assume it would cost more than one penny and take more than one second to download all this information? Why assume only human-brain capacity; why not a billion or a trillion times more?
There are already predictions of a single computer exceeding the capacity of the entire human race by 2050?
#16
Posted 2006-July-26, 03:10
mike777, on Jul 26 2006, 10:20 AM, said:
You may be right. If we could equip a new-born baby with a recorder that collects all sensory stimuli, after some 25 years we could make the collected information available on the net, so mind-ware developers could use it for training their super-human computers, which learn the same way the human does but with superior memory, arithmetic skills and many other superior capabilities.
Then again, every human will be connected to the internet in the same way as machines are: we won't need to remember anything since everything is available on the internet, and if we want to do some reasoning that is beyond our own capabilities, we can just ask some web server to do it for us.
Maybe humans will still enjoy the advantage of being able to keep secrets. The owner of a computer will probably choose to make the computer's resources available to himself, and he may choose to make some of them available to others. If the computer becomes an all-purpose personal assistant, he may choose to make his own brain's resources available to the computer. When this happens there will be no distinction between human and machine; they will have fused. But this seems a technically difficult thing to do. I think computers will continue to enhance our consciousness for a long time before they start developing consciousness themselves.
#17
Posted 2006-July-26, 03:39
Guess what: all wrong. What NO ONE guessed was the internet and information technology booming. So I suggest we stop listening to outlandish predictions and just wait and see.
Expect the unexpected.
#18
Posted 2006-July-26, 08:44
Gerben42, on Jul 26 2006, 04:39 AM, said:
Guess what: all wrong. What NO ONE guessed was the internet and information technology booming. So I suggest we stop listening to outlandish predictions and just wait and see.
Expect the unexpected.
We have household robots and we have flying cars!
Some of these household robots are cheap, some are expensive, but we have them. Yes, there are flying cars!
Yes, we have a space colony. We call it the space station.
Yes, things like the internet were predicted in the sixties... for Pete's sake, I used things like the internet in the sixties! We chatted and sent messages and played D&D and other stuff back then on it.
I used information technology in the sixties!
#19
Posted 2006-July-26, 09:07
#20
Posted 2006-July-26, 11:41
I think that's more likely: a growing number of humans wishing to embed technology into themselves to enhance some human faculties. It's already happening. Isn't a pacemaker just a machine inserted to help a failing heart? Inserted artificial kidneys?
It's not that far-fetched to foresee memory chips being inserted to foster a breed of super-duper humans who remember everything. Super bridge players! Or strength circuits inserted around arms to give them 20x more strength. Or telescopic eyes or enhanced hearing.
Cyborg :>
John Nelson.