This weekend John Markoff of The New York Times wrote an interesting article about machines, and how they may eventually outsmart man. His opening paragraph describes three scenarios that are already a reality: a robot that can open doors and track down an electrical outlet to recharge itself, machines that are very close to killing humans autonomously, and unstoppable computer viruses and worms that have reached a "cockroach" stage of machine intelligence.
The good news is that artificial intelligence hasn't reached the "HAL 9000" level of intellect; computers haven't become self-aware, nor will they form any kind of Skynet any time soon. However, many researchers agree that the killer robots mentioned above are in fact here, or will be soon.
Additionally, progress has raised concerns that robots will take the place of human workers, and that humans will eventually be forced to live with machines that mimic human behaviors. There's also concern that criminals could take advantage of advancements in AI, using a "speech synthesis system" to impersonate another human, for example, or mining smartphones to uncover personal information.
Does that mean super-intelligent machines and artificial intelligence systems will eventually run amok? The researchers attending the conference (mostly) discounted the idea of an intelligence spontaneously emerging from the Internet, as well as of other highly centralized intelligences outside the web. However, Dr. Horvitz said that computer scientists must respond to these notions nonetheless.
To read a more detailed version of the article, pick up a weekend copy of The New York Times or head here.