Robopocalypse Now? Not so fast

The robots are coming. The machines will rise.

It’s known as “the singularity.” It’s that moment when machines reach and then surpass human intelligence, and the idea has captivated mankind for decades.

“It’s an idea that stems from that psychotic dream of unlimited power – it promises omniscience, omnipotence, it promises you’ll become a god,” said Western University professor of philosophy and technology Warren Steele. “And not just a promise of godhood but a denial of death – it promises you that through machines you will live forever.”

While the theory was first developed in the 1960s, it has recently become a focal point in parts of the computer and artificial intelligence communities.

So, are we there yet?

As machines become increasingly intelligent, and increasingly available to the general public, some of the world’s top scientists – including Stephen Hawking and Stephen Wolfram – and philosophers have banded together to question what the ramifications could be.

They have created think-tanks such as Cambridge's Centre for the Study of Existential Risk in Great Britain and the Lifeboat Foundation in the U.S. to prepare for the singularity and what many believe will follow.

The robot apocalypse.

The basic premise behind the singularity rests on Moore’s Law, named after Intel co-founder Gordon Moore.

Moore’s Law, formulated in 1965, predicts that computing hardware capability will double roughly every two years, making computers steadily faster and more powerful. In theory, this means that computers will eventually reach a point where their intelligence and processing power exceed those of humans – the singularity.
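The compounding behind that prediction can be sketched in a few lines of Python. The two-year doubling period comes from the article; the starting point (the Intel 4004’s roughly 2,300 transistors in 1971) is an assumption added here purely for illustration.

```python
# Toy illustration of Moore's Law: capability doubling every two years.
# Starting figures (2,300 transistors, 1971) are illustrative assumptions,
# not from the article itself.

def moores_law(start_count, start_year, target_year, doubling_years=2):
    """Project capability forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Twenty doublings between 1971 and 2011 turn a few thousand
# transistors into billions.
print(round(moores_law(2300, 1971, 2011)))  # 2411724800
```

The point of the sketch is simply that exponential growth dominates quickly: forty years of doublings multiplies the starting figure by about a million.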

What happens then is the stuff of science fiction movies.

But Timothy Lethbridge, a professor of computer science at the University of Ottawa, warns that people shouldn’t be expecting the Terminator at their door just yet.

Lethbridge says that contrary to popular belief, the artificial intelligence (AI) field is actually developing at a much slower pace than other scientific areas.

“You get these punctuated curves where big discoveries and advances are made, big speed-ups in developments and ideas, and then things slow down again,” he said. “Recently there have been major spurts of advance in certain technologies like CPU speed and cellular technology, but that’s starting to slow down now, too.”

The advantage for artificial intelligence developers today is that they are able to throw more computing resources at problems than in the past. Lethbridge noted that this was the case with Watson, the Jeopardy!-playing supercomputer developed by IBM that defeated its human competitors.

“Watson was an important milestone, but it was from a combination of AI fields that have been progressing steadily for years.”

IBM’s Watson is one of the most powerful AIs to date. It can process 500 gigabytes – roughly a million books – per second. (Wikimedia Commons)
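A quick back-of-the-envelope check shows the caption’s two figures are mutually consistent: 500 gigabytes per second divided by a million books per second implies an average book of about half a megabyte of text, a plausible size for a plain-text novel. The "implied book size" here is derived from the article’s own numbers, not an independent fact.

```python
# Sanity-check the caption's figures: 500 GB/s at ~1,000,000 books/s
# implies roughly 0.5 MB per book.

throughput_gb_per_s = 500      # from the article
books_per_s = 1_000_000        # "roughly a million books" per second

bytes_per_book = throughput_gb_per_s * 10**9 / books_per_s
print(f"Implied size per book: {bytes_per_book / 1e6:.1f} MB")  # 0.5 MB
```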

The Atlantic recently ran an article following up on Watson and found it performing diagnostic medicine at an American hospital. The article noted that Watson could work faster and diagnose more accurately than human doctors, posing the question: will machines make doctors obsolete? While robots replaced humans on the manufacturing line decades ago, there has always been a belief that humans would continue to hold the professional jobs. Watson’s success may have shaken that belief.

However, Lethbridge believes it shouldn’t. He sees advancing AI not as a threat to humans, but as a powerful tool.

“Are accountants obsolete now because there are accounting programs? No. If you have a powerful tool, it simply allows you to do that much better,” he said.

“There are diagnostic programs that have done better than doctors for 20 years – this isn’t new; Watson is just that much more powerful,” Lethbridge added. “That being said, diagnosis is incredibly complex, and there will be lots of cases where humans with their intuition and experience and sensory ability will still be able to do better – doctors will still have a role, they will just become more abstract in their work.”

Still, Lethbridge does think the singularity is coming – certainly within the next couple of hundred years, and maybe as early as the next 30.

He’s not alone. Neil Jacobstein, co-chair of the AI and robotics track at Singularity University, based at NASA Research Park, said that machines reaching human intelligence is inevitable – but that it will unfold over a long period of time, not as a singular event.

But it’s probably not going to mean the rise of machines and the destruction of man.

“The usefulness of the singularity meme is that it points to a time when it will be difficult for unaugmented humans to comprehend what is happening,” said Jacobstein on the phone from northern California. “There is a premium on understanding various scenarios and being proactive about managing opportunities and risks.”

“The word ‘singularity’ has a certain connotation – people think of black holes and everything that goes in them disappears,” added Lethbridge. “But mathematically a singularity means a paradigm shift – something taking on a totally new function.”

“It doesn’t mean the end of the world.”

Steele pointed to practical obstacles standing in the way of the singularity ever materializing. He believes that humans will find a way to eliminate themselves through the destruction of the environment before computers take over the world.

“One of the things the singularity is predicated on is the development of technology and eliminating all the problems caused by the development of technology, including the limited resources,” said Steele. “I just don’t think the earth has the space to support all of our grand delusions, and the singularity is one of those delusions.”

“We have way worse things to worry about than an out-of-control AI – heck, computers aren’t going to be able to do a better job of making themselves than humans if they have no materials to do it,” added Lethbridge.

Even so, Lethbridge thinks the singularity is something people need to discuss.

“We will get to a point where computers are more intelligent than humans. I think that is going to happen,” he said. “One of the key pushers for AI is the military and they do indeed put bad things – namely killing – into programming. So yes, there is a risk.”

“But even then there is a bit of a self-limiting capacity because there is some mutually assured destruction,” he added.

Steele noted that there are serious questions around the ethics that need to be asked when considering this issue.

“I think the singularity is ultimately an excuse not to care about what we do in the present so we can obtain godhood in the future,” he said. “We really need to do a better job of asking what technology means to us, not just what it’s going to do but who will it hurt?”

If the singularity does arrive, it is at best decades away – which gives computer scientists and philosophers plenty of time to determine just what self-aware machines will mean for mankind.

In the meantime, the pod bay doors will still open.

For now.