Don’t Fear AI Progress, Steve Wozniak Says

We can’t stop progress, but world-changing, artificial intelligence-enabled machines are years away, and fear of such smart machines is “unrealistic” at this stage, according to Apple co-founder Steve Wozniak.

Movies have taught us to be wary of artificial intelligence ever since HAL refused to open the pod bay doors.

Although the movie 2001: A Space Odyssey premiered back in 1968, research house Gartner predicts that mainstream adoption of smart machines – those that utilize AI, cognitive computing, machine learning, or deep learning – will reach 30% of large companies by 2021.

The arrival of AI within the next few years does not have to mean, however, that the smart machines it brings will interfere with or harm the human way of life.

Woz, co-founder of Apple, will share his thoughts on AI, robotics, IoT, and more at Atlantic Design & Manufacturing on June 13. Register for the event here!

“You can’t really stop progress,” said Steve Wozniak, co-founder of Apple and the engineer behind the Apple II, the world-changing first mainstream personal computer, in a conversation with Design News. “Learning, science, being able to make things that never existed before—you can never stop that. Those things can turn out to have bad aspects. Study the atom and you get the atomic bomb. Learn how to build machines that can make clothing and you could have a lot of people out of work and people have to do other things.”

Even with progress coming seemingly faster each day, we are years away from any AI capable of HAL-like learning and dedication, let alone robots stealing everyday jobs.

“There’s sort of a fear with artificial intelligence that machines could become so intelligent and versatile that they could totally replace a person, so there wouldn’t be other jobs to go to, but that is so far off it’s an unrealistic fear at this stage. It would take decades and decades,” Wozniak said.

Even machines like IBM’s Watson – which has not only bested human players at Jeopardy! but has shown promise in outperforming doctors at making and managing medical diagnoses, and has proven useful in allowing legal firms to quickly and accurately extract relevant details from dense legal briefs – have been programmed in how to approach their cognitive computing.

“We have machines that can learn to play a game faster and better than a human,” Wozniak noted. “For 200 years, we’ve had those machines that can make clothing better than a human. It seems like they are thinking better and faster than us, but we told them what to think about, what to work on, what to learn and the method to learn it by—and then it learned very well.”

Wozniak points out, too, that it’s not the machines themselves that have given rise to their intelligence. It’s still humans who tell the machines what to do and how to learn.

“We do not have a machine yet that says, ‘What should I learn? What should I tell myself to go learn? What are the important things to go do?’ And the ethical fear is just a little bit that those machines will want the things we [humans] want,” he said.

So, for now at least, there is little to fear.


Yes, humans tell the AI what to learn, and more than that, we tell it how to learn. But I see a short step to where the AI, weighing importance, can start learning what is "important" through the Internet and, following perturbations, pursue leads that affect that importance, essentially changing the specifics of what is important and why. In short, Steve started with good information but stopped well short of sharing everything he really knows about this.

Do you really want it learning what's important from the internet? Within hours we would have tons of information about the Kardashians and a very well-thought-out commentary about why Brad Pitt and Angelina Jolie should still be married... After that I really wouldn't hold it against any AI that wanted to destroy us as a species.

Interesting, after watching an episode of Star Trek: The Next Generation in which Data has knowledge of an alien that must be removed from the crew's memory to keep the alien from killing them. It's called "Clues." You can find info on Wikipedia, but these comments won't let me post a link.

I'm only worried about advanced artificial intelligence if it serves a malicious or deluded free will. There are plenty of those around, but I don't think they will be made of silicon anytime soon.

Dear Woz, within the AI paradigm we can design arbitrarily complex, but finite, systems. That means their functionality will be limited too, and there will be no systematic threat to humanity. A real systematic danger could exist only if one were to design an artificial subjective system as a fully autonomous device. The solution is simple: we should not design such devices with the ability to have their own desires.

Michael Bandel:
Read Klaatu's ending speech from the original "The Day the Earth Stood Still". It's too long for here, but it sums up the situation quite well.

Personally, I think the risk is not the technology but what the technology is designed for. I mean, as Steve says, we are telling machines how to think, so it's possible people with ill intent will abuse and manipulate the technology to suit their ill needs. This is a good article, but I'd like to ask Steve about one important question he doesn't address at all: policy and laws. Couldn't policies and laws be developed to help integrate this technology into society properly and safely?
