Don’t Fear AI Progress, Steve Wozniak Says

We can’t stop progress, but world-changing artificial intelligence-enabled machines are years away. Fear of such smart machines is “unrealistic at this stage,” according to Apple co-founder Steve Wozniak.

For now, smart machines remain under human command, assisting with everyday tasks.

For more of Steve Wozniak’s thoughts on tech advancements including robotics and medical devices, see:


Steve Wozniak will take the stage on June 13, in New York, during Atlantic Design & Manufacturing, the East Coast's largest advanced design and manufacturing event. The engineer and cult icon will discuss a range of topics that span his experience at Apple, as well as today's leading tech trends such as robotics, IoT, and wearables, among others. Register for the event here!



Yes, humans tell the AI what to learn, but more than that, we tell it how to learn. I see a short step to a point where, by weighing importance, the AI can start learning what is "important" through the Internet and, following perturbations, pursue leads that affect that importance, essentially changing the specifics of what is important and why. In short, Steve started from good information but stopped well short of sharing the insight on this topic that he really has.

Do you really want it learning what's important from the internet? Within hours we would have tons of information about the Kardashians and a very well-thought-out commentary on why Brad Pitt and Angelina Jolie should still be married... After that, I really wouldn't hold it against any AI that wanted to destroy us as a species.

Interesting, having just watched an episode of Star Trek: The Next Generation in which Data holds knowledge of an alien that has to be removed from the crew's memory to keep the alien from killing them. The episode is called "Clues." You can find info on Wikipedia, but these comments won't let me post a link.

I'm only worried about advanced artificial intelligence if it serves a malicious or deluded free will. There are plenty of those around, but I don't think they will be made of silicon anytime soon.

Dear Woz, within the AI paradigm we can design arbitrarily complex, but finite, systems. That means their functionality will be limited too, and no systematic threat to humanity will arise from them. A real systematic danger could exist only if someone designs an artificial subjective system as a fully autonomous device. The solution is simple: we should not design such devices with the ability to have their own desires.

Michael Bandel:
Read Klaatu's ending speech from the original "The Day the Earth Stood Still". It's too long for here, but it sums up the situation quite well.

Personally, I think the risk is not the technology itself but what the technology is designed for. As Steve says, we are telling machines how to think, so it's possible that people with ill intent will abuse and manipulate the technology to suit their ill needs. This is a good article, but I'd like to ask Steve one important question he doesn't address at all in it: policy and laws. Couldn't policies and laws be developed to help integrate this technology into society properly and safely?
