This editorial was originally published on Feb 15, 2016. It is being re-run as Steve is traveling.
It’s been a good week for AI rights, as Google’s self-driving cars are on the verge of being declared legal drivers. Until now, the cars have required a human driver and controls that could be taken over at any point, but this change clears the way for truly autonomous vehicles on the roads.
Our robot overlords have done pretty well financially recently, with an Elon Musk-backed group donating $1bn to OpenAI, a non-profit AI consortium. Musk has spoken out against the threat he perceives from artificial intelligences, though it’s possible this is just a smokescreen to distract us from what he’s really up to with all those rockets.
In a similar way to virtual reality, it does feel like AI is starting to move from a nice idea with no real chance of mainstream adoption to something on the cusp of genuine usefulness. A proliferation of cheap sensors, ubiquitous networking, and some sane-ish standards makes automation easier and more powerful for a huge variety of applications. When I first started dabbling in computing, fuzzy logic was the thing that was going to revolutionize AI, but the promised proliferation of thinking machines never came about.
The dawn of science fiction promised robots that would walk and talk; the reality was grounded in the prosaic - robotic production lines staffed by highly-specialised machines. Similarly, the reality of AI has been bundled off into discrete packages - locomotion in Boston Dynamics’ machines, natural language processing in smartphone assistants. IBM’s Watson, one of the few generalist AI projects out there, now has a commercial role as a backend for business - effectively an API for machine-learning apps. The superintelligent sci-fi AIs will just have to wait for someone to create a good application layer between all these disparate parts.