This editorial was originally published on March 27, 2019. It is being re-run as Steve is on sabbatical.
I was reading a fascinating paper recently about autonomous cars. I'm actually excited about having a car that can drive itself, though I think this is likely quite a few years away, despite the hype. Ever since I read Red Thunder, I've thought that we would first get full-time autonomous cars limited to certain areas, or part-time autonomous cars that could only drive themselves in certain places. Dense inner cities or isolated highways might be good places to try this, in my mind.
While we want to do some programming of these cars, we also have a lot of AI/ML systems in place that run models trained to react in certain ways. They identify things that are moving and stationary, trying to determine how the car should navigate and react. The systems aren't quite as tightly programmed as many of us expect, with if-this-then-that logic. Instead, the designers decide on guidelines, and the reactions to data inputs and analysis are a little fuzzier.
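To make that contrast concrete, here is a minimal sketch in Python; the function names, object classes, and thresholds are all hypothetical, not taken from any real vehicle system. A hard-coded rule gives the same answer every time, while a trained model returns confidence scores that the designers' guidelines (here, the thresholds) have to translate into an action.

from typing import Dict

def rule_based_reaction(distance_m: float) -> str:
    # Classic if-this-then-that logic: fully deterministic.
    if distance_m < 5.0:
        return "brake"
    return "maintain_speed"

def model_based_reaction(scores: Dict[str, float]) -> str:
    # A trained model emits a confidence score per detected object class;
    # the designers' guidelines (the thresholds) decide how the car reacts.
    pedestrian = scores.get("pedestrian", 0.0)
    if pedestrian >= 0.7:
        return "brake"
    if pedestrian >= 0.4:  # uncertain input: slow down rather than commit
        return "slow_down"
    return "maintain_speed"

print(rule_based_reaction(4.0))                    # always "brake"
print(model_based_reaction({"pedestrian": 0.55}))  # "slow_down"
print(model_based_reaction({"pedestrian": 0.91}))  # "brake"

The point of the sketch is that the second function never encodes a fixed rule for the road itself; the model's confidence and the chosen thresholds together determine the behavior.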
What are the goals? Well, in most cases they are just moving the car safely down a road. In crisis situations, it's a little murkier. What happens when a collision is unavoidable? How should the car react? Humans often panic and do strange things, but we don't want erratic behavior from automated systems, so what should we set as goals? There's a bit of research that asked humans what they would do when they could consider the situation a little more slowly.
In short, humans make different decisions in different cultures. There are clusters and tendencies in different parts of the world, which is interesting. While people are people and behave similarly in many cases, we tend to value different things, depending on our views of the world. That can be problematic when we start to expect computer systems to be more consistent or predictable. After all, we should decide how computers react and be able to trust that our decisions are followed. It is up to humans to imprint our ethical desires as a society on computer systems.
This is an area where I feel AI and ML systems are moving faster than our ability to comprehend the implications. I would want a framework built for automated systems, certainly cars, and then expect all vendors to implement that framework in their vehicles. However, this goes beyond cars; in any place where we use software, AI/ML-based or not, we ought to publish a comprehensive outline of the way in which our system works.
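As a purely illustrative sketch of what publishing such an outline might look like in practice (every name, field, and value here is my own invention), a vendor could ship a small machine-readable summary of the system's goals, inputs, and crisis policy alongside the software:

from dataclasses import dataclass
from typing import List

@dataclass
class SystemDisclosure:
    # Hypothetical machine-readable outline of how an automated system behaves.
    name: str
    goals: List[str]      # what the system is trying to achieve
    inputs: List[str]     # data the system collects and acts upon
    crisis_policy: str    # how unavoidable-collision situations are resolved

disclosure = SystemDisclosure(
    name="ExampleDrive 1.0",
    goals=["move the vehicle safely down the road", "obey traffic law"],
    inputs=["camera frames", "lidar point clouds", "GPS position"],
    crisis_policy="minimize expected harm, per the published framework",
)
print(disclosure)

The format matters less than the commitment: a disclosure like this gives regulators and customers something concrete to audit against.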
Computers have the capability to improve our world and reduce chaos, but only if we agree on the way in which they work, and disclose in a transparent way the data they handle and the decisions they make based on that data. I hope that we get better about informing the world of the goals and operation of our systems.