The Self-Driving Car and Machine Autonomy

We live in the age of machines, with everything from smartphones to smart refrigerators constantly chattering behind the scenes and making decisions that affect us. This raises challenging questions about machine intelligence and decision-making, and about personal autonomy and liberty.

The most immediate test case for this problem is the self-driving car.


The Self-Driving Car

In the near future, cars will be able to drive themselves. The technology has already proven itself quite impressive, even though it has not yet matured enough for widespread consumer adoption. But I think it is clear that self-driving cars will very soon be widespread.

In fact, automated cars are already safer than human drivers; regulatory and legal barriers are the main reason we don’t yet see self-driving cars on the streets. Consider liability: in an accident, the manufacturer of the self-driving system, rather than the human in the driver’s seat, would accept responsibility. This means the cars will need to be exceptionally safe before a manufacturer will feel comfortable exposing itself to the risk of its self-driving system causing accidents. But this is a small, solvable problem once the cars are demonstrably safe enough.

The truly challenging hurdle is yet to come: the surrender of personal autonomy that comes with turning control of the car over to the machine. For example, a self-driving system opens up the possibility of police sending a signal to a target vehicle ordering it to pull over to the side of the road. Naturally, law enforcement would be overjoyed to have this capability, as it would put an end to high-speed chases and police evasion, saving lives. But at the same time, we must recognize that human choice has been superseded in an important way. The code within the machine has taken the place of a human action, and acquiescing to the surrender of that choice and autonomy is not a decision we should make lightly.
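To make the mechanism concrete, here is a minimal sketch of what such a pull-over channel might look like. Everything in it is a hypothetical assumption for illustration: the message format, the class names, and the idea of a cryptographically signed directive. No real vehicle exposes this API. The structural point to notice is that the occupant appears nowhere in the decision.

```python
# Hypothetical sketch of a vehicle handling a law-enforcement "pull over"
# directive. All names and message formats here are invented assumptions;
# this is not any real vehicle's API.
from dataclasses import dataclass
from enum import Enum, auto


class DrivingMode(Enum):
    NORMAL = auto()
    PULLING_OVER = auto()


@dataclass
class PullOverDirective:
    issuer_id: str    # identity of the requesting authority
    signature: bytes  # stand-in for cryptographic proof of authenticity
    reason_code: int  # why the stop was ordered


class VehicleController:
    def __init__(self, trusted_issuers: set[str]):
        self.trusted_issuers = trusted_issuers
        self.mode = DrivingMode.NORMAL

    def _verify(self, directive: PullOverDirective) -> bool:
        # Placeholder check; a real system would verify the signature
        # against the issuer's public key.
        return directive.issuer_id in self.trusted_issuers

    def handle_pull_over(self, directive: PullOverDirective) -> None:
        # The crucial point: the occupant never gets a vote here.
        # Code and key management, not the human, decide compliance.
        if self._verify(directive):
            self.mode = DrivingMode.PULLING_OVER


controller = VehicleController(trusted_issuers={"state-police"})
controller.handle_pull_over(
    PullOverDirective(issuer_id="state-police", signature=b"stub", reason_code=1)
)
print(controller.mode)  # DrivingMode.PULLING_OVER
```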

Self-driving vehicles are a powerful test case for a trend likely to become increasingly prevalent: machine decision-making taking the place of human action. Although I agree that in the case of self-driving cars the advantages clearly outweigh the disadvantages, as machine capability creeps forward we must remain vigilant about each successive step, to avoid surrendering more personal autonomy than we should.


Code is Law

Lawrence Lessig’s brilliant piece Code is Law cuts to the heart of this transformation. As we become more reliant on machines, the code which controls those machines begins to take an ever-greater role in managing our lives.

I would take Lessig’s point a step further than software code acting as regulation. In fact, I would go so far as to say that artificial intelligence may one day stand in for the officials themselves: executives, representatives, even the legislators or regulators.

It is quite unlikely that an AI will resemble a human in the near future. But that doesn’t mean an AI’s decisions will not be given substantial weight. We can easily imagine a world where software algorithms make decisions that affect us; in fact, we’re living in it. We’re only a hop, skip, and a jump away from having an actual AI making those decisions.

We already live in a world where computers maintain databases containing large amounts of personal information, and where data mining can assemble a detailed picture of someone and even draw piercing inferences about them. Right now it is humans who make the decisions based on the information obtained, but it is far from inconceivable that we may one day have AIs make decisions based on the information they hold.
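A toy example shows how such inferences work. The records, items, and inferred attribute below are all invented for illustration; real data-mining systems are vastly more sophisticated, but the principle is the same: harmless-looking records statistically reveal things that were never disclosed.

```python
from collections import Counter

# Invented training records pairing purchase histories with a known attribute.
training = [
    ({"unscented lotion", "prenatal vitamins"}, "expecting"),
    ({"unscented lotion", "cotton balls"}, "expecting"),
    ({"beer", "chips"}, "not expecting"),
    ({"chips", "cotton balls"}, "not expecting"),
]

def infer(purchases: set[str]) -> str:
    # Naive nearest-pattern scoring: the label whose recorded baskets
    # overlap most with the new basket wins.
    scores: Counter = Counter()
    for items, label in training:
        scores[label] += len(purchases & items)
    return scores.most_common(1)[0][0]

# A shopper who disclosed nothing is nevertheless classified.
print(infer({"unscented lotion", "prenatal vitamins", "cotton balls"}))
# -> "expecting"
```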

The major challenge of having AI decision-makers is the reduction in transparency that is almost certain to result. In the worst case, we could end up in a world where no human understands why such an AI makes the decisions it does, and where we place blind faith in these enigmatic systems.


Deep Learning & the Enigmatic Machine Minds of the Future

Artificial intelligence has also come a long way in recent years, as demonstrated by the incredible Go matches between the AlphaGo AI and Lee Sedol. Artificial intelligence has tremendous potential, but it also raises a specter we are going to have to confront: the specter of surrendering control to artificial intelligence that makes decisions no human understands.

First, let’s get a major misconception out of the way: there is virtually no chance of the “out of control” AI scenario that Hollywood loves so dearly. Robots make great villains, but real AI doesn’t work the way Hollywood’s human-written scripts would have you believe. Real AI is built by humans and is unlike humans in that it has no ego, no independent thought, no consciousness, no sentience. Real AI can be highly intelligent while still not rising to the sentience of a ground squirrel.

The real threat is that an AI will make decisions that are poor, biased, or otherwise undesirable, but which the humans within its purview accept as being ‘beyond the ken’ of mere humans. The potential for machine intelligence to make “spooky” breakthroughs that counter-intuitively exceed human ability also introduces a serious risk: the AI making bad decisions that go unchallenged by humans who assume the AI knows better than they do.

Let’s bring this down to a concrete example. Suppose we develop a deep learning AI that controls payroll at a company: it takes into account performance data, salaries, and other information to make decisions about who to hire, who to fire, and who gets raises and promotions. For a while, it does a great job. It gets rid of poorly performing employees, makes smart hires, and the company grows and becomes more profitable. In fact, its ability to identify great employees is almost spooky: people you would never expect based on their background turn out to be amazing, while people whose résumés look snappy nevertheless produce poor results and are accurately spotted and removed. Its power of inference is so sharp, it’s almost as if the machine can peer into people’s souls, or predict the future.
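To see just how thin the decision layer of such a system can be, here is a minimal sketch in Python. Everything in it is invented for illustration: the feature names, the weights, and the thresholds. In a real system the weights would be learned by a deep network rather than written by hand, and that is precisely what would make them opaque.

```python
# A minimal sketch of the hypothetical payroll AI described above.
# All feature names, weights, and thresholds are invented assumptions.
from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    features: dict[str, float]  # e.g. normalized performance metrics


# Hand-set weights standing in for whatever a real model would learn.
WEIGHTS = {"sales_per_quarter": 0.7, "peer_review_score": 0.4, "tenure_years": 0.1}

def score(emp: Employee) -> float:
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in emp.features.items())

def decide(emp: Employee) -> str:
    s = score(emp)
    if s > 1.0:
        return "promote"
    if s < 0.3:
        return "terminate"
    return "retain"


staff = [
    Employee("Ana", {"sales_per_quarter": 1.4, "peer_review_score": 0.9}),
    Employee("Ben", {"sales_per_quarter": 0.2, "peer_review_score": 0.3}),
]
for emp in staff:
    # Note what is missing: no explanation is attached to the decision.
    print(emp.name, decide(emp))  # Ana promote / Ben terminate
```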

The difficulty is this: what happens if the machine makes a mistake? Who has the ken to contradict such a mighty, counter-intuitive deep learning system?

The deeper issue is the kinds of mistakes such an algorithm is likely to make. Most likely, an algorithm like this would be quite savvy at its intended function of maximizing efficiency and performance. But ours is a human society, with many considerations beyond raw efficiency.

Non-discrimination, for example, is also very important. An algorithm relentlessly focused on performance would ignore many factors humans consider important. Social interactions, for example, or workplace culture, are things we regard as vital but which an algorithm is very likely to struggle with.
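One partial safeguard is that the outputs of such a system can at least be audited by humans. Here is a minimal sketch of what such an audit might look like, comparing outcome rates across groups; the decision records, group labels, and the 0.2 disparity threshold are all invented for illustration.

```python
from collections import defaultdict

# Invented decision records: (group, decision) pairs, e.g. pulled from
# the hypothetical payroll AI's logs. Group labels are placeholders.
decisions = [
    ("group_a", "promote"), ("group_a", "retain"), ("group_a", "promote"),
    ("group_b", "retain"), ("group_b", "terminate"), ("group_b", "retain"),
]

def promotion_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [promotions, total]
    for group, decision in records:
        counts[group][1] += 1
        if decision == "promote":
            counts[group][0] += 1
    return {g: promoted / total for g, (promoted, total) in counts.items()}

rates = promotion_rates(decisions)
print(rates)  # {'group_a': 0.666..., 'group_b': 0.0}

# A gap like this is not proof of discrimination, but it is exactly the
# kind of signal that justifies demanding an explanation from the system.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity detected: human review warranted.")
```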

The danger of inscrutable AIs vested with tremendous confidence and trust is that they will make decisions that we puny humans lack the insight and computational power to countermand. We can easily imagine a world where seemingly all-knowing machines make important decisions about us, and where a single person harmed by such a decision is ignored by the rest of us, who assume that the machine had good reasons to act as it did.


Conclusion

AI has tremendous potential to revolutionize the world. But in many ways the Hollywood spectacle of a “robot uprising” is a red herring, a distraction from the real issues we must confront: protecting people’s privacy, guarding against AI ‘mis-learning’ negative behaviors from patterns we may not know about, and the danger of placing great trust in AI we may not have the tools to second-guess.

There is a very real danger of an AI system proving itself so effective that everyone places such confidence in it that we lose our taste for self-governance and personal autonomy, trusting machines to do a better job instead. That trust in the machines, so very much like trust in central government, stands quite contrary to the American philosophical tradition of questioning authority, holding power accountable, and citizen involvement.

So what happens when those machines make decisions different from the ones we would prefer to make for ourselves? Would any individual person have the authority, or the wisdom, to stand against a decision from a system that had served everyone so well in the past?