Responsibility for robots

Newspapers today are filled with headlines about the rise of artificial intelligence (AI) and robots. We are usually told that AI is going to take our jobs or take over the world. Both those topics are important, but they tend to obscure the debate on a more immediate issue: how can legal systems accommodate AI, and who or what is responsible for its actions?

What is AI?

There is no agreed definition of AI. Many popular definitions refer back to human intelligence. For instance, the House of Lords Select Committee on Artificial Intelligence recently said that AI means:

“Technologies with the ability to perform tasks that would otherwise require human intelligence.”

For present purposes it suffices to describe a widely-used category of computer programming which is agreed by most experts to constitute AI: machine learning. This is technology which gives artificial systems the ability to learn from and improve with experience, without being explicitly programmed.

In 2016, a machine learning system called AlphaGo defeated world champion Lee Sedol at the complex board game Go by four games to one. Although non-AI computers have been beating humans at board games for some time, the striking feature was the way in which AlphaGo won. In move 37 of game two, it made a decision which baffled human observers but turned out, several hours later, to be the winning move. Essentially, AlphaGo had developed a new way of playing.

This autonomy is what makes AI different from all other technology. Traditional computer programming involves a system being given a series of logical instructions in the form “if yes then X, if not then Y”. In machine learning, the system is usually given a goal but can make up its own instructions and then update them to improve performance over time.
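To make the contrast concrete, here is a minimal sketch (not taken from the article; the loan-approval task and all numbers are illustrative assumptions) of a hand-written rule alongside a simple learner that adjusts its own parameters from examples:

```python
# Minimal sketch: hand-coded rules vs. a system that learns from experience.
# The task and all numbers are illustrative only.

# Traditional programming: the human writes the decision logic explicitly.
def rule_based_approve(income, debt):
    # "if yes then X, if not then Y" logic, fixed by the programmer
    return income > 30 and debt < 10

# Machine learning (a very simple learner): the human supplies a goal
# (match the labelled examples) and the system updates its own parameters.
def train_learner(examples, steps=1000, lr=0.01):
    w_income, w_debt, bias = 0.0, 0.0, 0.0
    for _ in range(steps):
        for (income, debt), label in examples:
            score = w_income * income + w_debt * debt + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction
            # perceptron-style update: nudge the parameters to reduce the error
            w_income += lr * error * income
            w_debt += lr * error * debt
            bias += lr * error
    return w_income, w_debt, bias

# Toy training data: ((income in £k, debt in £k), approve?)
examples = [((40, 5), 1), ((20, 15), 0), ((35, 2), 1), ((25, 20), 0)]
print(train_learner(examples))  # the learned "instructions" were never written by hand
```

The point of the sketch is only that the learned parameters, and therefore the system's resulting behaviour, were never written down by the programmer.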

To qualify as AI, a program need not display consciousness. Nor is it necessary for AI to resemble anything like the multipurpose or “general” intelligence commonly displayed by humans. Instead, most current examples of AI are “narrow” in nature, meaning that they are suited to one task only. A narrow AI system may be able to translate the entire works of Shakespeare into Chinese, but it cannot make a cup of coffee.

Responsibility for AI

In legal terms, the key problem is that where AI systems make choices, there is no established framework for determining who or what should be held responsible for any harm caused. It might be the designer, owner, operator, some combination of these, or perhaps none of them. Established legal concepts such as vicarious liability and negligence are likely to become increasingly stretched as AI becomes yet more independent and unpredictable. The original designer may be able to argue that the AI’s subsequent development, perhaps in combination with data fed into it by a third party, represents an intervening act.

Two features of AI compound the difficulty of simply blaming the programmer. First, AI is becoming more independent; some AI systems are now able to develop new AI. Secondly, the barriers between programmers and users are being broken down as AI becomes more user-friendly. Think of training a dog rather than writing code.

If AI is incorporated into a product which causes damage, this might be governed by the EU Product Liability Directive 1985, but it remains uncertain whether the Directive applies where AI does not take a physical form, as with cloud-based services. Indeed, it is in light of these concerns about coverage for AI that the European Commission plans to issue new interpretative guidance on the Directive in 2019.

As yet, very few disputes involving AI have come to the courts. However, owing to the growing use of AI technologies across all industries, these problems of responsibility are no longer just academic questions. Responsibility for AI will need to be considered by lawyers of all types, whether they are advising on transactions involving AI, drafting insurance policies or involved in litigation and arbitration.

At least with regards to automated vehicles, the UK has attempted to address questions of liability via the Automated and Electric Vehicles Act 2018. The Act provides in section 2 that where an automated vehicle is insured, and an accident is caused by the vehicle when driving itself, then the insurer is liable for any damage. Extending compulsory insurance to cover accidents caused by autonomous vehicles provides certainty for victims by establishing a liable party. An owner/insurer cannot avoid liability by saying “the self-driving car did it”.

However, the Act is at best a partial solution, in that it does not address the underlying questions of responsibility for the AI’s actions. Under section 5, the insurer retains a right of recovery: “any other person liable to the injured party in respect of the accident is under the same liability to the insurer or vehicle owner.” Accordingly, the Act avoids the knotty questions of responsibility for harm caused by AI.

In section 3(2), the Act provides that the owner or insurer is not liable to the person in charge of the vehicle where the accident it caused was “wholly due to the person’s negligence in allowing the vehicle to begin driving itself when it was not appropriate to do so.” This provision shows that, at present, the focus is on when it is inappropriate to delegate a decision to AI. In the future, however, it might be considered negligent not to use AI, for instance where the AI system is demonstrably better than humans at the task.

Liability for self-driving cars has received much media attention, especially following a number of fatal crashes. Nonetheless, it is important to recall that problems in determining responsibility arise for all uses of AI and not just in vehicles. Similar questions of causation, foreseeability and intervening acts arise when considering the autonomous actions of AI in any area, whether it is a trading algorithm released on to the financial markets, or a medical program designed to diagnose patients.

Though the preceding discussion has centred on responsibility for harm caused by AI, the autonomous nature of these systems can also make it difficult to ascribe ownership where AI creates a design or product which, had it been made by a human, would attract intellectual property protections such as copyright or a patent.

Ethics of AI

Part of the difficulty in assigning responsibility for AI’s actions arises because at present we lack an ethical framework for how AI should take difficult decisions. This matters because we cannot begin to decide whether a person nominally responsible for an AI system is at fault unless we can say what the AI program ought to have done in a given situation.

A well-known thought experiment asks how a driverless car should decide whom to prioritise in a collision involving different groups of passengers and pedestrians. Human drivers are not usually held liable for instantaneous decisions of this type (such as swerving to avoid an obstacle), but AI systems might be held to higher, or at least different, standards, given their greater processing power and faster reactions. This issue was considered sufficiently serious in Germany that the Federal Ministry of Transport and Digital Infrastructure has issued guidance to car manufacturers on the principles to be taken into account.

Ethical trade-offs arise not just in how AI should act, but also in whether it should be used at all in some circumstances. Many have argued that AI should not make life or death decisions in autonomous weapons (or “killer robots”). The same debates might apply to an AI program used to triage patients in an accident and emergency department, where deciding the order in which patients are treated can determine who has a better or worse chance of survival.

It is sometimes said that there should always be a “human in the loop” to ratify or second-guess any decision by AI. Article 22 of the GDPR (the right not to be subject to a decision based solely on automated processing) may even make human input mandatory for certain important decisions, such as approvals of bank loans.
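In software terms, a “human in the loop” requirement often looks something like the sketch below (purely illustrative; the function names, thresholds and the notion of a “significant” decision are my assumptions, not drawn from the GDPR or any real lending system):

```python
# Illustrative "human in the loop" pattern: for significant decisions, the
# automated recommendation does not take effect until a human reviewer
# confirms or overturns it. All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    applicant_id: str
    automated_recommendation: str   # e.g. "approve" or "decline"
    significant: bool               # legal or similarly significant effect?
    human_ratified: bool = False
    final_outcome: Optional[str] = None

def automated_score(application: dict) -> str:
    # Stand-in for a model's output; the rule here is purely illustrative.
    return "approve" if application.get("credit_score", 0) >= 650 else "decline"

def decide(application: dict, human_review: Callable[[Decision], str]) -> Decision:
    decision = Decision(
        applicant_id=application["id"],
        automated_recommendation=automated_score(application),
        significant=application.get("loan_amount", 0) > 0,  # treat any loan as significant
    )
    if decision.significant:
        # The automated output is only a recommendation; a human makes it final.
        decision.final_outcome = human_review(decision)
        decision.human_ratified = True
    else:
        decision.final_outcome = decision.automated_recommendation
    return decision

# Example: the human reviewer simply confirms the recommendation.
result = decide({"id": "A-123", "credit_score": 700, "loan_amount": 5000},
                human_review=lambda d: d.automated_recommendation)
print(result.final_outcome, result.human_ratified)
```

A useful feature of this pattern is that the system records both the machine’s recommendation and the human’s ratification, which may also serve as evidence if questions of responsibility later arise.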

Rules for robots

In light of the issues identified above, one option is to treat AI as “business as usual”, leaving principles to be adapted and created organically through case law. The problem with this is that test cases are rarely the best way to create new policy, especially in situations where difficult ethical and societal decisions need to be made.

In my view, it would be preferable for governments to work proactively, together with companies, academia, the legal industry and the public to lay down rules tailored to AI. This could be by amendment to existing rules, or by creating entirely new ones.

Regardless of which option is chosen, AI will continue to have an increasing impact on all areas of the world economy. Lawyers will have a key role to play in shaping its relationship with society. With few regulations on AI currently in place, we have an important opportunity to build a new system.

Jacob Turner is the author of “Robot Rules: Regulating Artificial Intelligence”.
