Who crashed into my fence? The self-driving vehicle industry, AI and the Law of Negligence
Every year, over 1.3 million people die in car accidents, and according to research from 2016, about 94% of crashes can be linked to human error. Could the development of the self-driving vehicle industry be the answer? And what about negligence - can this area of law accommodate self-driving vehicles at their various levels of AI advancement?
In 2015, five American states approved the testing of self-driving vehicles, which was regarded as one of the most significant steps in the automotive industry - an overhaul of the means by which people commute. Tesla, one of the most prominent producers of self-driving vehicles, has claimed that the number of accidents could be halved if cars were driven on Autopilot. But let's start with what counts as a self-driving car.
Artificial Intelligence types and levels of autonomy
There are two main types of AI - narrow and general. Narrow AI is designed to perform one particular task, at which it improves over time and, more often than not, outperforms humans. General AI, on the other hand, is a more sophisticated system that mirrors human cognitive abilities and intelligence, including self-learning and general problem-solving. Some scientists are sceptical and believe that general AI is nearly impossible to achieve. Here, however, we focus on self-driving vehicles, which are a form of narrow AI.
The Society of Automotive Engineers established a widely respected classification of vehicle autonomy running from Level Zero to Level Five. While Level Zero means that the human is fully responsible for the vehicle's performance (including accidents), Level Three uses narrow AI systems to support the driving but still requires the driver to remain legally responsible for intervening - for example, stopping the car when necessary. This is what is known as conditional automation. Levels Four and Five do not require human input: the system handles the entire driving task and adapts its own behaviour, meaning that the driver does not hold any control over it.
Moreover, neither of these levels of automation requires pedals or a steering wheel to be installed in the vehicle. At the moment, such cars are undergoing testing in various countries around the globe and are not available to the wider public. This leads to a further question - who would be responsible for a road accident involving a self-driving vehicle?
Who is to be held liable for damage caused by an AI vehicle?
Negligence is a body of law created by humans for humans and can be described as an act, or a failure to act, that causes harm to another person. A certain standard of care is expected, and in the case of an accident the court applies the 'reasonable person' test (what would a reasonable person have done?). Negligence would still apply to conditionally automated vehicles, since the human would remain legally obliged to intervene if the AI system failed; a failure to do so could make the driver liable for the injuries caused to the victim.
In the case of a self-driving vehicle that requires no human input, the 'reasonable person' test cannot be applied to an inanimate AI system and would therefore fail before the court. Some would argue that the designer of the self-driving vehicle should be held liable for the damage. Under the traditional legal framework, however, it may be difficult to prove that the designer is liable for an AI system that develops and advances by itself. Foreseeability is also part of the law of negligence: victims must prove that the harm was predictable, which could be difficult or nearly impossible to establish given the complexity of AI and the absence of standards previously reviewed by the courts in the United Kingdom. This, in turn, may cause compensation issues and real distress to the victims of accidents caused by AI machines.
Some legal theorists argue that product liability, a branch of tort law, could potentially hold manufacturers responsible for the damage, particularly because it does not require an intent to commit an offence (unlike criminal law). It is argued that strict liability should be imposed on those connected with the operation of such technologies, or those who benefit from it - such as the multi-million-pound companies creating AI vehicles.
Is the Law of Negligence hindering the development of AI?
Each vehicle would also have to receive regular software updates to ensure the safety of people on the streets. If the updates failed to meet that standard, part of the maintenance process would be unsatisfactory and the car itself could be classed as a defective product. The main drawback of such an approach is that it would hinder the development of AI-based vehicles: few companies could afford the compensation and legal costs with little or no support from the government or from laws that promote technological advancement.
It is crucial to ensure a fair balance between technological development and society's safety on the road. The No-Fault Compensation System (NFCS) established under the Automated and Electric Vehicles Act could be a way forward in this matter. Every driver in the United Kingdom is already obliged to hold a valid insurance policy.
Creating such a policy for self-driving and semi-automated vehicles would feel like a natural step forward into this new way of driving. It could build a safety net for victims, ensuring that their protection is the main priority, while also preventing legal and financial penalties from hindering technological development. The Automated and Electric Vehicles Act also addresses the concern of fraudulent claims or improper use of the system by consumers, such as unauthorised software alterations or failure to install critical software updates.
The use of self-driving cars in day-to-day life may benefit society in the long run, leading to a substantial drop in the number of road traffic accidents. If manufacturers are to support such a change, it may be reasonable not to hold them liable to an extent that would hinder the development of Artificial Intelligence. The self-driving vehicle industry has already had a tremendous impact on the development of the Law of Negligence, and the law is still nowhere near where it will be in ten years' time. Further changes are inevitable if we are to have an up-to-date legal system that supports technological development and provides people with fair access to justice in equal measure. Without them, AI development would be massively hindered, and the chance to limit the number of deaths on the roads - lost.