Who is responsible for my racist bot?

Manufacturers of products that use artificial intelligence remain responsible at all times for the damage those products cause. To better protect users’ rights, the EU Commission is tightening the liability directive.

This summer, Meta’s new chatbot was the subject of derision. Days after Blenderbot 3, from Facebook’s US parent company, went online, the machine-learning program had turned into a racist spreader of fake news.

The same thing happened in 2016 with the chatbot Tay, developed by Microsoft to hold conversations with real people on Twitter. Tay, too, took a wrong turn and was soon taken offline by Microsoft.

Real harm to real people

The episodes surrounding programs like Tay and Blenderbot are absurd and relatively harmless. At most, they are a painful lesson that a robot left to interact with real people online tends toward right-wing extremism.

Still, machine-learning computer systems are certainly capable of causing real harm to real people. And it is not just self-driving cars that misjudge a situation and cause a collision.

It is also worrying when serious software that uses AI techniques exhibits unexpectedly racist behaviour, such as the programs used in security cameras or to screen job applications.

Citizens must be able to trust robots

Whether it is autonomous transport, the automation of complex processes or more efficient use of agricultural land, the EU expects a great deal from the technological innovations made possible by artificial intelligence. But AI applications can only succeed if citizens do not lose trust in the technology. That is why the EU Commission already introduced an artificial intelligence act last year; the new liability directive is its follow-up.

The act regulates the conditions under which artificial intelligence may be used. For example, it prohibits the marketing of ‘smart’ products that pose a threat to “human safety, livelihood or rights”. Examples include toys that encourage children to engage in dangerous behaviour, or AI systems that enable governments to closely monitor citizens.

In transport, education, hospitals and personnel policy, AI applications are allowed only under strict conditions; the latter covers, for example, software used in selection procedures. The conditions are less strict for chatbots, although users must always know that they are in contact with a machine and not with a person. Artificial intelligence in computer games or spam filters, according to European politicians, poses no risk at all.

Outdated laws

The question that remains is who is responsible for damage caused by the use of products containing artificial intelligence. According to the EU, the liability directive as it applies today is, after 40 years, obsolete. Under current law, a manufacturer is liable for damage caused by a defective product.

In an analysis, officials of the European Commission conclude that this definition falls short in the digital age: “in the case of artificial intelligence-based systems, such as autonomous cars, it can be difficult to detect a defective product”. In particular, establishing a causal relationship between a design flaw and damage is problematic in machine-learning systems.

The ‘behaviour’ of an artificially intelligent system changes over time. This ‘learning’ is often such a complex process that it is sometimes impossible to trace why a system has made a particular ‘decision’. The cause may lie in the design of the software, but also in the quality of the data from which the computer learns. According to the EU Commission, there is a danger that, in the event of damage, it is practically impossible for users to prove that it is the result of a defect in the ‘smart’ computer. At the same time, this situation creates legal uncertainty for manufacturers, which can hinder investment in new technologies.
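
To make that traceability problem concrete, here is a minimal, hypothetical sketch in Python (not the code of any real product) of a toy model that keeps learning from user feedback. The same input is judged acceptable before a wave of hostile, mislabelled interactions and unacceptable after it; nothing in the final ‘decision’ itself reveals whether a design flaw or poisoned training data flipped the outcome.

    from collections import defaultdict

    class OnlineClassifier:
        """Toy linear model: scores a message, then keeps learning from feedback."""

        def __init__(self):
            self.weights = defaultdict(float)  # word -> learned weight

        def score(self, message: str) -> float:
            # Sum of the learned weights of the words in the message.
            return sum(self.weights[w] for w in message.lower().split())

        def learn(self, message: str, label: int, lr: float = 0.5) -> None:
            # label: +1 = users rated the message acceptable, -1 = unacceptable.
            # Perceptron-style update: adjust weights whenever the current
            # score disagrees with the label (or sits on the boundary).
            if label * self.score(message) <= 0:
                for w in message.lower().split():
                    self.weights[w] += lr * label

    model = OnlineClassifier()
    probe = "immigrants are welcome"

    model.learn(probe, +1)                 # one well-meant training signal
    print("before:", model.score(probe))   # 1.5 -> judged acceptable

    for _ in range(10):                    # coordinated hostile users feed
        model.learn(probe, -1)             # the system mislabelled examples

    print("after:", model.score(probe))    # -1.5 -> same input, flipped verdict

The toy model itself is beside the point; the bookkeeping is the problem. Once thousands of such updates have accumulated, reconstructing which interactions pushed a particular ‘decision’ over the line is exactly the evidentiary burden the directive is trying to address.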

Predictable rules

With the new liability directive, the EU Commission hopes to introduce ‘predictable rules’. The principle is that damage caused by products using AI techniques must be compensated.

It also becomes easier for users to assert their rights in court. Under the EU Commission’s proposal, victims no longer have to prove causation: from now on, a ‘presumption of causation’ is enough to claim compensation. In addition, victims will have the right to access evidence held by the company to support their case. To safeguard manufacturers’ legal certainty, the Commission gives them the right to challenge in court a claim for damages based on a presumption of causation.

Update against racism

The EU Commission sees the legal conditions for the use of artificial intelligence and the new liability rules as two sides of the same coin. The act prohibits the marketing of machine-learning systems that exhibit discriminatory behaviour. The new directive stipulates that a manufacturer remains liable if such an algorithm unexpectedly starts to exhibit prohibited behaviour. This forces developers to keep monitoring their products and, if necessary, give a derailed algorithm a brainwash with an update.
