Yonsei News

Who is Liable for Torts by AI Robots?

71st ICONS Lunch Forum with Prof. Byoung Cheol Oh of Yonsei Law School on the impending legal issues of AI

In early May of 2018, the Institute of Convergence Science (ICONS) at Yonsei University in Seoul, Korea held the 71st ICONS Lunch Forum with Professor Byoung Cheol Oh of Yonsei Law School on the impending legal issues of Artificial Intelligence.



71st Institute of Convergence Science (ICONS) Lunch Forum Summary


“Who is Liable for Torts by AI Robots?”

Professor Byoung Cheol Oh, Yonsei Law School


People usually focus on the benefits of Artificial Intelligence (AI), but today I would like to talk about the possible harms and damages it could bring. Before talking about the application of the law, we first have to start with the basic principles of tort liability.


The AI robot is a complicated issue. Our law addresses only things that currently exist, because problems arise only when something exists. There is, for instance, no law on aliens or unknown creatures from outer space. Because AI robots are not yet commercially available or widely used, there is no law on AI robots either. Today, we will therefore discuss the self-driving car, the closest thing we have to an AI robot.


The AI robot is a unique subject because it combines two things: AI and the robot. Until recently, the functions of AI were fairly simplistic — a professor in my Computer Science undergraduate years even said in class that AI was all a "bluff." There were three reasons for this. First, scientists tried to incorporate all relevant data into one standalone console and did not think it would be possible to process big data over the internet. Second, the response speed was too slow for any practical application. Third, the cost of hardware was prohibitive.


With the advent of AI, however, robots now have an autonomy — albeit pre-designed — that we have never seen before. After production there is no further human involvement, giving AI robots full autonomy.


The law recognizes only actions by a human as legal acts, and only humans are liable for torts. This is why, when dogs bite or attack other people, it is the owners of the dogs, not the dogs themselves, who are held accountable under tort liability. Therefore, when a self-driving car bumps into another car, this is not in itself a legal act under the traditional definition of illegality.


The key problem in the self-driving car debate is compensation for the damages incurred by self-driving cars. To do so, we must find grounds for tort liability.


The problem with AI robots is that it is difficult to locate the cause in a single actor. It is hard to imagine that the company that designed the AI's algorithm intentionally built an 'evil' robot from the start. And unless the company acted with such intent, it is also difficult to prove negligence, because the company could neither foresee such an accident nor was it within its capacity to prevent it. As for the people in the cars: by definition, if it is a self-driving car, there is no room for human intervention. It is therefore difficult to apply traditional negligence liability in these cases.


There are other forms of tort liability, such as product liability. As with an exploding television or a malfunctioning cola bottle, the defect arises in an area under the manufacturer's exclusive control, so product liability seems easy to apply to self-driving cars. However, even under strict product liability, the victims must prove the cause of the damage or malfunction. And the cause may lie not only in the algorithm, but also in the physical machine, in maintenance, or in the other car involved in the accident.


In conclusion, I suggest a probationary application of benefit liability, in which liability is grounded in one's reliance on the safety, convenience, and effectiveness of the robot and one's enjoyment of its benefits.




*Author’s Note

The Institute of Convergence Science (ICONS) was established to meet the demand for new knowledge through integrated and interdisciplinary approaches to studies in the humanities, arts, social sciences, and natural sciences. Embracing theoretical inquiry and practical applications, ICONS enables effective communication and collaboration between its 38 different research centers by establishing a comprehensive network of researchers drawn from all four of Yonsei University’s campuses.

ICONS promotes creative and innovative convergence research while encouraging sustainable development in each of its research centers. The cooperation and collaboration enabled by ICONS’s vast research network will optimize Yonsei’s research environment while enhancing Yonsei’s status as a world-class research institution.



