17.03.2017
NEW TECH & INNOVATIONS

“I, Robot” – can the development of robotics and artificial intelligence be regulated by the law? (part one)

A robot may not harm humanity, or by inaction, allow humanity to come to harm – this superior law formulated many years ago by Isaac Asimov was long regarded as a manifestation of futuristic thinking. Today it is no longer a question of the future, but of reality. An era is beginning in which we will be surrounded by androids built to look like humans, and in which many decisions will be taken by autonomous artificial intelligence.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Runaround, I. Asimov, 1942

Many years later these laws were expanded with a superior one:

  1. A robot may not harm humanity, or by inaction, allow humanity to come to harm. (Robots and Empire, I. Asimov 1985).

We now face the challenge of regulating the responsibility of artificial intelligence by law. The rules formulated by Asimov years ago should, for the time being, be treated as guidelines for manufacturers, constructors, programmers and machine operators – at least until robots and artificial intelligence become self-aware and that vision becomes our reality.

A patent for a robot

Over the past ten years the number of robotics patent applications has tripled, while annual robot sales grew by 17 percent a year between 2010 and 2014, a figure that has now reached 29 percent (data of the International Federation of Robotics, IFR). The dynamic development of robots, artificial intelligence, cognitive technologies, and the processing and analysis of big data will lead to robots taking over much of the work currently performed by humans. Robots are already entering areas once reserved exclusively for the human mind – for example, advanced software analyses agreements between contracting parties with far greater efficiency, greatly streamlining the negotiation process and eliminating the risk of human error.

An example of such an application is DoNotPay, dubbed the first robot lawyer – in fact a so-called chatbot, an advanced program that interacts with users and has already helped more than 160,000 drivers in London and New York contest traffic fines. It has now been adapted to help immigrants seeking asylum, providing them with free legal advice. A similar example is IBM ROSS, tested in law firms around the world, which analyses current legislation and case law in a fraction of a second and allows users to communicate with the system in their native language – it answers questions with specific responses and points to solutions. Advanced work is also under way on the use of artificial intelligence to resolve court disputes (the Intelligent Trial 1.0 system in China).

Robots are not only a bright future. There is also a fear of whether, in the coming years, artificial intelligence will outstrip human intellectual capabilities, and of how comfortable we can feel with that. Data security issues and concerns about the direct, physical risks of using robots are important as well.

Androids within the law

Certainly, people will use robots and artificial intelligence to create new things, and will also use them in their relationships with other people. Here lies another risk: knowing human nature, will such use not create dangers for human beings? The civil liability of robots should be regulated in this respect, and built-in limitations should be introduced to constrain what they can do.

Today, with the ability to learn and with programmable decision-making, robots can interact with and influence their environment. What if the result of such activity – for example, by a team of robots – is a piece of music? Will it be possible to recognise it as a creative work and grant it legal protection?

Personal data and privacy issues will also be important, as both artificial intelligence and robots will be able to gain almost unlimited access to information about us, surrounding us with imperceptible influence and “care”. Assuming that they will communicate continuously without human intervention, or even without human knowledge, this can lead to a situation in which the systems begin to evolve and make decisions “for our own good” in order to eliminate the risks created by the human race itself (visions of such a future can be seen in the ‘Eagle Eye’ and ‘I, Robot’ films).

This is not the end of the challenges. There are also extensive gaps regarding the contractual liability of artificial intelligence designed to choose contractors, negotiate contract terms, conclude contracts and make decisions. Existing provisions, both in Poland and worldwide, do not keep up with this. In Poland, one of the recent amendments to the Civil Code merely introduced the electronic form of legal transactions (Act of 10 July 2015 amending the Civil Code, the Code of Civil Procedure and certain other acts, Journal of Laws of 2015, item 1311). Leaving the use of robots governed on a ‘you take this robot as it is’ basis does not seem sufficient.

Robot just like a human? Rights and responsibilities

The fundamental question remains how to assign robots to one of the existing legal categories – should they be treated as animals or objects, as legal persons, or even as natural persons? The pragmatic approach currently points toward creating a completely new category, based on the characteristics specific to this type of entity, to which a specific set of rights and obligations would be attributed, together with appropriate rules on liability for damages.

It is certain that, in the light of currently applicable law, robots will not themselves be held liable for acts or omissions that cause harm to a third party. The analysis of such events will always lead back to a human operator of the machine, who, in the course of his or her activity, should anticipate the effects of his or her actions or take steps in time to prevent hazards. What will happen, however, when we reach such a level of advancement that artificial intelligence or robots are able to make their own decisions without human intervention?

So far, there are no grounds for attributing responsibility to robots and artificial intelligence on the basis of fault; it should rather be classified as liability based on risk. In such a case, it will be necessary to establish a causal link between the robot’s behaviour and the damage suffered by the injured person, who will have to present evidence of the injury. This should not, however, exclude the responsibility of the machine’s constructor and programmer, who is in fact its creator. As a result, there are proposals for compulsory insurance for robots, modelled on compulsory car insurance.

The world is preparing for changes

The responsibility of robots is widely discussed around the world, especially in the United States, Japan, China and South Korea (where a Robotics Ethics Code was developed a few years ago). These countries contribute the most to the development of robotics, so they are pursuing advanced work to amend their legal systems to regulate new robotic applications.

Europe does not lag behind. The European Parliament is attempting to create a legal framework for artificial intelligence. Work has already begun on creating a European Robotics and Artificial Intelligence Agency, whose aim is to provide Member States with the technical and ethical expertise required to respond appropriately to the new opportunities and challenges posed by the technological development of robotics. In turn, the European Commission is working on a proposal for a common European definition of intelligent autonomous robots and their subcategories.

On the one hand, we see extraordinary technological changes taking place right before our eyes. On the other, as lawyers, we are fascinated by how the law tries to keep up with them, and how states try to create a predictable framework for navigating the world of innovation. No one can predict the final outcome of this race. Undoubtedly, we live in interesting times.

[We are aware that this text can only whet readers’ curiosity; we will continue the topic in subsequent articles. Follow us on Facebook, where we announce the latest posts.]

#AI #artificial intelligence #responsibility #robot #robots
