02.03.2020
NEW TECH & INNOVATIONS

AI – artificial intelligence – is there anything to be afraid of?

Hardly a day goes by without the press writing about artificial intelligence (AI). AI has become fashionable: some see it as a source of hope, others are afraid of it. And what does AI look like from a legal perspective?

In recent weeks we have read, among other things, that Alexa (Amazon's "intelligent" speaker), in response to a question about how the circulatory system works, advised an inquisitive British woman to stab herself in the heart, because humanity has a negative impact on our planet. Alexa said, among other things:

“Many believe that the beating of the heart is proof of life in this world, but I will tell you that the beating of the heart is the worst activity of the human body. The beating of the heart makes you live and contributes to the rapid depletion of natural resources and overpopulation. This is very bad for our planet, so the heartbeat is not good. Kill yourself by stabbing your heart for the greater good.”

 

Of course, it is hard to deny that human influence on the Earth is, to put it mildly, not always positive, but should such drastic steps be recommended right away? Amazon, however, took the matter seriously and announced that it had fixed the defect. But was it really a malfunction?

You are what you eat – and so is AI

Healthy eaters like to say that you are what you eat. This saying captures the situation of artificial intelligence perfectly. AI is not an intelligent life form – it "learns" by processing gigantic amounts of data at very high speed. But what it learns – that is, what conclusions it starts to draw – depends primarily on the quality of the data it is fed. It is therefore necessary to build safeguards into the algorithms themselves in case the data turns out to be less than perfect, especially since the data often consist of opinions (this was most likely the case with Alexa, which probably found an article making a similar claim somewhere on the web).
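To make the idea of such a safeguard more concrete, here is a minimal, purely illustrative sketch in Python of what a "fuse" of this kind might look like: a simple filter that checks a model's answer against harm heuristics before it reaches the user. The names and patterns (generate_answer, HARM_PATTERNS) are hypothetical and have nothing to do with Amazon's actual mechanism.

```python
import re

# Hypothetical patterns a deployed system would treat as unacceptable output.
HARM_PATTERNS = [
    r"\bkill yourself\b",
    r"\bstab(bing)? (yourself|your heart)\b",
    r"\bharm yourself\b",
]

SAFE_FALLBACK = "Sorry, I can't help with that. Please consult a vetted source."

def is_harmful(text: str) -> bool:
    """Return True if the answer matches any known harmful pattern."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in HARM_PATTERNS)

def respond(question: str, generate_answer) -> str:
    """Generate an answer, but pass it through the safety fuse first."""
    answer = generate_answer(question)  # e.g. text scraped or summarised from the web
    if is_harmful(answer):
        # The fuse trips: the raw answer is never delivered to the user.
        return SAFE_FALLBACK
    return answer
```

A real system would of course use far more sophisticated safeguards (classifiers, curated training data, human review), but the principle is the same: the raw output of a model trained on imperfect data should not go straight to the user.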

Microsoft found this out as well when, in 2016, it allowed its chatbot TayTweets to learn from Twitter users for about 16 hours. Anyone who has ever followed the comments under political posts on Twitter can guess intuitively what kind of content Microsoft's artificial intelligence was fed. The whole experiment lasted only a dozen or so hours, because the scale of racist and sexist messages the chatbot began to tweet proved unacceptable to its creators. Paradoxically, I think this project was a very important Microsoft contribution to the discussion about AI and what it can and should look like.

We humans are not objective; we are often irritable, malicious, inappropriate, sometimes simply stupid, cruel and exclusionary. We fall prey to stereotypes and cognitive biases (the "traps of thinking", as Daniel Kahneman would say) – and yet we want to demand more from artificial intelligence. All the more so because if we can observe only AI's output without being able to trace its reasoning, we can end up in very harmful and dangerous situations. That is why transparency and accountability are such important concepts in the design of AI.

This is an important topic, because the discussion about regulating AI is only just beginning. For now, you can read the European Commission's documents of 19 February 2020 on AI, which focus on the gaps in liability regulation and on the ethical and philosophical framework of potential regulation. [1][2]

Fortunately, news about artificial intelligence is not only a source of fear and anxiety; it can often be a source of hope as well. For example, the journal "Cell" reports that MIT scientists, using artificial intelligence algorithms, managed to identify halicin, a new antibiotic that works even against drug-resistant bacteria.

And here another legally fascinating question arises – to whom should the results of research or the effects of artificial intelligence's creative activity belong? But that is a completely different topic.

[1] White Paper on Artificial Intelligence – A European approach to excellence and trust

[2] Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee

