Robotization and automation have been part of developed economies for several decades. Further development of these areas through the implementation of artificial intelligence and other forms of automation is an inevitable process, and it offers a unique opportunity for a technological leap and improved efficiency not only for individual businesses but also for entire countries. All global powers are aware of this.
In recent years, the U.S., the European Union, and China have announced strategies and introduced the first pieces of legislation to promote, among other things, the large-scale development of artificial intelligence and automation. Each of the global players hopes to become a leader in this field, but the approaches vary. While it can be assumed that China will focus its efforts on maximizing production and financial efficiency, Western countries see certain risks associated with the increasing use of machines, robots, and computer programs, both in the economy and in other areas of life.
European countries, while recognizing the opportunities arising from technological progress, are also taking into account the risks associated with this process. Automation undoubtedly poses a threat to jobs, the retention and generation of which is at the heart of key EU strategies. In turn, the implementation of artificial intelligence may threaten the right to privacy (e.g. by collecting and processing data without the knowledge of the person concerned), result in discrimination (even unintentional), and hinder the exercise of labour or consumer rights. A further challenge is that the increasing use of artificial intelligence involves the creation of very large databases. As technology evolves, security lapses within companies can therefore have increasingly serious consequences, both for the companies themselves and for the individuals whose data they process.
Polish and EU legislation currently lacks a comprehensive regulation defining the principles of robotics and automation. Therefore, before embarking on revolutionary moves in this field, each company should analyze its plans in light of, among others, labour law, consumer rights, personal data protection, and sector-specific regulations. The common denominator of these regulations is to guarantee the individual a minimum of rights and safeguards, primarily by respecting privacy, providing security measures and human oversight of automated processes (according to the principle: the more automation, the more testing and requirements), ensuring the possibility of appealing against a decision made automatically, guaranteeing the transparency of the automated process, and providing the individual with reliable information on how their data is used and the scope of the data processed. It should also be assumed that, for any innovative automation project, public supervisory authorities will need to be able to "check" the assumptions of the planned project and, as far as possible, that the project should be agreed with the institutions and organizations responsible for protecting individual rights. This may include consultation with, among others, organizations concerned with the protection of workers' rights, consumer protection, persons with disabilities, or children.
Labour law requirements
Undoubtedly, many challenges connected with the automation process arise from labour law provisions. The Labour Code indicates, among others, the scope of data that can be processed within the employment relationship. It also strictly regulates, among other things, the principles of video monitoring of employees and of the equipment they use. When designing an automation process that directly or indirectly involves the company's staff, it is important to keep these limitations in mind. Any actions that go beyond the scope of the Labour Code should be based on voluntariness and must not cause adverse effects for employees. It can easily be argued that an employee's consent to certain rules was coerced by the employer and therefore cannot be considered effective. Importantly, action must be taken to ensure that new solutions do not lead to discrimination. This deserves particular attention when implementing artificial intelligence solutions. Software that measures performance and suggests decisions regarding promotions, raises, or dismissals may fail to take into account the specific situation of some groups and may, for example, discriminate against people with disabilities or illnesses, or those caring for children. For these reasons, the introduction of automated processes in employee matters should be subject to special oversight.
Technological development, and in particular the use of artificial intelligence, involves the use of increasingly sophisticated databases. If a company uses databases containing data on legal entities in an automation process, it must verify that the process is permitted under the contracts concluded and that it will not disclose another company's confidential information. Where the data concerns natural persons who are entrepreneurs, the requirements of the General Data Protection Regulation (GDPR), which apply in full to self-employed persons, must additionally be taken into account.
Privacy, dignity, and consumer rights
When dealing with customers who are individuals, the implementation of automated solutions may face further limitations. The use of automated behavioral tracking and analysis systems may give rise to allegations of privacy and dignity violations, even when such activities are performed anonymously (e.g., toys and home devices eavesdropping on or watching users). In addition, special care is required when dealing with consumers. For example, the use of chatbots that present a company's offer, or of systems that enable automated analysis of customer complaints, may be considered permissible as long as such solutions meet several legal requirements: they must not hinder customers from enforcing their rights, must not be misleading, and must allow customers to ask questions or file complaints in the traditional way. This is particularly important when a customer's case is so unusual that an algorithm would handle it incorrectly.
Automation using personal data can only be implemented in full compliance with the principles of the GDPR. This means that in any such project, the data subject should be comprehensively informed about the scope and purpose of the processing of their data.
Automated processes usually involve profiling, i.e. the automated processing of data to assess personal factors (in particular work performance, economic situation, health, interests, or reliability). The data subject should be informed about profiling and its consequences. Furthermore, making decisions concerning the data subject (e.g. granting a loan, making a personalized offer, etc.) on the basis of profiling is only permitted when required by law, when necessary for the performance of a contract, or when consent has been given. In practice, these principles mean that profiling is allowed in specific situations indicated by law or when it is necessary for the performance of a contract with the person concerned, provided that the principles and effects of the profiling are accurately communicated. In other cases, such as when profiling is used for marketing or analytical purposes (for example, developing artificial intelligence capabilities based on a customer database), the activity requires voluntary consent. In such situations, the data subject should also be guaranteed the right to obtain human intervention, to express their position, and to know the reasoning behind decisions based on profiling.
Implementing innovative automation solutions may raise concerns among the individuals affected and may enter areas where the rules are not clear. It is therefore worthwhile to analyze such plans thoroughly from a legal perspective and to implement the "privacy by design" principle arising from the GDPR (taking privacy into account already at the design stage). Negligence in this respect may not only bring the project to a halt but also expose the company to financial penalties (up to EUR 20 million under the GDPR) as well as to claims by persons whose rights have been infringed.
The article appeared on pages 62–64 of Miesięcznik Automatyki, nr 3/2021 – https://automatykaonline.pl/Automatyka/Roczniki/2021/3-2021.
Author – Bartosz Mysiak – legal adviser at LSW Leśnodorski Ślusarek i Wspólnicy. He specializes in intellectual property law, new technologies law, media and advertising law, and personal data protection. In his professional practice, he focuses on providing services to companies in the field of e-commerce and new technologies, interactive advertising agencies, and entities related to science and research – in particular by providing opinions on innovative projects and drafting and negotiating complex contracts.