Cybersecurity is becoming an integral part of artificial intelligence development, while AI itself plays a key role in protecting digital systems. It may look like a perfect pairing, but AI systems can withstand cyberattacks only when security measures are properly implemented.
Artificial intelligence (AI) has revolutionized many areas of technology, and its application in cybersecurity offers enormous opportunities. AI has the potential to transform digital protection as well, but its effectiveness will depend on how it is implemented and developed: on the one hand, AI supports defense against threats, and on the other, it increasingly generates those threats.
With AI, incident response can be automated, minimizing the time needed to respond to a report. AI-powered systems can automatically block malicious IP addresses or immediately isolate infected devices on the network. On the other hand, thanks to machine learning (ML), it is possible to analyze huge amounts of data in real time and identify anomalies in network traffic that may indicate attacks, intrusion attempts, or data leaks. In addition, AI helps reduce human error, which can be responsible for data leaks and security breaches, by performing routine tasks such as analyzing logs, detecting phishing, and assessing application risk levels. For businesses, this translates into real benefits – faster response times, lower incident handling costs, and better reputation protection.
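The anomaly-detection idea mentioned above can be sketched with a toy z-score check. This is an illustrative assumption, not a production technique: real ML-based systems use far richer traffic features and models, but the principle of flagging statistical outliers is the same.

```python
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return indices of values that deviate from the mean by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

# Hypothetical requests-per-minute from one client; the spike at index 8
# could indicate a scripted attack and trigger an automated block.
traffic = [42, 38, 45, 40, 37, 44, 41, 39, 900, 43]
print(find_anomalies(traffic))  # [8]
```

In a real deployment, the flagged index would feed an automated response, for example isolating the device or blocking the source IP, rather than just being printed.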
Of course, AI is not foolproof and should not be relied upon excessively. Errors in algorithms, poorly selected data, or lack of updates can lead to false alarms or real threats being ignored.
As we know, artificial intelligence is also used for criminal activities. Generative AI (e.g., deepfakes, voice cloning), which enables the creation of fake messages or identities with an extremely high degree of credibility, is particularly “popular” among cybercriminals. AI-assisted attacks are becoming increasingly common: analyzing security mechanisms to find their weaknesses, manipulating data to take control of the models themselves, and data poisoning, i.e., introducing malicious data into a model to cause unintended system behavior. The risk of training data leaking during the model learning process is also significant.
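The data-poisoning risk described above can be illustrated with a deliberately minimal example. The classifier, data, and labels here are all hypothetical: a 1-nearest-neighbour model shows how a single mislabeled point injected into the training set can flip a prediction.

```python
def predict(training, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training, key=lambda sample: dist2(sample[0], x))[1]

# Toy training set: two clusters of (feature, feature) points.
clean = [((1, 1), "benign"), ((2, 1), "benign"),
         ((8, 9), "malicious"), ((9, 8), "malicious")]

# Poisoning: the attacker injects one mislabeled point inside the benign cluster.
poisoned = clean + [((1.4, 1.4), "malicious")]

print(predict(clean, (1.5, 1.5)))     # benign
print(predict(poisoned, (1.5, 1.5)))  # malicious
```

Real attacks are subtler and target far larger models, but the mechanism is the same: corrupted training data silently changes the system's behavior, which is why the integrity of training data is itself a security concern.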

Although the law is still catching up with technological developments, the European Union has already built a solid regulatory foundation aimed at protecting against digital threats and ensuring the sustainable development of AI technology. These include the NIS2 Directive, the AI Act, the Cyber Resilience Act, and, in Poland, the draft amendment to the Act on the National Cybersecurity System.
The draft amendment to the Act on the National Cybersecurity System defines the obligations of entities that are key and important for the security of IT systems in Poland, ensuring coordination of activities at the national level. The NIS2 Directive introduces uniform cybersecurity standards in the European Union, increasing the protection of important and key sectors, including critical service providers. The AI Act establishes a legal framework for the development and implementation of AI systems, emphasizing security, transparency, and technological responsibility, while the Cyber Resilience Act focuses on ensuring the security of digital products and their software.
The Directive on measures for a high common level of cybersecurity across the Union (NIS2) calls on Member States to introduce regulations promoting the use of innovative technologies, including artificial intelligence, which could improve the effectiveness of detecting and preventing cyberattacks.
The recitals of the Directive explicitly state that key and important entities should adopt a wide range of basic cyber hygiene practices and, where appropriate, strive to integrate technologies that improve cybersecurity, such as systems based on artificial intelligence or machine learning, in order to improve their capabilities and strengthen the security of networks and information systems. However, it is emphasized that the use of innovative technologies, including artificial intelligence, should comply with EU data protection rules (including accuracy, minimization, fairness, transparency, and data security).
The EU regulation establishing harmonized rules on artificial intelligence entered into force on August 1, 2024, but will only apply from August 2, 2026.
To ensure a level of cybersecurity appropriate to the risk, providers of high-risk AI systems should implement appropriate security measures under the AI Act. These should include a variety of security control mechanisms to effectively prevent threats and respond to potential incidents. Where applicable, the information and communication technology (ICT) infrastructure on which the system relies should also be taken into account. This allows for a more comprehensive approach to protecting both the AI system itself and the entire infrastructure.
The Cyber Resilience Act, which entered into force on December 10, 2024, and will only be fully applicable from December 11, 2027, introduces harmonized cybersecurity requirements covering all stages of a digital product's life: from design, through development and production, to market launch. The aim of this act is to harmonize regulations across European Union member states in order to increase security in the digital space and eliminate regulatory inconsistencies between countries.
These regulations will apply to all products that directly or indirectly connect to other devices or networks (with some exceptions), such as electronic equipment and software. One of the key requirements will be the obligation to affix the CE marking to products to confirm that they meet the standards of the new regulation in terms of cybersecurity, health, and environmental protection.
The Cyber Resilience Act also aims to support consumers by enabling them to make informed choices when purchasing and using products containing digital elements. Thanks to the new regulations, users will be able to better assess the level of cybersecurity offered by a given product.
To maximize the potential of AI, companies must take a multi-layered approach, while remaining aware of both the opportunities and risks that AI presents. From a legal security perspective, compliance with the regulatory framework outlined above is currently a key consideration when implementing AI-based systems.
AI can be a very effective tool for ensuring the security of an organization, as long as it is implemented and used in a responsible and transparent manner, within the framework of applicable legal regulations.
Although some legal regulations will not apply until 2026 or even 2027, it is worth considering implementing appropriate systems and adapting business activities to these regulations now. For companies, this means investing not only in technology, but also in legal, ethical, and organizational awareness.
[1] Directive (EU) 2022/2555 of the European Parliament and of the Council of December 14, 2022, on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No. 910/2014 and Directive (EU) 2018/1972 and repealing Directive (EU) 2016/1148 (NIS 2 Directive) (OJ EU L 333, p. 80).
[2] Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, on the establishment of harmonized rules on artificial intelligence and amending Regulations (EC) No. 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Act on Artificial Intelligence) Text with EEA relevance (OJ EU L 2024, item 1689).
[3] Regulation (EU) 2024/2847 of the European Parliament and of the Council of October 23, 2024, on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No. 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act) (OJ EU L 2024, item 2847).