Every day we witness innovations in science and technology, adapting to changes that, although they may seem small, are steadily becoming part of our everyday lives. One of these innovations is artificial intelligence (AI), which is becoming a key tool in protecting information systems and fighting cybercrime. Its ability to analyze large amounts of data and react in real time enables efficient detection and prevention of cyberattacks. However, the topic also raises complex legal questions about liability and the protection of personal data and intellectual property rights. The main question is: can the laws as we know them today even encompass such complexity?
To begin with, what makes AI superior to “classical” software? While traditional software operates according to predefined rules, AI learns from data and can recognize new, previously unseen patterns of behavior. It learns and remembers user data, adapts to the user, and on the basis of that stored data selects the best response and filters content – and perhaps collects more data than it needs?
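To make the contrast concrete, here is a minimal sketch in Python, assuming scikit-learn and a toy dataset invented purely for illustration: a fixed blacklist rule placed next to a tiny learned phishing classifier that can flag a message no rule anticipated.

```python
# A minimal sketch of rule-based vs. learning-based filtering.
# The blacklist, training texts, and sender domains are illustrative
# assumptions, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

BLOCKED_SENDERS = {"phish.example.com"}  # classical software: a fixed rule

def rule_based_filter(sender_domain: str) -> bool:
    """Flags only what a predefined rule explicitly anticipates."""
    return sender_domain in BLOCKED_SENDERS

# Learning-based: a tiny text classifier that generalizes from examples.
train_texts = [
    "verify your account password immediately",   # phishing
    "your invoice is attached, click this link",  # phishing
    "meeting moved to 3pm, see agenda",           # legitimate
    "quarterly report draft for review",          # legitimate
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A message from a sender no rule has ever seen can still be scored,
# because the model evaluates its wording rather than matching a list.
new_message = "urgent: confirm your password to avoid account suspension"
print(rule_based_filter("newdomain.example.org"))  # False - the rule misses it
print(model.predict_proba([new_message])[0][1])    # learned phishing probability
```

The point is not this particular model but the shift it illustrates: the rule matches only what its author foresaw, while the classifier generalizes from examples – which is also why it must be fed data in the first place.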
Main Functions of AI in Cybersecurity
Today, artificial intelligence is widely used in cybersecurity – for virus detection, phishing prevention, user behavior analysis, and automated threat response. Gmail uses AI to filter suspicious emails, while tools such as CrowdStrike Falcon autonomously block harmful activity. Still, such practices must be weighed against privacy and data protection rights – is it permissible for a system without human oversight to make decisions with potentially serious consequences, such as access denial or surveillance of users?
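As an illustration of the user behavior analysis mentioned above, the following sketch runs an anomaly detector (scikit-learn’s IsolationForest) over invented login telemetry. The features and threshold are assumptions for the example; products like CrowdStrike Falcon rely on far richer signals.

```python
# A minimal sketch of AI-driven user behavior analysis.
# Feature columns and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour, MB downloaded, failed login attempts]
normal_activity = np.array([
    [9, 120, 0], [10, 80, 1], [14, 200, 0], [11, 95, 0],
    [13, 150, 1], [9, 110, 0], [15, 90, 0], [10, 130, 0],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

# A 3 a.m. session with a huge download and repeated failed logins.
suspicious = np.array([[3, 5000, 8]])
if detector.predict(suspicious)[0] == -1:  # -1 marks an anomaly
    # An autonomous response (e.g. locking the account) would fire here;
    # whether it may fire without human review is exactly the legal issue.
    print("anomaly detected - flag session for response")
```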
Legal Liability for AI System Errors
How many times have you asked an AI a question, then double-checked its answer or done your own research, only to discover that the AI’s answer was wrong? Media and information literacy are becoming as important as grammar and spelling, and accurate information has become a vital tool. If the tool is inaccurate, it can cause harm both to its owner and to third parties. AI errors therefore represent one of the key issues of contemporary legal science.
When an AI system autonomously decides – for instance, blocking a user or mistakenly identifying safe content as a threat – the question arises: Who is responsible? Since AI currently lacks legal personhood, responsibility may fall on the software manufacturer, the system administrator, or the organization using the technology.
In practice, courts most often rely on the existing law of obligations and the principle of strict liability, although this does not resolve all the challenges posed by autonomous machine decision-making. The question is whether the law should evolve and whether, as proposed in the EU, a new category – electronic personhood – should be introduced to regulate these situations properly.
AI and Copyright Protection
AI systems are trained on large quantities of existing copyrighted works – texts, images, music – often without the authors’ consent, which may infringe their rights. This becomes particularly problematic when AI creates works in a recognizable artistic style, such as “Ghibli-style” illustrations. In a recent, well-known case, artists protested because an AI model generated images imitating the distinctive style of Japan’s Studio Ghibli without the rights holders’ permission. Such examples raise the question: does a “style” itself enjoy legal protection, and does AI circumvent the law through its “imitations”? What is certain, unfortunately, is that work an artist invested time, love, and effort into can now be mimicked in a few clicks.
The current legal framework still does not fully answer the challenges posed by the use of artificial intelligence in cybersecurity and digital creativity. The European Union has adopted the AI Act, the first comprehensive law on artificial intelligence, which classifies AI systems by risk level – with particularly strict rules for “high-risk” systems, including those used in security. In addition, the General Data Protection Regulation (GDPR) restricts automated decision-making without human oversight (Article 22), which directly affects AI systems that make decisions independently.
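What Article 22 implies for system design can be sketched in a few lines: a hypothetical threat-scoring pipeline in which decisions with serious consequences for a user are routed to a human reviewer instead of being executed automatically. The threshold and action names are illustrative assumptions, not a prescribed compliance mechanism.

```python
# A minimal sketch of human oversight over automated decisions,
# motivated by GDPR Article 22. All names and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # e.g. "block_user"
    threat_score: float  # model output in [0, 1]

AUTO_EXECUTE_BELOW = 0.5  # assumed cutoff for low-impact, automatic handling

def dispatch(decision: Decision) -> str:
    """Route high-impact automated decisions to a human reviewer."""
    if decision.threat_score < AUTO_EXECUTE_BELOW:
        return f"auto: no action taken, score={decision.threat_score:.2f}"
    # Significant effect on the user: require human confirmation first.
    return f"queued for human review: {decision.action}"

print(dispatch(Decision("block_user", 0.35)))  # handled automatically
print(dispatch(Decision("block_user", 0.91)))  # escalated to a person
```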
Montenegro is in the process of aligning with European Union regulations, and clearer legal solutions will also be necessary within the Law on Copyright and Related Rights, the Law on Personal Data Protection, and the Law on Obligations.
The development of artificial intelligence exposes deep gaps in copyright law and other areas of law. Beyond that, the ethical aspect is significant: does encouraging such practices undermine the value and integrity of human creativity? Legal systems, including ours, still lag behind technology and do not yet provide clear answers to many questions. It is becoming clear that artificial intelligence can be a good servant but a dangerous master – depending on how we use it and what boundaries we set.
Although AI offers enormous potential, especially in security, its development must be accompanied by modern and fair legal regulation – particularly in the area of copyright law, which must remain the cornerstone for protecting creative work in the digital age.