In cyberspace, artificial intelligence is today both a defensive tool and a new source of risk. It is not yet all-powerful, but its impact is growing rapidly and is changing both the scale and the quality of attacks. Preparedness remains key: technical measures, sensible rules and, above all, informed users.
AI as both threat and shield
AI can automate attacks and malicious campaigns, increasing their scale and speed. At the same time, it opens new possibilities for active defense, from faster anomaly detection to adaptive responses to threats. Experts warn that this is an arms race: the same principles used to create deceptive content can also help detect it.
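To make the defensive side concrete, the sketch below shows one simple form of anomaly detection on a traffic metric. The data, the metric (requests per minute) and the z-score threshold are illustrative assumptions, not details from the article; real systems use far richer signals and models.

```python
# Minimal sketch of statistical anomaly detection on a traffic metric.
# The data and threshold are illustrative assumptions: we flag minutes
# whose request count deviates strongly from the overall baseline.
from statistics import mean, stdev

def find_anomalies(requests_per_minute, z_threshold=2.5):
    """Return indices of minutes whose volume is a statistical outlier."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    if sigma == 0:
        return []
    return [
        i for i, count in enumerate(requests_per_minute)
        if abs(count - mu) / sigma > z_threshold
    ]

# Hypothetical traffic: a quiet baseline with one sudden burst.
traffic = [12, 15, 14, 13, 16, 15, 14, 240, 15, 13]
print(find_anomalies(traffic))  # -> [7], the burst worth investigating
```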
Practice shows that the biggest leap is not in the “brilliance” of attacks but in their scale. AI speeds up the preparation of texts, voices, and images, producing more convincing scams in less time. The old rule therefore still applies: technology helps, but without basic hygiene and user caution it will not protect against everything.
Phishing, deepfakes and fake e‑shops
Phishing emails now often read flawlessly in Slovak and come from addresses that look trustworthy. The main problem is volume: while one antivirus company reported roughly 55 fraudulent websites before Christmas, parallel monitoring uncovered as many as 441 sites running the same content management system and hosted from two data centers in China. Even though state authorities cannot verify every case by actually placing an order, these are clear warning signs of a large-scale campaign.
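As a rough illustration of how such monitoring can surface a coordinated campaign, the sketch below groups suspicious domains by a shared site fingerprint and counts the networks hosting them. All domains, fingerprints and network prefixes are invented; real monitoring would derive them from crawling and DNS/WHOIS data.

```python
# Illustrative sketch of grouping suspicious e-shops by shared infrastructure.
# Domains, fingerprints and network prefixes below are invented examples.
from collections import defaultdict

# Hypothetical observations: (domain, CMS fingerprint, hosting network prefix)
observations = [
    ("cheap-shoes-sale.example", "cms-template-A", "203.0.113.0/24"),
    ("brand-outlet-shop.example", "cms-template-A", "203.0.113.0/24"),
    ("super-discount-store.example", "cms-template-A", "198.51.100.0/24"),
    ("local-bakery.example", "cms-template-B", "192.0.2.0/24"),
]

def group_by_fingerprint(observations):
    """Cluster domains sharing the same CMS fingerprint; a large cluster
    concentrated on a few networks suggests a coordinated campaign."""
    clusters = defaultdict(list)
    for domain, fingerprint, network in observations:
        clusters[fingerprint].append((domain, network))
    return clusters

for fingerprint, members in group_by_fingerprint(observations).items():
    networks = {net for _, net in members}
    print(f"{fingerprint}: {len(members)} domains across {len(networks)} networks")
```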
With deepfakes there is a “tug‑of‑war over milliseconds”: tools for creating fraudulent content and tools for detecting it evolve in lockstep. Alongside statistical detectors and content-consistency checks, embedding hidden watermarks into model outputs also helps. For users a simple rule applies: verify the source and compare it with official channels, especially for sensitive statements by public figures.
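To show the watermarking idea in miniature, here is a toy sketch that hides a short bit pattern in the least significant bits of an image and later checks for it. This is only an illustration of the principle; the provenance watermarks used by model providers are far more robust and are not described in the article.

```python
# Toy illustration of an invisible watermark: hide a short bit pattern in the
# least significant bits of pixel values and verify it later. Real provenance
# watermarks are far more robust than this sketch.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # assumed tag

def embed(image: np.ndarray, mark: np.ndarray = WATERMARK) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: mark.size] = (flat[: mark.size] & 0xFE) | mark
    return out

def verify(image: np.ndarray, mark: np.ndarray = WATERMARK) -> bool:
    """Check whether the expected bit pattern is present."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: mark.size] & 1, mark))

# Hypothetical grayscale image produced by a model.
generated = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(generated)
print(verify(marked))     # True: watermark found
print(verify(generated))  # Usually False: pattern unlikely to appear by chance
```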
Safe use and where regulation should help
Most mistakes arise in how AI itself is used. Sensitive corporate data entered into public services may resurface in model outputs, and internal chatbots connected via an API can inadvertently disclose information when faced with cleverly crafted queries. What helps is a thorough risk analysis, the “minimum necessary data” principle, multi-factor authentication, and regular user training, including simulated phishing campaigns.
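One practical way to apply the “minimum necessary data” principle is to redact obvious identifiers before a prompt leaves the organization. The sketch below is a simplified example with a few assumed regular expressions; it is nowhere near a complete data-loss-prevention solution.

```python
# Simplified sketch of the "minimum necessary data" principle: redact obvious
# personal identifiers before a prompt is sent to a public AI service.
# The patterns below are illustrative assumptions, not a full DLP solution.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\w{4}){3,7}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders before the query leaves."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

query = "Draft a reply to jan.novak@example.com, phone +421 900 123 456."
print(redact(query))
# -> Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```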
Technical blocking at the “national IP level” is possible, but it is thin ice: such a tool could itself be abused. The European approach therefore seeks balance: the AI Act frames risk areas and ethics, but safe deployment in practice requires internal rules, oversight by domain experts, and ongoing education. Because research into offensive and defensive techniques advances in parallel (e.g., the MITRE ATLAS matrix for AI), the last line of defense remains an informed person who checks model outputs and does not blindly trust the first click.