Artificial intelligence is changing the cybersecurity of public administration faster than we can set rules. The greatest risk so far does not come from attackers, but from the reckless deployment of tools and careless handling of data. The state is seeking a balance between a useful assistant and a new attack surface.
AI in public administration: assistant and threat
The panelists agreed that today’s use of AI tends to reduce security, mainly because of how it is implemented. The familiar “shadow IT” pattern is repeating itself: officials reach for publicly available services, often hosted outside the EU, and feed them internal documents or personal data without a second thought. The risk is not only leakage, but also that the information may indirectly resurface in the models’ responses to other users.
A “local” deployment can also create a false sense of security. If an office runs a language model on its own premises but lets it browse the web and access internal files, it opens the doors just as wide. The problem is therefore not only where the model runs, but what permissions and connections we grant it. Without clear boundaries and controls, this is a new, poorly mapped risk surface.
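To make the point concrete, here is a minimal deny-by-default sketch of tool permissions for a locally hosted model. The names (ALLOWED_TOOLS, dispatch_tool, the tool functions) are purely illustrative assumptions, not any real framework’s API; the idea is only that the model can reach nothing that was not explicitly granted.

```python
# Hypothetical sketch: deny-by-default tool access for a locally hosted model.
# ALLOWED_TOOLS and the tool functions are illustrative names, not a real framework API.

ALLOWED_TOOLS = {
    "summarize_document",   # works only on documents the office explicitly passes in
}

def search_web(query: str) -> str:
    raise NotImplementedError("web access intentionally not wired up")

def summarize_document(text: str) -> str:
    return text[:200] + "..."   # placeholder for a real summarization call

TOOL_REGISTRY = {
    "search_web": search_web,
    "summarize_document": summarize_document,
}

def dispatch_tool(tool_name: str, argument: str) -> str:
    """Run a tool requested by the model, but only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted in this deployment")
    return TOOL_REGISTRY[tool_name](argument)

if __name__ == "__main__":
    print(dispatch_tool("summarize_document", "Internal memo about licence renewals ..."))
    try:
        dispatch_tool("search_web", "latest news")
    except PermissionError as exc:
        print("blocked:", exc)
```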
Initial rules and what’s still missing
Some ministries are already acting: the Ministry of the Interior has internal guidelines and, in principle, prohibits the use of generative AI on internal data; access from the work network is restricted and exceptions are subject to rules. Across Europe, several data protection authorities have recommended restrictions on generative tools in public administration, or banned them outright, until it is clear where the data goes and how it is handled. The key is to clearly specify what is allowed, what is not, and under what conditions.
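One way such a restriction can be expressed in practice is a deny-by-default egress check for generative AI endpoints. The sketch below is purely illustrative: the domain names are invented, and real policy would live in the office’s proxy or firewall rather than in application code.

```python
# Illustrative sketch of a deny-by-default egress check for generative AI services.
# The domains are examples only; real policy lives in the office's proxy/firewall.
from urllib.parse import urlparse

APPROVED_AI_ENDPOINTS = {
    "approved-ai.internal.example",   # hypothetical vetted, contractually covered service
}

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_ENDPOINTS

for url in ("https://approved-ai.internal.example/v1/chat",
            "https://public-chatbot.example.com/api"):
    verdict = "allow" if egress_allowed(url) else "block (needs an approved exception)"
    print(url, "->", verdict)
```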
AI governance at the national level is only now being finalized. A division of competencies is taking shape: central coordination, cyber and security standards under the National Security Authority, and privacy protection under the Office for Personal Data Protection. But “paper” is not enough—the practice needs shared methodologies, testing of AI components, and oversight of how data is handled. Otherwise, the rules will be circumvented via private devices and unofficial tools.
From code to quanta: what it means in practice
AI today accelerates code creation and translation between programming languages, but the outputs must be carefully checked. Models often reproduce faulty patterns, do not permanently “learn” from instructions given during a session, and forget previous fixes unless they are kept in the context. Without review and security testing, vulnerabilities arise directly during development. In the future, agents tied to precisely defined functions and data may help more.
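As one concrete illustration of a “faulty pattern” that review and security testing should catch, compare string-built SQL, which an assistant can easily reproduce from older codebases, with the parameterized form. The table and column names below are made up for the example.

```python
# Illustration of a faulty pattern vs. its fix; table and column names are made up.
import sqlite3

def find_citizen_unsafe(conn: sqlite3.Connection, name: str):
    # Pattern an assistant may reproduce from older code: user input pasted into SQL,
    # which allows SQL injection (e.g. name = "x' OR '1'='1").
    return conn.execute(f"SELECT id FROM citizens WHERE name = '{name}'").fetchall()

def find_citizen_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver escapes the value, closing the injection hole.
    return conn.execute("SELECT id FROM citizens WHERE name = ?", (name,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE citizens (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO citizens (name) VALUES ('Alice'), ('Bob')")
    print(find_citizen_unsafe(conn, "x' OR '1'='1"))   # returns every row
    print(find_citizen_safe(conn, "x' OR '1'='1"))     # returns nothing
```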
Another major topic is the advent of post-quantum cryptography. The state is already making preparations: new chip platforms, recommendations on algorithms, and a gradual transition in line with the EU and NATO. However, it’s not just about “swapping out an algorithm”—it will affect identities, cards, encryptors, and system interoperability, and it will be a years-long process. And what should be protected as a priority? Not only classified information, but also the availability and integrity of services, since outages and encryption of systems cause enormous damage.
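For a sense of what the cryptographic building block itself looks like, here is a minimal post-quantum key-encapsulation sketch, assuming the open-source liboqs-python bindings (the `oqs` package). The mechanism name is an assumption and depends on the installed liboqs version (e.g. “Kyber768” vs. “ML-KEM-768”), and, as noted above, a real migration additionally touches identities, hardware, and interoperability, not just this one call.

```python
# Minimal post-quantum KEM sketch, assuming the liboqs-python bindings ("oqs" package).
# The mechanism name depends on the installed liboqs version; "Kyber768" is an assumption.
import oqs

KEM_ALG = "Kyber768"

with oqs.KeyEncapsulation(KEM_ALG) as receiver, oqs.KeyEncapsulation(KEM_ALG) as sender:
    public_key = receiver.generate_keypair()                       # receiver publishes a public key
    ciphertext, secret_at_sender = sender.encap_secret(public_key) # sender derives a shared secret
    secret_at_receiver = receiver.decap_secret(ciphertext)         # receiver recovers the same secret
    assert secret_at_sender == secret_at_receiver
    print("shared secret established,", len(secret_at_sender), "bytes")
```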
In the triangle of people – technologies – processes, the hardest thing to strengthen is the human factor. You can buy technology and rewrite a process faster than you can train and retain a qualified team. The panel also pointed out the weak enforceability of accountability in public administration, which undermines security efforts. Until we change this, AI will tend to widen the cracks rather than deliver a safe acceleration of the agenda.