The European AI Act introduces clear rules for the use of artificial intelligence, while also emphasizing a balance between safety and innovation. A talk by Petr Hanzlík of DXC Technology showed what this means for public administration and why it’s important to start preparing now. The key requirements will fully apply in 2026.
Risk-based approach
The law divides AI systems into four categories according to their impact on people. Tools such as social scoring or manipulative systems targeting vulnerable persons, for example children, will be prohibited. High-risk solutions are those with a direct impact on citizens’ lives, such as recruitment filters or decisions on social benefits, and they will be subject to strict rules including risk management, audits, and human oversight. Limited risk covers, for example, chatbots, where transparency is required so that the user knows they are communicating with a machine; everything else falls into the minimal-risk category, such as spam filters or AI in games.
Timing matters, too: from August 2026, obligations will begin to apply in particular to high-risk systems in the public sector. If a solution affects fundamental rights, a fundamental rights impact assessment will be required. Many practical implementation details will be clarified gradually, but the framework is set and penalties for violations can be substantial. Only time will tell whether the world will lean more toward the European path or the looser American approach.
How public administration should prepare
Preparation doesn’t start with the first prototype, but with strategy and data. Organizations need alignment of AI initiatives with their objectives, clear governance and responsibilities, and processes that account for the specifics of AI from the very start of development. AI development is not like traditional IT, where you can “fix a line of code” the same day; here, data quality, risk management, and ongoing oversight are key. Transparency toward citizens is a must: with a municipality’s chatbot, for example, it should be obvious that it is a machine.
The practical sequence is: assess the organization’s readiness, set rules and competencies, and only then address the compliance of specific systems. That’s where lawyers and consultants come in with an action plan to meet the AI Act’s requirements. Even though not everything is definitive today, preparation pays off: have a vision, processes, and a focus on ethics and security, without slowing useful innovation. The key is to prepare before 2026 arrives.