Generative AI is the next logical step in Denmark’s digitalization, but the country lacks an open, reliable model for the Danish language. The public sector is therefore grappling not only with opportunities but also with dilemmas—from the ownership and operation of models to privacy and performance. The talk summarized where AI makes sense and what decisions lie ahead for policymakers and public agencies.
Where AI can help citizens
The most visible services break down language barriers: automatic translation makes it easier for people who do not speak Danish to contact authorities or hospitals. Then there are “virtual assistants”—reliable guides for questions about taxes, starting a business, or social entitlements. Less visible but important is the automation of back‑office operations: faster case processing, document summarization, and support for officials in decision‑making.
An analytical report prepared by the association of Danish municipalities in cooperation with a pension fund and a consulting firm classifies use areas by risk and benefit. In the spirit of the European AI Act, it concludes that the extremes—from purely macro tasks (research, policymaking) to highly sensitive micro‑level services (e.g., issuing passports or specific health care)—are not ideal for generative AI. The greatest potential lies “in the middle,” in supporting activities with lower impact on the individual and on data. That is where models can deliver productivity without disproportionate risks.
Obstacles, trade‑offs, and what’s next
Technically, developing a foundation model is expensive: it requires massive data corpora, GPU infrastructure, and subsequent instruction fine‑tuning in the target language. The choice of strategy is crucial: large general‑purpose models are powerful but costly and tied to the vendor; smaller specialized ones can be tuned internally, but their performance remains an open question. Operation itself is also an issue—who will maintain the model securely and economically if the state does not want to rely on big companies’ cloud services? These decisions will determine not only costs but also sovereignty over data and systems.
The key trade‑off is between personalization and privacy: for services to respond in a tailored way, they need personal data; limiting that data, however, risks pigeonholing people insensitively and introducing bias. Also at stake are funding policy, the choice between training from scratch and building on existing models, and which principles to prioritize—innovation, or cultural and linguistic specificity. A preliminary scenario assumes that large commercial models will remain in the private sector, while the public sector, together with partners, will build medium‑sized models and a “model‑as‑a‑service” platform for agencies. Smaller specialized models are also an option, but their benefits and sustainability remain to be seen.