Artificial intelligence is changing healthcare faster than the rules can keep up. Experts agree it is a major opportunity, but also a risk that must be managed. The key is knowing where sensitive data do (and do not) belong, and who is responsible for them.
AI as both a helper and a risk in working with data
Panelists see AI as beneficial above all in diagnostics and in finding new patterns in medical data. However, they warn against careless use of generative tools: what you would not dare upload to a public cloud does not belong in someone else’s chatbot either. An alternative is running your own local models in a controlled environment, which makes it possible to work with confidential data. AI can make translation or search easier for doctors, but not at the cost of pasting an entire medical history into a "prompt".
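One practical guard rail for the "don't paste a medical history into a prompt" rule is redacting obvious identifiers before any text leaves the organization. The sketch below is a minimal, hypothetical illustration: the patterns, labels, and coverage are assumptions, not a complete de-identification method.

```python
import re

# Illustrative identifier patterns; a real deployment would need a far
# more thorough (and validated) de-identification pipeline.
PATTERNS = {
    "birth_number": re.compile(r"\b\d{6}/\d{3,4}\b"),   # e.g. Slovak rodné číslo format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

Even a simple filter like this changes the default from "whatever the clinician pastes" to "nothing recognizable leaves by accident"; anything more sensitive stays with the local model.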
The discussion served as a reminder that large language models rely on "big data" and computing power, which today is provided mainly by the cloud. The cloud is not a universal answer, however: what matters is data classification and the level of risk the organization is prepared to accept. Technologies such as confidential computing promise that even the provider cannot see the data being processed, but their deployment is still ramping up. Healthcare organizations therefore need to combine local solutions and the cloud according to the sensitivity of the data.
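The classification-driven placement described above can be sketched as a simple lookup that fails closed. The three-level scheme, the labels, and the environment names are assumptions for illustration only; a real policy would follow the organization's own classification standard.

```python
# Hypothetical mapping from data classification to allowed processing
# environment. Unknown labels are treated as the most sensitive case
# rather than guessed at.
PLACEMENT = {
    "public": "public_cloud",
    "internal": "private_cloud",
    "sensitive": "on_premises",   # e.g. medical records stay local
}

def processing_environment(classification: str) -> str:
    """Return where data with this classification may be processed."""
    return PLACEMENT.get(classification, "on_premises")
```

The design choice worth copying is the default: when the classification is missing or unrecognized, the data stays local instead of drifting to the most convenient environment.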
Regulation and accountability: it’s no longer "just an IT problem"
New European rules are shifting responsibility in two directions. On the one hand, they place greater personal and financial liability on hospital statutory representatives, which may finally win security the backing of leadership. On the other hand, the Cyber Resilience Act pushes vendors to design products "secure by design" and to bear a share of the responsibility for their security. In practice, this means more communication with suppliers and an emphasis on demonstrable standards.
Supply chain risks are no longer theoretical: attacks are increasingly coming through external partners. An example is a case of tens of gigabytes of health data leaking via a compromised vendor, not directly through the organization. Slovak healthcare is still waiting for sector-specific security measures, but hospitals do not have to sit on their hands. Vendor due diligence, clear information classification, and contractual security requirements are the minimum that can be put in place immediately.
Security practice: from "baseline" through incidents to remote access
Experts consider building a "security baseline" in the spirit of zero trust to be fundamental: segment the infrastructure, minimize privileges, and isolate old or unpatched systems. The most pressing "pebbles in the shoe" are outdated operating systems, IoMT devices connected to the network, and shared, permanently unlocked workstations. Cultural change is often harder than the technology, but it can be eased, for example, with fast, passwordless card-based sign-in. For remote access, a strict rule applies: no desktop exposed to the internet, and tools like AnyDesk or TeamViewer only temporarily and under control.
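The segmentation rule behind that baseline can be sketched as a policy decision per device: anything outdated or unpatched goes into an isolated segment, and medical devices get their own network. The device attributes and segment names below are illustrative assumptions, not a prescribed architecture.

```python
from dataclasses import dataclass

@dataclass
class Device:
    os_supported: bool   # vendor still ships security updates
    patched: bool        # current patch level applied
    is_iomt: bool        # connected medical (IoMT) device

def assign_segment(device: Device) -> str:
    """Place a device in a network segment in the spirit of zero trust."""
    if not device.os_supported or not device.patched:
        return "isolated"      # legacy/unpatched systems get no lateral reach
    if device.is_iomt:
        return "iomt_vlan"     # medical devices live on their own segment
    return "clinical_lan"
```

The point of the sketch is the ordering: an unsupported operating system overrides everything else, so a legacy system never lands on the general clinical network just because it happens to be a medical device.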
Incidents are arriving faster: the time it takes an attacker to get in and cause damage has dropped from tens of days to roughly three days, and in a quarter of cases to less than 24 hours. Proactive security is therefore essential: multi-factor authentication, high-quality and tested backups, collection and evaluation of logs, and a prepared response plan. Technology alone is not enough if no one is watching it; where people are lacking, automated detection and response can help. The panel’s conclusion was nevertheless optimistic: when security is discussed openly and continuously, healthcare has a chance to become more resilient every year.
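"Collecting logs that no one watches" is exactly what simple automation can cover first. The sketch below flags accounts with repeated failed sign-ins in a log window; the log line format, field positions, and threshold are assumptions for illustration, standing in for whatever a real SIEM rule would consume.

```python
from collections import Counter

def failed_login_suspects(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Return user names with at least `threshold` FAILED_LOGIN entries.

    Assumes an illustrative log format of 'EVENT user ...' per line.
    """
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "FAILED_LOGIN":
            failures[parts[1]] += 1
    return {user for user, count in failures.items() if count >= threshold}
```

A rule this small obviously does not replace a response plan, but it turns passively collected logs into something that can page a human before the attacker's "three days" are up.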