Artificial intelligence is fundamentally transforming medicine, from diagnostics to patient communication to the way we teach future doctors. Despite rapid innovation, the physician–patient relationship and the need for trust remain at the core, even as many solutions will operate as a "black box." The choice before us is whether to lead this change, adapting practice and curricula so that technology serves people, or to let it happen to us.
What Is Changing and What Will Remain
The lecture emphasized that practically everything is changing: diagnostic procedures, therapeutic algorithms, communication, and the form of instruction. Telemedicine and digital agents already make it possible to treat and consult remotely, a development that legislation is only slowly catching up with. At the same time, one constant remains: care is provided by a human to a human, and treatment cannot do without trust. That is precisely why it is important to set clear ethical standards and to insist on transparency wherever possible.
Some systems will be accurate and fast, yet hard to explain: the so-called black box. The question is whether we are ready to accept the decision of a model that can correctly detect cancer but cannot show how it reached its conclusion. Medicine is accustomed to understanding mechanisms in depth; we must therefore build frameworks for accountability and validation. Otherwise we risk eroding trust, even when the outputs themselves are exceptionally accurate.
What Skills Will a Doctor Need
The future doctor should combine expertise, critical thinking, and soft skills with the ability to work with AI tools and agents. This is not a replacement but a collaboration in which the human oversees, corrects, and complements the system. Real-world examples already exist: in dentistry, remote dental monitoring tracks scans and can detect tooth decay earlier; in radiology, automated image analysis assists diagnosis; and some rare genetic syndromes can be detected from 2D facial photographs. These tools have been around for years, but their capabilities and availability are now growing by leaps and bounds.
Academia must therefore change both the content and the form of teaching. The curriculum should cover the new procedures, and students should work with AI tools already during their studies. Personalized digital tutors are emerging that adapt their teaching style to whether a student needs to see images, hear an explanation, or practice with simulations. Alongside this, robotic and chat-based therapies are arriving, as are tools that interpret medical documentation in plain language for laypeople.
Opportunities and Risks on the Threshold of a New Era
The benefits are obvious: greater speed, accuracy, and capacity, which increases the productivity of entire teams. AI can augment healthcare workers, and some routine tasks will be taken over by new agent systems that require neither vacations nor extra shifts. Clinical research will also improve, and more time may be freed up for human contact. Paradoxically, however, AI can also convincingly mimic empathy, which may tempt us to mistake simulation for a real relationship.
While reducing costs, the technology may deepen patients' loneliness if it replaces human conversation with a convenient chat. Black-box models and the pressure for speed will test ethical standards, regulatory frameworks, and the culture of accountability. Security and data protection remain open challenges for the entire healthcare system. If we want this change to truly improve care, we must lead it actively, transparently, and with an emphasis on the human being.