Artificial intelligence is changing medicine: from reading CT scans in seconds to uncovering cancer risk before the disease appears. The speaker showed how “Wave 2” AI brings speed, scale, and accuracy that human teams cannot sustain on their own. At the same time, he called for ethical safeguards, privacy protection, and fair access.
From self-taught beginnings to AI that writes reports
The speaker is not a physician, but for years he studied the oncology literature and, with the help of computational tools, analyzed hundreds of millions of anonymized images together with their clinical data. He claims that, with augmented AI, he can now process oncologic imaging and ground recommendations in the entire published corpus. The goal is to “see” like an oncologist, both in the pixel-level detail and in the broader picture of the patient.
As a demonstration, he showed the system generating measurements, an impression, and recommendations from a single CT image of the lungs, in more than 150 languages. The output is usually ready in roughly 90 seconds, and in about 30 seconds with the newer generation of the model. That speed contrasts with everyday practice, where patients often wait hours for a report.
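For illustration only, here is a minimal Python sketch of the scan-to-report flow he described: a model extracts measurements from the CT image, then drafts the impression and recommendations in a requested language. The `model` object and its methods are assumptions for the sketch, not the speaker’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    description: str   # e.g. "solid nodule, right upper lobe"
    size_mm: float     # longest axis measured on the CT slice

@dataclass
class CtReport:
    findings: list[Finding] = field(default_factory=list)
    impression: str = ""
    recommendations: list[str] = field(default_factory=list)
    language: str = "en"   # the talk mentioned output in more than 150 languages

def generate_report(ct_image_path: str, model, language: str = "en") -> CtReport:
    """Sketch of the scan-to-report flow; `model` stands in for a multimodal
    vision-language system, and its methods are assumed, not real APIs."""
    findings = model.detect_findings(ct_image_path)             # measurements
    impression = model.summarize(findings, language=language)   # impression text
    advice = model.recommend(findings, language=language)       # follow-up advice
    return CtReport(findings=findings, impression=impression,
                    recommendations=advice, language=language)
```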
Speed, scale, accuracy
AI promises to shorten the path from scan to decision. As Dr. Alexander Pearson (University of Chicago) pointed out, traditional tissue-staining tests take days, whereas AI-assisted analysis of the same specimen is nearly instantaneous. Faster results can accelerate the start of treatment when time matters most.
The scale problem is critical: the volume of imaging data has long exceeded specialists’ capacity, and it is not realistic to “keep up” with new images arriving every few seconds. On accuracy, he cited a model from Massachusetts General Hospital that, drawing on data from 40,000 asymptomatic patients, could predict lung cancer risk years in advance, beyond what standard screening recommendations cover. And since we are comparing AI with humans, we must also acknowledge human limits: in a well-known study, most radiologists missed a “gorilla” image inserted into a lung CT, a reminder that automation has already improved many parts of medicine, from counting blood cells to reading scans.
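As a rough illustration of how such a risk model might be used in screening (not a description of the Massachusetts General Hospital system), the sketch below assumes a `risk_model.predict` call that returns cumulative risk at yearly horizons, and it flags patients whose risk crosses an arbitrary cutoff.

```python
YEARS_AHEAD = 6             # assumed prediction horizon, in years
FOLLOW_UP_THRESHOLD = 0.05  # assumed cumulative-risk cutoff for extra screening

def flag_for_follow_up(scan_paths: list[str], risk_model) -> list[str]:
    """Return the scans whose predicted long-horizon risk exceeds the cutoff.
    `risk_model.predict` is an assumed interface returning one cumulative-risk
    value per yearly horizon, e.g. [risk_within_1yr, ..., risk_within_6yr]."""
    flagged = []
    for path in scan_paths:
        risks = risk_model.predict(path, horizon=YEARS_AHEAD)
        if risks[-1] >= FOLLOW_UP_THRESHOLD:
            flagged.append(path)
    return flagged
```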
Wave 2: breakthrough and responsibility
The speaker distinguished “Wave 1” (single-purpose, hand-built algorithms) from “Wave 2”, which began on 30 November 2022 with the arrival of large language models. The new models delivered a significant leap in performance, albeit with heavy demands on energy and data-center cooling. On this foundation, solutions are spreading rapidly beyond oncology: to cardiology (echocardiogram analysis), neurology, orthopedics, and ophthalmology (screening for diabetic retinopathy), as well as to astronomy and biology, where AI is uncovering exoplanets and accelerating the prediction of protein structures. There is hope even for rare diseases: aggregating global data can reveal patterns that no individual clinician could see.
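The practical difference between the two waves can be sketched as follows (illustrative only, with invented interfaces): Wave 1 is a narrow, hand-tuned rule that answers one question; Wave 2 is one general model steered by a prompt and reused across specialties and languages.

```python
# Wave 1: a single-purpose, hand-built rule (threshold chosen by its authors).
def wave1_flag_nodule(diameter_mm: float) -> bool:
    """Answers exactly one question: is this lung nodule large enough to flag?"""
    return diameter_mm >= 8.0   # illustrative cutoff, not a clinical guideline

# Wave 2: one general model, steered by a prompt, reused across tasks.
def wave2_ask(model, image_path: str, question: str, language: str = "en") -> str:
    """`model.answer` is an assumed interface to a large multimodal model, not a
    specific product; the same call could serve cardiology, neurology, or
    ophthalmology questions."""
    prompt = f"Answer in {language}. {question}"
    return model.answer(image_path, prompt)
```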
But with that power comes a growing obligation to protect privacy and prevent algorithmic bias, so that innovation does not bypass vulnerable groups. Technology must not crowd out humanity: empathy, dignity, and the physician-patient relationship. Europe has been stricter on privacy and competition; similar leadership is needed in healthcare AI. The reminder is simple and old: first, do no harm; build trust, set clear standards, and push boundaries in ways that remain anchored in what is right.