Rare diseases are individually uncommon, but as a group they affect millions of people. A lecture by forensic physician and lawyer Peter Kováč showed how artificial intelligence could suggest certain diagnoses from facial photographs more quickly, and what questions this opens up in law and bioethics.
The face as a source of clues for artificial intelligence
Some genetic syndromes manifest in the face; Down syndrome is a well-known example. With subtler deviations, AI can work with shape metrics and ratios (interocular distance, jaw shape, or forehead height), and the result is not a certainty but a probability. The approach does, however, require large, high-quality training data; otherwise sensitivity and accuracy vary between populations, and a model that works for one ethnic group may not work the same way for another. Importantly, even ordinary photos from IDs or phones may suffice, which saves time and bypasses the need to collect a biological sample, and that is precisely what some find unsettling.
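The idea that facial ratios yield a probability rather than a verdict can be sketched with a toy model. Everything below is an illustrative assumption: real systems use deep networks trained on large labelled photo sets, and the feature names and weights here are invented.

```python
from math import exp

# Invented weights for a toy logistic model over two hypothetical
# facial ratios; NOT a real diagnostic model.
WEIGHTS = {"interocular_ratio": 4.0, "forehead_ratio": 2.5}
BIAS = -3.0

def syndrome_probability(features: dict) -> float:
    """Return a probability in (0, 1), never a yes/no diagnosis."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + exp(-score))

p = syndrome_probability({"interocular_ratio": 0.42, "forehead_ratio": 0.35})
print(f"estimated probability: {p:.2f}")  # a prompt for follow-up, not a verdict
```

The point of the sketch is the output type: a number between 0 and 1 that can only prompt further examination, which is also why population-specific training data matters so much for how well-calibrated that number is.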
Between benefit and risk: law, ethics, and practice
Two scenarios are possible: anonymous mapping of the occurrence of selected diagnoses in the population, and individual alerts for specific people. For statistics, anonymized data and sufficiently large regions (for example, broader EU regions) would be used to prevent the identification of individuals. Even so, a solid legal basis is needed: the right "not to know" about one's disease must be respected, and cybersecurity measures must be prepared so that government-held photographs do not leak and are not processed in an environment that does not guarantee the protection of personal data. For individual outputs, the questions of consent, proportionality, and oversight by bioethicists would be even more pressing.
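One common safeguard for the statistical scenario is small-cell suppression: counts below a minimum are withheld so that individuals in sparsely populated regions cannot be singled out. The region names and the threshold of 10 below are illustrative assumptions, not details from the lecture.

```python
# Minimum count below which a regional figure is suppressed
# (the value 10 is an assumed, illustrative threshold).
MIN_CELL = 10

def publishable_counts(counts: dict) -> dict:
    """Replace small counts with a suppression marker before publication."""
    return {region: (n if n >= MIN_CELL else "<10")
            for region, n in counts.items()}

raw = {"Region A": 137, "Region B": 4, "Region C": 22}
print(publishable_counts(raw))
```

Suppression alone is only one layer; the paragraph above is right that a legal basis and secure processing environment are still needed, because aggregation does not by itself make re-identification impossible.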
The lecture also considered use in newborn screening: if well trained, AI could serve as a quick "hint" pointing toward targeted examinations. However, algorithms make mistakes and should function as a support tool, not the final word. Practical deployment would require public interest, funding, a risk-benefit analysis, and a thorough assessment of impacts on privacy. As to whether such solutions will take root in a few years, no responsible promises can be made today, but the discussion has already begun.
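"Support tool, not the final word" has a concrete shape in software: the model's score only routes a case to a human specialist and never writes a diagnosis itself. The referral threshold of 0.2 and the wording below are assumptions for illustration.

```python
# Assumed referral threshold; in practice it would be set from
# validated sensitivity/specificity data, not picked by hand.
REFER_THRESHOLD = 0.2

def triage(probability: float) -> str:
    """Route a screening score to a human decision, never to a diagnosis."""
    if probability >= REFER_THRESHOLD:
        return "refer for targeted examination by a specialist"
    return "routine care"

print(triage(0.35))
print(triage(0.05))
```

The design choice is that both branches end with a human in the loop: even a referral is only an invitation to examine, which is the role the lecture envisioned for a screening "hint".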