Dr. Maximilian Muenke has a superpower: He can diagnose disease just by looking at a person’s face. Specifically, he can spot certain genetic disorders that make telltale impressions on facial features. “Once you’ve done it for a certain amount of years, you walk into a room and it’s like, oh, that child has Williams syndrome,” he said, referring to a genetic disorder that can affect a person’s cognitive abilities and heart.
It’s an incredibly useful skill, as genetic sequencing becomes more widespread. For one thing, it can be the factor that sends someone to get a genetic test in the first place. For another, people in many parts of the world have no access to genetic tests at all.
Practical application:
That inspiration led to an effort to train a computer to do the same thing. Software that analyzes a patient’s face for signs of disease could help clinicians better diagnose and treat people with genetic syndromes. Older attempts relied largely on clunky scanners, tools better suited to the lab than to field work.
Face2Gene, a program developed by Boston-based startup FDNA, has a mobile app that clinicians can use to snap photos of their patients and get a list of syndromes they might have.
Meanwhile, Muenke and his colleagues at the NIH last month published an important advance: the ability to diagnose disease in a non-Caucasian face.
It’s a promising preliminary sign. But if facial recognition software is to be widely useful for diagnoses, software developers and geneticists will need to work together to overcome genetics’ systemic blind spots.
Diagnoses vs. Probabilities:
The algorithms work on the same principles: measuring the size and placement of facial features to detect patterns. Both are trained on databases of photographs that doctors take of their patients.
The NIH works with partners around the world to collect its photos, while FDNA accepts photos uploaded to Face2Gene.
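Neither team’s code is public, but the basic idea described here, turning landmark coordinates into normalized measurements and ranking stored syndrome patterns by similarity, can be sketched in a few lines of Python. Everything in the sketch (the landmark names, the measurements chosen, the similarity score) is a hypothetical illustration, not either group’s actual method.

```python
# Illustrative sketch only: measure distances between facial landmarks and
# compare a patient's measurement profile to stored per-syndrome profiles.
# Landmark names, measurements, and profiles are hypothetical.
import numpy as np

def measurement_profile(landmarks):
    """Turn a dict of (x, y) landmark coordinates into normalized measurements."""
    def dist(a, b):
        return float(np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b])))
    face_width = dist("left_cheek", "right_cheek")
    # Normalize by face width so measurements are comparable across photos.
    return np.array([
        dist("inner_left_eye", "inner_right_eye") / face_width,  # eye spacing
        dist("nose_bridge", "nose_tip") / face_width,            # nose length
        dist("mouth_left", "mouth_right") / face_width,          # mouth width
    ])

def rank_syndromes(patient_landmarks, syndrome_profiles):
    """Rank stored syndrome profiles by similarity to the patient's profile."""
    patient = measurement_profile(patient_landmarks)
    scores = {
        name: 1.0 / (1.0 + float(np.linalg.norm(patient - profile)))
        for name, profile in syndrome_profiles.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In practice, both systems rely on many more measurements and on models trained on their photo databases, rather than a fixed handful of distances.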
However, the two differ in a key way: the NIH’s algorithm predicts whether someone has a given genetic disorder, while Face2Gene gives out only probabilities, not diagnoses.
The app describes photos as being a certain percent similar to photos of people with one of the 2,000 disorders for which Face2Gene has image data, based on the overall “look” of the face as well as the presence of certain features.
That’s intentional. Face2Gene is meant to be more like a search engine for diseases — a means to an end. “We are not a diagnostic tool, and we will never be a diagnostic tool,” said FDNA CEO Dekel Gelbman. Drawing that bright line between Face2Gene and “a diagnostic tool” allows FDNA to stay compliant with FDA regulations governing mobile medical apps while avoiding some of the regulatory burden associated with smartphone-based diagnostic tools.
Diversity needed:
The algorithm the NIH uses — developed by scientists at Children’s National Health System in Washington, D.C. — seems to work well so far: In 129 cases of Down syndrome, it accurately detected the disorder 94% of the time.
For DiGeorge syndrome, the numbers were even higher: It had a 95% accuracy rate across all 156 cases.
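As a sanity check, those percentages translate into simple counts; the arithmetic below just applies the reported rates to the reported case totals.

```python
# Back-of-the-envelope check: detected cases = reported rate * reported case count.
down_cases, down_rate = 129, 0.94          # Down syndrome: 94% of 129 cases
digeorge_cases, digeorge_rate = 156, 0.95  # DiGeorge syndrome: 95% of 156 cases

print(round(down_cases * down_rate))          # roughly 121 of 129 cases detected
print(round(digeorge_cases * digeorge_rate))  # roughly 148 of 156 cases detected
```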
Face2Gene declined to provide similar numbers for their technology. “Since Face2Gene is a search and reference informational tool, the terms sensitivity and specificity are difficult to apply to our output,” Gelbman cautioned.
Setbacks:
Data for non-white populations are sorely lacking. “In every single textbook, the ones we had [when I trained] in Germany and the major textbooks here in the US, there are photos of individuals of northern European descent,” Muenke said. “When I told this to my boss, he said there have to be atlases for children from diverse backgrounds. And there aren’t. There just aren’t.” (Today there is such a resource, based on Muenke and the NIH’s work.)
That makes diagnosing a disease from a face alone an additional challenge for people who are not of northern European descent, because facial features that vary with ethnic background can overlap with features that signify a genetic disorder. Eventually, software will also have to be able to handle mixed ethnic backgrounds.
For example, children with Down syndrome often have flat nasal bridges — as do typically developing African or African-American children. According to a paper Muenke and Marius Linguraru at Children’s National published with their colleagues earlier this year, only two identifiers reliably distinguished Down syndrome across children of different races and ethnicities: the angles between landmark points around the child’s nose and eyes. None of the other “typical” features were significantly more likely to show up when affected children were compared with ethnically matched controls.
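Those angle-based identifiers come down to elementary vector geometry: given three landmark coordinates, the angle at the middle point follows from the dot product. The function below is a generic sketch; the example coordinates are made up, and the study’s actual landmarks and measurements may differ.

```python
# Compute the angle (in degrees) at landmark b formed by landmarks a and c.
# Coordinates here are hypothetical pixel positions, not the study's landmarks.
import numpy as np

def angle_at(b, a, c):
    """Angle at point b between the rays b->a and b->c, in degrees."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Example with made-up coordinates for eye and nose landmarks:
print(angle_at(b=(120, 140), a=(100, 100), c=(140, 100)))  # ~53 degrees at the nasal bridge
```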
In fact, using a Caucasian face as the reference can sometimes be the least representative choice: according to Linguraru, the Caucasian population showed the most diverse facial patterns for DiGeorge syndrome.
To fix this problem, the NIH and Face2Gene both need more researchers around the world contributing photos. But confirming a suspected disorder with a genetic test is standard practice today, and there are no genetics labs in Africa registered in the NIH’s Genetic Testing Registry. Asia and South America are also relatively underserved.
It’s possible that might change, with time and effort. In addition to his work as a researcher, Muenke directs a program that brings health care professionals from developing countries to the US for a month-long crash course in medical genetics. (The program is funded by the NIH’s Fogarty International Center; President Trump eliminated funding for the center in his 2018 “skinny” budget proposal announced in March.)