Eye see what you did there, machine-learning boffins
By looking for common patterns in retinal scans and matching them up with the data in the patients’ medical records, one algorithm could determine whether someone was a smoker or non-smoker with an accuracy of 71 per cent. Another algorithm, focused on the blood vessels in the eye, could tell whether someone had severely high blood pressure, a condition associated with an increased chance of stroke.
Their models can also predict other factors such as age, gender, and the chance of a heart attack or stroke, the boffins claim in a paper published in Nature Biomedical Engineering journal on Monday.
“Given the retinal image of one patient who (up to 5 years) later experienced a major [cardiovascular] event (such as a heart attack) and the image of another patient who did not, our algorithm could pick out the patient who had the cardiovascular event 70% of the time,” Lily Peng, a product manager at Google Brain, explained in a blog post this week.
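Peng's "70 per cent of the time" figure describes a pairwise ranking test, which is exactly the concordance statistic (the AUC): given one patient who went on to have an event and one who did not, how often does the model score the first patient higher? A minimal sketch, with made-up scores and labels for illustration:

```python
# Illustration of the pairwise test Peng describes: the fraction of
# (event, non-event) patient pairs where the model assigns the higher
# risk score to the patient who actually had the event. This is the
# concordance statistic (AUC). All data below is invented for the demo.

def c_statistic(scores, labels):
    """Fraction of (event, non-event) pairs ranked correctly; ties count half."""
    events = [s for s, y in zip(scores, labels) if y == 1]
    non_events = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(events) * len(non_events))

# Toy data: model risk scores, and whether a major cardiovascular
# event occurred within five years (1 = yes, 0 = no).
scores = [0.9, 0.3, 0.1, 0.2, 0.4, 0.5, 0.6]
labels = [1,   1,   0,   0,   0,   0,   0]
print(c_statistic(scores, labels))  # → 0.7, i.e. 70 per cent of pairs
```

A c-statistic of 0.5 would mean the model ranks pairs no better than a coin flip, which is why the 0.7 reported here is the headline comparison point.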
Scientists from Stanford University, Google Brain and Verily – the latter being an Alphabet company focused on life sciences – used over 1.6 million retinal scans taken from 284,335 patients to train their models. Another 25,996 images were held back to validate the algorithms.

The training dataset was collected by EyePACS, a programme developed by doctors to test for diabetic retinopathy, an eye disease that can affect people with diabetes. It is predominantly made up of Hispanic patients. The validation dataset also includes patients drawn from UK Biobank, a health charity, and is mainly made up of Caucasian people.
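The held-back images are the standard guard against a model simply memorising its training data: performance is only reported on scans it never saw. A minimal sketch of the idea, with the counts taken from the article and everything else invented:

```python
# Sketch of a held-out validation split, as described above. The 25,996
# figure comes from the article; the pool of scan IDs is hypothetical.
import random

N_VALIDATION = 25_996
image_ids = list(range(1_600_000 + N_VALIDATION))  # hypothetical scan IDs

random.seed(0)
random.shuffle(image_ids)

validation = image_ids[:N_VALIDATION]   # used only to measure accuracy
training = image_ids[N_VALIDATION:]     # used to fit the model

# No scan may appear in both sets, or the accuracy figures are inflated.
assert not set(validation) & set(training)
```

In practice medical studies split by patient rather than by image, so that two scans of the same eye cannot straddle the training/validation boundary.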
The level of accuracy is, apparently, similar to the more traditional method of drawing blood to measure cholesterol levels. Peng said the work “may represent a new method of scientific discovery.”
“Traditionally, medical discoveries are often made through a sophisticated form of guess and test — making hypotheses from observations and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can be difficult because of the wide variety of features, patterns, colors, values and shapes that are present in real images.
“Our approach uses deep learning to draw connections between changes in the human anatomy and disease, akin to how doctors learn to associate signs and symptoms with the diagnosis of a new disease. This could help scientists generate more targeted hypotheses and drive a wide range of future research,” she concluded. ®