An anonymous reader quotes a report from ExtremeTech: Remember how that Google Neural Net learned to tell the difference between dogs and cats? It's helping catch skin cancer now, thanks to some scientists at Stanford who trained it up and then loosed it on a huge set of high-quality diagnostic images. During recent tests, the algorithm performed just as well as almost two dozen veteran dermatologists in deciding whether a lesion needed further medical attention. The algorithm is a deep convolutional neural net. It started out as a Google Brain project, using Google's prodigious computing capacity to power the algorithm's decision-making capabilities. By the time the Stanford collaboration began, the neural net had already been trained on 1.28 million images spanning about a thousand categories. But the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis. Dermatologists often use an instrument called a dermoscope to closely examine a patient's skin. This provides a roughly consistent level of magnification and a fairly uniform perspective in images taken by medical professionals. Many of the images the researchers gathered from the Internet weren't taken in such a controlled setting, so they varied in angle, zoom, and lighting. But in the end, the researchers amassed about 130,000 images of skin lesions representing over 2,000 different diseases. They used that dataset to create a library of images, which they fed to the algorithm as raw pixels, each image labeled with the disease it depicted. Then they asked the algorithm to suss out the patterns: to find the rules that define the appearance of the disease as it spreads through tissue.
The researchers tested the algorithm's performance against the diagnoses of 21 dermatologists from the Stanford medical school on three critical diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In their final tests, the team used only high-quality, biopsy-confirmed images of malignant melanomas and malignant carcinomas. When presented with the same image of a lesion and asked whether they would "proceed with biopsy or treatment, or reassure the patient," the algorithm scored 91% as well as the doctors, in terms of sensitivity (catching all the cancerous lesions) and specificity (not getting false positives).
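The two metrics the study reports can be computed directly from a confusion matrix. A short sketch, using made-up predictions (1 = malignant, 0 = benign) against hypothetical biopsy-confirmed ground truth:

```python
# Made-up example labels; 1 = malignant, 0 = benign.
truth = [1, 1, 1, 0, 0, 0, 1, 0]  # biopsy-confirmed ground truth
preds = [1, 1, 0, 0, 1, 0, 1, 0]  # classifier's calls

tp = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 1)  # true positives
fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)  # missed cancers
tn = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)  # false alarms

sensitivity = tp / (tp + fn)  # fraction of cancerous lesions caught
specificity = tn / (tn + fp)  # fraction of benign lesions correctly cleared

print(sensitivity, specificity)  # 0.75 0.75 on this toy data
```

The two metrics pull against each other: biopsying everything yields perfect sensitivity but terrible specificity, which is why the study scored the algorithm on both.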
Read more of this story at Slashdot.