
Student feature: What do Facebook and cancer detection have in common? Artificial intelligence

February 27, 2019
Tucker Netheron, GSBS student

This article was written by GSBS student Tucker Netheron, the first place winner of the 2018 GSBS Annual Report Science Writing Contest. Netheron is affiliated with the Program in Medical Physics and his advisor is Laurence Court, Ph.D.

Do you ever wonder how Google is able to complete your sentence in the search bar? Or how Facebook and other social media apps are able to recognize faces in your pictures? Or perhaps how Amazon is able to send you online shopping recommendations that are eerily coincidental? I have heard it said that computers are only as smart as the programmers who program them. But what if computer programs could, in effect, program themselves to perform a particular task? This is exactly what artificial intelligence (AI) is able to do, given enough data from which to learn. Whenever you like something on Facebook, buy something on Amazon, or tag someone in a photo, you are hand-labeling a dataset that a company’s AI program can use to classify people’s faces or make predictions about your personal preferences.

Feel uneasy about the thought of companies building predictive models of your spending habits, daily commute, or partner preferences on a dating app? Unfortunately, privacy will become a luxury in the years to come, and hiding yourself from the World Wide Web will be increasingly difficult. In addition, data will become the new currency for companies and institutions that stand to gain from a better understanding of their customer base. But fear not: while AI is making advancements in marketing, dating apps, and other areas, scientists and doctors are also using these AI algorithms to detect pneumonia in x-ray images, develop robotic surgery, and even screen for cancer. Through computer programming and collaboration with many experts, AI is being applied to health care and medicine.

My research applies a type of AI called deep learning to detect cancer in bone. Specifically, my lab mates and I hypothesize that deep learning can be used to detect bone metastases in medical images. Currently, this is done by a radiologist. Doubt that a computer could ever outperform a doctor? A recent study from Stanford University (November 2017) demonstrated that an AI model can match or exceed radiologist performance when diagnosing pneumonia from chest x-rays.

Research utilizing AI programs is typically performed in two phases: training and testing. To program, or “train,” our deep-learning AI model, my lab gathered CT scans from over 130 patients in MD Anderson’s Radiation Therapy Department archive and labeled each image as cancerous or non-cancerous. After formatting these data, we passed the images to a deep-learning model that we adapted from the computer science community (arxiv.org/abs/1409.1556).
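For curious readers, the sketch below shows roughly what adapting such a model looks like in code. It uses the Keras library and the VGG16 architecture described in the paper cited above, but the image size, layer sizes, file handling, and training settings are illustrative assumptions rather than our lab’s actual pipeline.

```python
# Illustrative sketch only: adapting a VGG16-style network for a two-class
# task (0 = not suspicious, 1 = suspicious). Image size and training
# settings are placeholders, not the lab's actual configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

# Start from VGG16 (the architecture in arxiv.org/abs/1409.1556),
# pretrained on ImageNet, with its original classification head removed.
base = tf.keras.applications.VGG16(include_top=False,
                                   weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

# Add a small new head that outputs a single probability of "suspicious".
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# train_images: array of preprocessed CT slices; train_labels: 0 or 1 per slice.
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```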

While training is underway, the program attempts to solve an optimization problem. The AI program looks at groups of labeled images and calculates the most prominent features inside them. For instance, if the AI program were given a dataset of fruits and told to categorize each one, it would construct feature maps (specialized images or matrices) that help it tell the difference between the fruits. Imagine the average apple in your mind: it has a certain shape, color, size, and other distinguishing visual properties. That mental picture is a feature map of an apple. In the computer, feature maps are constructed by mathematical operations called convolutions. These feature maps, when plugged into a large mathematical formula, result in a single number, a label that distinguishes each fruit (0 for apple, 1 for banana, 2 for orange, and so on). In our case, 0 means not suspicious and 1 means suspicious. When the program finds the combination of feature maps that best produces the correct labels for the images, training is complete.
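To make the idea of a convolution concrete, the toy example below slides a hand-picked 3x3 edge-detecting filter over a tiny black-and-white image to produce a feature map. In a real deep-learning model the filter values are not hand-picked; the network learns millions of them during training.

```python
# Toy illustration of a convolution building a "feature map": a 3x3
# vertical-edge filter slid over a 5x5 image whose right side is bright.
import numpy as np
from scipy.signal import convolve2d

image = np.array([[0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1]], dtype=float)

# Hand-picked filter that responds where pixel values change left-to-right.
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

feature_map = convolve2d(image, edge_filter, mode="valid")
# Nonzero entries mark the boundary between the dark and bright regions.
print(feature_map)
```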

In the testing phase, the model makes predictions on images it has never seen. Across the images the model reviews, we calculate how often it predicted the correct label. On our test set, the model achieved 95 percent accuracy in distinguishing images of regions treated for bone cancer from images of regions that were not treated. This is important because researchers may be able to use the model to quickly flag suspicious medical images. Our lab also plans to use this model to help fully automate radiotherapy for patients who require emergency treatment because of intense bone pain. Expediting such treatments can help patients receive safe and efficient care in a shorter amount of time than is currently possible.
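In code, the testing phase boils down to comparing the model’s predictions against the known labels. The sketch below assumes the trained model and a held-out test set like those in the training sketch above; the 95 percent figure quoted here came from our lab’s own test set, not from this illustrative snippet.

```python
# Sketch of the testing phase: the trained model labels images it has never
# seen, and we compare its predictions to the known labels to get accuracy.
# `model`, `test_images`, and `test_labels` are assumed to exist as in the
# earlier training sketch (labels are 0 or 1 per image).
import numpy as np

probabilities = model.predict(test_images)                 # one probability per image
predictions = (probabilities.ravel() >= 0.5).astype(int)   # threshold at 0.5

accuracy = np.mean(predictions == test_labels)
print(f"Test accuracy: {accuracy:.1%}")
```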

[Image: AI deciphering images of apples]

While these AI programs can be extremely accurate, they sometimes produce unexpected results. Bring up your internet browser (Bing or Google) and type in the word “apple.” We would expect to see red, green, or yellow fruits; however, you may notice that logos for Apple, Inc. also appear in the search results. I used Facebook’s recently released AI system, Detectron (github.com/facebookresearch/Detectron), to tell us what it thinks is in the image above. In this case, it thought the yellow apple was an orange.
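The experiment above used Detectron itself. As a rough stand-in that anyone can run, the sketch below feeds a photo of fruit to a pretrained object detector from the torchvision library (a different model than Detectron, trained on the COCO dataset, whose categories include apple, orange, and banana) and prints the labels it assigns. The file name is a placeholder.

```python
# Rough stand-in for the Detectron experiment described above: run a
# pretrained object detector (torchvision's Faster R-CNN, trained on COCO)
# on a photo of fruit and print what it thinks it sees.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A few COCO category names, indexed by their label ids, for printing.
COCO_NAMES = {52: "banana", 53: "apple", 55: "orange"}

image = to_tensor(Image.open("fruit.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    detections = model([image])[0]

for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        name = COCO_NAMES.get(int(label), f"class {int(label)}")
        print(f"{name}: {score:.2f}")
```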

AI can make connections and generalizations that are more complex than a human may be able to make. While this example is a simple one, it illustrates an important point: it is essential to evaluate how an AI model performs and where its assumptions fail or lead to unexpected predictions. This evaluation, along with applying the model to other parts of the body, is another aim of our lab’s work.

[Image: AI deciphering a spinal tumor]

After validating the model’s performance for many anatomical sites, our lab plans to use it to help cancer clinics detect and automate the treatment of painful bone metastases. The image above shows our lab’s AI model predicting the presence of bone metastases along the spine. Like the apple-orange mislabeling, our model may flag regions of the spine that indicate phenomena other than cancer, such as cysts, hyperdensity, or trauma. Exploring what the model is telling us makes this work both exciting and challenging.

AI is a powerful tool that has advanced our understanding of the world and its people. Its capabilities can seem daunting, but they offer a powerful means of improving not only social apps and self-driving cars, but also health care and medicine.
