AI Converges on Life Science
How Will It Help Medical Professionals of the Future?
This post is the first in a series about the impact Artificial Intelligence is having on our world. For each, we asked a GLG Network Member to share insight from their unique perspective. Here, Matthew Lungren, MD MPH, Associate Director of the Stanford Center for Artificial Intelligence in Medicine and Imaging, writes about how AI is transforming medicine, specifically radiology.
Because digital healthcare increasingly involves a massive amount of information, it can be difficult for healthcare systems to use that data in a way that maximizes the potential benefit to patient care. But advancements in Artificial Intelligence (AI) have already had an incredible impact in many industries by processing large volumes of complex, interdependent data to provide actionable insights. Healthcare is poised to become the next industry transformed by this technology.
The Augmented Clinician of the Future
It is likely that radiology – a specialty that for nearly 20 years has perfected the digital acquisition, storage, and delivery of vast quantities of medical imaging – will see the greatest initial impact from AI.
In the beginning, AI models could perform at human expert level on less complicated imaging examinations for specific tasks, such as diagnosing pneumonia on chest x-rays. Today at Stanford, we’re working with AI models on specialized imaging examinations of the brain – called CT arteriograms. These are used to diagnose vascular pathology and identify life-threatening abnormalities.
Our AI model quickly identified tiny aneurysms that are difficult for human experts to detect. What’s more, with the model’s assistance, radiologists, neurosurgeons, and even trainees diagnosed these dangerous lesions more accurately than when they scrutinized the images on their own.
If widely validated, this and many similar AI models could be used by any clinician at the point of care to help interpret medical imaging. This is potentially great news, especially for patients and health care systems with limited access to trained radiologists.
Domain Expertise and AI Bias
As advanced technology and computer science find applications in medicine, it’s become clear that the cultures of technology and medicine are not natural partners. Silicon Valley often adheres to Zuckerberg’s mantra, “Move Fast and Break Things.” Medical professionals (with their culture of carefully controlled trials) hew to their Hippocratic oath: “First, Do No Harm.”
These underlying philosophies seem to be at odds with one another. If one approach predominates, progress can grind to a halt. The ideal is to create multidisciplinary teams with both clinical and technical backgrounds. These teams can deliver the best applications, even if it means taking a more thoughtful, patient approach to clinical roll-out.
While this allows for exciting, rapid development, early adopters must be cautious. Premature deployment of AI tools without full evaluation can lead to dangerous and expensive mistakes, and work is still ongoing to understand the best way to identify and mitigate these potential challenges.
One important challenge is bias. We have seen this in AI applications like facial recognition: in many cases, the accuracy reported for the best commercial models held only for Caucasian male faces, and the systems consistently underperformed for women and people of color.
This is a salient lesson to inform healthcare applications. We must be cognizant of these potential biases and put safeguards in place so that these new technologies do not exacerbate health care disparities, particularly for marginalized or disadvantaged populations.
What About Privacy?
In most healthcare settings, sharing data between institutions expands the data set and helps us draw smarter conclusions, but the need to stringently protect that data makes the exchange difficult and time-consuming.
Privacy is indeed important, but potentially impactful technologies require large volumes of data, so it’s essential that we find a way to share data responsibly if we’re to advance the field of medicine. If we don’t work together, it will be difficult to build AI models that perform well for everyone and avoid pitfalls like the bias problem mentioned above.
At Stanford, we are actively investigating an alternative to direct data sharing between institutions called “federated learning.” Federated learning allows us to take an AI that has trained on our own data set and transfer the model to another institution so it can train on another data set. This approach allows the model to “learn” from patients in many different populations without having to move or send any patient data, ensuring both patient privacy and high-performing models that can generalize to different populations.
In theory, this approach could be repeated any number of times to train the model on new data sets from different populations. The resulting AI would be more accurate for more patients because it trained on a wide sample of populations and disease classes.
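To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of federated training in Python: a shared model travels from institution to institution and trains on each local data set, while the data itself never leaves the site. The institutions, data, and simple linear model below are hypothetical stand-ins for illustration, not the actual Stanford system.

```python
# A minimal sketch of sequential ("cyclic") federated learning.
# Hypothetical example: three institutions each hold private data;
# only the model weights move between sites, never the data.
import numpy as np

def local_training_round(weights, X, y, lr=0.05, epochs=20):
    """Train on one institution's local data; only the weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Each "institution" holds its own private data set (never shared directly).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
institutions = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    institutions.append((X, y))

# The model weights travel from site to site; patient data stays put.
weights = np.zeros(2)
for _ in range(5):
    for X, y in institutions:
        weights = local_training_round(weights, X, y)

print("learned weights:", weights)  # converges toward true_w over the rounds
```

In practice, federated learning systems add aggregation schemes (such as averaging model updates from many sites), secure communication, and careful validation, but the core idea is the same: the model moves, the data does not.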
While still in the early stages, we believe this approach will likely be among the key solutions to the data privacy concerns that are undoubtedly top of mind for many. Patients will not only be able to trust what we’re doing, but also rely on the outcomes we predict for them and their loved ones.
AI is More than Images; It is Everywhere in Healthcare.
AI will continue to transform the medical industry in ways we can readily see in the clinic, but even more often behind the scenes, in ways that are not patient-facing. For health care providers and clinicians, understanding the basic principles of AI will become a critically important skill for knowing how these powerful, but potentially limited, tools operate, where they are valuable, and where they might fail.
As a medical community, we all must have more than a conversational understanding of these tools so that we can continue to make the best decisions for our patients, know when to trust these new AI tools to help us make the right calls, and acknowledge the circumstances in which our own human intuition alone remains the most valuable.
Matthew Lungren, MD MPH, is the Associate Director of the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI.stanford.edu) and a Clinician Scientist at Stanford University Medical Center. He is recognized as a national leader in the field of machine learning and deep learning in healthcare and medical imaging applications as well as global public health. Matthew holds leadership positions on AI in imaging panels for large medical organizations such as the Radiological Society of North America (RSNA) and has been an invited keynote speaker at more than a dozen national medical conferences, at major academic centers, and at the NIH, presenting his research on artificial intelligence in medicine and imaging.