
AI helps medical professionals read confusing EEGs to save lives


Researchers at Duke University have developed an assistive machine learning model that greatly improves the ability of medical professionals to read the electroencephalography (EEG) charts of intensive care patients.

Because EEG readings are the only way to know when unconscious patients are in danger of suffering a seizure or are having seizure-like events, the computational tool could help save thousands of lives each year. The results appear online May 23 in NEJM AI.

EEGs use small sensors attached to the scalp to measure the brain’s electrical signals, producing a long line of up and down squiggles. When a patient is having a seizure, these lines jump up and down dramatically like a seismograph during an earthquake — a signal that is easy to recognize. But other medically important anomalies called seizure-like events are much more difficult to discern.

“The brain activity we’re looking at exists along a continuum, where seizures are at one end, but there are still a lot of events in the middle that can also cause harm and require medication,” said Dr. Brandon Westover, associate professor of neurology at Massachusetts General Hospital and Harvard Medical School. “The EEG patterns caused by those events are more difficult to recognize and categorize confidently, even by highly trained neurologists, which not every medical facility has. But doing so is extremely important to the health outcomes of these patients.”

To build a tool to help make these determinations, the doctors turned to the laboratory of Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke. Rudin and her colleagues specialize in developing “interpretable” machine learning algorithms. While most machine learning models are “black boxes” that make it impossible for a human to know how they reach their conclusions, interpretable machine learning models essentially must show their work.

The research group started by gathering EEG samples from over 2,700 patients and having more than 120 experts pick out the relevant features in the graphs, categorizing them as either a seizure, one of four types of seizure-like events, or “other.” Each type of event appears in EEG charts as certain shapes or repetitions in the undulating lines. But because these charts rarely look clear-cut, telltale signals can be interrupted by bad data or can blend together into a confusing chart.

“There is a ground truth, but it’s difficult to read,” said Stark Guo, a Ph.D. student working in Rudin’s lab. “The inherent ambiguity in many of these charts meant we had to train the model to place its decisions within a continuum rather than well-defined separate bins.”
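To make that idea concrete, one simple way to build such continuum-style training targets is to average the experts’ votes for each EEG segment into a probability vector rather than forcing a single hard label. The sketch below is illustrative only and is not taken from the paper; the four seizure-like pattern names (LPD, GPD, LRDA, GRDA) are standard ictal-interictal continuum labels assumed here for readability.

```python
import numpy as np

# Six categories described in the article: seizure, four seizure-like
# patterns, and "other". The specific pattern names are an assumption,
# not taken from the article itself.
CATEGORIES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]

def soft_label(expert_votes):
    """Turn a list of per-expert category votes for one EEG segment
    into a probability vector, so ambiguous segments land between
    classes instead of being forced into a single bin."""
    counts = np.zeros(len(CATEGORIES))
    for vote in expert_votes:
        counts[CATEGORIES.index(vote)] += 1
    return counts / counts.sum()

# Example: six experts call a segment a seizure, four call it an LPD,
# so the target is 0.6 seizure / 0.4 LPD rather than a hard label.
votes = ["seizure"] * 6 + ["LPD"] * 4
print(soft_label(votes))
```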

When displayed visually, that continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm corresponds to one type of seizure-like event the EEG could show. The closer the algorithm places a specific chart to the tip of an arm, the more certain it is of its decision, while charts placed closer to the central body are less certain.
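As a rough sketch of how such a “starfish” view could be drawn, the snippet below places the six category tips evenly around a circle and positions each chart at the probability-weighted average of those tips, so confident predictions land near an arm tip and ambiguous ones near the center. This is a hypothetical illustration of the idea, not the paper’s actual visualization, which presumably uses a learned low-dimensional embedding.

```python
import numpy as np
import matplotlib.pyplot as plt

# Place the six category "arm tips" evenly around a circle.
n_classes = 6
angles = np.linspace(0, 2 * np.pi, n_classes, endpoint=False)
arm_tips = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def embed(probs):
    """Map a class-probability vector to a 2D point: a confident
    prediction lands near its arm tip, an ambiguous one near the center."""
    return probs @ arm_tips

confident = np.array([0.9, 0.05, 0.02, 0.01, 0.01, 0.01])
ambiguous = np.array([0.3, 0.25, 0.2, 0.1, 0.1, 0.05])

for probs, label in [(confident, "confident"), (ambiguous, "ambiguous")]:
    x, y = embed(probs)
    plt.scatter(x, y, label=label)
plt.scatter(*arm_tips.T, marker="x", color="gray")  # category tips
plt.legend()
plt.show()
```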

Besides this visual classification, the algorithm also points to the patterns in the brainwaves it used to make its determination and provides three examples of professionally diagnosed charts that it considers similar.
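A minimal sketch of this kind of example retrieval, assuming the model embeds each chart into a feature space and simply returns the three closest expert-labeled charts by Euclidean distance (all names, labels, and dimensions here are hypothetical, not the paper’s actual method):

```python
import numpy as np

def three_nearest_examples(query_embedding, reference_embeddings, reference_labels):
    """Return the three expert-labeled reference charts whose embeddings
    are closest to the query chart, with their distances."""
    dists = np.linalg.norm(reference_embeddings - query_embedding, axis=1)
    nearest = np.argsort(dists)[:3]
    return [(reference_labels[i], dists[i]) for i in nearest]

# Toy reference library of expert-diagnosed charts in a learned
# feature space; values are random placeholders for illustration.
rng = np.random.default_rng(0)
reference_embeddings = rng.normal(size=(500, 16))
reference_labels = rng.choice(["seizure", "seizure-like", "other"], size=500)
query = rng.normal(size=16)
print(three_nearest_examples(query, reference_embeddings, reference_labels))
```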

“This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark,” said Alina Barnett, a postdoctoral research associate in the Rudin lab. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”

Putting the algorithm to the test, the collaborative team had eight medical professionals with relevant experience categorize 100 EEG samples into the six categories, once with the help of AI and once without. The performance of all participants greatly improved, with their overall accuracy rising from 47% to 71%. Their accuracy also exceeded that of participants who used a similar “black box” algorithm in a previous study.

“Usually, people think that black box machine learning models are more accurate, but for many important applications, like this one, it’s just not true,” said Rudin. “It’s much easier to troubleshoot models when they are interpretable. And in this case, the interpretable model was actually more accurate. It also provides a bird’s eye view of the types of anomalous electrical signals that occur in the brain, which is really useful for care of critically ill patients.”

This work was supported by the National Science Foundation (IIS-2147061, HRD-2222336, IIS-2130250, 2014431), the National Institutes of Health (R01NS102190, R01NS102574, R01NS107291, RF1AG064312, RF1NS120947, R01AG073410, R01HL161253, K23NS124656, P20GM130447) and the DHHS LB606 Nebraska Stem Cell Grant.
