Reviewing a patient’s electronic health records to assess their health, make a diagnosis or plan treatment is one of the most time-consuming parts of a medical appointment. It is often frustrating, too: tracing the timeline of a single parameter across test results or reading the findings of another medical consultant takes many clicks.
Even physicians well versed in IT systems need time to move from tab to tab, study the notes added by other physicians or nurses, manually compare figures and analyze data in their heads. All of this takes time – finding the desired information or drawing conclusions from already available data costs valuable minutes of every medical appointment.
Healthcare software providers strive to solve this problem by introducing dashboards displaying a transparent summary of critical data. For instance, standardized data – like laboratory test findings – are presented as clear charts, while recently prescribed drugs and interactions between them are shown in a separate table.
The weakest link – data in notes
Unfortunately, a large portion of valuable knowledge is trapped in a physician’s loose notes. Ever since notes became digital, their legibility has improved – but only their visual legibility. Every physician has their own note-taking style, and haste produces numerous grammatical errors, abbreviations and discipline-specific jargon. Paradoxically, these notes contain significant nuances that almost nobody reads.
Computers and artificial intelligence (AI) handle unified data analysis very well. That said, even the most advanced algorithms find it challenging to understand notes. Models used in one hospital often fail in another. A universal AI model would be the perfect solution.
This is what researchers at the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) are working on. They aim to create language models that extract important information from loose notes full of abbreviations, jargon and acronyms. The researchers want physicians to be able to use the data collected in these notes – rarely reviewed because of their sheer number and lack of order – to support clinical decisions.
A system for processing natural medical language has to be highly accurate and robust to the immense diversity of health-related datasets. Such AI models reach roughly 86% accuracy in expanding acronyms; the MIT team developed additional methods that raise that figure to about 90%.
EMR – or forgotten binders?
There are many commonly used abbreviations in medical jargon. For example, a note, “pt will dc vanco due to n/v,” means: the patient (pt) was taking the antibiotic vancomycin (vanco) but experienced nausea and vomiting (n/v) severe enough for the care team to discontinue (dc) the medication.
The current version of the medical abbreviation and acronym dictionary contains as many as 600,000 entries. If several abbreviations occur one after another, the AI model pairs them with their expansions – much as the human brain processes information. Hence the name of the technology: natural language processing (NLP). The result – a “translated” sentence structure – is then analyzed, its sense is verified, and the whole is ordered into a clear interpretation. This stage is referred to as post-processing.
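The first step of such a pipeline can be illustrated with a toy sketch. This is not the MIT model – real systems must resolve context-dependent ambiguity (for instance, “dc” can mean “discontinue” or “discharge”) – but a simple dictionary lookup shows how the example note above could be expanded. The dictionary entries here are illustrative, not drawn from a real medical lexicon:

```python
# Minimal sketch of dictionary-based abbreviation expansion.
# A tiny, hand-made lookup table stands in for the 600,000-entry dictionary.
ABBREVIATIONS = {
    "pt": "patient",
    "dc": "discontinue",
    "vanco": "vancomycin",
    "n/v": "nausea and vomiting",
}

def expand(note: str) -> str:
    """Replace each known abbreviation token with its expansion."""
    tokens = note.split()
    return " ".join(ABBREVIATIONS.get(t.lower(), t) for t in tokens)

print(expand("pt will dc vanco due to n/v"))
# patient will discontinue vancomycin due to nausea and vomiting
```

A production system would follow this with the post-processing stage described above: verifying that the expanded sentence makes clinical sense before presenting it.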
Let’s assume that a physician wants to know why patient X takes medicine Y, and enters a query into the system. To answer it, the model can simply browse the general data and return the statistically most common reason for taking that medicine. Alternatively, a more complicated inference path can be forced, one that pairs general information on the medicine’s use with the text notes in the patient’s medical records. This second method is highly personalized, since taking medicine Y may be associated with the patient’s other, concomitant diseases.
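The contrast between the two inference paths can be sketched in a few lines. The drug, the indication counts and the patient data below are all hypothetical, chosen only to show how a personalized answer can differ from the purely statistical one:

```python
from collections import Counter

# Hypothetical population-level data: reasons patients in general take a drug.
POPULATION_INDICATIONS = {
    "metformin": ["type 2 diabetes", "type 2 diabetes", "prediabetes", "PCOS"],
}

def most_common_indication(drug: str) -> str:
    """Path 1: purely statistical - the most frequent reason overall."""
    return Counter(POPULATION_INDICATIONS[drug]).most_common(1)[0][0]

def personalized_indication(drug: str, patient_conditions: set) -> str:
    """Path 2: cross-check general indications against this patient's notes."""
    for indication in POPULATION_INDICATIONS[drug]:
        if indication in patient_conditions:
            return indication
    # No match in the patient's record: fall back to the statistical answer.
    return most_common_indication(drug)

print(most_common_indication("metformin"))          # type 2 diabetes
print(personalized_indication("metformin", {"PCOS"}))  # PCOS
```

For a patient whose notes mention PCOS, the personalized path returns “PCOS”, while the statistical path would have answered “type 2 diabetes” – the most common reason only in the population at large.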
Breakthroughs that help physicians work
Preparing suitable algorithms for analyzing texts with different entry formats is one research direction. Another is to order the notes as they are created. Here, researchers are working on natural language processing (NLP) systems that extract data from conversations between the physician and the patient and enter it automatically into the EMR. A similar mechanism could be applied to handwritten text: the AI system would capture phrasings in real time, leaving the physician only to verify them.
The researchers emphasize that artificial intelligence is developing so fast that algorithms are becoming ever more precise at interpreting even chaotic notes full of jargon and abbreviations, because they understand the context from other data found in the EMR. In the years to come, we can hope for a breakthrough in how data is entered into EMRs and analyzed.