How To Design Electronic Medical Records That Doctors Love

Monday, October 25, 2021
A new chapter in medicine was supposed to begin when paper patient records were replaced with electronic ones. Doctors were promised quick access to data, and patients were to benefit from improved quality of care and personalized medicine. Instead, frustrated doctors spend hours every day entering medical notes into a computer or browsing through electronic records. This is not the digitalization they dreamed of.

First, the bad news. Today's EHR is only as good as the available technology, and similar challenges with recording and interpreting data exist in other sectors as well. Neither artificial intelligence nor voice recognition systems are yet good enough to transform the doctor's interaction with the EHR. But there is also good news: AI is developing so fast that within the next few years it will be used more frequently in the doctor's practice, making everyday work more convenient.

New interaction between man and machine

Until recently, the great hope for a breakthrough was IBM Watson, an artificial intelligence system designed to support clinical decision-making, especially in oncology. The first attempt to engage AI in clinicians' work proved disappointing, however. The problem was not the technology itself but limited access to data. Nevertheless, IBM has managed to build some exciting solutions. One example is Patient Synopsis, which helps radiologists better understand patient data: the system pulls data from the EHR and presents, as a list in a single window, a summary of the items relevant to the patient's diagnosis.

Another interesting example is the MedKnowts system developed by researchers at MIT and Beth Israel Deaconess Medical Center. It combines searching through medical records and documenting patient information in one interactive interface. Instead of displaying all patient information, the system shows the doctor only the most critical elements. For example, when MedKnowts identifies the clinical term "diabetes" in the text typed by the clinician, it automatically displays a "diabetes chart" containing medications, lab values and excerpts from previous records relevant to diabetes management. This is a different approach from current EHR design, which focuses on showing information in chronological or alphabetical order. MedKnowts also automatically fills in fields with patient information as the clinician takes notes (see the sketch at the end of this section).

The apparent convenience, however, has run up against several obstacles. One of them, paradoxically, was convincing doctors to... change their work practices. For years, doctors have grown accustomed to their current IT systems; they click through successive windows intuitively, and even when reaching a piece of information takes dozens of clicks, they do it without thinking. The more intuitive, friendlier system required learning a new way of interacting with the machine.

That is why researchers are now working on an adaptive system in which the software gradually adapts to the doctor's way of working and the data being entered. In the future, the doctor will not be condemned to a system with rigid functionality and structure; they will actively shape the evolution of its architecture through the type of data they enter and the way they work. Customization is already possible through options, modules and predefined templates, but this is not enough. Under the current structure, even when the doctor uses only selected tabs during a visit, the others are still displayed in the foreground, and the record looks exactly the same for a healthy patient who needs preventive decisions as for a sick patient who requires quite different information.

For the EHR experience to improve, integration at the data and software level is essential. From within the EHR, the doctor must be able to access literally all of the other doctors' notes. Otherwise, even small holes in the medical history cause uncertainty and force additional tests to be ordered.
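To make the idea concrete, here is a minimal sketch in Python of how a MedKnowts-style context card could be triggered. This is not MedKnowts' actual code; the term list, field names and card contents below are invented for illustration.

```python
# Hypothetical sketch of context-sensitive note-taking: clinical terms
# detected in the text typed by the clinician trigger a condensed summary
# card instead of the full chronological record.
import re

# Invented mapping of clinical terms to the record fields their card shows.
CONTEXT_CARDS = {
    "diabetes": ["medications", "lab_values", "previous_notes"],
    "hypertension": ["medications", "blood_pressure_readings"],
}

def detect_terms(note_text: str) -> list[str]:
    """Return the known clinical terms found in the text typed so far."""
    words = re.findall(r"[a-z]+", note_text.lower())
    return [term for term in CONTEXT_CARDS if term in words]

def build_card(term: str, patient_record: dict) -> dict:
    """Assemble a condensed card with only the fields relevant to the term."""
    return {field: patient_record.get(field, [])
            for field in CONTEXT_CARDS[term]}

# Example: a note mentioning diabetes surfaces only the diabetes-related data.
record = {
    "medications": ["metformin 500 mg"],
    "lab_values": ["HbA1c 7.2%"],
    "previous_notes": ["2021-03-02: dosage adjusted"],
}
for term in detect_terms("Patient reports stable diabetes, no new symptoms."):
    print(term, "->", build_card(term, record))
```

A real system would of course rely on a trained clinical language model rather than a keyword list, but the interaction pattern is the same: the note drives what the interface displays, not the other way around.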

Dictation into the EHR

Will accurate voice recognition and voice control bring a dramatic improvement in the way notes are entered into the EHR? Many questions remain open: will doctors want to talk to a computer all day? Is dictating into the EHR comfortable for the doctor and the patient? As in other areas of technology transformation, technology alone is not sufficient for voice recognition tools.

Nevertheless, the challenge is smaller than it seemed a few years ago, because the amount of information a doctor needs to enter or dictate into the system will decrease over time. RIS/LIS/PACS and EHR systems are becoming better integrated with each other, and the interoperability of e-health systems is improving. Thanks to digitization, information from tracking devices and sensors is sent automatically to the patient's record, and AI systems can make initial health assessments based on symptoms recorded by the patient even before they see a doctor. The doctor's task will therefore be reduced to entering observations from the patient interview, i.e. non-measurable information such as general health assessment, subjective well-being and non-medical determinants of health. But here, too, progress is rapid, and much previously unmeasurable data can now be recorded thanks, for example, to facial emotion recognition technologies.

In the doctor's office, voice recognition covers two functions: controlling IT systems by voice, and voice recognition for medical notes. The latter involves systems that can transcribe the voices of the doctor and patient and assign the extracted critical data to standardized fields in IT systems (a simple sketch of this field-assignment step follows at the end of this article).

Although advances in recent years have been enormous, as anyone using Zoom's transcription or giving commands to Siri or Amazon Alexa can attest, we are probably still a few years away from the accuracy needed in medicine, even if some manufacturers claim 99% accuracy. A study published in JAMA in 2018 ("Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists"), based on 217 clinical notes randomly selected at two healthcare organizations, found an error rate of 7.4% in notes created by speech recognition software. The authors of "Electronic Health Record Interactions through Voice: A Review" suggest that voice recognition tools can improve work efficiency and remove the limitations imposed by classic graphical user interfaces, but that further research is needed to understand the impact of these technologies on workflow and security.

Another concern is privacy. Some technologies use neural networks to map doctor-patient conversations and convert them into notes in the EHR, but they require devices with microphones to record every interaction. Capturing even more data from patient behaviour or facial expressions would also require cameras. Doesn't this compromise the intimate atmosphere that a patient burdened with a disease expects?

Doctors are waiting for the next generation of EHR systems, referred to as "ambient clinical intelligence" systems. The dream of fewer clicks and a more patient-centred visit is still a long way off, and its materialization will depend not only on new technologies but, above all, on doctors' acceptance of a new way of working. It may turn out that inefficient systems have become so firmly established in medicine that saying goodbye to them will not come easily.
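As promised above, here is a minimal, hypothetical sketch of the field-assignment step: taking already-transcribed dictation text and mapping extracted values to standardized EHR fields. The regular expressions and field names are invented for illustration; a production system would use a trained clinical NLP model rather than patterns like these.

```python
# Hypothetical sketch: assign values extracted from a dictation transcript
# to standardized EHR fields. Patterns and field names are invented; real
# systems rely on trained clinical language models, not regexes.
import re

# Invented patterns mapping spoken phrases to structured fields.
FIELD_PATTERNS = {
    "blood_pressure": re.compile(r"blood pressure (\d{2,3}) over (\d{2,3})"),
    "heart_rate": re.compile(r"heart rate (\d{2,3})"),
    "temperature_c": re.compile(r"temperature ([\d.]+) degrees"),
}

def structure_dictation(transcript: str) -> dict[str, str]:
    """Extract measurable values from a transcript into standardized fields."""
    fields = {}
    text = transcript.lower()
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        if match:
            # Multi-group matches (e.g. blood pressure) are joined with "/".
            fields[name] = "/".join(match.groups())
    return fields

print(structure_dictation(
    "Blood pressure 130 over 85, heart rate 72, temperature 36.8 degrees."
))
# -> {'blood_pressure': '130/85', 'heart_rate': '72', 'temperature_c': '36.8'}
```

Even in this toy form, the sketch shows why the error rates cited above matter: a misheard number lands directly in a structured clinical field, which is exactly where ambient clinical intelligence systems will need near-perfect accuracy.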