When and how to measure patient satisfaction to detect bottlenecks

Monday, May 27, 2024

A well-conducted patient satisfaction survey (PREMs, Patient-Reported Experience Measures) is an invaluable source of information about what patients appreciate and dislike. However, nowadays, a wealth of knowledge can also be derived from data collected through medical software.

Objective data and uncomfortable conclusions

Patient opinion surveys come with many pitfalls. Poorly formulated questions can lead to conclusions that are either obvious without a study or merely confirm preconceived notions. It's not uncommon for survey results to be ignored because the identified problems – such as long waiting times or registration queues – cannot be resolved immediately. Perhaps this is why medical facilities still rarely assess patient satisfaction; other reasons include a lack of time, cost concerns, and insufficient motivation to monitor service quality from the patient's perspective.

Surveys can offer insights into minor improvements that patients and medical staff value. Additionally, conducting satisfaction surveys signals a commitment to improving service quality.

The crucial shift here is how we approach the results: rather than expecting positive feedback, it's better to proactively seek critical or uncomfortable feedback, which is often the most valuable input for implementing the improvements patients want. As digitalisation progresses, much of the data healthcare facilities are looking for is already available in electronic health records (EHR) or appointment scheduling software. Check what you already have before you ask patients.

Getting the questions right (or skipping a survey)

Whether conducted in a doctor's office or via a call centre, a survey questionnaire should always be concise, easy to complete, and tailored to specific objectives. Questions should be developed collaboratively with staff at all levels – registration, doctors, nurses – to address particular issues and avoid vague inquiries. Collaborative development and clear communication of the survey's purpose also help to alleviate staff concerns about its intent.

The most challenging aspect is formulating clear objectives. Surveys can target satisfaction with appointment processes, first impressions (facility appearance, equipment), communication (adequate explanation of treatment results by doctors, patient comfort during registration), staff empathy and respect, and satisfaction with care outcomes.

With clear objectives, crafting questions becomes easier, and it also becomes clear whether a survey is needed at all. For instance, to improve patient service at registration, it is prudent to first examine data from IT systems and external sources. Analysing scheduling data can reveal service bottlenecks and suggest organisational adjustments.
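As a rough illustration, here is a minimal sketch of this kind of scheduling analysis. It assumes the scheduling system can export appointments as a CSV file with hypothetical columns scheduled_time, checkin_time and start_time; actual export formats and column names will differ by system.

```python
# Minimal sketch: finding waiting-time bottlenecks in a scheduling export.
# Assumes a hypothetical appointments.csv with columns:
# scheduled_time, checkin_time, start_time (ISO timestamps).
import pandas as pd

df = pd.read_csv("appointments.csv",
                 parse_dates=["scheduled_time", "checkin_time", "start_time"])

# Waiting time: minutes between check-in at registration and the start of the visit.
df["wait_min"] = (df["start_time"] - df["checkin_time"]).dt.total_seconds() / 60

# Average and 90th-percentile wait per weekday and hour highlights the worst slots.
bottlenecks = (
    df.assign(weekday=df["scheduled_time"].dt.day_name(),
              hour=df["scheduled_time"].dt.hour)
      .groupby(["weekday", "hour"])["wait_min"]
      .agg(mean_wait="mean", p90_wait=lambda s: s.quantile(0.9), visits="count")
      .sort_values("p90_wait", ascending=False)
)

print(bottlenecks.head(10))  # the ten slots with the longest waits
```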

Consider a survey intended to investigate the reasons for no-shows. Data on this subject is already available in the literature, which may make a dedicated study unnecessary. A more sensible approach is to improve appointment scheduling arrangements and record data on missed appointments. This makes it possible to identify which patient demographic groups are most likely to miss appointments, when no-shows occur, and what costs they generate. Such data forms a solid knowledge base. The same principle applies to other parameters, such as average appointment waiting times.
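The same kind of export can feed a simple no-show profile. The sketch below is hedged in the same way: the columns status, age_group and the per-slot cost figure are assumptions for illustration, not fields guaranteed by any particular system.

```python
# Minimal sketch: profiling no-shows from a scheduling export.
# Hypothetical columns: scheduled_time, status ("attended" / "no_show"), age_group.
import pandas as pd

df = pd.read_csv("appointments.csv", parse_dates=["scheduled_time"])
df["no_show"] = df["status"] == "no_show"
df["weekday"] = df["scheduled_time"].dt.day_name()

# No-show rate by demographic group and weekday.
rates = (df.groupby(["age_group", "weekday"])["no_show"]
           .agg(rate="mean", appointments="count")
           .sort_values("rate", ascending=False))

# Rough cost estimate, using an assumed average cost of an unused slot.
COST_PER_UNUSED_SLOT = 40  # assumed figure, in local currency
no_shows = df[df["no_show"]].copy()
no_shows["month"] = no_shows["scheduled_time"].dt.to_period("M")
monthly_cost = no_shows.groupby("month").size() * COST_PER_UNUSED_SLOT

print(rates.head(10))
print(monthly_cost)
```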

Paper is still needed to reach all patient groups

Classic voluntary surveys using paper forms in waiting rooms remain popular because they guarantee anonymity. However, the context in which patients complete these forms is crucial. Negative results can be expected if, for example, appointments are running late because of urgent, unscheduled patients, even when the questionnaire asks about matters unrelated to waiting times. Telephone surveys are more reliable in this respect, as they capture patients' retrospective evaluations.

In addition to demographic characteristics, medical characteristics that do not identify patients (e.g., preventive visits vs. chronic conditions) should be recorded, as responses can vary significantly across these groups.

Reliability and interpretation

To ensure representative results, patient samples must be selected randomly. Viable methods include surveying every tenth patient over a two-month period after their visit, or mailing surveys to 500 patients selected at random from the practice database.
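Drawing such a sample is straightforward if the patient register can be exported. This is a minimal sketch under assumed file and column names (patients.csv with a last_visit column); it simply illustrates the two sampling options mentioned above.

```python
# Minimal sketch: drawing a simple random sample of patients for a survey.
# Assumes a hypothetical patients.csv export with one row per patient
# and a last_visit column; it will fail if fewer than 500 patients exist.
import pandas as pd

patients = pd.read_csv("patients.csv")

# Option 1: a fixed-size sample, e.g. 500 patients for a mailed questionnaire.
mail_sample = patients.sample(n=500, random_state=42)

# Option 2: roughly every tenth patient seen in the last two months.
recent = patients[pd.to_datetime(patients["last_visit"])
                  >= pd.Timestamp.today() - pd.Timedelta(days=60)]
followup_sample = recent.sample(frac=0.1, random_state=42)

mail_sample.to_csv("survey_mailing_list.csv", index=False)
```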

Online surveys are convenient for medical facilities, offering automated aggregation and graphical presentation of results. However, this approach may exclude respondents who do not use computers regularly. A mixed method – combining online and postal surveys – offers a solution.

Ultimately, more responses reduce survey error and yield more open-ended feedback. While 50 surveys may suffice for a small practice, medium-sized practices may require several hundred responses.
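To make the link between response counts and survey error concrete, the following sketch applies the standard margin-of-error formula for a proportion, with a finite population correction; the practice sizes used are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: margin of error for a satisfaction proportion at 95% confidence.
import math

def margin_of_error(n_responses, population, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for a simple random sample."""
    se = math.sqrt(p * (1 - p) / n_responses)
    fpc = math.sqrt((population - n_responses) / (population - 1))  # finite population correction
    return z * se * fpc

# e.g. 50 responses from a small practice with 800 active patients vs.
# 300 responses from a clinic with 10,000 patients.
print(round(margin_of_error(50, 800) * 100, 1), "%")      # roughly +/- 13 %
print(round(margin_of_error(300, 10_000) * 100, 1), "%")   # roughly +/- 6 %
```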

Once results are summarised, interpretation is critical. Positive feedback should be highlighted and celebrated, while negative feedback should prompt reorganisation decisions such as training initiatives, e-health implementations, process revisions, or new doctor-patient communication strategies.