Artificial intelligence could make radiology reports for X-rays, CT and MRI scans twice as easy to understand without compromising clinical accuracy, according to new research by the University of Sheffield.
Researchers found that when these reports were rewritten using AI systems such as ChatGPT, the reading level dropped from university level to that of a school pupil aged 11-13.
This could have significant benefits for patients and healthcare systems, reducing anxiety and confusion and improving health equity for people with lower health literacy or who use English as a second language. It would also free up clinicians’ time to focus on treatment and care decisions.
Artificial intelligence could soon help patients make sense of complex medical scan results, making them far easier to understand without losing clinical accuracy, a major new study by the University of Sheffield suggests.
The research found that when radiology reports for X-rays, CT and MRI scans were rewritten using advanced AI systems such as ChatGPT, patients found them almost twice as easy to understand compared with the original versions.
Analysis showed that the reading level dropped from “university level” to one more closely aligned with the comprehension of a school pupil aged 11-13.
The findings suggest that AI-assisted explanations could become a standard companion to medical reports, helping to improve transparency and trust across healthcare systems, including the NHS.
Researchers reviewed 38 studies published between 2022 and 2025, covering more than 12,000 radiology reports that had been simplified using AI. These rewritten reports were evaluated by patients, members of the public and clinicians, to assess both patient understanding and clinical accuracy.
Radiology reports are traditionally written for doctors rather than patients. However, initiatives promoting patient-centred care, such as the NHS App, alongside new policies mandating greater transparency of medical records, mean patient access to these reports has expanded rapidly.
Lead author of the study, Dr Samer Alabed, Senior Clinical Research Fellow at the University of Sheffield and Honorary Consultant Cardio Radiologist at Sheffield Teaching Hospitals NHS Foundation Trust, said: “The fundamental issue with these reports is they’re not written with patients in mind. They are often filled with technical jargon and abbreviations that can easily be misunderstood, leading to unnecessary anxiety, false reassurance and confusion.
“Patients with lower health literacy or English as a second language are particularly disadvantaged. Clinicians frequently have to use valuable appointment time explaining report terminology instead of focusing on care and treatment. Even small time savings per patient could add up to significant benefits across the NHS.”
Doctors reviewing these AI-simplified reports found that the vast majority were accurate and complete, but around one per cent contained errors such as an incorrect diagnosis, showing that while the approach is highly promising, it still needs careful oversight.
Of the 38 studies reviewed, none were conducted in the UK or in NHS settings, a significant gap which Dr Alabed says the research team is now seeking to address.
“This research has highlighted several key priorities. The most important is the need for real-world testing in NHS clinical workflows to properly assess safety, efficiency, and patient outcomes,” said Dr Alabed.
“This includes human-oversight models, where clinicians review and approve AI-generated explanations before they are shared with patients. Our long-term goal is not to replace clinicians, but to support clearer, kinder, and more equitable communication in healthcare.”
The research underscores the University’s commitment to transforming ideas into impact, a true embodiment of independent thinking and shared ambition.