Years ago, after finishing a patient
visit, doctors would jot down a few quick notes mainly for their own reference.
Now, those casual notes have been replaced by detailed electronic health
records (EHRs) that coordinate care between specialties and facilities, while
also serving purposes like billing and legal documentation.
In primary care today, physicians
spend around two hours daily filling in EHRs for patients. Between 2009 and
2018, the average length of these records grew by 60 percent. Yet much of this
work is simple recordkeeping rather than clinical reasoning. So why take
doctors away from patients, or hire entire teams of scribes to handle it, when AI could now take on this role instead?
AI Scribes Enter the Scene
New AI-powered scribes are now being
tested in clinics across the US and around the globe, offered by companies such as SOAPsuds AI Medical Scribe, Freed, Tali, Revmaxx, and others in
a market exceeding $2 billion. It’s not just private providers making use of them;
the U.S. Department of Veterans Affairs signed agreements for trial programs
with Nuance and Abridge in 2024. Some reports suggest nearly 30 percent of
practices already use this kind of technology. Most of these systems work in a
similar way: they listen during patient visits, transcribe the discussion, and
then organize the details into the standard medical note format. Notes can be
ready within seconds or minutes of the conversation. For many clinicians, this
is both promising and a little unsettling. AI has the ability to streamline
work, but it’s also been known to make confident but incorrect statements, draw
from flawed data, and miss the latest clinical guidelines.
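To make the listen, transcribe, and format pipeline described above concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function names, the canned transcript, and the keyword rules are hypothetical, and the SOAP (Subjective, Objective, Assessment, Plan) structure stands in for whatever note format a given product targets. Commercial scribes replace both steps with proprietary speech-recognition and language models.

    # A hypothetical sketch of the listen -> transcribe -> format pipeline.
    # Neither function reflects any real product's API.

    def transcribe_visit(audio_path: str) -> list[str]:
        # Stand-in for the speech-to-text step; a real system would run an
        # ASR model over the recorded visit instead of returning canned text.
        return [
            "Patient reports a dry cough for two weeks.",
            "Lungs clear to auscultation, no fever today.",
            "Likely post-viral cough.",
            "Recommend fluids and follow-up in one week.",
        ]

    def draft_soap_note(utterances: list[str]) -> dict[str, list[str]]:
        # Toy keyword rules that sort utterances into SOAP sections; real
        # scribes use language models rather than hand-written heuristics.
        note = {"Subjective": [], "Objective": [], "Assessment": [], "Plan": []}
        for line in utterances:
            lowered = line.lower()
            if lowered.startswith(("recommend", "prescribe")):
                note["Plan"].append(line)
            elif "reports" in lowered or "complains" in lowered:
                note["Subjective"].append(line)
            elif "likely" in lowered or "consistent with" in lowered:
                note["Assessment"].append(line)
            else:
                note["Objective"].append(line)
        return note

    if __name__ == "__main__":
        transcript = transcribe_visit("visit_audio.wav")  # path is illustrative
        for section, lines in draft_soap_note(transcript).items():
            print(section)
            for item in lines:
                print("  - " + item)

Even this toy version shows why errors can enter at two distinct points: the speech step can mishear a term, and the drafting step can file an accurate statement under the wrong section, which mirrors the accuracy concerns discussed above.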
Human Errors Are Not Uncommon
Still, these shortcomings—bias,
mistakes, or outdated information—are also found in human documentation. In
fact, human notes can be far from perfect. A VA study showed that 90 percent of
doctor-written notes contained at least one error when compared with recorded
visits. Another review found that 96 percent of speech recognition–based notes had mistakes, and even after review, 42 percent still contained
inaccuracies. An emergency department study found that some documented exams never actually took place; barely over half could be confirmed by direct observation. Patient concerns also often fail to make it into the
record at all. When compared with this reality, AI
scribes may seem less risky. Errors already exist in records that guide medical
decisions, determine risk, and train predictive models. If AI reflects bias,
it’s often inherited from the human-written notes it learns from.
Data Privacy Remains a Concern
Privacy risks from AI tools are
significant. Audio from medical visits may be stored by outside vendors,
creating potential vulnerabilities. But medical data breaches are already
frequent and large in scale. Since 2021, over 700 breaches have occurred
annually. In 2024 alone, 703 incidents affected more than half of the U.S. population, over 181 million people. Patients usually have no option to withhold
their data from AI use if they want care. HIPAA, the main U.S. health privacy
law, has not been updated in years, but AI documentation tools likely won’t
worsen an already fragile system.
Is AI More Reliable than Human Scribes?
AI may be no more reliable than human scribes and other current methods of medical note-taking, such as remote or telescribes, but it is far quicker and more efficient. This faster output could
provide notable benefits. Doctors might spend more face-to-face time with
patients. After-hours charting could be reduced. Burnout, which rose to record levels during the pandemic, might ease. These outcomes would all be positive. If
AI allows doctors to focus more on patient care instead of constant data entry,
and its accuracy matches what’s already common, it could represent meaningful
progress.