The Integration of RPAs, EHRs, and AI – A New Era for Healthcare Tech

“There are forces that want to see AI virtually unregulated because they worry regulation will slow progress. But we need to assure the public that the responses they are getting are trustworthy.”

– Roy Ziegelstein, MD, Faculty Member, Johns Hopkins School of Medicine

As healthcare systems grapple with rising patient demands, clinician burnout, and mounting administrative burdens, a new generation of digital tools is reshaping how care is delivered. Robotic process automation (RPAs), electronic health records (EHRs), and artificial intelligence (AI) are increasingly working in tandem to streamline workflows, surface clinical insights, and reduce time spent on documentation, marking a pivotal shift in modern healthcare technology.

RPAs automate repetitive administrative tasks such as billing, appointment scheduling, and insurance verification. EHRs serve as centralized digital repositories for patient health data, replacing paper charts and enabling information sharing across providers. AI builds on both systems by analyzing vast datasets, summarizing clinical information, assisting with patient communication, and supporting decision-making at the point of care. Together, these technologies promise faster, more efficient, and more personalized care, but they also raise new questions about trust, regulation, and the future role of clinicians.

Dr. Roy Ziegelstein—editor-in-chief at DynaMed®, a cardiologist, educator, and leader in the medical humanities at Johns Hopkins University—views today’s AI moment as part of a longer historical pattern of technological change in medicine: “This is nothing new,” he notes, emphasizing that innovation has always brought both opportunity and uncertainty. 

The key, he argues, lies in oversight and accountability. “AI is scary if it’s not regulated, but that’s true about any technology… There has to be a lot of trust in AI tools, but for that to happen, they have to be regulated.”

At the same time, Dr. Ziegelstein sees tangible benefits already unfolding in clinical practice. AI can now summarize complex patient records, draft portal responses, assist with clinical documentation in real time, and rapidly retrieve medical evidence, all tasks that once took hours or even days. “AI can help clinicians at the point of care in a very, very real way,” he says, pointing to a transformation that is happening not in the distant future, but right now.

As RPAs, EHRs, and AI continue to converge, healthcare stands at the threshold of a new era, one defined by speed, data-driven insight, and evolving ethical and professional responsibilities. The challenge ahead is not just technological, but human: ensuring that innovation enhances care without undermining clinical judgment, equity, or trust.

Meet the Expert: Roy Ziegelstein, MD, MACP

Dr. Roy Ziegelstein has more than 30 years of experience in medical education and healthcare. After earning his MD from Boston University, he came to Johns Hopkins in 1986, where he completed his internal medicine residency and chief residency on the Osler Medical Service and his cardiology fellowship at Johns Hopkins School of Medicine before joining the faculty in 1993. He has held numerous leadership positions, including director of the Internal Medicine Residency Program, executive vice chairman, vice chair for humanism in the Department of Medicine at Johns Hopkins Bayview Medical Center, and vice dean for education at Johns Hopkins University School of Medicine.

A dedicated educator and co-director of the Aliki Initiative on patient-centered care, Dr. Ziegelstein has received numerous awards for teaching excellence and is an internationally recognized expert on the connection between depression and cardiovascular disease.

In his role as editor-in-chief, Dr. Ziegelstein focuses on advancing evidence-based medicine, strengthening practice-based guidance, and enhancing the role of DynaMed® as a trusted clinical resource. His expertise is invaluable as EBSCO Clinical Decisions progresses in its generative artificial intelligence journey with Dyna AI, delivering critical information to clinicians at the point of care faster and more reliably than ever.

Challenges and Advancements in AI Use

As artificial intelligence becomes more integrated into clinical care, its effectiveness increasingly depends on how well clinicians are trained to use it. Dr. Roy Ziegelstein stresses that AI should be viewed as a clinical tool rather than an autonomous decision maker: “Every medical student learns to use tools in providing care to their patients. We are not born knowing how to use a stethoscope,” he explains. 

“If a student is not trained to use a stethoscope, it might not just be less useful. It could actually be harmful, because the clinician might think they are hearing something and base a treatment decision on a tool they are using inappropriately. AI is the same. AI is a tool just like the stethoscope. It enables rapid retrieval of information, but you have to know how to use it.”

To address this challenge, Dr. Ziegelstein anticipates expanded AI training for both medical students and practicing physicians, particularly in clinical decision support: “We are going to see increased training in the use of AI tools,” he says. “Without proper training, the AI tool might just not be useful—it might be harmful.” 

He also expects more formal evaluation and oversight of AI systems, similar to how clinicians assess medical devices or diagnostic tools. “There will be increased certification and evaluation of clinical decision support AI tools,” he notes, a process that will help providers understand their strengths, limitations, and appropriate use cases.

At the same time, technological advancements are making AI more seamlessly integrated into electronic health records, allowing insights to be delivered directly within clinical workflows. Rather than requiring clinicians to search for information, AI tools can proactively surface relevant guidance based on patient data and provider practice patterns. 

Dr. Ziegelstein points to existing examples in cardiology, where EHR-integrated decision support systems can suggest alternative imaging options in real time. These developments signal a shift toward more responsive, data-informed care, while reinforcing the need for training and oversight to ensure AI enhances rather than undermines clinical judgment.

Maintaining Clinical Decision Making

As AI becomes more capable of generating recommendations and clinical insights, a key concern is preserving the central role of physician judgment. Dr. Roy Ziegelstein challenges the assumption that doctors are inherently better decision makers than machines in every context. “If a doctor does not know anything about the patient they are seeing, I am not sure that a doctor is better than a machine,” he says, acknowledging that many patients report feeling unknown or unseen by their physicians.

Time pressure, burnout, and short appointment windows often limit how deeply clinicians can understand a patient’s personal circumstances, values, and lived experience.

Dr. Ziegelstein argues that the future of high-quality care lies not in choosing between AI and human judgment, but in combining them effectively. AI, he emphasizes, cannot replace clinical insight or the relational aspects of medicine. “The AI tool does not know the patient,” he notes. Instead, its greatest value comes when it supports a clinician who truly understands the person behind the chart, including their social context, financial situation, vulnerabilities, and support system. When AI-powered evidence and analysis are paired with a physician who knows the patient as a human being, he says, “you have got something very special.”

Addressing Biases in AI Tools

As large language models and clinical decision support systems become more widely used in healthcare, concerns about bias and misinformation remain at the forefront. Dr. Ziegelstein cautions against treating AI as a single, uniform technology: “There is too much in the press and in the medical literature that treats AI as if it is one thing,” he says. “It is not. These are different types of AI tools, and some of them have no meaningful human oversight at all.” 

When AI systems draw from broad, unvetted information sources, he warns, they risk amplifying inaccuracies, distortions, and embedded societal biases rather than improving care.

To address these risks, Dr. Ziegelstein argues that human oversight is essential at every stage of AI development and deployment. “You need humans to vet the content so that it is trustworthy, but also to look for evidence of bias,” he explains. He points to emerging models that prioritize rigorous review and accountability, including tools that are heavily curated by medical experts and allow users to flag potential bias in AI-generated responses. “We take that very seriously,” he notes, emphasizing that identifying and correcting bias is not optional, but foundational to building AI systems clinicians and patients can trust.

Certification and Regulation of AI Tools 

As AI becomes more embedded in healthcare, questions about oversight, certification, and accountability are growing more urgent. When asked who might set standards for medical AI, Dr. Roy Ziegelstein is candid about the uncertainty. “I do not know the answer to that,” he says, noting that it remains unclear whether organizations like the FDA, the American Medical Association, or another body will ultimately take the lead in regulating these tools. What is clear, he argues, is that the stakes are high, especially when AI is influencing clinical decisions that directly affect patient outcomes.

Dr. Ziegelstein describes a tension between two competing forces: the push to accelerate AI innovation and the need to ensure safety, reliability, and public trust. “There are forces that want to see AI virtually unregulated because they worry regulation will slow progress,” he explains. “But we need to assure the public that the responses they are getting are trustworthy.”

He compares the moment to the early days of aviation, when safety standards and regulatory oversight became essential to building confidence in air travel. While the future regulatory framework for medical AI remains uncertain, Dr. Ziegelstein believes public demand for vetted, trustworthy tools will ultimately shape how and where formal oversight takes hold.

Training Medical Students in AI Use 

Trust and discernment in clinical guidance are skills that must be learned, Dr. Ziegelstein emphasizes, especially for medical students who are still forming their professional judgment. “Every clinician needs to know which AI clinical decision support tool is trustworthy,” he says. “But medical students need to know it more than anyone else because they are just learning.”

He compares AI literacy to how students historically learned which professors and clinicians were reliable sources of medical knowledge. In the same way that past generations learned to evaluate human expertise, future clinicians will need to develop the ability to judge the credibility, limitations, and appropriate use of AI tools.

To support this learning, Dr. Ziegelstein describes a new educational model being developed for medical students called the Triple A Framework: Ask, Audit, and Apply. “First, students need to know how to ask the AI tool the right question,” he explains, noting that poorly framed prompts can lead to misleading or irrelevant responses. 

The second step, Audit, requires students to critically evaluate AI-generated claims by examining the quality and source of the underlying evidence. He highlights systems that rely on heavily vetted, evidence-based data and clearly grade the strength of recommendations as examples of what students should look for.

The final step, Apply, focuses on using AI insights in the context of the individual patient. “The tool does not know the patient,” he notes. “Students and clinicians must apply recommendations in a way that reflects the patient’s values, circumstances, and lived experience.” Together, these principles aim to prepare future clinicians to use AI thoughtfully, responsibly, and in the service of better patient care.

Kimmy Gustafson

Writer

With her passion for uncovering the latest innovations and trends, Kimmy Gustafson has interviewed experts and provided readers with valuable insights into the rapidly evolving field of medical technology since 2019. Kimmy has been a freelance writer for more than a decade, writing hundreds of articles on a wide variety of topics such as startups, nonprofits, healthcare, kiteboarding, the outdoors, and higher education. She is passionate about seeing the world and has traveled to over 27 countries. She holds a bachelor’s degree in journalism from the University of Oregon. When not working, she can be found outdoors, parenting, kiteboarding, or cooking.