
The IT Leadership's Roadmap to Safer AI Adoption in Healthcare

Healthcare's digital transformation continues with the integration of artificial intelligence (AI) into electronic health records (EHRs), a combination that presents both unprecedented opportunities and complex challenges.

As they race to adopt AI across clinical and operational workflows, IT leaders increasingly find themselves at the center of a high-stakes balancing act: driving innovation while safeguarding patient safety, regulatory compliance, system integrity, and financial accountability.

The integration of AI into EHRs has tremendous potential, but without the right governance, validation, and oversight, that potential can quickly become a liability. To mitigate these risks, providers should emphasize safe AI adoption, starting with IT and digital leadership and implementing structured risk management frameworks such as SAFER and GRaSP.

The AI Risk Reality: What IT and Digital Leaders Are Up Against 

AI is embedded in systems like Epic and Cerner, automating everything from clinical summaries to sepsis prediction. But real-world performance often deviates from what vendors promise.

Epic’s Sepsis Model, for example, was initially marketed with high predictive accuracy (AUC 0.76–0.83), yet a real-world study found an AUC of just 0.63, with the model missing most cases of sepsis while triggering alerts for patients without it. This kind of underperformance can disrupt workflows and erode clinician trust, exposing health systems to preventable harm and liability.
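This is why local validation matters: a vendor-reported AUC says little about how a model performs on your patient population. As a minimal sketch, assuming you can export the model's risk scores alongside ground-truth outcomes for your own patients (the data below is purely illustrative), the AUC can be recomputed locally:

```python
# Minimal local-validation sketch: recompute AUC on your own population.
# All data and names here are illustrative assumptions, not vendor APIs.

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that a
    randomly chosen positive case scores higher than a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores and sepsis outcomes from a local chart review.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   0,   1,   0,   1,    0,   0,   1,   0,   0]
print(f"Local AUC: {auc(scores, labels):.2f}")  # → Local AUC: 0.71
```

If the locally computed figure falls well below the vendor's claim, as it did in the sepsis study above, that gap is exactly the signal a governance process should act on.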

Inaccuracies can disrupt medical diagnosis, treatment, and an organization’s operational structure. Currently, IT leaders are contending with: 

  • Shadow AI: Tools implemented by departments without the IT department’s knowledge
  • Lack of local validation: AI models that haven’t been tested on your patient population
  • Inconsistent clinical controls: No structured oversight for AI-generated documentation
  • Governance gaps: Committees making AI decisions without technical insight
  • Unclear ROI and cost modeling: Difficulty tracking AI’s actual financial impact 

Frameworks for Responsible AI and EHR Risk Management 

To aid in compliance and safe AI adoption, healthcare IT leaders must move beyond ad hoc solutions by adopting two core frameworks: SAFER and GRaSP.

The SAFER Framework: A Foundation for EHR Risk Management 

The updated 2025 Safety Assurance Factors for EHR Resilience (SAFER) Guides, which hospitals attest to under CMS's Promoting Interoperability Program, provide a structured framework for assessing and improving EHR safety. With a reduction from 147 to 88 recommendations, the revised guidelines focus on clarity, relevance, and alignment with current clinical and regulatory needs.

Key SAFER Guide updates include: 

  • Emphasis on AI integration and cybersecurity
  • Enhanced guidance on patient-clinician communication
  • Real-world testing and multidisciplinary governance 

A case study from a national network of ambulatory clinics demonstrated how the SAFER assessment can identify both quick wins and long-term remediation opportunities to improve patient safety.  The assessment revealed 216 gaps, ranging from missing warnings for allergy documentation, duplicate orders, and inappropriate doses, to a lack of processes and procedures regarding testing of new EHR functionality. 

GRaSP: A Proactive Approach to EHR Safety 

Our Guided Risk and Safety Program (GRaSP) offers a comprehensive methodology for identifying and mitigating EHR-related risks before harm occurs. Unlike reactive models, GRaSP emphasizes: 

  • A library of 1,500+ controls across clinical, technical, and revenue cycle domains
  • Integration of automated, policy, training, and monitoring controls into workflows
  • Heat maps that identify control gaps for prioritization and remediation 

One case study showed how GRaSP helped a major health system uncover 87 patient-harm risks in the high-risk areas of oncology, perioperative care, and emergency medicine, along with 126 controls and interventions that would not otherwise have been identified before patient harm occurred.


The AI Adoption Lifecycle: 7 Pillars IT Leaders Must Own 

Safe AI implementation doesn’t end once it’s live. It demands lifecycle management across these pillars: 

  1. Governance: Dedicated AI oversight structures, clear accountability, policies, tracking, and regulatory/legal compliance
  2. Technology: Data readiness, cloud computing infrastructure, integration, privacy, and security
  3. Financial: Total cost of ownership (TCO), benefits/impact, ROI tracking, and impact on patient safety as it relates to operational and medical malpractice costs
  4. Clinical Risk & Controls: SAFER compliance, compliance monitoring with controls integration, and a risk-and-controls framework (e.g., a guided risk and safety program)
  5. Model Testing: Test plan, scenario development, success metrics (e.g., accuracy, drift, bias, hallucinations), data structure and sampling, piloting processes
  6. Transparency: Patient communication, trust-building strategies, accessibility/usability, challenge procedures
  7. Monitoring: Surveillance capture of incidents and anomalies, ML Ops, AI Ops, loop closure with developers 

These 7 Pillars of AI Lifecycle Management serve as a blueprint for responsible AI deployment. 
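To make the Monitoring pillar concrete, a simple sketch, with thresholds and field names that are illustrative assumptions rather than any standard, is to compare a model's recent performance against the baseline established during validation and escalate when the gap exceeds a tolerance:

```python
# Illustrative Monitoring-pillar sketch: flag model performance drift by
# comparing a rolling metric against the locally validated baseline.
# The 0.05 tolerance and the escalation wording are assumptions.

def check_drift(baseline_auc, recent_auc, tolerance=0.05):
    """Return a surveillance record; 'drifted' is True when recent
    performance falls more than `tolerance` below the baseline."""
    drop = baseline_auc - recent_auc
    return {
        "drifted": drop > tolerance,
        "drop": round(drop, 3),
        "action": ("escalate to AI governance committee" if drop > tolerance
                   else "continue routine surveillance"),
    }

# e.g., a model validated at AUC 0.78 now measuring 0.63 in production:
print(check_drift(baseline_auc=0.78, recent_auc=0.63))  # drifted: True
```

In practice this check would run on a schedule against live outcome data, with alerts feeding the incident-capture and developer loop-closure processes the Monitoring pillar describes.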

Clinical Controls: Bridging AI and EHR Safety 

Clinical controls are the connective tissue between AI systems and EHR workflows. Examples include: 

  • Configurable alerts for allergy documentation
  • Metadata tagging for AI-generated summaries
  • Feedback mechanisms for clinicians to flag AI errors 

These controls not only enhance safety but also build trust among clinicians and patients navigating the AI-enabled care environment. 
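One of the controls above, metadata tagging for AI-generated summaries, can be sketched as follows; the field names are hypothetical and not drawn from any EHR vendor's schema:

```python
# Illustrative clinical-control sketch: wrap AI-generated documentation in
# provenance metadata so reviewers can identify, audit, and sign off on it.
# All field names here are assumptions, not a real EHR vendor schema.
from datetime import datetime, timezone

def tag_ai_summary(text, model_name, model_version):
    """Attach provenance metadata to an AI-generated clinical summary."""
    return {
        "content": text,
        "metadata": {
            "source": "ai_generated",
            "model": model_name,
            "model_version": model_version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "clinician_reviewed": False,  # flipped only after human sign-off
        },
    }

note = tag_ai_summary("Patient presents with...", "summary-model", "1.2")
print(note["metadata"]["source"])  # → ai_generated
```

Tagging like this is what makes the other controls possible: the feedback mechanism knows which notes were machine-drafted, and monitoring can report how often clinicians amend them before sign-off.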

Making the Business Case for Safe AI 

For IT leaders, demonstrating ROI is critical, but the return on AI investment isn't just in efficiency; it's in reducing risk, avoiding costly errors, and maintaining operational integrity. Health systems that lack proper governance often encounter:

  • Increased medical malpractice claims
  • Reputational damage from patient safety incidents
  • Regulatory penalties due to poor documentation
  • Ineffective vendor relationships due to unclear performance tracking 

By embedding GRaSP and SAFER principles, IT leaders can show measurable improvement in all these areas, while building clinician trust. 

How EisnerAmper Supports IT and Digital Leaders 

The role of IT and Digital leadership is evolving. It's no longer just about infrastructure; it's about integrating innovation safely. In the AI era, IT leaders who take ownership of governance, risk, and clinical safety will not only protect their organizations, but they’ll also lead them forward.  

EisnerAmper works alongside health system IT and Digital leaders to assess, strengthen, and scale their AI governance efforts for sustainability through: 

  • AI governance maturity assessments
  • Implementation of GRaSP controls and SAFER assessments
  • Clinical controls integration and monitoring systems
  • Cross-functional education for governance committees and digital leaders 

If your organization is beginning its AI journey or expanding use cases across service lines, our team provides a proven path to safer, smarter implementation. Now’s the time to move from reactive oversight to proactive leadership. Contact the EisnerAmper Safer AI team to schedule a discovery session today. 



Arvind P. Kumar

Arvind Kumar is Managing Director in the Health Care Services Group and Head of Digital Health Services within the firm.


Start a conversation with Arvind
