3/2/2026

AI Compliance & Governance – Rules of the Road for Healthcare Leaders

By
John P. Carter
& Bryan Rotella

Summary

As artificial intelligence rapidly embeds itself into healthcare workflows, leaders face a defining question: how do you govern a technology that behaves less like a tool and more like a non-human team member?

On this episode of Perspectives with Pinnacle, John P. Carter is joined by AI governance expert Bryan Rotella to explore what responsible AI adoption truly requires in healthcare. Together, they unpack the risks of unchecked AI use, the emerging phenomenon of “Bring Your Own AI” (BYOAI), and why traditional IT oversight alone is insufficient in a regulatory environment where patient safety, institutional reputation, and legal accountability are on the line.

Drawing from historical parallels like early automobile safety reforms and modern compliance frameworks, the conversation reframes AI governance as a structured, clinical-style risk assessment: diagnose the baseline risk, identify workflow exposure, and implement a clear treatment plan. The episode ultimately underscores that we are at a “trust tipping point.” AI’s promise is enormous, but only if organizations build simple, visible, and supervised rules that protect patients and preserve trust.

Key Takeaways from the Conversation

  • AI as a Non-Human Team Member
    Artificial intelligence is not just software; it operates like an intelligent coworker embedded in workflows. That means it must be selected, vetted, supervised, and continuously audited just like any other member of the care team. Treating AI as “just technology” underestimates both its impact and its risk.
  • The Rise of BYOAI (Bring Your Own AI)
    Much like the BYOD smartphone era, clinicians and staff are already using AI tools on personal devices, often without formal authorization. Every interaction is recorded and potentially discoverable in litigation or government investigations. Without guardrails, organizations may be exposed to negligent supervision claims.
  • “Simple, Seen, and Supervised” Governance
    Borrowing from early traffic safety reforms, the answer is not to stop innovation, but to teach it manners. Clear policies, visible disclosures, defined oversight structures, and active supervision are essential. Governance must be understandable and operational, not buried in technical jargon.
  • Compliance as a Competitive Advantage
    Healthcare organizations are not consistently informing patients when AI is involved in care decisions. Documentation, coding, and preauthorization processes likewise create risk in the absence of an effective compliance program. Transparent disclosure and oversight can become differentiators in the marketplace. Governance, done well, builds trust, and trust sustains innovation.
  • Board-Level Liability and Oversight
    AI developers are not currently bearing product-liability-style responsibility. Instead, healthcare organizations may face exposure for negligent supervision. Boards must move beyond ROI questions and begin asking:

• How does AI impact patient care?
• How does it affect revenue integrity?
• What reputational risks are we accepting?

AI governance must be elevated to the board level, not simply siloed within IT.

  • The Case for an AI Preparedness Director
    AI governance should not live exclusively within the IT department. Organizations need a designated AI Preparedness Director or safety committee focused on human-AI interaction, workflow auditing, training, and continuous monitoring. Governance must be active and effective, not static documentation.

Ultimately

The episode makes clear that healthcare is at a critical inflection point. AI offers transformative potential in oncology, radiology, administrative efficiency, and value-based care. But adoption without structure invites risk.

By reframing AI governance in healthcare’s own language, such as risk diagnosis, gap analysis, treatment plans, and continuous monitoring, leaders can integrate innovation responsibly. The winners of the AI era will not be those who adopt the fastest, but those who build trust through safety, transparency, and accountability.

Want to Go Deeper?

This episode only scratches the surface of what AI governance requires in today’s healthcare landscape. If your organization is evaluating AI tools, building internal oversight structures, or preparing your board for increased liability exposure, now is the time to move from experimentation to structured governance.

To explore how Pinnacle can help your organization build an active and effective AI governance framework, contact us today.

What You’ll Learn

00:45 — AI as a Non-Human Team Member
03:55 — The Risks of BYOAI and Discoverability
09:10 — Teaching AI “Manners”: Simple, Seen, Supervised Rules
17:27 — Patient Disclosure and Trust as Differentiators
30:24 — Board-Level Liability and Negligent Supervision
31:20 — Building an AI Preparedness Framework