As artificial intelligence rapidly embeds itself into healthcare workflows, leaders face a defining question: how do you govern a technology that behaves less like a tool and more like a non-human team member?
On this episode of Perspectives with Pinnacle, John P. Carter is joined by AI governance expert Bryan Rotella to explore what responsible AI adoption truly requires in healthcare. Together, they unpack the risks of unchecked AI use, the emerging phenomenon of “Bring Your Own AI” (BYOAI), and why traditional IT oversight alone is insufficient in a regulatory environment where patient safety, institutional reputation, and legal accountability are on the line.
Drawing from historical parallels like early automobile safety reforms and modern compliance frameworks, the conversation reframes AI governance as a structured, clinical-style risk assessment: diagnose the baseline risk, identify workflow exposure, and implement a clear treatment plan. The episode ultimately underscores that we are at a “trust tipping point.” AI’s promise is enormous, but only if organizations build simple, visible, and supervised rules that protect patients and preserve trust.
The conversation challenges leaders to ask three questions of every AI deployment:
• How does AI impact patient care?
• How does it affect revenue integrity?
• What reputational risks are we accepting?
Answering them requires elevating AI governance to the board level, not siloing it within IT.
The episode makes clear that healthcare is at a critical inflection point. AI offers transformative potential in oncology, radiology, administrative efficiency, and value-based care. But adoption without structure invites risk.
By reframing AI governance in healthcare’s own language of risk diagnosis, gap analysis, treatment plans, and continuous monitoring, leaders can integrate innovation responsibly. The winners of the AI era will not be those who adopt the fastest, but those who build trust through safety, transparency, and accountability.
This episode only scratches the surface of what AI governance requires in today’s healthcare landscape. If your organization is evaluating AI tools, building internal oversight structures, or preparing your board for increased liability exposure, now is the time to move from experimentation to structured governance.
To explore how Pinnacle can help your organization build an active and effective AI governance framework, contact us today.
00:45 — AI as a Non-Human Team Member
03:55 — The Risks of BYOAI and Discoverability
09:10 — Teaching AI “Manners”: Simple, Seen, Supervised Rules
17:27 — Patient Disclosure and Trust as Differentiators
30:24 — Board-Level Liability and Negligent Supervision
31:20 — Building an AI Preparedness Framework