AI and MedTech: Road to Adoption and Post-market Governance
AI toolkit
A practical internal toolkit I authored to turn AI/MedTech regulation, evidence, data governance and implementation considerations into usable advisory guidance.
Context
AI-enabled medical technologies create a translation problem: a product may be technically interesting, but adoption depends on regulation, evidence standards, clinical safety, information governance, interoperability and monitoring after deployment.
My role
I authored the toolkit during my Health Innovation East placement. It was practical internal guidance-support material for innovation advisory work, not formal regulatory advice, legal advice or a substitute for specialist clinical-safety review.
Approach
I drew on official publications from regulatory and health-system sources, alongside learning from the East of England AI Forum, and translated the material into usable guidance for advisory contexts.
- Framing of AI as a medical device (AIaMD) and software as a medical device (SaMD).
- Awareness of MHRA regulation and the UKCA marking route.
- NICE evidence expectations and Digital Technology Assessment Criteria (DTAC) considerations.
- Clinical safety awareness under DCB0129 (for manufacturers) and DCB0160 (for deploying health organisations).
- Information governance, data protection, interoperability, risk and post-market monitoring.
Selected outputs
- Structured guidance-support material.
- Topic summaries and practical explanatory sections.
- Governance-aware framing for innovator support.
- Checklists that turned broad AI/MedTech issues into concrete, answerable questions for innovators.
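As a purely hypothetical illustration of how such a checklist might be structured as data (the item wording, topics, and references below are my own placeholders, not content from the toolkit), a minimal sketch in Python:

```python
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    topic: str      # broad AI/MedTech area, e.g. "Regulation"
    question: str   # the concrete question an advisor would ask
    reference: str  # framework the question derives from

# Illustrative items only; not taken from the actual toolkit.
CHECKLIST = [
    ChecklistItem("Regulation", "Does the intended use make this software as a medical device?", "MHRA"),
    ChecklistItem("Regulation", "Which device class would the intended use imply?", "UKCA"),
    ChecklistItem("Clinical safety", "Is a clinical safety case and hazard log in place?", "DCB0129"),
    ChecklistItem("Evidence", "Does the evidence plan match the product's claimed benefit?", "NICE"),
]


def questions_for(topic: str) -> list[str]:
    """Return the concrete questions filed under one broad topic."""
    return [item.question for item in CHECKLIST if item.topic == topic]
```

Structuring items this way keeps each concrete question tied to the governance framework it came from, so an advisory conversation can move from a broad area ("Regulation") to specific, answerable questions.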
What this shows
The value of the work was translation. I took a dense, fast-moving governance area and turned it into material that could support better questions in advisory conversations.