In this episode of AI for a Better World, C.M. “Cathy” Rubin speaks with Professor Pearse Keane—Professor of Artificial Medical Intelligence at University College London and consultant ophthalmologist at Moorfields Eye Hospital—about how a simple OCT eye scan plus AI can spot disease early and help save vision at scale. From an NHS system overwhelmed by demand to a 2018 Nature Medicine breakthrough and the creation of INSIGHT—the UK hub with 35M images—Pearse shares lessons on privacy, open science (RETFound), and the road from code to clinic. This is a hopeful, practical path to preventing blindness worldwide. The episode is edited by Sergio Castaneda.
The Global Search for Education is pleased to welcome Sergio Castaneda.
Sergio, how did you design the narrative spine—from your “cover one eye” cold open to Pearse’s intro—and what beats did you move, compress, or drop after the first stringout?
When the cold open invites viewers to “cover one eye,” I assume they play along, so I pair the voiceover with an eye chart to make the audience immediately feel the stakes. For Pearse’s intro, I build a quick montage—his presentations, Moorfields clinic scenes, technicians running OCT, and the scans on screen—so viewers grasp who he is and why it matters. While refining the cut, I compressed duplicate set-ups and trimmed side anecdotes so the spine flows cleanly: problem → breakthrough → impact.
What specific edit techniques helped translate dense science into clear, watchable moments—VO vs. on-cam, lower-thirds, graphics, and pacing?
I alternate Pearse’s on-camera moments with focused overlays—labeled OCT stills, a simple motion chart for the NHS backlog, and a clean “code to clinic” timeline. Lower-thirds carry short, plain-English phrases. When a stat appears, I anchor it to a visual—device, scan, or chart—so it’s easy to follow. I keep the pacing tight and return to faces often to maintain a human connection.
How did ethical considerations shape your cut—especially Elaine’s patient story and the DeepMind data-sharing segment (what stayed on screen, what you withheld, and how you framed context to avoid sensationalism)?
With Elaine Manna, I used clips from Moorfields’ YouTube channel and everyday moments from her life after treatment. No intrusive medical close-ups—only visuals that support her voice. In the privacy segment, I let Pearse explain the work on patient privacy, public involvement, and governance. I avoided dramatic music or alarmist captions; the tone stays informational, so rigor—not fear—leads the segment.
When presenting scale (INSIGHT) and RETFound, what choices in b-roll, sequencing, J/L-cuts, and sound design maintained momentum and landed a practical CTA without hype?
I scale the story from one eye to millions of images—clinic OCT → data pipeline → INSIGHT as the hub. J/L-cuts keep Pearse’s VO flowing over b-roll from FII Institute and HDR UK, plus a diagram that shows how RETFound flags early disease. Subtle sonic lifts mark transitions, but I keep the bed minimal so information leads. The CTA is practical—screen earlier, share methods openly—rather than hype.
After producing this episode, what best practices will you carry into future AI-and-health stories—especially around visuals, citations, and guardrails?
Clarity first: show the device, the scan, and the workflow. Pair key claims with a source card or on-screen reference. Keep disclosures simple and consistent. Always center the patient experience—benefits and boundaries—in equal measure. That balance builds trust and keeps the story engaging and responsible.
Thank you, Sergio!
C.M. (Cathy) Rubin and Sergio Castaneda
Don’t miss AI for a Better World — Pearse Keane: AI Eye Scans to Save Sight, now streaming on the Planet Classroom Network. This original series is curated by Planet Classroom.