As AI tools move steadily from research settings into real-world ophthalmology workflows, one question is becoming increasingly difficult to ignore: are patients comfortable with AI influencing their treatment decisions? In macular services – where retreatment decisions can determine whether vision is stabilized or lost – trust matters.
A new Eye Open study by researchers at the Macular Society explores this issue directly, assessing whether patients with macular disease find AI acceptable in making retreatment decisions based on retinal imaging.
Rather than asking patients a single yes/no question, the authors used conjoint analysis – a method designed to explore how people trade off competing priorities. Participants were recruited via the Macular Society’s monthly e-newsletter (82,000 subscribers) and completed an online survey hosted on SurveyMonkey, selected for its accessibility for people with visual impairment.
The study focused on four factors that could plausibly shape patient confidence in AI-led pathways:
First reader: human vs AI
Error rate: 5%, 10%, or 20%
Speed of result: 1, 2, or 4 days
Second reader/checker: none vs human vs AI
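These attribute levels define the design space a conjoint study samples from. As a minimal sketch (the level labels are taken from the list above; the study's actual experimental design is not reproduced here), a full-factorial design would enumerate every combination of levels:

```python
from itertools import product

# Attribute levels as listed above; labels are illustrative shorthand.
attributes = {
    "first_reader": ["human", "AI"],
    "error_rate": ["5%", "10%", "20%"],
    "speed": ["1 day", "2 days", "4 days"],
    "second_reader": ["none", "human", "AI"],
}

# Full-factorial design: every combination of one level per attribute.
profiles = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 2 * 3 * 3 * 3 = 54 possible profiles
```

In practice, conjoint surveys show participants only a fraction of these profiles, chosen so that the attribute effects can still be estimated.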
A total of 374 participants responded, of whom 181 completed the full ranking task; incomplete responses were excluded. Among completers, 43% had wet AMD and 35% dry AMD.
The results were strikingly clear. The two most important factors for participants were:
Error rate (importance 34.4%)
Presence of a second reader/checker (importance 33.6%)
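For readers unfamiliar with how conjoint analysis produces these percentages: an attribute's relative importance is typically the range of its estimated part-worth utilities divided by the sum of ranges across all attributes. The sketch below uses hypothetical part-worth values (the study's actual estimates are not reproduced here) purely to illustrate the calculation:

```python
# Hypothetical part-worth utilities per attribute level -- illustrative only,
# not the study's estimates.
part_worths = {
    "first_reader": {"human": 0.02, "AI": -0.02},
    "error_rate": {"5%": 0.55, "10%": 0.05, "20%": -0.60},
    "speed": {"1 day": 0.20, "2 days": 0.00, "4 days": -0.20},
    "second_reader": {"human": 0.50, "AI": 0.35, "none": -0.55},
}

def importances(utils):
    """Relative importance: each attribute's utility range as a share
    of the total range across all attributes, expressed as a percent."""
    ranges = {a: max(v.values()) - min(v.values()) for a, v in utils.items()}
    total = sum(ranges.values())
    return {a: 100 * r / total for a, r in ranges.items()}

for attr, imp in importances(part_worths).items():
    print(f"{attr}: {imp:.1f}%")
```

With these made-up utilities, error rate and the second reader dominate because their levels span the widest utility ranges, mirroring the pattern the study reports.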
In other words, patients cared far more about how reliable the decision was – and whether it was verified – than who or what made it. Notably, participants did not express a meaningful preference for human versus AI as the first reader – suggesting that, for many, the “AI vs clinician” debate may be less important than clinicians assume. One recurring comment in the “free comments” section of the survey suggested that a human making final decisions while being supported by AI could be viewed as the best combination.
The implication of this survey is practical. If we want AI adoption to succeed in macular clinics, we should focus less on “selling AI” and more on delivering what patients clearly want: high performance and accuracy, transparency, speed, and robust checking.