Discussion Guide and Transcript
Season Three - Episode Three
Research Ethics Reimagined Podcast Season Three: Episode Three "Scientific Research for the Common Good with Alex John London, PhD"
- In this episode of PRIM&R's podcast, "Research Ethics Reimagined," we explore the philosophical foundations of research ethics and the challenges of deploying artificial intelligence in medicine with Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon University, where he directs the Center for Ethics and Policy. Professor London discusses his book For the Common Good, which argues that justice should be the foundational principle of research ethics. Professor London also offers his assessment of AI's promise and limitations in healthcare. Listen on Spotify | Listen on Apple | Listen on Amazon
Discussion Questions
1.) Justice as the Foundation of Research Ethics
- Dr. London argues justice should be the "foundational first principle" of research ethics, drawing on philosopher John Rawls's idea that justice is "the first virtue of social institutions." How might centering justice, rather than treating it as one principle among several, change the way IRBs and research institutions evaluate proposed studies?
- He suggests that as institutional structures are "gradually being eroded away," the research ethics community faces a critical question about "what's baby and what's bathwater." How can the field distinguish between bureaucratic processes that can be streamlined and fundamental protections that must be preserved or reimagined in new forms?
2.) AI in Medicine: Navigating Hype, Methodology, and Real-World Impact
- Dr. London describes research showing that AI models answered medical board questions correctly even without being shown the diagnostic image, suggesting the models exploited statistical patterns in the answer choices rather than analyzing clinical content. What does this kind of finding suggest about how institutions should evaluate AI tools before deploying them in clinical settings?
- He draws a critical distinction between prediction and intervention, noting that AI systems built to predict outcomes may not perform well when used to guide clinical decisions that change a patient's course of care. How should this limitation shape the way researchers, clinicians, and regulators think about where AI can genuinely add value in healthcare?
3.) Social Value and the Research Enterprise
- Dr. London characterizes research as a collaborative enterprise involving stakeholders with "parochial interests" — from academics needing publications to pharmaceutical companies under the patent clock to participants seeking cures or compensation. How can research ethics frameworks better align these competing interests around the shared goal of producing high-quality evidence?
- He argues that study participation, as considered through the lens of game theory, is not a “prisoner's dilemma” but a "stag hunt," where cooperation benefits all parties as long as enough others also participate. How might this reframing change the way institutions recruit participants and communicate about the value of research participation?
Key Terms
Equipoise: The ethical requirement that genuine uncertainty exists among experts about which treatment in a clinical trial is superior, used to justify randomization; Dr. London argues this concept was widely misunderstood during the COVID-19 pandemic, leading to poor-quality early studies
Stag Hunt: A game theory model in which cooperation benefits all parties as long as enough participants also cooperate; Dr. London argues this structure better represents research participation (see the illustrative sketch below)
Confounding: When an apparent relationship between variables is actually driven by a third, unaccounted-for factor; Dr. London highlights this as a critical concern both for clinical trials and for AI systems trained on real-world data that may reflect patterns unrelated to clinical effectiveness
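For readers who want to make the game-theoretic contrast concrete, here is a minimal sketch in Python using hypothetical payoff values (the specific numbers are assumptions for illustration, not figures from the episode or the book). In a prisoner's dilemma, defecting is the best response no matter what the other player does; in a stag hunt, cooperating is the best response whenever the other player also cooperates, which mirrors Dr. London's point that research participation pays off when enough others participate.

```python
# Minimal illustrative sketch (not from the episode or the book): hypothetical
# payoff values contrasting a prisoner's dilemma with a stag hunt.

def best_response(payoffs, other_action):
    """Return my payoff-maximizing action, given the other player's action."""
    return max(("cooperate", "defect"),
               key=lambda my_action: payoffs[(my_action, other_action)])

# Prisoner's dilemma: defecting pays more no matter what the other player does.
prisoners_dilemma = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,    ("defect", "defect"): 1,
}

# Stag hunt: cooperating pays best if the other player also cooperates;
# going it alone is only the better choice when the other player defects.
stag_hunt = {
    ("cooperate", "cooperate"): 4, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 3,    ("defect", "defect"): 2,
}

for name, game in [("Prisoner's dilemma", prisoners_dilemma),
                   ("Stag hunt", stag_hunt)]:
    print(name)
    for other_action in ("cooperate", "defect"):
        choice = best_response(game, other_action)
        print(f"  if the other player chooses '{other_action}', best response: {choice}")
```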
Additional Resources
- For the Common Good: Philosophical Foundations of Research Ethics - London's book argues that justice should be the foundational principle of research ethics, available as a free open-access download from Oxford University Press
- WHO Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models - WHO guidance on AI ethics in healthcare that London helped develop as a member of the expert group
- PRIM&R's Research Ethics Timeline - A resource for exploring the milestones of research ethics, including developments in federal research protections
Transcript
Please note: a transcript generator was used to help create this written show transcript.
The transcript of this podcast is approximate, condensed, and not meant for attribution. Listen to the full conversation on PRIM&R's Research Ethics Reimagined podcast.
Catherine Batsford: Welcome back to Research Ethics Reimagined. I’m Catherine Batsford.
Dan McLean: And I’m Dan McLean.
Catherine Batsford: Alex John London is the K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon University, where he directs the Center for Ethics and Policy. He has also served as chief ethicist at the Block Center for Technology and Society at Carnegie Mellon. As an elected fellow of The Hastings Center, his work sits at the intersection of ethics, medicine, biotechnology, and artificial intelligence. Professor London’s book, For the Common Good: Philosophical Foundations of Research Ethics, was published by Oxford University Press in 2022. He is the author of more than 100 papers and book chapters in journals including Science, JAMA, and The Lancet, and is the co-editor of Ethical Issues in Modern Medicine, one of the most widely used textbooks in the field.
His work in AI ethics centers on the structural obstacles to building safe and effective technologies and on creating mechanisms for social trust and accountability. As a member of the World Health Organization expert group on ethics and governance for AI for health, Professor London helped develop guidance published in 2024. He has also served on two National Academy of Medicine committees shaping frameworks for emerging health technologies. Throughout his career, Professor London has helped shape key ethical guidelines for oversight of research with human participants. We are also proud to say that Professor London serves as a member of PRIM&R’s board.
Thank you for joining us today to explore the rapidly evolving deployment and use of AI in medicine.
Alex John London: Thank you for having me. It’s a pleasure to be here.
Dan McLean: We like to start these conversations by asking how you found your way into this field. You’ve been at Carnegie Mellon for more than 25 years focusing on ethics, philosophy, and policy. How did that path begin?
Alex John London: I started graduate school at the University of Virginia during what was, in some ways, a heyday of bioethics there. John Arras was hired while I was a graduate student. When I entered graduate school, I didn’t have much interest in bioethics and knew nothing about research ethics. I was studying ancient philosophy and ethics.
Someone asked if I wanted to be John Arras’ research assistant. I said no — I wanted to do what I thought was “real philosophy.” Then they told me it would increase my stipend, and I reconsidered. But John was an incredible teacher and mentor. Sitting in on and teaching his bioethics classes, I realized this was real philosophy — real issues that mattered and were happening in the world today.
He loved research ethics, and we would talk about deep philosophical problems in research ethics, both practical and conceptual problems. He would say these were real philosophical problems, and he didn’t understand why more people were not interested in them. I certainly became interested. So although I continued my traditional philosophy work, I developed a parallel track in bioethics and research ethics, and I’ve been working in that space ever since.
Dan McLean: Your book, For the Common Good: Philosophical Foundations of Research Ethics, came out in 2022. Why did you decide to write that book?
Alex John London: I had written many papers on different topics in research ethics, but they were all guided by a single set of underlying concerns about the field. I realized I needed to bring those ideas together and make the underlying concerns explicit. And I think those concerns are even more relevant now than when I wrote the book, given what is happening in research and technology today.
Catherine Batsford: What is the core argument you want readers to take away from the book?
Alex John London: One of the main ideas is that the foundations of research ethics are filled with tensions. The field has become very practical and focused on particular institutions and guidelines, but sometimes without thinking deeply about the larger goals those structures were meant to serve.
My argument is that justice should be the foundational principle for research ethics. If we think of research ethics as a social institution, then it should be grounded in justice and connected to other social institutions. From there, we can better understand familiar practices and principles used in research oversight.
Catherine Batsford: So when reviewing research, we should think about justice first?
Alex John London: Yes, but justice understood in relation to social institutions and the broader goals of research. Research ethics often focuses on fairness in subject selection or distribution of benefits and burdens. Those are important, but they do not fully capture what justice requires. We also need to consider whether research has social value, whether it contributes to the common good, and whether it benefits the communities where it is conducted.
Dan McLean: You talk about social value and the common good. Why is that idea important now?
Alex John London: Because we rely on research to generate knowledge that helps health systems function, helps clinicians treat patients, and helps societies respond to public health threats. The COVID-19 pandemic illustrated this clearly. High-quality research is not optional in an emergency. It is essential because institutions rely on that evidence to make decisions that affect people’s lives.
Poor-quality research during the early pandemic led to misleading conclusions and delayed effective treatments. That showed why there is a social imperative to conduct research, but that imperative must still be balanced with respect for participants.
Catherine Batsford: There is a section in your book where you say study participation is not a prisoner’s dilemma. Can you explain that idea?
Alex John London: Some people argue that research participation is like a prisoner’s dilemma — everyone benefits from research, but no one wants to participate. I examined that argument and concluded that research actually has a different structure. Participation makes sense if enough people participate to produce useful knowledge.
The real issue is social value. If studies are poorly designed or fail to recruit enough participants to produce meaningful knowledge, then participation does not generate value. Improving the social value of research helps align individual interests and societal interests.
Dan McLean: We’ve talked about social value and community benefit, but we also have for-profit companies involved in drug development. Is there tension between social value and responsibility to shareholders?
Alex John London: There can be, but research is a collaborative activity involving many stakeholders — academics, industry, universities, participants, and regulators — all with different motivations. The goal of research ethics is to create incentives and institutional structures that align those interests around producing high-quality scientific evidence.
Catherine Batsford: Do you see a more global research enterprise in the future?
Alex John London: I hope so. Many health challenges are global, and collaboration across countries can benefit everyone. But there are also political and economic pressures that can limit collaboration and data sharing. Improving the quality of research everywhere benefits everyone, so I remain optimistic that collaboration will continue to be important.
Dan McLean: Let’s talk about artificial intelligence. How should principles like justice apply to AI in medicine?
Alex John London: The AI community has become very aware of issues like bias and fairness, which is a good thing. But there is still a large knowledge gap between AI experts and medical experts. Medicine involves a great deal of uncertainty, and AI systems depend heavily on the quality of the data they are trained on. If the data are incomplete or biased, the models will be limited.
Catherine Batsford: So AI is a tool that can help accelerate progress if the data are good and systems are designed properly.
Alex John London: Exactly. The question is where AI provides genuine value and how we build the evidence and infrastructure to use it responsibly.
Dan McLean: There seems to be both anxiety and optimism about AI. Where do you fall?
Alex John London: I am both optimistic and cautious. We have seen similar hype cycles before with other technologies. AI can be very powerful, but the challenge is moving from proof of concept to systems that actually improve patient care in real-world settings.
Dan McLean: Some people worry that bias is the biggest risk in AI systems. Do you agree?
Alex John London: Bias is a serious concern, but there are also more fundamental issues. Many AI systems are designed to predict outcomes based on past data, but medicine often involves interventions — doing something new. Prediction and intervention are not the same, and we need strong evidence that these systems actually improve outcomes in real clinical settings.
Catherine Batsford: To connect this back to your book and the idea of the common good, how should we move forward with AI in medicine?
Alex John London: We should see AI as part of the broader research enterprise. Many of the same methodological and ethical issues apply — uncertainty, evidence generation, and evaluation. If we focus on evidence, social value, and aligning incentives across stakeholders, we can use these technologies in ways that benefit both individuals and society.
Dan McLean: Professor London, thank you for joining us today.
Catherine Batsford: Thank you so much for being here.
Alex John London: Thank you. It was my pleasure.
Research Ethics Reimagined guests are esteemed members of our community who generously share their insights. Their views are their own and do not necessarily reflect those of PRIM&R or its staff.