
Discussion Guide and Transcript

Season Two - Episode Seven

Research Ethics Reimagined Season Two - Episode Seven: "AI, Trust, and the Future of Bioethics with Vardit Ravitsky, PhD"

  • In this episode of PRIM&R's podcast, "Research Ethics Reimagined," we explore the intersection of artificial intelligence, trust, and bioethics with Dr. Vardit Ravitsky, President and CEO of the Hastings Center for Bioethics. Dr. Ravitsky discusses the rapid implementation of AI in healthcare and biomedical research, strategies for combating misinformation, and maintaining organizational values during challenging political times. She also shares practical advice for emerging professionals in bioethics and biomedical research. Listen on Spotify | Listen on Apple | Listen on Amazon
Discussion Questions
1.) AI Implementation and Trust
  • Dr. Ravitsky emphasizes building "trustworthy and ethical" datasets for AI tools in biomedical research. What steps should research institutions take to ensure their AI implementations don't perpetuate existing biases or create new forms of inequality?
  • The National Academy of Medicine's AI Healthcare Code of Conduct provides guidance for multiple stakeholder groups, from patients to CEOs. How might this comprehensive approach change how your organization thinks about AI implementation?
2.) Organizational Leadership in Challenging Times
  • The Hastings Center for Bioethics is "doubling down" on justice and equity work despite political pressures. What else could organizations do to remain effective in a polarized environment?
  • Dr. Ravitsky advocates for "no knee jerk reactions" and taking time to be "patient, considered and reflective" rather than constantly responding to rapid changes. What does this approach look like in practice for research institutions?
3.) Combating Misinformation and Building Trust
  • Dr. Ravitsky describes walking a "fine line" between listening to different perspectives and not giving a platform to misinformation. What strategies can researchers use to engage productively with science skeptics?
  • Dr. Ravitsky advises young professionals to "diversify your skill set" and become "public intellectuals" rather than "ivory tower intellectuals." How might this guidance apply to current professionals navigating uncertain times in research and bioethics?

Key Terms

Bioethics: The study of ethical issues arising from advances in medicine, biology, and healthcare technology
AI Scribes: Artificial intelligence tools that help doctors synthesize and organize their clinical notes and documentation
Echo Chambers: Information environments where people encounter only beliefs or opinions that confirm their existing views
Additional Resources
  • The Hastings Center for Bioethics - Independent bioethics research institute where Ravitsky serves as President and CEO, focusing on ethics in medicine, science, and technology.
  • National Academy of Medicine AI Healthcare Code of Conduct - Comprehensive guidance document for AI implementation across healthcare systems that Ravitsky helped develop.
  • Bridge2AI Program - NIH initiative creating trustworthy datasets for biomedical AI research, with Ravitsky serving as principal investigator on two projects.

Transcript

Please note: a transcript generator was used to help create the written show transcript. The written transcript of the podcast is approximate and not meant for attribution.
PRIM&R: Welcome to Research Ethics Reimagined, a podcast created by Public Responsibility in Medicine and Research, or PRIM&R. Here, we talk with scientists, researchers, bioethicists, and some of the leading minds exploring the new frontiers of science. Join us to examine research ethics in the 21st century and learn why it matters to you.
Ivy R. Tillman: Today we are pleased to have with us Dr. Vardit Ravitsky, the President and CEO of the Hastings Center for Bioethics. The Hastings Center is an independent, nonpartisan bioethics research center, which is among the most influential bioethics and health policy institutes in the world. Vardit is a part-time senior lecturer on global health and social medicine at Harvard Medical School, and past full professor at the Bioethics Program, School of Public Health, University of Montreal. She's the past president of the International Association of Bioethics and a fellow of the Canadian Academy of Health Sciences. Vardit has published more than 200 articles and commentaries and has delivered more than 300 talks. She is a regular contributor to the media on bioethical issues. Her research focuses on the ethics of genomics and reproduction, as well as the use of AI in health. Vardit is a principal investigator on two Bridge2AI research projects, funded by the National Institutes of Health, that expand the use of AI in biomedical and behavioral research. She also serves on the steering committee of the National Academy of Medicine to develop an artificial intelligence code of conduct. Thank you for being here with us today, Vardit.
Vardit Ravitsky: My pleasure. I'm honored to speak with you.
Ivy R. Tillman: First, I wanted to thank you once again for being the keynote speaker for our annual conference, PRIMR25, this November in Baltimore. We are very much looking forward to hearing from you at our annual conference. So without giving away too much, can you share with our audience what some of the highlights of your remarks will be?
Vardit Ravitsky: Of course, AI is top of mind for everybody. Whether you're in biomedical research, delivering care, running a health care system, or working at an insurance company, it touches on everybody's professional lives—and, let's be honest, our personal lives as well. Absolutely. So we thought of making that the focus of the keynote. I'm hoping to map some of the actual uses that are currently hitting the ground, as well as what's around the corner, and then focus on the trust we need to establish and promote for those tools to be effective and beneficial.
Ivy R. Tillman: Wonderful. I'm so excited to hear from you at the conference, and excited for those who've already registered—and those who plan to register—to hear your keynote. So thank you once again. And before we get too far into our conversation, I would like to explore your career path. I always love to know the story of how you arrived at where you are, and how you got involved in this field.
Vardit Ravitsky: Oh, that's always a fun story to tell, right? Because origins explain a lot about where we are today. I grew up in a family of philosophers. Everybody in my family was in the humanities, philosophy, and education. So as a little girl, I thought that that was the only profession available. And it's actually quite funny, because when I went to study philosophy in university, people said, What are you going to do with that? And my response was, Wait, there's something else? But the funny thing is that when I started doing philosophy as an undergrad, I realized I was most attracted to philosophy of science on one hand, and ethics on the other hand. And I thought, So how do I combine these interests? And then, as a young woman, something personal happened to me. A friend, a much older friend who was going through IVF, asked me if I would donate an egg. That threw me into a deep reflection on what reproductive technologies mean today—all those new ways, back in the '90s, that we were starting to have babies. And I had to ask myself, Wait, what does it mean to be genetically related versus socially related? What is the meaning of now having surrogate mothers, and egg and sperm donors, and babies created by three, four, or five people? So I went to the literature to try to help myself understand what I was feeling, and I read quite a bit. And then I realized that what I was reading was bioethics—and I was hooked. I said to myself, Okay, this is the field that combines everything that I care about: ethics, science and technology, and issues of human identity. What does it mean to be a human being when these technologies around us change fundamentally who we are and how we relate to others? So, early on, it was reproduction. It changes how we have families. Then it became genetics—genetic identity is so central to our lives, but also challenges us. End of life—I worked quite a bit on cultural perspectives on end of life, because technologies also change how we die and when we're considered to be dead. And now AI feeds right into that, because it forces us to question what it means to be human when, you know, you chat with some of these algorithms and feel as if you're talking to another human, even though you know you're not. It really forces us to question yet again who we are and how we relate to others—but in a totally fresh way. So I feel like my whole career was about identity and the intersection of ethics and science, but technology keeps throwing new challenges at us, which keeps the work always fresh and interesting.
Ivy R. Tillman: Fascinating. And when you talked about the center of your work being identity and the role that identity plays in science, trust, and technology, I think it's central to many of the conversations we're having today—not only within the field, but also within the general public. So, yeah, fascinating story—and origin story—as you described it. And so I'd like to begin to discuss a little bit about the work you're doing at the Hastings Center for Bioethics. You're involved in so many different, interesting projects. You talked about AI. We know of a few others. But from your perspective, what are some of the highlights so far for 2025?
Vardit Ravitsky: The main highlight for us in 2025 is that we launched a new strategic plan for the coming five years. The technology is moving so fast that thinking on a five-year horizon really is a challenge. But this is a strategic plan that casts broadly what our priorities are and what themes we want to address. And we're really going to double down on issues of justice and equity, because, as we all know, the world is not making it easy today for researchers, clinicians, and patients to cope with these issues. We still live in a country that has terrible health disparities. And the political climate is such that exploring these issues is considered brave now. You have to be a moral leader to ask questions that previously were really obvious—about social justice, about access to care, about our rights as patients. So we're really leaning into those questions. It has always been a focus of our work, and we're sticking to our values and our mission. We're also leaning strongly into issues of trust, because the challenges are only becoming bigger—with AI, with political pressures. We always knew that we disagree on values. Now we disagree on facts—or we don't even define facts in the same way. So the challenges to trust have become daunting. And a part of our strategic priority is to really tackle, from every direction possible, what it means to be trustworthy in this environment and how we can help people—patients, clinicians, the entire system—build trust and promote it.
So that's at the level of the strategic plan. We have some really exciting projects that involve AI. One of them you mentioned in your introduction, Bridge2AI. This is an NIH-wide initiative where the NIH was really visionary in understanding that biomedical research is going to be revolutionized by AI tools, and that the basis for that is good data. If the data is biased, not representative of the patients we want to help, if you don't engage with all the stakeholders in the process, if you don't ensure that your workforce is diverse, the outcomes will be very suboptimal.
And so this initiative is actually about building flagship data sets that are trustworthy and ethical. It sounds simple—oh my gosh, it's so complex—because we have four different projects. Each one collects a different type of data, and for each type of data, the challenges of trustworthiness and being ethical are different.
So the Hastings Center takes a big part in that project. We also have an innovative project called Hastings on the Hill that is meant to take the bioethical insights regarding AI and health and translate them to policymakers—to bring them to the Hill. Now, we all know that whether you're a staffer or a member of Congress, you're busy and everybody is screaming for your attention. So how do we create a tool that is interesting, captivating, and accessible, that really integrates issues and values into your way of thinking when the time comes to regulate?
We created what we call a patient journey. This is a narrative about a Mrs. Jones who goes through a health crisis, and at every step of the way, AI is somehow involved in her care, whether she knows it or not. And we use the narrative—the power of storytelling, absolutely—as a way to surface all those potential issues and benefits, so that as you read through the story, you are engaged but also start asking yourself, Wait, if that were me, what would I care about? Would I want to know? Would I care about my privacy? Would I want to have the option to opt out? Or if this were my mom, how would I feel?
It's supposed to be engaging and yet very educational, so that if you're a regulator or policymaker, you at least pause to consider what the ethical issues are that you should address.
Other than that, we do a lot of engagement activities. For example, we had a partnership with Cedars-Sinai, the large health care system in Los Angeles, and we organized together a conference on—surprise, surprise—trust and accountability regarding AI and health. You had a room with clinicians, lawyers, and patients, and you could just sense in the room—I don't want to say fear, but deep concerns. For doctors: Are they going to lose their jobs? For those who run health care systems: How do we make wise choices about what tools to purchase and how to implement them? From patients, chaplains, and families: What does this mean for me? Am I going to get better? Is this a threat to me?
So it was just a wonderful engagement activity to have such a conference with a health care system bringing ethics to the ground.
Ivy R. Tillman: Fascinating. So, do you have plans to do more of those types of collaborative engagements? Because it sounds like all the stakeholders were there. When I say stakeholders, I mean those interested parties—those who are in the development, the actual conduct, but also those who benefit from said technologies. Do you plan on doing more of those types of activities?
Vardit Ravitsky: Yes. So, Cedars-Sinai was so excited about the success of this event that we're going to do another one in 2026. Wonderful. But beyond that, we're partnering widely to bring these concerns to the public in a way that, as you just said, engages multiple voices and multiple stakeholders.
For example, we have a partnership with the Museum of Science in Boston on a series called The Big Question. It's a conversation that we have with experts—again, accessible for the general public—about various big questions in science, technology, and ethics. And the one that was just released this week is called What Does It Mean to Be Human? We talk precisely about how AI forces us to unpack this age-old philosophical question in new ways.
We also have partnerships with global bioethics centers. We're going to have a conference in Paris in the spring about AI as an existential threat and opportunity, bringing leading voices in ethics and philosophy from all over the world to discuss this with the public.
So we're shooting in all directions, employing various engagement tools—from academic conferences all the way to podcasts and online publications—to foster inclusive conversations, so that all voices are included, because, as we said in the beginning, this touches everybody.
Ivy R. Tillman: It absolutely does. I want to go back to the strategic plan that you discussed, particularly around the double-down on the issues of justice and equity, which, of course, are very closely aligned with PRIM&R's goals—but also with me personally, right? I am often asked, and would love to get your perspective, particularly right now in the times that we're in: How do you double down, and how do you continue this really important work that's essential to everything else you're doing in justice and equity?
Vardit Ravitsky: I've been having conversations with other presidents and leaders of organizations that do ethics in the public sphere—especially in the early days of the new administration, when we were all feeling quite dizzy from the pace of change around us. And one interesting thought that emerged, which I feel was shared by all the leaders in the field, was no knee-jerk reactions. Don't be reactive. There were days that I went in to give a talk, and by the time I came out, the world had changed five times. Yes.
And you could spend all your energy just reacting, issuing statements, and we said, no, let's sit for a second. You know, we have the luxury of being patient, considered, and reflective. And let's react in ways that do not respond to this crazy pace that is meant to throw us into disarray and disorient us.
So, doubling down, first of all, means taking a deep breath and sticking to your values. You don't start changing language, shifting your values, and reframing your mission because you're constantly trying to respond. The flavor of sticking to values and sticking to mission is one thing—and it has become very costly in some cases, right? You're under threat of funding being taken away, under threat of not being able to support your organization. So the sense of moral courage emerges again.
Just speaking with the programs becomes an expression of bravery. So that's one answer. But the other is, I think, doubling down. Of course, you continue to do your research and your engagement activities, but you do them in a thoughtful, wise way.
Ivy R. Tillman: Sure.
Vardit Ravitsky: I think what the current atmosphere shows us is that we are in our echo chambers often, and some of us forget to listen to what's happening outside. And some of us are struggling—truly, honestly struggling—to understand those other voices. They're so foreign to us.
So I think doubling down is not just staying in the echo chamber and continuing on the exact same path. Sometimes it means broadening our perspective, listening more, you know, being more inclusive in what voices we listen to.
Ivy R. Tillman: Sure.
Vardit Ravitsky: And it doesn't mean letting go of notions of science and evidence and what deserves trust.
Ivy R. Tillman: Correct.
Vardit Ravitsky: But being truly inclusive means learning to be more sensitive to the world around you and learning how to engage those voices that are really challenging for you and difficult to incorporate. So it's doubling down also on the listening—on what you mean when you say, I'm inclusive. Not comfortable inclusive.
Ivy R. Tillman: Great point.
Vardit Ravitsky: And doubling down on unpacking those notions that are so controversial today. You throw the D-word into a room—diversity—and you're causing an explosion, right?
Ivy R. Tillman: Right.
Vardit Ravitsky: But let's unpack what we mean. Different people use this term in different ways—some for political reasons, some for great reasons. Let's unpack. Let's go deeper.
So doubling down is not just staying on track. It's also unpacking further and further what we mean, adding clarity and sensitivity to the language we use and to the debates we're having.
Ivy R. Tillman: Amazing. Amazing. Thank you for sharing those insights. So I want us to switch gears. We're going to talk about artificial intelligence and the work that you're doing on the rapid implementation of AI in medical care and biomedical research, and the issues that you're focusing on. As I mentioned, you're on that steering committee at the National Academy of Medicine on the AI health care code of conduct.
It's really interesting and fascinating, the work that's being done there. Can you describe it a bit to our audience and elaborate on what this code of conduct will do? How does it intersect, perhaps, with research and ethical concerns around AI and research?
Vardit Ravitsky: It's a great opportunity to showcase the work that we've been doing. First of all, the work has been completed, and the code of conduct has been published. It's available online. We had a launch webinar, attended by hundreds of people, describing the work. A few things make this code of conduct stand out, because there are many documents internationally trying to provide guidelines and guardrails for the implementation of AI in health.
First, this is really a 30,000-foot view in the sense that it's across the health care system. It works for patients who have concerns, for CEOs who run health care systems, for insurers, and for regulators. It's a very high-level view of what principles and what we call commitments should guide this implementation. But while being so high level, it also becomes granular enough to be useful by applying those high-level principles and commitments to various stakeholder groups. And that's where it becomes really interesting to me.
So, to give you a flavor of the commitments—what we call the code commitments—that should guide AI in health: We're talking about things like advance humanity, always making sure that humans and their interests are at the heart of what you're doing; ensure equity; engage impacted individuals; improve workforce well-being.
I mentioned that some clinicians feel threatened by AI. Sure. And I've even heard young people questioning whether they want to go into medicine or biomedical research at all, because they're concerned about having careers at the end of their training. Another core commitment is to continually monitor the performance of the tools to innovate and learn. So these are very high level.
To make them operational, we apply them per stakeholder perspective. We have sections of the code of conduct that speak to AI developers; researchers; health care systems and payers; patients, families, and communities; federal agencies; quality and safety experts; and ethics and equity experts. So we didn't leave it at the very general level of “this is what you should think about.” We brought it down to: If you belong to this stakeholder group—and the ones I named now cover everybody in society—this is what's particularly relevant to you. Here are some cases and examples of how you can apply these commitments.
So I feel that we created something that, on one hand, is very comprehensive and high level but also very operationalized and granular. And that's, to me, a winning approach.
Ivy R. Tillman: It is.
Vardit Ravitsky: Another—maybe on a more personal note—what blew my mind in the work of this steering committee that guided the development of the code is how diverse and inclusive we were. You saw, in one room around one table, CEOs of health care systems. Kaiser was there, UnitedHealth was there, Mayo was there. You saw patient representatives. You saw industry—Microsoft and Google—at the table.
And you saw a variety of experts in fields such as ethics (myself), law, and others. Very leading voices in each of those fields. But the diversity around the table, and how people found a way to speak across the disciplinary divides and across their various interests as well—sure. And a sense of shared purpose that we were all there to help, at the end of the day, patients, and to improve the delivery of care and how biomedical research is conducted.
There was a real sense of solidarity and shared values that I think was fundamental to building the trust needed to produce such a document. And now we're in the phase of disseminating it, and you're helping me do that. So this is great.
Ivy R. Tillman: Wonderful. I love that we can support that. And it's a model that can be used in other areas as well. It sounds like this collaborative model was really unique and needed. I'm going to move to misinformation and disinformation, particularly going back to when you discussed some of the priorities of the Hastings Center's strategic plan. You talked about issues of trust, AI, political pressures, disagreeing on facts, and where trust sits—and really, trustworthiness.
So, you know, we don't have to get into great detail about how public trust in science is at a critical crossroads. And it's something that, of course, we here at PRIM&R are focused on. You've spoken in the past about your experiences on social media combating misinformation and disinformation. What are some of the strategies that you've used to disrupt misinformation about research and science?
Vardit Ravitsky: You know, at the Hastings Center, we always say good ethics starts with good facts. And of course, during COVID, when I was in the media to promote public health interventions and help protect the public—especially in the midst of the crisis, when it was literally a matter of life and death—I think the number one strategy when you try to tackle misinformation, especially on very controversial topics such as during COVID or now with the vaccine debate, is this: the first thing you do is stick to the facts. We always say good ethics starts with good facts. And at the Hastings Center, we include evidence in all of our analyses and all of our publications.
The problem with that is, of course, that we're now living in a reality where what counts as evidence—how to distinguish facts from value and opinion—has itself become controversial and polarizing. So the way I see the challenge, and this is the fine line I'm trying to walk when I do media, debates, or when I publish or design research, is that on one hand, you want to have conversations with the other side. All the experts say you don't just throw information at people and change their minds. It doesn't work. Right. So you have to listen.
You have to acknowledge where that other voice is coming from, unpack the fears and the origin story of why people are struggling with expertise, with authority, with science. But at the same time, you can't give in to what is outside of what we consider to be scientific evidence. You can't give a stage to misinformation in the process of listening. And even though you want to be nonpartisan—you know, the Hastings Center is famously a nonpartisan research institute—it doesn't mean that all sources of information and all perspectives are equal.
So walking this fine line when you're trying to actually have impact and help people make good choices about their health, about their families, or help people in their careers decide how to run their research and how to deliver care, you want to start from a place of listening and acknowledgment. But you also want to stay grounded in what you know—absolutely, what science knows at a given time. And that depends on the topic that you're discussing and how hard you have to push back against, again, the flavor-of-the-day misinformation.
AI is going to make all of this much harder, because it's going to become difficult to distinguish sources of information, and the misinformation is becoming more and more convincing in how it's presented. Yes. But that is the fine line. It's very easy to just go on TV and say, We, the experts, know that, and just give the facts and then remind people that they should value justice and care. That's easy. What's hard is to speak in a way that does not antagonize, does not make people feel that they were never heard, and does not use your authority—which they completely challenge—to just have this top-down approach of We're going to force you to do certain things because we know better.
So you have to find a different tone for the conversation. Right.
Ivy R. Tillman: Wow. Once again, wonderful perspective and advice on tackling misinformation. So you mentioned the concerns that you hear from younger professionals—individuals considering biomedical research, but also bioethics. What kind of advice would you give to someone, say a university student, who's considering a career path similar to yours?
What advice would you give them regarding a career in bioethics or a career in biomedical research? Because, you know, at PRIM&R, we're very much concerned about the pipeline that has been disrupted for young professionals in research ethics, bioethics, and biomedical research. So what type of advice would you share with someone right now?
Vardit Ravitsky: You know, Ivy, this is really a funny moment for me, because at the Hastings Center we run a series called Bioethics Chats, where I chat with leading voices in the field. I always conclude my conversation with them by asking the exact same question that you just asked me. So I'm finding myself on the other side—I'm on the spot now. I always put them on the spot.
Ivy R. Tillman: Sure.
Vardit Ravitsky: There are days when young scholars—PhD students, postdocs, or even young academics—get in touch and ask for career advice and mentorship. And they really ask me, What are my chances?
Ivy R. Tillman: Right.
Vardit Ravitsky: And I feel that it's so unfair for me with my, you know, established position in this field to say, Oh, go with your heart and stick to your values and don't give up and it will be okay, because I don't know if it will be okay. And with funding being taken away left, right, and center, sometimes I feel, Who am I to tell young people that they should follow the path that they started and give them a sense of security that if you just have grit and resilience, eventually you'll find your way?
The world is changing so much that I don't know about my way. How can I convey that sense of, you know, Stick with it? And at the same time, I still am a believer that if you have great passion and you know what you want, something will pan out. And by something, I mean it's not always going to be the exact career that you imagined. Which is why my advice—beyond my own self-reflection of How dare you give advice in this day and age?—the actual advice would be: diversify your skill set and also your expectations.
If you're an academic, you may end up in government or in industry or in a nongovernment organization. So open your mind and remember that the topics and values you care about can be addressed and contributed to in multiple ways. Yes, maybe your whole life you saw yourself as a university professor, but you may end up being a policy advisor.
And skill sets—you know, I don't think we have the luxury of being ivory tower intellectuals anymore. I think we should all be public intellectuals. Start writing for the media early on. Put yourself out there. It's risky, yes. But become known. Do the op-ed, do the podcast. Be more excited about speaking to the public than terrified of it, even though it can be costly, as I saw myself.
So, yeah—diversify your skill set, diversify how you think about what science is and what bioethics is, and keep an open mind as you advance in your career. Seize opportunities even if they don't align precisely with how you saw yourself. Keep your identity flexible.
Ivy R. Tillman: Absolutely. Wonderful, wonderful advice for those entering the profession, but also for those who are currently in the profession too, right? We see it changing—the field ever evolving. So thank you. Thank you for that.
And so, as we head toward wrapping up, my last question relates to this: we've talked about where we are now and your thoughts there. When you think about the future—and the future for us used to mean five years from now, but let's just talk about a year from now, right? Because we know that'll change so quickly. What are you most optimistic about for the future?
Vardit Ravitsky: I'll answer specifically in the context of AI.
Ivy R. Tillman: Sure.
Vardit Ravitsky: I see such huge benefits, and they're low-hanging fruit. You know, AI scribes that help doctors synthesize their notes, and the tools that are now increasingly approved for clinical use to help us with diagnostics.
Ivy R. Tillman: Sure.
Vardit Ravitsky: I think the potential benefits in actually helping provide better care—whether it's reducing human error in diagnostics and treatment recommendations, or just better monitoring of complex cases—are substantial. You know, the rate of human error in medicine and in research is very high. And I think one of the mistakes we make is to compare AI to nothing, or to compare AI to an optimally perfect world. But when we start integrating AI into biomedical research design and into the delivery of care, we have to remember that we should compare it to what we have now, which is very imperfect.
And so it's good to talk about the concerns, the lack of regulation, the lack of accountability, and all of our fears. That's great—we need to tackle that, we need to address that. But let's also take a moment to recognize the incredible benefits that are here and just around the corner.
Ivy R. Tillman: Yes.
Vardit Ravitsky: In actually making care safer, in making the work of clinicians easier and better so that they can spend more time with their patients, in what it means to realign your skill set—not to lose your job—in what it means to accelerate research—not make researchers obsolete, but rather give them powerful tools that can accelerate their work. I love spending time in the yes, not in the naysaying and the yaysaying, and in remembering that these are incredible tools that for decades we thought would be the holy grail of medicine and research. To some degree they are, and we're finally seeing that.
So let's end on this positive note of focusing—as clinicians, as patients, as bioethicists, as researchers—on those incredible benefits that we're beginning to see now. And when we look into the future, ensure that they're delivered in a way that doesn't generate more harm.
Ivy R. Tillman: Wonderful. Thank you. Thank you for spending time with me. Thank you for the conversation and for imparting your wisdom and insights. They are so needed at this time, and we're very appreciative of you joining us today for our podcast.
Vardit Ravitsky: Thank you for this great opportunity, and I'm really looking forward to the keynote.
Ivy R. Tillman: Absolutely. We are too.
Research Ethics Reimagined guests are esteemed members of our community who generously share their insights. Their views are their own and do not necessarily reflect those of PRIM&R or its staff.
