Research Ethics Reimagined

Transcript, Ep. 2: “Research and AI with Mary L. Gray, PhD”

Host: Ivy R. Tillman, EdD, CCRC, CIP, Executive Director of PRIM&R 

Guest: Mary L. Gray, PhD

 

A transcript generator was used to help create this written show transcript. The written transcript of the podcast is approximate and not meant for attribution.

Ivy Tillman: Welcome to Research Ethics Reimagined, a podcast created by Public Responsibility in Medicine and Research, or PRIM&R. Here, we talk with scientists, researchers, bioethicists, and some of the leading minds exploring the new frontiers of science. Join us to examine research ethics in the 21st century and learn why it matters to you.

Ivy Tillman: I'm your host, Ivy Tillman. Let's dive in.

Ivy Tillman: Today, I'm pleased to have with me Mary Gray, who is Senior Principal Researcher at Microsoft Research and a Faculty Associate at Harvard University's Berkman Klein Center for Internet and Society. She also maintains a faculty position in the Luddy School of Informatics, Computing, and Engineering, with affiliations in Anthropology and Gender Studies, at Indiana University.

Ivy Tillman: Mary earned her PhD in communication from the University of California, San Diego in 2004. In 2020, Mary was named a MacArthur Fellow for her contributions to anthropology and the study of technology, digital economies, and society. Mary has authored several books, including Ghost Work, co-written with computer scientist Siddharth Suri, which explores the invisible human workforce that powers the web. At Microsoft, Mary co-founded and chairs the Microsoft Research Ethics Review Program, which is the only federally registered institutional review board of its kind. Hello, Mary, and thank you for joining us today on Research Ethics Reimagined.

Ivy Tillman: I'm looking forward to our discussion as we dig into your thoughts on our current and reimagined ethical frameworks associated with data and artificial intelligence.

Mary L Gray: Thanks so much for the invitation. I'm really excited to talk with you, Ivy.

Ivy Tillman: Yeah, so let's get started. Can you describe how you began this unique work that sits at the intersection of research ethics and AI?

Mary L Gray: I mean, it's a funny thing to realize that I've been thinking about these questions since graduate school. And in many ways, because my IRB, which was a fantastic group of folks, struggled with how to understand what are the expectations of researchers who are working, in my case at the time, with young people who were identifying as or coming out as lesbian, gay, bi, trans, or questioning. And they weren't exactly sure how to handle the ethics of reading discussion posts online or having websites that revealed a lot of information, personal information, about young people.

Mary L Gray: So, it really started me on a quest to inform other researchers about how they could think outside of the federal regulations when they're looking at a lot of social activity that is otherwise treated as text removed from people's lives. Fast forward to 10 years ago: Microsoft Research was just starting to think about how it would bring social scientists to the table as it was developing systems for society, and really imagining that most of what we would be thinking about was, “What's the impact? What happens after technology is present?”

Mary L Gray: But my interest is actually what happens both before technology enters people's lives and, often, the ways in which, whether you have technology in your life or not, it shapes you. It's a force to be reckoned with.

Ivy Tillman: It's really interesting how your research informed where you are now, and that you're still leading that effort.

Ivy Tillman: Do you see yourself as an advocate in this space?

Mary L Gray: Oh, that's such an interesting question, because I feel like in a lot of ways, I am an advocate for the value of these basic questions you can bring to any methodology. But particularly for the social sciences, to understand the human condition and our place in the world, we would want to start with an awareness of the value of engaging people.

Mary L Gray: So, that's always been, you know, the heart and soul of anthropology for me. That's my disciplinary home, and I joke all the time that there's no way to do anthropology in a responsible way that's sustainable if people don't know what you're doing. As soon as they find out what you're up to, you're banned, appropriately so.

Mary L Gray: So, it's a kind of logic of seeing research, particularly basic scientific research, as always about a deep engagement with people's experiences and their social interactions. It's not just their individual moments at work or in life. It's to see how much we're always trying within the social sciences to understand social worlds.

Mary L Gray: So, I've become an advocate for thinking about computer science and engineering as very much involved in the social worlds that we now can't quite imagine without say, a mobile phone or other kinds of technology being a part of that.

Ivy Tillman: As you were describing your journey, that word just kind of lit up for me: advocacy. And really, kind of leading that effort of explaining, because with IRBs and ethics boards, there's this prescribed way of considering research, and that prescribed way really does not fit what's happening right now. And so, we need advocates in this space, right?

Mary L Gray: I think that, you know, for me the exciting thing about science and disciplines as they form, particularly when the ways of thinking and the methodologies that disciplines have start coming together and creating new ways of asking questions, or even new ways of thinking about what questions to ask, is that it should mean we have to update our priors about ethics. And so, it's exciting, but it's so clear that we haven't, particularly for the disciplines that feed information systems. There hasn't been that moment of reckoning with how deeply involved these technologies are, and, quite literally when it comes to something like artificial intelligence, how dependent innovation is on studying people. Like, “What are people doing there? What do they think in there? Who are they talking with?” You know, that is a fundamental reality of systems that rely on a lot of data generated by people to advance. I'm pretty passionate about this.

Ivy Tillman: Well, I can tell. Your energy is also energizing me around it, thinking about how to inform the IRB about the ethics associated with data and people's behaviors in particular.

Ivy Tillman: But let's take it to the disciplines, right? How would you suggest IRBs engage or even, you know, research ethics professionals engage with these disciplines, who for so long have not even considered what they're doing as being research?

Mary L Gray: I think about that on the daily. I mean, literally, when we started the ethics program at Microsoft Research, there were several of us who had the very practical matter of, “We need this to publish.”

Mary L Gray: So, you know, people often back into an ethical review process as “I have to.” That breaks what is actually really valuable about how those basic questions of respect, beneficence, and justice are also methodological techniques for different disciplines. That's been true.

Mary L Gray: So, knowing what we know after 40 years of thinking about what it means to bring this framework to research with people and people's data, it was a chance to think beyond, “I have to have it.” How does this actually improve the work I do? How does it improve the data I'm going to generate? How does it improve my long-standing relationships with the groups and individuals who are going to continue to inform what I learn about the world?

Mary L Gray: How will it help the public trust me more, and trust my students? So, it started out as the pragmatic need of, “I have to have this piece of paper,” which is very frustrating for most IRB professionals. That's a terrible thing. So, being able to approach this as computer scientists learning, “How will this improve my outcomes as a researcher?” I think approaching it that way, and having everyone approach it that way, and also approaching it as, “We don't know what the right methods are that are also aligned with those standing ethical expectations.” And we actually could argue that those ethical expectations are due for an update, to really reflect the things we missed when we first put them on the table for biomedical research and for the behavioral sciences.

Ivy Tillman: I love how you framed that. Considering particularly how IRB professionals and chairs and boards engage these disciplines, coming from a different perspective of, “How will it improve your research,” right? And “How will it improve your outcomes and the public trust?” To me, that's a lot easier, and it builds that collegial relationship, versus the oftentimes adversarial relationships that we all experience, right?

Mary L Gray: Especially for students. I mean, if there's a place I'm most passionate, it's that who comes to the IRB is often a graduate student who's not able to get the mentoring they need, or who is doing something pretty novel with their methodology and nobody can give them guidance.

Mary L Gray: So, I've seen that every summer since I've been here. We have a PhD internship program. Every year, we have the best of computer science and engineering, and I can see how they yearn for that guidance. And it's figuring out how to have that guidance come not only from the programming staff of our program, but from other researchers. To bring in peer review that is robust and open and curious and humble about where we're at right now, because we've got work to do. We don't know how to study society at scale. I mean, that's a new thing.

Ivy Tillman: That’s fascinating. I'm learning new things and new ways of thinking and framing. Particularly, you know, my background is IRB. And so, a lot of what you're saying just resonates with me. Particularly, as we're engaging different disciplines. So, I have another question for you. The world of AI is converging with the world of research and consumers in ways that may surprise our listeners. We think we know, but I think you probably know a little bit more than we do.

Ivy Tillman: Can you share a couple of examples of how AI has been used in the past that many of us would not even know about?

Mary L Gray: One of my favorites is... well actually, let me take one step back. I think the hardest thing about talking about artificial intelligence is that it means many things. It's an umbrella term in the discipline.

Mary L Gray: And so, depending on the computer scientists that you're talking with, they may have a very particular definition for what is AI. And so, I think things get quite cloudy quite quickly, because we are all in need of some basic language and some basic understanding of what these terms are and what they mean. Let me break it down. I like to call AI “software.” I mean, it's basically software. It's very sophisticated, so I don't want to diminish how sophisticated most of what we call artificial intelligence today is. It's quite sophisticated. But among the current versions, we would all be surprised to learn, are things like soap dispensers that automatically release an amount of soap.

Mary L Gray: That's really one of my favorite examples. It comes from a world of what's called computer vision, which is looking at image recognition. It's trying to sense that there is motion. It's a motion detector, but it's a sensor; basically, think of a tiny bit of software and hardware that's collecting information about what can be recognized as an image, and about how to train the software you want to build to respond to that image. So, in the case of a soap dispenser, building the soap dispenser was basically creating software that could sense movement and, most importantly, see an image that maps onto a hand or skin. Unfortunately, most of those models were built on white skin.

Mary L Gray: So, some listeners are not surprised to learn that soap dispensers do not work for their skin.

Ivy Tillman: Yep. I'm one.

Mary L Gray: I am strangely another. Because I am so, so white; the audience can't see me, but I am quite, quite pale. In all cases, when it's trying to use an image, it's using millions and millions and millions of images. And then it's finding the most typical image, the average of those images. So, when you're talking about skin color, it's about how it was labeled, because importantly, the software doesn't see skin color. It sees a label that says, “this is the most typical skin color” in the batch of data that was used to train this bit of software on releasing soap.

Mary L Gray: So that's one example. It's not the earliest, but it's a very mundane example that I think helps us all see the way in which a fairly innocuous, you could say well-intentioned, approach of “I want to create software that provides this service,” one that depends on learning from particular kinds of data, can quickly go off the rails if it was never thought through. You would need to have an approach, and we still don't have an approach, to seeing skin color that is quite different from what we do today. It's certainly different from what humans do and how we recognize difference. So, it's a really interesting case of something that I find quite profound, because it meant that there were folks building that software who didn't think about skin color as something that is as biological as pigmentation and also incredibly cultural. Because in most cases they weren't looking at different pigmentation; they were looking at pictures that were labeled white, peach, black. That's the kind of labeling of most of the material. I don't know if that's too out there.

Ivy Tillman: No, it just aligns with our experience of the software and the product, right? And so, what you're speaking of, I actually had to learn it in a book I was reading, right? And so, it explained to me my experience with the product and the software behind it. And so now when I'm out in public and I have to use those, I don't get as frustrated. I understand why. Every time it's a reminder of the work yet to be done.

Mary L Gray: So, on that one, Ivy, I think the hardest part, or the understandable thing, is that particularly for computer science and engineering, it's mathematical. The thinking is very much, “We need a bigger set. We need to complete the set.” So, artificial intelligence really relies on a lot of examples of a decision. What is artificial intelligence doing? It is quite literally software that is modeling decisions that it can see. So, whether the decision is “this is when you release soap” or the decision is “this is how you complete a sentence.”

Mary L Gray: Those are all cases where the current techniques within computer science and engineering are operating from “We have lots of examples, so let's use all of those examples to train a model to automatically respond” and, in many ways, mimic what it would look like if you had those examples in front of you.

Mary L Gray: Now, I'm simplifying it greatly and I'm insulting every computer scientist who might be listening to this right now. But really, at the end of the day, it relies on, “There's a really clear decision so that I can model, is it this or is it that?”

Mary L Gray: So if you hold on to that, the hardest part about all of this is that artificial intelligence has just figured out how to automate things that are quite obvious to a human, but at the same time, in really subtle ways, it makes it much harder for us to see, “Is that usually the sentence I would say?” if I just start saying it with autocomplete. You think about all those places; the real challenge here is sorting through, especially from our own point of view, who is lost in the assumptions, in the model of how things usually go. And so, the parallels hold: what's lost in the case of the soap dispenser is every skin tone. And if the move is, “well, I need to get every skin tone,” it misses that that is a Sisyphean task. As long as people continue to have babies, we're going to have a lot of mix. Most importantly to me, when it comes to language, or when it comes to the places we're seeing artificial intelligence integrated into everyday decision making, like, “Should I look at a resume or not?” Those are places where we haven't fully reflected on what our starting assumptions are about what's typical, and who is not able to use any of these systems. My favorite example: code switching. If you speak more than two languages, or if you speak two languages, and part of how you speak is switching between those two languages, it is nearly impossible to model that. We don't have enough examples, but even if we had enough examples, talk to a teenager and see how the way they code switch changes with the next generation.

Mary L Gray: We're just always changing how we interact with each other. That's a good thing. So, we should assume that AI actually cannot effectively model anything that has an infinite number of possibilities, in any place where we deliberate. Like, if you think about democracy, the whole point is we're debating what is the best way to do something, because there is not one right way to do something.

Mary L Gray: And AI can only model where it has examples of a decision that's really clear and crisp. It's this or it's that. It's yes or it's no.

Ivy Tillman: That's challenging because that is not our reality.

Mary L Gray: No, that's not our reality. It's every computer scientist's dream. I know I'm really insulting my colleagues.

Mary L Gray: I mean, most people in computer science and engineering know that what they're doing is reducing, for the sake of argument or in crafting a model, the real complexity of social life. But biomedicine does it too. Look at all the sciences where we have been okay with a certain amount of injustice.

Mary L Gray: To be efficient, to be able to keep moving, we're going to keep building our model; we'll get more complicated later. But we're introducing an approach to science through computer science and engineering that, quite frankly, resists the interest in getting more complicated other than “let's expand the set.” So, that's what I worry about: if the move toward making things complicated is “I'll just get a map of the world. I'll literally just capture every experience,” we will continue to fail to see that you get one shot at that. If you leave out folks, it doesn't just mean that next time you go around, you include them. You've made them trust it less.

Ivy Tillman: Yes.

Mary L Gray: So, there's that whole concern. There's also just the practical reality that there are so many things beyond the black and white, beyond the this or that, the yes or no, where if we looked at it, we'd say that's not where artificial intelligence should be. And we're not making those distinctions today, mostly because that's a very social way of seeing the world.

Ivy Tillman: Right. But, you know, it complicates the role that, in my opinion, ethics boards and IRBs play in the review of research involving this software or artificial intelligence, and it makes it quite intimidating. So, what advice would you have for individuals trying to do the right thing, but not really knowing what's right sometimes in this case?

Mary L Gray: I mean, this is why I am so interested in us taking this on. This is our chance to, together, take a step back and think, “What are some ways through this problem that really update how we would want to approach biomedicine,” which also uses data science, and all of the behavioral sciences that use computational approaches.

Mary L Gray: The reality is we're not just talking about computer science and engineering, but about all of the sciences that have become data driven in ways that, computationally, we've never been able to do before. They all, we all, need to rethink, “How do we want to do this? How do we do this?” So, I think, yes, it's intimidating to talk. I certainly get intimidated talking with some of our researchers who are incredibly good at techniques that I don't understand as a researcher. But the thing that I do know is that they are interested in modeling the ways that our world operates, and I am too. And I don't know a scientist who isn't interested in theorizing how things work. That is basic science. That is the method; that is the scientific method. So, to overcome the intimidation is to say, well, okay, if we're interested in basic science, we're interested in the scientific method for generalizable and transferable knowledge, right? Not generalizing that this is how all things are all the time, but that mix of what's generalizable and what are the things that are really quite specific and contextual.

Mary L Gray: That's all the sciences. So, holding those together, what are ways that we can approach ethics as a methodological challenge in the rough? Like most of our ethical dilemmas come from, I would argue, researchers trying out new ways to learn about how the world works and running over people in the process.

Mary L Gray: And that's not to forgive anybody for it. It's to say, how do we say, then, your first order of business, dear researcher, is to be thinking first and foremost: How do I maintain the trust? How do I keep engagement? How do I see that diversity and inclusion are not the completion of a set, because that set is always going to be evolving? I'm excited about that.

Ivy Tillman: That was very powerful what you just said. Very powerful. And in these spaces and in these conversations that we've had over the years, we've not necessarily framed it like that. Particularly when it relates to the relationship between those who provide the oversight and those who conduct the research.

Mary L Gray: I think that's the part that, I'll go back to that example we were playing with earlier, I don't know anybody who's done programming who doesn't have a story to tell about a graduate student who came into their office and just felt unmoored by like, “how do I do this research design?”

Mary L Gray: So we call it oversight, but day to day, there are a lot of folks already who are playing that role of mentors. And I think what bothers me is that this shouldn't be something done entirely by a program that's outside of a discipline. What I'd love to see us do is get back to rings of peer review that prep each other for a conversation with the expertise of somebody in the IRB who's bringing a different expertise. I mean, I think at this point, the hardest thing is that domain expertise is sharp, finely tuned to regulations. And that's understandable. This is a regulation-free zone. I would love us to have some rules, don't get me wrong, but it's going to remain this way for the near future, which is too much time.

Ivy Tillman: Yeah.

Mary L Gray: It's going to remain this place where most people are using these techniques of computation, using these approaches to modeling human experiences, and, probably most important to me, in almost all cases right now, we are completely dependent on people's data, which means I'm scraping the internet, I'm buying a data set.

Mary L Gray: So, there is no way forward with AI without engaging people's material. And you're right. My toes curl when I think about the things I know that are knowable, not just about individuals, but about our relationships with each other. To me, it's no longer about privacy. It is about a fundamental right to respect for my social life, to not have that treated as fodder.

Mary L Gray: You know, I think we can all relate to that. I feel like IRB professionals, you know, those are the folks who are always the Geiger counter for “something, something feels off.” And I feel like researchers need that reflected back to them. I think the reality is, for computer science and engineering, they're at the very beginning of learning why they should listen, how they would listen, and how they would do things differently if they're trying to gauge the social impact of their work.

Ivy Tillman: Fascinating, fascinating. You know, when you mentioned the layered peer review model, it's embedding those ethical and some of the regulatory conversations there. In just the limited amount of research that I've conducted, it was there at the design level, and because I had done IRB work, I understood it, right? But that's not where those conversations are happening, particularly for students.

Mary L Gray: And we could change that in a heartbeat. I mean, I think the good news is that this is all within reach, certainly within PRIM&R. I think one of the things that really drew me to PRIM&R was that here are the folks who are, I won't say on the front line, that feels too loaded, but they're often in the position of seeing that something could be done differently, something that's going to enhance the public's connection to and valuing of scholarship. They can see it often a mile away, and, you know, it's being able to get to those campuses with a new set of researchers who are actually quite interested in not breaking things anymore.

Mary L Gray: So, it's kind of exciting to me that we have this whole new cohort of a discipline that never thought it was in need of this conversation. And I've got a decade of proof of that, from these students who come in and want to know, “How would I be respectful? What would that look like?”

Mary L Gray: They're good-hearted people. They're fine. The kids are all right. And that's true across most of the disciplines. I mean, I think most of the social sciences and the biomedical sciences have been learning that there are diminishing returns if you do something just to get a paper out. We don't have time for that.

Ivy Tillman: I love how it's evolved. Just in the 23 years that I've been involved, I've seen the evolution, and it's exciting to see, particularly with our students. They're beginning to think about ethics and doing the right thing, and wanting to. And I think that's where, that's the intersection that we sit in right now. The opportunity.

Mary L Gray: And that's like one cohort. You know, that's like within five years, 10 years, we could just move them all to a very different place.

Ivy Tillman: That's exciting and encouraging. I wanted to ask you another question. Since AI often gets an overall positive or negative framing, is there a middle ground or a way forward for best use cases?

Mary L Gray: Oh, I'm glad you asked that, because I think in many ways, this is our opportunity to define that middle way: seeing where it is a really great tool for reproducing what we rely on as the typical outcome, and being able to separate that out from the places where we know we don't want to override our ability to deliberate.

Mary L Gray: So, put plainly, any place where we want to be able to have a genuine discussion and openness to what direction we should go, that's a place where we want to be mindful and intentional about keeping AI at bay, because it's not going to help us. So, for example, if I'm trying to decide, do I send my coworker an angry email right now? Don't automate that. Don't look at my past emails and try to figure it out for me. Right? The hardest thing is, I think we can see why we would want that, not just because we'll get a better outcome, but because we want to maintain our humanity. We want to make explicit decisions about where we could use this AI: we could use it for hiring, we could use it for firing, we could use it for evaluating students' abilities to learn in class.

Mary L Gray: We can use it for all those things, but there are places where doing so overrides and conditions us to stop paying attention. And that is not to say there aren't great applications of AI. There are. So, back to the example of the soap dispenser: that's fixable. It's actually gotten better. So, if you're having a bad experience with a soap dispenser, it's just that it's old and hasn't been updated.

Mary L Gray: So it can get better. The middle way is seeing the ways in which it cannot improve, both because there are places where we don't want to override creative, novel ways of being in the world and because it'll get it wrong. Those two things are different, but both are true: there are things it can't do, and there are things we shouldn't want it to do in the first place.

Ivy Tillman: Exactly. Oh wow. Perfect conclusion. Thank you, Mary. Thank you for just spending time with me talking about this and expanding my ways of knowing and thinking, but also just, you know, lending to the beginning of many conversations that we want to have at PRIM&R regarding artificial intelligence and this ethical framework and engaging disciplines. So, thank you.

Mary L Gray: My pleasure. I feel like PRIM&R has been a beacon for a long time, and it's a really amazing opportunity to bring what it knows, the collective intelligence, as you said, of this community, to a table where it hasn't really been engaged as much as it could be. So, I'm happy to keep talking about this stuff.

Ivy Tillman: Well, I will definitely be reaching out to you. Thank you.

Mary L Gray: My pleasure.

 

Announcer: Thank you for listening to Research Ethics Reimagined, a podcast created by PRIM&R and produced by Syntax + Motion. Please subscribe and share with friends and colleagues. Be sure to join us next month as we continue our conversations with scientists, researchers, and bioethicists, and explore the new frontiers of science.