Discussion Guide and Transcript
Season Three - Episode One
Research Ethics Reimagined Podcast Season Three: Episode One "Ethical Challenges in Suicide Research with Matthew Nock, PhD"
- In this episode of PRIM&R's podcast, "Research Ethics Reimagined," we explore the ethical and methodological complexities of suicide and self-harm research with Dr. Matthew Nock, the Edgar Pierce Professor of Psychology at Harvard University and former chair of the Harvard IRB. Dr. Nock discusses how research demonstrates that asking about suicide does not increase risk, the importance of IRB-researcher collaboration, and the challenges of real-time monitoring and intervention with high-risk participants. He shares insights from developing consensus guidelines on ethical conduct of suicide research and emphasizes the critical need for advancing this often-stigmatized field of study. Listen on Spotify | Listen on Apple | Listen on Amazon
Discussion Questions
1.) IRB-Researcher Collaboration and Trust
- Dr. Nock describes his approach as IRB chair: "How can we approve this study?" rather than reflexively blocking suicide research. What has been your IRB’s approach to this type of high-risk research? What methods can your IRB employ to address its concerns collaboratively with the researcher?
- He emphasizes avoiding "hostile attribution bias," the assumption that the other party has harmful intent. What practical steps can both IRBs and researchers take to build trust and transparent communication, particularly when reviewing ethically complex studies?
2.) Evidence-Based Decision Making in Research Ethics
- Dr. Nock notes that randomized controlled trials demonstrate asking about suicide does not increase distress or suicidal thoughts. How should IRBs balance evidence-based findings with individual concerns when reviewing potentially sensitive research methods?
- He describes pausing his research to convene a multi-day consensus meeting with IRB members, lawyers, NIH staff, and people with lived experience to develop intervention standards. When should researchers or IRBs initiate similar pause-and-convene processes for emerging ethical questions?
3.) Real-Time Monitoring and Intervention Protocols
- Dr. Nock's team monitors high-risk participants via smartphone apps and intervenes when risk ratings reach 8 out of 10, while acknowledging others might choose different thresholds. How should researchers and IRBs determine appropriate intervention points when there are no established standards?
- He asks: "What if this was your child?" when deciding intervention protocols, while also noting the need to avoid making methodology so weak it produces unhelpful science. How can researchers balance the imperative to protect participants with the need for methodologically sound research that advances knowledge?
Key Terms
Implicit Association Test (IAT): A reaction time test measuring automatic associations; adapted by Dr. Nock to predict suicide risk based on millisecond responses to death-related stimuli, performing better than self-report measures
Evaluative Conditioning: A technique pairing suicide-related images with naturally aversive stimuli (like snakes or spiders) to create an aversion to self-harm, which Dr. Nock's research shows can reduce suicidal behavior
Consensus Statement on Ethical Conduct: Guidelines developed through a multi-stakeholder process defining standards for research with people at high risk for suicide, including when to intervene and how to communicate with participants
Additional Resources
Transcript
Please note: a transcript generator was used to help create the written show transcript.
The transcript of this podcast is approximate, condensed, and not meant for attribution. Listen to the full conversation on PRIM&R’s Research Ethics Reimagined podcast.
This episode contains some sensitive discussions regarding suicide that may be difficult for some listeners. If you or someone you know is struggling, please visit 988lifeline.org or call or text a 988 Lifeline counselor anytime, day or night, by dialing 9-8-8. This 3-digit number connects callers to the 988 Suicide & Crisis Lifeline.
988 provides support to people who are experiencing emotional distress or suicidal thoughts — or are worried about a loved one who is. You do not need to be suicidal to contact 988. The services are free and available 24/7, 365 days a year.
Tonya Ferraro, MEd, who serves as Accreditation Policy Analyst with AAHRPP, is a guest co-host for this episode.
Catherine Batsford: Welcome to "Research Ethics Reimagined," a podcast from Public Responsibility in Medicine and Research, where we explore the ethical, regulatory, and human dimensions of research in a rapidly evolving scientific landscape. I’m Catherine Batsford, co-host of today’s episode, and I’m joined by my colleague and co-host, Tonya Ferraro.
In this episode, we’re pleased to welcome Matthew Nock, the Edgar Pierce Professor of Psychology at Harvard University and former chair of the Department of Psychology. Matthew is a leading expert in the study of suicide and self-harm, and his research has significantly shaped how the field understands risk, prediction, and prevention of suicidal behavior.
To get us started, can you share a little bit about your journey in psychology and what initially drew you to this area of research?
Matthew Nock: Absolutely, and thank you so much for having me. Thank you, Catherine and Tonya, for the conversation. I’m a professor, so I could talk for hours, but I’ll try to keep myself brief.
I got interested in the study of suicide somewhat by happenstance. I was an undergraduate at Boston University and spent a semester abroad in London. As part of the academic experience, students were placed in internships. Some students worked at places like the Guinness Brewery or record companies. I ended up in a psychiatric hospital.
There was a unit in that hospital serving patients who were violent or self-injurious, and I was placed there. I was really taken aback by the clinical severity of what I saw. I had encountered violence before as a college student, but self-injury—people cutting themselves, burning themselves, attempting suicide—struck me as particularly severe and perplexing. I didn’t really understand it.
At the time, I wanted to be a practicing clinician. I thought, if I can learn to effectively treat people who are self-injurious and suicidal, then other things I might encounter—depression, anxiety, and so on—should be easier. I just never got out of it. I started doing research on suicide as a postbaccalaureate and quickly learned how little we know about suicide, despite how large the problem is.
Suicide is a leading cause of death in the United States and around the world. It is the second leading cause of death among people aged 10 to 34 years, behind unintentional injury. Suicide takes more lives than all wars, genocide, interpersonal violence, and murder combined. Globally, people are more likely to die by suicide than by homicide.
Yet we hear far more about other causes of death. We don’t talk much about suicide, and historically there has been relatively little research on it—certainly not as much as there should be. When I started more than 20 years ago, there was even less. There is more research now, but that’s how I got into the field. I encountered suicidal and self-injurious behavior during a clinical rotation, started studying it, and never left.
Catherine Batsford: It is such a taboo subject.
Matthew Nock: It really is. It’s often a conversation stopper—on airplanes, at parties. If I say I’m a psychologist, people might ask what it means that they like the color blue. But when I say I study suicide and self-injury, people often pause, look down, or turn to someone else.
That’s part of the problem. Suicide is taboo and mysterious, and people worry that talking about it will make things worse or give people ideas. This concern shows up in institutional review board discussions as well. People are afraid of saying the wrong thing, and so they clam up.
From an IRB perspective, suicide research has long been challenging. IRBs are designed to protect human subjects, not increase risk, so it’s natural that board members have questions. Are the methods we use to study suicidal thoughts and behaviors inadvertently making things worse?
I think it’s important for researchers and IRBs to understand what we’ve learned over the past few decades and where the science stands. One thing you didn’t mention in the introduction—one of the lines I’m proudest of on my CV—is that I served as chair of the Harvard IRB for six years. I got to work closely with Tonya and many others in the Harvard IRB community.
I first got involved with IRBs as a graduate student at Yale. I was doing suicide research, and the IRB had a lot of questions about my study. They asked me to come in and talk through it, and eventually they said, “Why don’t you join the IRB?” I joined the medical school IRB as a graduate student and really loved it.
It opened my eyes to important questions about what we do as researchers—what’s helpful, what’s harmful, and what safeguards we should build in. That experience carried forward into my faculty position at Harvard, where I joined the IRB and later became chair. Much of our work bridges psychology and medicine, so we often work across multiple IRBs, which adds complexity but also perspective.
Tonya Ferraro: That’s actually a great segue. When I was a research project manager, I told my PI I wanted to work for an IRB to learn the process from the inside out. He joked that I was going to the dark side. Then I met you as chair, and I distinctly remember you rolling up your sleeves and saying, “Okay, how can we approve this study?” I loved the tone you set.
As someone who has served as an IRB chair, how can IRBs and researchers build trust and work as collaborators?
Matthew Nock: Thank you for that. As researchers, we don’t get a lot of positive feedback, so I appreciate it.
I think openness, transparency, and collaboration go a long way. Not viewing each other as “the dark side” is a good place to start. There’s a concept in psychology called hostile attribution bias—assuming hostile intent where none exists. Researchers sometimes assume IRBs are trying to block their work, when in reality IRBs are trying to protect participants.
I’ve always appreciated IRBs that take the stance of, “How can we help you do this work safely and ethically?” rather than “We can’t allow this.” As a suicide researcher, I’ve heard colleagues say their IRBs simply refused to allow suicide research. I think that’s unfortunate.
A common concern is whether asking about suicide increases risk. This is a very reasonable question, and it’s one many IRBs raise. Fortunately, there are now multiple randomized controlled trials showing that asking people about suicidal thoughts does not increase distress, suicidal ideation, or suicidal behavior. Asking the question is not harmful on average.
This is an example of how healthy interaction between researchers and IRBs can lead to better science. Questions come up, we study them, and we learn whether our methods are harmful or not. Clinically and interpersonally, it’s important for people to know that asking someone about suicide is not inherently dangerous.
Catherine Batsford: So it really is a balance—collecting meaningful data, breaking through taboos, and educating IRBs at the same time.
Matthew Nock: Exactly. And it requires flexibility on the part of researchers. Many studies look at group averages, but individuals vary. For example, we use behavioral tasks that show suicide-related images repeatedly. On average, these tasks do not increase distress, but some individuals do find them upsetting.
Because of that, our consent forms are very explicit. We explain what participants will see, note that some people may experience distress, and emphasize that they can stop at any time. Research is iterative, and so are interactions with IRBs and participants. We have to remain flexible and responsive.

Tonya Ferraro: During your keynote, you brought up the concept that many of these concerns are empirical questions. One of the things I love about research is that data can tell a very different story than what might feel intuitively true. What are some findings that surprised you, or perhaps surprised IRBs, when you presented them?
Matthew Nock: That’s a great question, and I agree with the sentiment. There’s an old quote from W. Edwards Deming that I really like: “In God we trust; all others must bring data.” We can have beliefs and opinions, but many of these questions are ultimately empirical.
That’s one of the reasons I was drawn to science. I was a fairly oppositional kid, and science encourages questioning. There’s a system for evaluating what the data show, and we have to remain humble about that.
One thing that surprised me was the predictive power of some behavioral tests, particularly adaptations of the Implicit Association Test. Many people are familiar with the IAT, which was originally developed to examine implicit attitudes related to race, age, and gender. I worked with colleagues to develop a version focused on suicide and self-injury.
We brought that test into emergency department settings, and people’s reaction times to suicide-related stimuli predicted suicide attempts above and beyond self-report. That surprised me because translating lab-based measures into clinical settings often doesn’t work. In this case, it did—and it replicated.
We also tested other cognitive tasks, such as a version of the Stroop task. Initially, we saw promising results, but when we replicated the study, the effect disappeared. That first finding was likely a false positive. I appreciate that science emphasizes replication because it keeps us honest.
Another surprising line of work involved a behavioral intervention using evaluative conditioning. One of our postdoctoral fellows led this work. We paired suicide-related images with naturally aversive stimuli, such as snakes or spiders, through a simple matching task. Participants completed this task over about a month.
We saw significant reductions in self-injurious and suicidal behavior. I didn’t believe it at first, so we repeated the study. Same result. We did it a third time, again with the same result, and then published the set of studies together.
For something as difficult to treat as suicidal behavior, seeing a replicable effect like that was genuinely surprising and encouraging. Anytime we see something work consistently, especially in this area, it feels like progress.
Tonya Ferraro: Have you seen research findings move beyond the lab and actually change practice, training, or policy?
Matthew Nock: Not as much as I would like. I’ll be a little critical here, even though I’m ultimately optimistic.
We have seen the development of evidence-based treatments, such as cognitive behavioral therapy and dialectical behavior therapy, and those have helped. Some of our work has contributed to that literature. But I think we need to move faster.
Much of our current work focuses on implementation. That brings its own ethical and IRB questions—particularly around when research ends and clinical care begins. About 90% of our research now takes place in real-world clinical settings: emergency departments, psychiatric inpatient units, and similar environments.
We’re trying to take what we’ve developed in the lab and see whether it helps clinicians better identify who is at risk and intervene effectively in real time. We’re also doing a lot of work with new technologies, such as smartphones and wearable sensors, to monitor people during high-risk periods.
The goal is to identify not just who is at risk, but when risk increases—what day, what hour—and then deliver interventions at the right moment to help keep people safe.
That work raises difficult ethical questions. If we’re following someone at high risk for suicide, when should we intervene? At what threshold? Who should intervene? How should we intervene? Is intervening itself harmful?
If we want to test different approaches—for example, calling someone versus using an automated tool or chatbot—can we ethically randomize those responses in real time? These are hard questions, and they come up frequently in conversations with IRBs.
People have different levels of comfort with risk. Clinicians, researchers, and institutional counsel often see these issues differently. My responsibility, as I see it, is to do right by participants while using the privilege of research to generate knowledge that can meaningfully help people.
That’s my north star: advancing science in a way that is maximally beneficial to society while minimizing harm.

Catherine Batsford: So, when there’s a possibility of intervention, does the consent process look different?
Matthew Nock: The consent process is extremely important in studies involving suicide risk. We are very clear about what we are going to do, what we are not going to do, and when we will do it.
For example, in our monitoring studies, we install smartphone apps on participants’ phones. These participants are often recruited from hospitals after presenting with high suicide risk. We tell them explicitly that we monitor responses during certain hours—for example, between 9 a.m. and 9 p.m.—and that we will do our best to respond if we believe they are at imminent risk of harming themselves.
At the same time, we are very clear that participants cannot rely on us as a clinical service. We cannot promise that we will intervene in every situation. We can promise that we will try to intervene. Our formal standard is that we will intervene within 24 hours, although in practice our response time is often much faster, sometimes within 20 minutes.
We have real-time alerts when someone crosses a predefined risk threshold, and we follow a very detailed protocol that specifies exactly what actions we take at each level of risk. But we want participants to understand that this is not a replacement for clinical care. We are not a “Minority Report” system that will swoop in and prevent harm in every case.
If we randomize participants to different conditions, we explain that clearly during consent. We tell them that we are testing different approaches to responding or intervening. Transparency is essential.
When we first began doing this kind of monitoring work, I actually paused our research for a period of time. I wasn’t confident that we had clear standards for how and when to intervene, and at the time, the field didn’t either. Smartphones were relatively new in this context.
I reached out to colleagues at the National Institute of Mental Health and asked whether we could convene a consensus meeting. We brought together IRB members, researchers, clinicians, institutional counsel, representatives from funding agencies, and people with lived experience of suicidal thoughts and behavior.
We spent several days working through these issues in detail: what the consent process should include, when and how to intervene, how to use collateral contacts, and how to respond if we cannot reach someone who appears to be at high risk.
That work resulted in a consensus statement on the ethical conduct of research with people at high risk for suicide, which was published in 2021. We now use that guidance routinely, and we are considering whether it needs to be updated as technology and practice evolve.
When we encounter questions that could benefit from empirical evidence, we design studies to answer them. For example, we are actively studying whether it is more effective to have a human call someone at high risk or to use an automated or chatbot-based intervention.
Many people assume that a human call is always better, but there are reasons to question that assumption. An automated tool can respond immediately, whereas a human may take 20 minutes or longer. That time gap can matter.
We also see situations where a participant says, “Yes, I’m having thoughts of suicide, but I’m at work or at school right now, and I can’t talk.” In those cases, an automated interaction may be more appropriate in the moment.
So we are testing whether different approaches work better at different levels of risk or in different contexts, and whether the best response varies across individuals or even within the same individual over time.
Tonya Ferraro: I just want to pause and acknowledge that you stopped your research to address these ethical questions. When you’re close to a project and always feel behind, it’s hard to take that kind of pause. I really appreciate that you did that work, and that it contributed to broader standards in the field.
Matthew Nock: Thank you. It was anxiety-provoking to pause, especially when there is always pressure to keep moving. But this is real life. Statistically speaking, nearly everyone listening to this podcast has been touched by suicide in some way.
About 15% of people in the United States report having seriously considered suicide at some point in their lives, and about 5% report having made a suicide attempt. Many of us know someone—a family member, a friend, a colleague—who has struggled with suicidal thoughts or behavior, or who has died by suicide.
Because of that, I feel a tremendous responsibility to make sure we are doing this work carefully and ethically. Researchers and clinicians differ in how much risk they are willing to tolerate.
For example, we often use a 0-to-10 scale to assess risk. In our studies, we may set a threshold at 8 out of 10, above which we intervene. Some clinicians ask why we don’t intervene at 5, when risk becomes more likely than not. Others argue that we should not intervene at all, because intervention alters the natural course of behavior and turns an observational study into an intervention study.
For me, that isn’t enough. One comment during our consensus meeting really stuck with me: “What if this were your child? Your spouse? Your sibling?” If someone reports a 10 out of 10 and we simply watch and wait, that doesn’t sit right with me.
Our thresholds are based on data and practical constraints, and we stand behind them, even though they are imperfect. I would rather accept some methodological compromise than fail to act when we have an opportunity to keep someone safe.
We have had situations where participants attempted suicide during monitoring, and we were able to locate them and connect them with emergency services. In one recent case, a participant overdosed, we intervened, and they survived. That reinforces why this work matters.
The goal is not just to publish papers, but to improve health and human functioning.

Catherine Batsford: How do you roll people off a study safely?
Matthew Nock: That’s a great question. We do it very carefully, and it’s something we plan for from the beginning. It’s also part of the consent process.
At the end of a study, we conduct debriefings and often include qualitative interviews. We ask participants what they liked about the study, what they didn’t like, and what their experience was like overall. We are very clear about when the study is ending and that we will no longer be monitoring them. We remove the apps from their phones and make sure they understand that the monitoring period has concluded.
One of the more challenging situations arises when parents are involved, particularly when we are following adolescents. I’m a parent of three children myself, so I understand this perspective well.
In one of our studies, we monitor adolescents for six months after discharge from a psychiatric hospitalization, which is an especially high-risk period. Over time, our statistical models have become increasingly accurate at identifying when risk is elevated.
During the study, if a child reports high suicide risk, our protocol is to contact the parents, speak with the child, conduct a risk assessment, and take steps to keep them safe. Many parents appreciate this extra set of eyes on their child during a vulnerable time.
At the end of the study, some parents ask whether we can continue monitoring. Unfortunately, we cannot. The study has a defined end point, and we have to follow the protocol. We are very upfront about this from the beginning and reinforce it again at the end.
While this can be difficult, it also highlights the potential clinical utility of the work. These tools may eventually have applications beyond research, not only for suicide risk, but also for other episodic or high-risk behaviors such as substance use, alcohol use, or eating disorder behaviors.
We know from our monitoring studies that suicidal thoughts can fluctuate hour by hour and day by day. Traditional models of care—such as weekly therapy sessions—often do not capture that variability well. Technologies that can respond in the moment may offer additional support, if they are implemented ethically and without being intrusive.
Catherine Batsford: How do you and your team take care of yourselves? This is incredibly heart-wrenching work.
Matthew Nock: That’s an important question. We talk a lot about self-care on our research team.
Our team includes about 30 people—staff, graduate students, postbaccalaureate research assistants, and undergraduates. We work hard to maintain a supportive, collegial environment. It may sound strange given the nature of the work, but it’s a friendly, caring, and often humorous group, which I think is essential.
We do experience suicide attempts among participants, and we have lost participants to suicide over the years. That’s painful. I always remind myself, and my team, that this is why we do the work. In the same way that oncology researchers lose patients despite their best efforts, we are confronting a condition that we still cannot predict or prevent as well as we would like.
Many people on our team have personal experiences with suicide, either through family or friends. That makes the work both more meaningful and more emotionally demanding.
We also try to practice what we teach. We encourage taking breaks, supporting one another, and engaging in activities outside of work. For me, that includes spending time with my family, exercising, working on an old Jeep, riding motorcycles, and sometimes just watching something completely nonacademic and mindless at the end of the day.
Tonya Ferraro: You’ve been in your career for a while, and you mentor a lot of students. I like to ask what you wish you had known when you were starting out—something no one told you, but that you now share with mentees.
Matthew Nock: That’s a good question. One thing I wish I had understood earlier is how free and dynamic this career is.
I recently had a friend from college visit my lab, and he commented that he has worked with the same five people since college. In contrast, academic research is constantly changing. New students join, new ideas emerge, and new perspectives enter the field.
I didn’t think much about mentorship early on. I was focused on the science and the questions I wanted to answer. Over time, mentorship became one of the most rewarding parts of the job. Watching students develop their own ideas, carry out studies, publish, and eventually lead their own labs is incredibly fulfilling.
It feels a bit like watching people grow up—not in a condescending way, but in a deeply gratifying one. Seeing students become independent scientists and professionals gives me a lot of hope for the future of the field.

Catherine Batsford: On the flip side, I’m a new IRB member. I’m seeing some of these studies come across my desk. What questions should I be asking? What should I be reassured by? What should I be looking for to better understand the work you’re trying to do?
Matthew Nock: That’s a great question. My main recommendation is communication—reach out to researchers and ask questions.
IRB members have an important role, and information is power. Ask researchers to explain what they’re doing, why they’re doing it, and how their methods align with current standards in the field. I’m often asked by IRBs to provide expert input on whether a particular approach is consistent with best practices, and I’m always happy to do that.
The regulations should be used as a guide, not a barrier. The more researchers understand what IRBs are responsible for, and the more IRBs understand the science, the easier and more collaborative the process becomes.
If researchers and IRBs approach each other with transparency and openness—rather than withholding information or assuming bad intent—it leads to better outcomes for everyone, especially participants.
Catherine Batsford: That’s fantastic. Tonya, do you have any more questions?
Tonya Ferraro: No, I think I’m good. This has been great.
Matthew Nock: It’s been wonderful talking with both of you. Sorry for rambling—I can go on forever if I’m not stopped.
Catherine Batsford: We could listen forever.
Matthew Nock: I appreciate that.
Catherine Batsford: I’d like to end on something you mentioned during your conference session. You talked about how suicide rates haven’t really changed over time. I found that both reassuring and concerning. Could you speak to that a bit?
Matthew Nock: Sure. People often say that suicide rates have increased dramatically over the past 20 years. That’s true—but they also decreased during the 20 years before that. Overall, the suicide rate today is very similar to what it was about 100 years ago.
Is that good or bad? It depends on how you look at it. The pessimistic view is that, unlike many other leading causes of death, suicide rates have not declined. Mortality from cancer, pneumonia, motor vehicle accidents, HIV, and other conditions has decreased substantially over time because science has advanced and those advances have been implemented.
Suicide hasn’t followed that same trajectory yet.
The optimistic view is that this tells us something important: when science advances and is translated into practice, outcomes improve. If you look at the past five to 10 years, our ability to identify suicide risk, predict when risk increases, and deliver effective interventions has improved significantly.
There are more people studying suicide now than ever before, and while funding is still insufficient, there is more attention to the issue. I’m concerned about the current funding environment for science overall, but suicide prevention is a bipartisan issue. No one wants to lose loved ones to suicide.
I’m optimistic that researchers, IRBs, clinicians, funders, policymakers, and communities can work together to turn the tide. Suicide disproportionately affects young people, and preventing those losses is worth sustained effort and investment.
Tonya Ferraro: That really speaks to the idea that research is a public good. There’s a cost to not doing research, especially during uncertain times.
Matthew Nock: There absolutely is. Suicide is one of the leading contributors to years of potential life lost because it affects young people so heavily. We can measure the costs of inaction, even if the benefits of research take time to emerge.
It’s in all of our interests to continue advancing science and improving care—not just for suicide, but for other conditions as well, such as substance use and alcohol use disorders. We don’t want people to suffer, and it’s our responsibility as a society to keep this work moving forward.
Catherine Batsford: Thank you so much, Matthew.
Matthew Nock: Thank you. And thank you to PRIM&R for focusing on this topic. Suicide is not something people naturally lean into, but shining a light on difficult issues is one of the best ways to reduce stigma and make progress. I’m grateful for the opportunity to be part of this conversation.
In this episode, we’re pleased to welcome Matthew Nock, the Edgar Pierce Professor of Psychology at Harvard University and former chair of the Department of Psychology. Matthew is a leading expert in the study of suicide and self-harm, and his research has significantly shaped how the field understands risk, prediction, and prevention of suicidal behavior.
To get us started, can you share a little bit about your journey in psychology and what initially drew you to this area of research?
Matthew Nock: Absolutely, and thank you so much for having me. Thank you, Catherine and Tonya, for the conversation. I’m a professor, so I could talk for hours, but I’ll try to keep myself brief.
I got interested in the study of suicide somewhat by happenstance. I was an undergraduate at Boston University and spent a semester abroad in London. As part of the academic experience, students were placed in internships. Some students worked at places like the Guinness Brewery or record companies. I ended up in a psychiatric hospital.
There was a unit in that hospital serving patients who were violent or self-injurious, and I was placed there. I was really taken aback by the clinical severity of what I saw. I had encountered violence before as a college student, but self-injury—people cutting themselves, burning themselves, attempting suicide—struck me as particularly severe and perplexing. I didn’t really understand it.
At the time, I wanted to be a practicing clinician. I thought, if I can learn to effectively treat people who are self-injurious and suicidal, then other things I might encounter—depression, anxiety, and so on—should be easier. I just never got out of it. I started doing research on suicide as a postbaccalaureate and quickly learned how little we know about suicide, despite how large the problem is.
Suicide is a leading cause of death in the United States and around the world. It is the second leading cause of death among people aged 10 to 34 years, behind unintentional injury. Suicide takes more lives than all wars, genocide, and interpersonal violence combined. Globally, people are more likely to die by suicide than by homicide.
Yet we hear far more about other causes of death. We don’t talk much about suicide, and historically there has been relatively little research on it—certainly not as much as there should be. When I started more than 20 years ago, there was even less. There is more research now, but that’s how I got into the field. I encountered suicidal and self-injurious behavior during a clinical rotation, started studying it, and never left.
Catherine Batsford: It is such a taboo subject.
Matthew Nock: It really is. It’s often a conversation stopper—on airplanes, at parties. If I say I’m a psychologist, people might ask what it means that they like the color blue. But when I say I study suicide and self-injury, people often pause, look down, or turn to someone else.
That’s part of the problem. Suicide is taboo and mysterious, and people worry that talking about it will make things worse or give people ideas. This concern shows up in institutional review board discussions as well. People are afraid of saying the wrong thing, and so they clam up.
From an IRB perspective, suicide research has long been challenging. IRBs are designed to protect human subjects, not increase risk, so it’s natural that board members have questions. Are the methods we use to study suicidal thoughts and behaviors inadvertently making things worse?
I think it’s important for researchers and IRBs to understand what we’ve learned over the past few decades and where the science stands. One thing you didn’t mention in the introduction—one of the lines I’m proudest of on my CV—is that I served as chair of the Harvard IRB for six years. I got to work closely with Tonya and many others in the Harvard IRB community.
I first got involved with IRBs as a graduate student at Yale. I was doing suicide research, and the IRB had a lot of questions about my study. They asked me to come in and talk through it, and eventually they said, “Why don’t you join the IRB?” I joined the medical school IRB as a graduate student and really loved it.
It opened my eyes to important questions about what we do as researchers—what’s helpful, what’s harmful, and what safeguards we should build in. That experience carried forward into my faculty position at Harvard, where I joined the IRB and later became chair. Much of our work bridges psychology and medicine, so we often work across multiple IRBs, which adds complexity but also perspective.
Tonya Ferraro: That’s actually a great segue. When I was a research project manager, I told my PI I wanted to work for an IRB to learn the process from the inside out. He joked that I was going to the dark side. Then I met you as chair, and I distinctly remember you rolling up your sleeves and saying, “Okay, how can we approve this study?” I loved the tone you set.
As someone who has served as an IRB chair, how can IRBs and researchers build trust and work as collaborators?
Matthew Nock: Thank you for that. As researchers, we don’t get a lot of positive feedback, so I appreciate it.
I think openness, transparency, and collaboration go a long way. Not viewing each other as “the dark side” is a good place to start. There’s a concept in psychology called hostile attribution bias—assuming hostile intent where none exists. Researchers sometimes assume IRBs are trying to block their work, when in reality IRBs are trying to protect participants.
I’ve always appreciated IRBs that take the stance of, “How can we help you do this work safely and ethically?” rather than “We can’t allow this.” As a suicide researcher, I’ve heard colleagues say their IRBs simply refused to allow suicide research. I think that’s unfortunate.
A common concern is whether asking about suicide increases risk. This is a very reasonable question, and it’s one many IRBs raise. Fortunately, there are now multiple randomized controlled trials showing that asking people about suicidal thoughts does not increase distress, suicidal ideation, or suicidal behavior. Asking the question is not harmful on average.
This is an example of how healthy interaction between researchers and IRBs can lead to better science. Questions come up, we study them, and we learn whether our methods are harmful or not. Clinically and interpersonally, it’s important for people to know that asking someone about suicide is not inherently dangerous.
Catherine Batsford: So it really is a balance—collecting meaningful data, breaking through taboos, and educating IRBs at the same time.
Matthew Nock: Exactly. And it requires flexibility on the part of researchers. Many studies look at group averages, but individuals vary. For example, we use behavioral tasks that repeatedly show suicide-related images or words—things like pill bottles or nooses—as symbolic representations of suicide. On average, these tasks do not increase distress, but some individuals do find them upsetting.
Because of that, our consent forms are very explicit. We explain what participants will see, note that some people may experience distress, and emphasize that they can stop at any time. Research is iterative, and so are interactions with IRBs and participants. We have to remain flexible and responsive.

Tonya Ferraro: During your keynote, you brought up the concept that many of these concerns are empirical questions. One of the things I love about research is that data can tell a very different story than what might feel intuitively true. What are some findings that surprised you, or perhaps surprised IRBs, when you presented them?
Matthew Nock: That’s a great question, and I agree with the sentiment. There’s an old quote from W. Edwards Deming that I really like: “In God we trust; all others must bring data.” We can have beliefs and opinions, but many of these questions are ultimately empirical.
That’s one of the reasons I was drawn to science. I was a fairly oppositional kid, and science encourages questioning. There’s a system for evaluating what the data show, and we have to remain humble about that.
One thing that surprised me was the predictive power of some behavioral tests, particularly adaptations of the Implicit Association Test. Many people are familiar with the IAT, which was originally developed to examine implicit attitudes related to race, age, and gender. I worked with colleagues to develop a version focused on suicide and self-injury.
We brought that test into emergency department settings, and people’s reaction times to suicide-related stimuli predicted suicide attempts above and beyond self-report. That surprised me because translating lab-based measures into clinical settings often doesn’t work. In this case, it did—and it replicated.
We also tested other cognitive tasks, such as a version of the Stroop task. Initially, we saw promising results, but when we replicated the study, the effect disappeared. That first finding was likely a false positive. I appreciate that science emphasizes replication because it keeps us honest.
Another surprising line of work involved a behavioral intervention using evaluative conditioning. One of our postdoctoral fellows led this work. We paired suicide-related images with naturally aversive stimuli, such as snakes or spiders, through a simple matching task. Participants completed this task over about a month.
We saw significant reductions in self-injurious and suicidal behavior. I didn’t believe it at first, so we repeated the study. Same result. We did it a third time, again with the same result, and then published the set of studies together.
For something as difficult to treat as suicidal behavior, seeing a replicable effect like that was genuinely surprising and encouraging. Anytime we see something work consistently, especially in this area, it feels like progress.
Tonya Ferraro: Have you seen research findings move beyond the lab and actually change practice, training, or policy?
Matthew Nock: Not as much as I would like. I’ll be a little critical here, even though I’m ultimately optimistic.
We have seen the development of evidence-based treatments, such as cognitive behavioral therapy and dialectical behavior therapy, and those have helped. Some of our work has contributed to that literature. But I think we need to move faster.
Much of our current work focuses on implementation. That brings its own ethical and IRB questions—particularly around when research ends and clinical care begins. About 90% of our research now takes place in real-world clinical settings: emergency departments, psychiatric inpatient units, and similar environments.
We’re trying to take what we’ve developed in the lab and see whether it helps clinicians better identify who is at risk and intervene effectively in real time. We’re also doing a lot of work with new technologies, such as smartphones and wearable sensors, to monitor people during high-risk periods.
The goal is to identify not just who is at risk, but when risk increases—what day, what hour—and then deliver interventions at the right moment to help keep people safe.
That work raises difficult ethical questions. If we’re following someone at high risk for suicide, when should we intervene? At what threshold? Who should intervene? How should we intervene? Is intervening itself harmful?
If we want to test different approaches—for example, calling someone versus using an automated tool or chatbot—can we ethically randomize those responses in real time? These are hard questions, and they come up frequently in conversations with IRBs.
People have different levels of comfort with risk. Clinicians, researchers, and institutional counsel often see these issues differently. My responsibility, as I see it, is to do right by participants while using the privilege of research to generate knowledge that can meaningfully help people.
That’s my north star: advancing science in a way that is maximally beneficial to society while minimizing harm.

Catherine Batsford: So, when there’s a possibility of intervention, does the consent process look different?
Matthew Nock: The consent process is extremely important in studies involving suicide risk. We are very clear about what we are going to do, what we are not going to do, and when we will do it.
For example, in our monitoring studies, we install smartphone apps on participants’ phones. These participants are often recruited from hospitals after presenting with high suicide risk. We tell them explicitly that we monitor responses during certain hours—for example, between 9 a.m. and 9 p.m.—and that we will do our best to respond if we believe they are at imminent risk of harming themselves.
At the same time, we are very clear that participants cannot rely on us as a clinical service. We cannot promise that we will intervene in every situation. We can promise that we will try to intervene. Our formal standard is that we will intervene within 24 hours, although in practice our response time is often much faster, sometimes within 20 minutes.
We have real-time alerts when someone crosses a predefined risk threshold, and we follow a very detailed protocol that specifies exactly what actions we take at each level of risk. But we want participants to understand that this is not a replacement for clinical care. We are not a “Minority Report” system that will swoop in and prevent harm in every case.
If we randomize participants to different conditions, we explain that clearly during consent. We tell them that we are testing different approaches to responding or intervening. Transparency is essential.
When we first began doing this kind of monitoring work, I actually paused our research for a period of time. I wasn’t confident that we had clear standards for how and when to intervene, and at the time, the field didn’t either. Smartphones were relatively new in this context.
I reached out to colleagues at the National Institute of Mental Health and asked whether we could convene a consensus meeting. We brought together IRB members, researchers, clinicians, institutional counsel, representatives from funding agencies, and people with lived experience of suicidal thoughts and behavior.
We spent several days working through these issues in detail: what the consent process should include, when and how to intervene, how to use collateral contacts, and how to respond if we cannot reach someone who appears to be at high risk.
That work resulted in a consensus statement on the ethical conduct of research with people at high risk for suicide, which was published in 2021. We now use that guidance routinely, and we are considering whether it needs to be updated as technology and practice evolve.
When we encounter questions that could benefit from empirical evidence, we design studies to answer them. For example, we are actively studying whether it is more effective to have a human call someone at high risk or to use an automated or chatbot-based intervention.
Many people assume that a human call is always better, but there are reasons to question that assumption. An automated tool can respond immediately, whereas a human may take 20 minutes or longer. That time gap can matter.
We also see situations where a participant says, “Yes, I’m having thoughts of suicide, but I’m at work or at school right now, and I can’t talk.” In those cases, an automated interaction may be more appropriate in the moment.
So we are testing whether different approaches work better at different levels of risk or in different contexts, and whether the best response varies across individuals or even within the same individual over time.
Tonya Ferraro: I just want to pause and acknowledge that you stopped your research to address these ethical questions. When you’re close to a project and always feel behind, it’s hard to take that kind of pause. I really appreciate that you did that work, and that it contributed to broader standards in the field.
Matthew Nock: Thank you. It was anxiety-provoking to pause, especially when there is always pressure to keep moving. But this is real life. Statistically speaking, nearly everyone listening to this podcast has been touched by suicide in some way.
About 15% of people in the United States report having seriously considered suicide at some point in their lives, and about 5% report having made a suicide attempt. Many of us know someone—a family member, a friend, a colleague—who has struggled with suicidal thoughts or behavior, or who has died by suicide.
Because of that, I feel a tremendous responsibility to make sure we are doing this work carefully and ethically. Researchers and clinicians differ in how much risk they are willing to tolerate.
For example, we often use a 0-to-10 scale to assess risk. In our studies, we may set a threshold at 8 out of 10, above which we intervene. Some clinicians ask why we don’t intervene at 5, when risk becomes more likely than not. Others argue that we should not intervene at all, because intervention alters the natural course of behavior and turns an observational study into an intervention study.
For me, that approach doesn’t go far enough. One comment during our consensus meeting really stuck with me: “What if this were your child? Your spouse? Your sibling?” If someone reports a 10 out of 10 and we simply watch and wait, that doesn’t sit right with me.
Our thresholds are based on data and practical constraints, and we stand behind them, even though they are imperfect. I would rather accept some methodological compromise than fail to act when we have an opportunity to keep someone safe.
We have had situations where participants attempted suicide during monitoring, and we were able to locate them and connect them with emergency services. In one recent case, a participant overdosed, we intervened, and they survived. That reinforces why this work matters.
The goal is not just to publish papers, but to improve health and human functioning.

Catherine Batsford: How do you roll people off a study safely?
Matthew Nock: That’s a great question. We do it very carefully, and it’s something we plan for from the beginning. It’s also part of the consent process.
At the end of a study, we conduct debriefings and often include qualitative interviews. We ask participants what they liked about the study, what they didn’t like, and what their experience was like overall. We are very clear about when the study is ending and that we will no longer be monitoring them. We remove the apps from their phones and make sure they understand that the monitoring period has concluded.
One of the more challenging situations arises when parents are involved, particularly when we are following adolescents. I’m a parent of three children myself, so I understand this perspective well.
In one of our studies, we monitor adolescents for six months after discharge from a psychiatric hospitalization, which is an especially high-risk period. Over time, our statistical models have become increasingly accurate at identifying when risk is elevated.
It feels a bit like watching people grow up—not in a condescending way, but in a deeply gratifying one. Seeing students become independent scientists and professionals gives me a lot of hope for the future of the field.

Catherine Batsford: On the flip side, I’m a new IRB member. I’m seeing some of these studies come across my desk. What questions should I be asking? What should I be reassured by? What should I be looking for to better understand the work you’re trying to do?
Matthew Nock: That’s a great question. My main recommendation is communication—reach out to researchers and ask questions.
IRB members have an important role, and information is power. Ask researchers to explain what they’re doing, why they’re doing it, and how their methods align with current standards in the field. I’m often asked by IRBs to provide expert input on whether a particular approach is consistent with best practices, and I’m always happy to do that.
The regulations should be used as a guide, not a barrier. The more researchers understand what IRBs are responsible for, and the more IRBs understand the science, the easier and more collaborative the process becomes.
If researchers and IRBs approach each other with transparency and openness—rather than withholding information or assuming bad intent—it leads to better outcomes for everyone, especially participants.
Catherine Batsford: That’s fantastic. Tonya, do you have any more questions?
Tonya Ferraro: No, I think I’m good. This has been great.
Matthew Nock: It’s been wonderful talking with both of you. Sorry for rambling—I can go on forever if I’m not stopped.
Catherine Batsford: We could listen forever.
Matthew Nock: I appreciate that.
Catherine Batsford: I’d like to end on something you mentioned during your conference session. You talked about how suicide rates haven’t really changed over time. I found that both reassuring and concerning. Could you speak to that a bit?
Matthew Nock: Sure. People often say that suicide rates have increased dramatically over the past 20 years. That’s true—but they also decreased during the 20 years before that. Overall, the suicide rate today is very similar to what it was about 100 years ago.
Is that good or bad? It depends on how you look at it. The pessimistic view is that, unlike many other leading causes of death, suicide rates have not declined. Mortality from cancer, pneumonia, motor vehicle accidents, HIV, and other conditions has decreased substantially over time because science has advanced and those advances have been implemented.
Suicide hasn’t followed that same trajectory yet.
The optimistic view is that this tells us something important: when science advances and is translated into practice, outcomes improve. If you look at the past five to 10 years, our ability to identify suicide risk, predict when risk increases, and deliver effective interventions has improved significantly.
There are more people studying suicide now than ever before, and while funding is still insufficient, there is more attention to the issue. I’m concerned about the current funding environment for science overall, but suicide prevention is a bipartisan issue. No one wants to lose loved ones to suicide.
I’m optimistic that researchers, IRBs, clinicians, funders, policymakers, and communities can work together to turn the tide. Suicide disproportionately affects young people, and preventing those losses is worth sustained effort and investment.
Tonya Ferraro: That really speaks to the idea that research is a public good. There’s a cost to not doing research, especially during uncertain times.
Matthew Nock: There absolutely is. Suicide is one of the leading contributors to years of potential life lost because it affects young people so heavily. We can measure the costs of inaction, even if the benefits of research take time to emerge.
It’s in all of our interests to continue advancing science and improving care—not just for suicide, but for other conditions as well, such as substance use and alcohol use disorders. We don’t want people to suffer, and it’s our responsibility as a society to keep this work moving forward.
Catherine Batsford: Thank you so much, Matthew.
Matthew Nock: Thank you. And thank you to PRIM&R for focusing on this topic. Suicide is not something people naturally lean into, but shining a light on difficult issues is one of the best ways to reduce stigma and make progress. I’m grateful for the opportunity to be part of this conversation.
Research Ethics Reimagined guests are esteemed members of our community who generously share their insights. Their views are their own and do not necessarily reflect those of PRIM&R or its staff.