Free Educational Resources

Ethics of Research with Human Subjects in the Digital Age   

A PRIM&R and Drexel University Collaboration

Today, researchers in a wide range of fields, including the biomedical, behavioral, cognitive, educational, and social sciences, use a variety of digital technologies to recruit human research participants, implement interventions, collect and analyze data, and disseminate findings, a trend that has been amplified during the current COVID-19 pandemic. Examples of relevant digital technologies, which are also ubiquitous in everyday life, include mobile phones/smartphones, wearable computing devices, social networking platforms, digital body sensors, and health apps. Artificial intelligence is now commonly embedded in digital applications, and algorithms and models evolve based on the personal information that people provide through their use of these devices and technologies, enabling personalized experiences in daily activities such as shopping, reading news articles, and watching movies. Furthermore, people’s perceptions about the privacy of their personal information continue to evolve with the pervasive use of these technologies.

The scale at which, and the manner in which, data can be collected and analyzed for research using digital technologies differ significantly from traditional in-person laboratory experiments. The growth of digital technologies has also afforded researchers increased access to a more diverse group of potential research participants, as well as expanded opportunities to study a broad range of complex human behaviors unobtrusively, as they occur in real time. However, the use of these digital technologies, in a rapidly changing landscape in which personal information is routinely collected, analyzed, managed, and shared, combined with people’s changing perceptions about the privacy of personal information, raises questions about the suitability of the prevailing ethical framework for research with human participants – the Belmont Principles. Thus, this project aims to systematically explore whether the current ethical framework is adequate for protecting human research participants in the digital age and to develop educational resources for the scientific community and the public.

An interdisciplinary working group will explore fundamental questions, including:

(1) How do current ethical, legal/regulatory, and social frameworks address (or fail to address), whether in degree or in kind, the use of digital technologies in human research?

(2) How have people’s perceptions about sharing personal information changed with the ubiquitous use of digital technologies, and how will those evolving privacy norms affect the ethics of research using digital technologies?

(3) How has the ubiquitous use of algorithms—that is, artificial intelligence—in everyday digital technologies affected the ethical dimensions of human research?

(4) What is the proper ethical framework for addressing the use of digital technologies when conducting research with human participants who have a wide range of technology literacy and privacy perceptions?

The outcomes of the project will be broadly disseminated to faculty researchers, research staff, students, regulatory agencies, professional societies, academic institutions, and the community through a public conference, informational videos, webinars, informational brochures, and guidelines for training platforms. All project progress will be publicly disseminated through the Public Responsibility in Medicine and Research (PRIM&R) website, podcasts, blogs, and social media.

Steering Committee:

Co-Chairs: Jina Huh-Yoo (Drexel University) & Sangy Panicker (PRIM&R)

Members: Barbara Barry (Mayo Clinic), Edward Kim (Drexel University), Maria Marquine (University of California, San Diego), Dena Plemmons (University of California, Riverside), Joshua August Skorburg (University of Guelph), and Logan DA Williams (Independent Consultant)

Working Group Members:

Jonathan Beever (University of Central Florida), Linda Charmaraman (Wellesley College), Kay Connelly (Indiana University), Jonathan Herington (University of Rochester), Stephen Hilgartner (Cornell University), Judy Illes (University of British Columbia), Bonnie Kaplan (Yale University), and Marion Underwood (Purdue University).

This project is funded by the National Science Foundation (NSF) [Award # (FAIN) 2124894].

Impact of Ubiquitous Digital Technologies Workshop

Wednesday, February 22, 2023  

Researchers in a variety of fields, including the biomedical, behavioral, cognitive, educational, and social sciences, have leveraged digital technologies to recruit human participants, implement interventions, collect and analyze data, and disseminate findings, a trend that has been amplified during the current COVID-19 pandemic.

The scale at which, and the manner in which, information from emerging digital technologies can be collected and analyzed for research differ greatly from traditional in-person laboratory experiments. Artificial intelligence is now commonly embedded within smartphone applications, and algorithms and models continuously evolve with the personal information that people provide through their use of digital technologies. The changing landscape in which personal information is collected, analyzed, and shared, together with people’s changing perceptions regarding personal information, raises questions about the suitability of the prevailing ethical framework for research with human participants.

Research Ethics & Oversight in the Digital Age Webinar

October 18, 2022 

Over the last decade, advances in digital technology and the proliferation of mobile devices have afforded researchers unprecedented access to massive amounts of data generated by users of these technologies. These advances have also provided researchers access to a more diverse group of potential participants than was possible in traditional laboratory- or clinic-based research. The digital revolution has also expanded opportunities for researchers to study a wide range of complex human behaviors unobtrusively, as they occur in real time.

However, the use of these digital technologies, in a rapidly changing landscape in which personal information is routinely collected, analyzed, managed, and shared, combined with people’s changing perceptions about the privacy of personal information, raises questions about the suitability of the prevailing ethical framework for research with human participants – the Belmont Principles. Thus, researchers and institutional review boards (IRBs) could benefit from guidance on the oversight as well as the conduct of research using digital technologies. In this webinar, we will discuss several ongoing projects to address this need, including a research collaboration between PRIM&R and Drexel University.

An overview of the issues and challenges facing the research enterprise, and of PRIM&R's project, will be accompanied by a presentation of current resources and tools available to IRBs and others reviewing studies that incorporate these digital-age approaches.

Responsible Management & Sharing of American Indian/Alaska Native Participant Data under NIH Policy Webinar

Learn how to create a data-sharing plan that operationalizes flexibilities built into the NIH Data Management and Sharing (DMS) Policy to respect Tribal sovereignty. Discuss considerations and best practices for responsible, respectful management and sharing of AI/AN participant data under the NIH DMS Policy.

More than Meets the IRB

PRIM&R and the Human Research Protection Office at Washington University in St. Louis (WU) have partnered to create More than Meets the IRB: A joint initiative of Washington University in St. Louis and PRIM&R, a series of relevant and educational podcasts provided at no cost to an ever-growing audience of research ethics professionals and lay people.

Subscribe to the podcast on iTunes or Stitcher, listen on Google Play Music, or sign up to be notified when a new podcast is available.