Frequently Asked Questions

The following is a compilation of frequently asked questions (FAQs) about established review procedures and criteria, updates from the latest round of reviews, and a summary of findings. More detailed information on the specific process and criteria used to conduct the review can be found on the Review Process page. If you have a question that is not answered here, please email us at youthgov@air.org.

Last Updated July 2023.

General Questions About The Review

What is the Teen Pregnancy Prevention Evidence Review?

Since 2009, the U.S. Department of Health and Human Services has sponsored a systematic review of the teen pregnancy prevention research literature. This review helps identify programs with evidence of effectiveness in reducing teen pregnancy, sexually transmitted infections and HIV, and associated sexual risk behaviors. The main purpose of the Teen Pregnancy Prevention Evidence Review (TPPER) is to review research to examine study quality and assess whether program models have demonstrated positive impacts on sexual risk behavior and sexual health outcomes. These programs reflect a variety of approaches in the field (for example, positive youth development, sexual health education, sexual risk avoidance, clinic-based programs, and healthy relationships). In addition to being a resource to organizations that work to prevent teen pregnancy, the TPPER is used by the Office of Population Affairs’ Teen Pregnancy Prevention (TPP) grant program and the Administration for Children and Families’ Personal Responsibility Education Program (PREP) to inform which program models can be selected for replication by grantees.

The TPPER is managed by the Office of the Assistant Secretary for Planning and Evaluation in collaboration with the Office of Population Affairs, and the Administration for Children and Families’ Family and Youth Services Bureau within the U.S. Department of Health and Human Services. The TPPER is conducted through a contract with Mathematica.

Who sponsors the Teen Pregnancy Prevention Evidence Review?

The TPPER is a joint effort sponsored by three agencies in the U.S. Department of Health and Human Services: the Office of the Assistant Secretary for Planning and Evaluation, the Office of Population Affairs within the Office of the Assistant Secretary for Health, and the Family and Youth Services Bureau within the Administration for Children and Families.

How many program models meet the Teen Pregnancy Prevention Evidence Review criteria for showing evidence of effectiveness?

Fifty-two program models have evaluation studies that meet the Teen Pregnancy Prevention Evidence Review criteria for evidence of program effectiveness. The program models represent a variety of program approaches, including sexual risk avoidance, sexual health education, clinic-based, healthy relationship, and positive youth development approaches.

When were the reviews conducted?

The findings from the initial review were released in spring 2010 as part of the former Office of Adolescent Health (now Office of Population Affairs) Teen Pregnancy Prevention grant announcement. The review findings are updated periodically as new research emerges. Findings from the most recent update were released in spring 2023 and cover a subset of studies identified from October 2016 through May 2022.

How does the U.S. Department of Health and Human Services use the results of the review?

Within the U.S. Department of Health and Human Services, the Office of Population Affairs’ Teen Pregnancy Prevention (TPP) program and the Administration for Children and Families’ Personal Responsibility Education Program rely on the TPP Evidence Review findings to inform which program models can be selected for replication by grantees. Please contact the appropriate program office for questions about whether specific program models are eligible for federal funding.

Review Procedures and Criteria

What criteria did the U.S. Department of Health and Human Services use to conduct the review?

In developing the review criteria, the U.S. Department of Health and Human Services (HHS) drew on evidence standards used by several well-established evidence assessment projects and research and policy groups, such as the What Works Clearinghouse, Blueprints for Healthy Youth Development, and the National Registry of Evidence-Based Programs and Practices. Based on standards used in these other processes, this review defined the criteria for the quality of an evaluation study and the strength of evidence for a particular intervention. Using these criteria, HHS then defined a set of rigorous standards an evaluation must meet in order for a program to demonstrate evidence of effectiveness. HHS reviews and updates the standards periodically to stay current with best practices in evidence reviews. Each iteration of the protocol is available here on the Teen Pregnancy Prevention Evidence Review website. The latest protocol is version 6.0 (PDF, 22 pages). This document summarizes how the standards changed between versions 5.0 and 6.0.

How did the U.S. Department of Health and Human Services define high and moderate quality study ratings?

The high study quality rating was reserved for randomized controlled trials with low rates of sample attrition, no reassignment of sample members, no systematic differences in data collection between the research groups, and more than one subject or group (school, classroom, and so on) in both the intervention and comparison conditions. The moderate study quality rating was considered for studies using quasi-experimental designs and for randomized controlled trials that did not meet all the review criteria for a high-quality rating. To meet the criteria for a moderate study quality rating, a study had to demonstrate baseline equivalence of the intervention and comparison groups on race, age, and gender; report no systematic differences in data collection between the research groups; and have more than one subject or group (school, classroom, and so on) in both the intervention and comparison conditions. Studies based on samples of youth ages 14 or older also had to demonstrate baseline equivalence of the intervention and comparison groups on at least one behavioral outcome measure.

Who conducted the reviews?

Trained researchers from Mathematica conducted the reviews. Two team members assessed each impact study; the first member conducted a detailed review of the study following a protocol developed by Mathematica and approved by a U.S. Department of Health and Human Services interagency work group; the second member assessed and verified the review for accuracy and completeness.

Did the review require studies to have a randomized controlled trial evaluation?

No. In addition to studies that use a randomized design, the review considered quasi-experimental (also known as matched comparison group design) studies that do not employ random assignment.

Did the review look only at U.S. studies?

Yes. The review was limited to studies of programs serving youth in the United States.

Did the age criterion of 19 or younger refer to the age of participants at the time of initial intervention or the maximum age of program participants?

The age criterion of 19 or younger refers to age at the start of the intervention. Participants might have been older than 19 during the study period or when outcome measures were assessed.

How did you handle outcome measures of poor or questionable quality?

Measures with serious limitations in terms of their validity or interpretation were excluded. For example, the review did not consider reports from males of their female partners’ use of birth control pills, or scales of behavioral risk that combine multiple measures into a single outcome.

Did the review consider findings for subgroups of participants?

Yes. In addition to findings for the full study sample, the review considered findings for two subgroups based on (1) gender and (2) sexual experience at baseline. The review considered the same outcome measures for these subgroups as for the full study sample—namely, measures of sexual risk behavior and its health consequences.

Why wasn’t race or ethnicity included as one of the priority subgroups?

The subgroup assessment was limited to address concerns about “multiple comparisons” or “multiple hypothesis testing.” As the number of subgroups examined in a particular study increases, the probability of finding a statistically significant impact also increases, just by chance. To address this issue, we chose to limit the number of subgroups considered for providing evidence of effectiveness. When selecting these subgroups, there were many relevant options to consider, such as race/ethnicity, gender, sexual experience, socioeconomic status, family structure, and many others. The U.S. Department of Health and Human Services ultimately chose to focus on gender and baseline sexual experience as the two subgroups for the review to consider. Moreover, in many studies, the sample sizes are too small to assess impacts separately by race or ethnicity. Future rounds of review will consider ways to include more subgroups.
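The multiple-comparisons concern described above can be illustrated with a short calculation. As a sketch (not part of the review methodology), if each subgroup test is run independently at a significance level of 0.05, the probability of at least one spurious "significant" finding is 1 − 0.95^k for k tests:

```python
# Illustrative only: probability of at least one false-positive
# "significant" result across k independent hypothesis tests,
# each conducted at significance level alpha.
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Chance that at least one of k independent tests is
    spuriously significant: 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k

# With 1 test the chance is 5%; with 10 subgroup tests it
# exceeds 40%, which is why reviews limit subgroup analyses.
for k in (1, 2, 5, 10):
    print(k, round(familywise_error_rate(k), 3))
# → 1 0.05
# → 2 0.098
# → 5 0.226
# → 10 0.401
```

This simplified calculation assumes the tests are independent; in practice subgroup outcomes are correlated, but the qualitative point stands: more subgroups mean a higher chance of chance findings.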

Do the studies have to appear in peer-reviewed journals in order to be included in the review?

No. The review is not limited to peer-reviewed journal articles. The review includes studies reported as part of book chapters, government reports, unpublished manuscripts, or other documents.

By not restricting the pool of eligible studies to peer-reviewed publications, we are able to identify more recently evaluated studies. In addition, not all peer-reviewed publication venues are the same in terms of quality of the review. Instead, we focus on the quality of the evaluation study and assess the impacts based on the established review criteria. We require authors of unpublished reports to provide the complete information needed to assess the quality of the evaluation study and its outcomes, and we ask them to make the report available to the public upon request if it is not formally published.

What outcomes did the review consider when examining evidence of effectiveness?

The U.S. Department of Health and Human Services determined that program models with evidence of effectiveness must demonstrate evidence of a favorable, statistically significant impact on at least one of the following outcomes: sexual activity (initiation; frequency; or rates of vaginal, oral, and/or anal sex); number of sexual partners; contraceptive use (consistency of use or one-time use, for condoms or another contraceptive method); sexually transmitted infections or HIV; or pregnancy or birth. It is possible that programs effective in influencing these behaviors also affect other types of adolescent health-risk behaviors. However, to be included in the review, programs must examine program impacts on at least one measure of sexual behavior or its health consequences.

Does the Teen Pregnancy Prevention Evidence Review discuss the implementation readiness of program models?

The Teen Pregnancy Prevention Evidence Review describes the components of implementation readiness, including implementation requirements and guidance (including training materials and resources), and allowable adaptations.

Will there be changes to the Teen Pregnancy Prevention Evidence Review criteria in the future?

The Teen Pregnancy Prevention Evidence Review contract is ongoing, and before each round of review we consider changes to the eligibility, quality, and effectiveness criteria to stay current with the field. Therefore, there may be changes to the criteria in the future.

Review Findings

Is there a list of studies that were reviewed and did not meet the U.S. Department of Health and Human Services’ criteria for evidence of effectiveness?

A full list of the studies reviewed through May 2022 is available under the Reviewed Studies section of the website.

How do I determine if a program has evidence of effectiveness?

Only programs with current evidence of effectiveness appear under the “Find a Program” tab. If a program is marked “Inactive,” it previously had evidence of effectiveness but no longer meets the eligibility criteria because the program is not publicly available, implementation is no longer supported, or the only evidence of effectiveness is more than 20 years old.

To determine if a program has current evidence of effectiveness, go to the “Find and Compare” feature to select one or more programs of interest. You can also use the built-in filters—for instance, to identify programs designed to serve a particular population or designed to be offered in a particular setting.

The evidence for each program is presented by outcome domain. The quantity, shape, and color of the symbols provide a summary of the evidence in the domain (for instance, favorable evidence or conflicting evidence). The size of the shape indicates the number of studies with a moderate or high rating that examined outcomes in that domain (not just studies that showed favorable effects in the domain). There is a key on the Find and Compare page that explains the symbols and ratings.

Where can I find information about the effect sizes for each of the relevant behavioral outcomes?

The review team extracted effect size information (when available) for all reviewed studies and assembled it in a Microsoft Excel file. In addition, in 2014, the review team released a research brief summarizing findings from an effort to collect and report program effect size information from studies included in the Teen Pregnancy Prevention Evidence Review. Visit the Publications section of the website to download the research brief and the associated effect size Microsoft Excel file.

Why do the findings of the U.S. Department of Health and Human Services Teen Pregnancy Prevention Evidence Review differ from similar reviews I’ve seen from other groups?

Each evidence review used a slightly different set of procedures and criteria. Although there is usually overlap across reviews in which program models meet evidence standards, differences in the criteria used to screen and assess studies might lead to some difference in the program models identified as evidence based. In addition, each evidence review might have assessed a study’s evidence of effectiveness on a different outcome. For example, a positive youth development program might have been reviewed as part of the Teen Pregnancy Prevention Evidence Review for evidence of effectiveness on contraceptive use behavior, and the same program might be included in another evidence review to assess impacts on educational outcomes, violent behavior, or other outcomes.

Why aren’t there more programs with evidence of effectiveness for high-risk populations such as Latinos and Native Americans?

The review does not aim to identify programs that serve any specific population. Rather, the goal of the review is to identify programs with the strongest evidence of effectiveness. If certain high-risk populations are underrepresented across the reviewed studies, that points to a gap in the research literature and a need for additional research to identify effective programs for these groups. If you are aware of studies of programs for high-risk populations that have not been reviewed, please submit them during the next call for studies period, expected to open in 2024.

I don’t understand or I disagree with the rating a study received.

The About the Review section of the website provides a detailed description of the process used to determine the study ratings and an explanation of the criteria. This information should help explain the rating given to any particular study.

I’m looking for a particular program with evidence of effectiveness, but it does not show up. Why was this particular program excluded?

There are six reasons a program might not have met the Teen Pregnancy Prevention Evidence Review criteria for evidence of effectiveness.

  1. The program was not evaluated.
  2. The review team did not identify any studies of the program during the literature search or through the call for papers.
  3. The program might have been evaluated, but the study did not meet the screening criteria (for example, it did not examine an eligible outcome).
  4. The program’s evaluation was not sufficiently rigorous.
  5. The program did not provide evidence of positive impacts on one of the key outcome measures for the full study sample or a priority subgroup. The study would, however, be included in the list of reviewed studies.
  6. The evaluation provided evidence of negative impacts on one of the key outcome measures.

Plans For Updating The Review

How often will the review be updated?

The Teen Pregnancy Prevention Evidence Review program is currently active; studies identified through May 2022 were included in the summary of findings released in April 2023. The “Find a Program” tab and online database have been updated accordingly, and more detailed descriptions of the new programs are summarized in program profiles posted to the website in summer 2023. We expect to update the website with findings from the next round of review in 2024.

When new programs are deemed evidence-based, will they be considered eligible for funding under future replication grant funding opportunities if new money becomes available?

It is up to the individual program offices to determine whether grantees will be able to replicate program models that show evidence of effectiveness in the future. Though the findings from the Teen Pregnancy Prevention Evidence Review might be used to inform any future funding announcements, program offices might consider additional factors when deciding which programs are eligible for replication.

Will future rounds of the Teen Pregnancy Prevention Evidence Review assess research from new studies of program models that have already been reviewed and recognized as evidence-based?

Yes. As the U.S. Department of Health and Human Services identifies new studies of existing evidence-based programs, it incorporates those studies into the review process so they can contribute to the evidence base for the programs. The website will be updated with findings from the next round of review in 2024.

Can I submit a study for review?

Yes. We will post a call for studies on the home page of this website for the next round of review when it opens. In addition, a notice will go out to those who sign up for the youth.gov newsletter. We expect to open the next call for studies in 2024.