Evaluation

Evaluating your home visiting program

A presentation to support your team in evaluating your home visiting program (PowerPoint, 3.71 MB)


Building capacity for home visiting evaluation through relational technical assistance

A presentation to support your team in building evaluation capacity (PowerPoint, 8.37 MB)


Building Capacity to Conduct Scientifically and Culturally Rigorous Evaluations

A presentation to support your team in building evaluation capacity (PowerPoint, 3.43 MB)


Evaluation in Practice: Developing an Evaluation Plan With Scientific and Cultural Rigor

An interview with Native American Professional Parent Resources, Inc. (NAPPR) staff


What is your evaluation question?

Do Native families participating in tribal home visiting that receive a culturally enhanced version of Parents As Teachers (PAT) (parent-child activities and family group connections) demonstrate increases in cultural self-efficacy, cultural interest, and cultural connectedness compared with Native families that receive standard (non-culturally enhanced) PAT through Early Head Start?

How did you balance cultural and scientific rigor when developing your evaluation plan?

First, it took time to develop internal evaluation capacity and mutual understanding between university evaluators and NAPPR staff. It was important for us to allow ample time to form trusting relationships and build shared ownership and investment in the research process. It also took time for the program to stabilize so that outcomes could be evaluated effectively.

We had to find the right study focus and research question. Once we determined that our outcome of interest would be “cultural connectedness,” we had to decide how we were going to measure such a complicated construct. We chose to develop our own measure, consulting with and drawing on the work of other researchers. In the process of developing cultural enhancements, we had to navigate tribal governance systems: Who has authority to call a culturally enhanced activity “Pueblo”? How can someone get that authorization? Consulting with our home visiting model developer about enhancing the curriculum to include culturally tailored home visit and group activities took time as well.

Throughout the process, we consulted with our program’s community advisory board, parent advisory group, and staff. There was an ongoing feedback loop with these groups. We wanted their input and consultation at every stage of development, so the study became a regular item on meeting agendas.

How did your commitment to balancing cultural and scientific rigor influence decisions you made about the evaluation?

Balancing cultural and scientific rigor was a study-long process. We questioned what we could do to be more culturally responsive at each step of the way. For example, because we serve a population that is tribally diverse, we decided against developing tribe-specific cultural activities. Instead, we developed intertribal activities that would appeal to participants from different tribes, with prompts for families to share their own tribal values and traditions. Because we designed our intervention to be intertribal, we decided that our home visitors would act not as teachers but as facilitators of cultural activities. This was important for our evaluation, because it meant the intervention would vary somewhat from family to family. Having a tribally diverse population also meant the definition of “cultural connectedness” could vary among participants. We worked hard to develop survey language relevant to participants from a range of tribes. We also built focus groups into our evaluation design, in addition to surveys, to capture the diverse ways participants perceive and experience cultural connectedness.

How did TEI help?

TEI (the Tribal Evaluation Institute) helped us understand federal expectations and supported us in finding the right evaluation focus for our program and outlining a preliminary evaluation plan. TEI also supported us in achieving a good balance of cultural and scientific rigor, often by asking questions that prompted us to rethink proposed approaches and reach for greater rigor, but also by acknowledging our progress and successes along the way.


Data Collection in the Home: A TEI Toolkit


The data collection toolkit was developed to support data collection with American Indian and Alaska Native (AIAN) families in their homes. Guided by years of work providing technical assistance to Tribal Home Visiting Program grantees, the toolkit addresses common grantee needs and challenges. Although it was designed for grantees—including program managers, evaluators, home visitors, and other staff—it also may be useful for early childhood programs and others who serve AIAN communities. Throughout, the toolkit supports data collection that is culturally rigorous.

The toolkit was designed to help programs—

  • Understand the value of data collection
  • Prepare for data collection
  • Collect high-quality data
  • Use tools to develop data collection processes, collect data, and implement quality assurance

Why Data Collection Is Important

Good decisions are driven by good data—information that is consistent, accurate, and complete. Quality data help programs tell stories about participating families, services, and outcomes that they can rely on to inform decision making.

Data collection has never been more important. Programs need data to apply for increasingly competitive funding opportunities, meet ambitious grant reporting requirements, and address participants’ needs with evidence-based strategies. Tribal Home Visiting Program grantees are required to collect data for continuous quality improvement, performance measurement, and program evaluation.
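Quality checks like these need not be manual. As a purely illustrative sketch (not part of the toolkit), the short program below applies the “consistent, accurate, and complete” standard to a small table of hypothetical home visit records; every field name and threshold in it is invented.

```python
# A minimal sketch of automated quality checks on home visit records.
# All field names (family_id, visit_date, birth_weight_grams) and the
# plausibility thresholds are hypothetical, not taken from the toolkit.
import pandas as pd

def check_visit_records(df: pd.DataFrame) -> list[str]:
    """Return human-readable warnings about completeness, accuracy, and consistency."""
    warnings = []

    # Completeness: flag required fields with missing values.
    for col in ["family_id", "visit_date", "home_visitor"]:
        n_missing = int(df[col].isna().sum())
        if n_missing:
            warnings.append(f"{n_missing} record(s) missing {col}")

    # Accuracy: flag implausible values (range shown is illustrative).
    bad = df[(df["birth_weight_grams"] < 300) | (df["birth_weight_grams"] > 6500)]
    if len(bad):
        warnings.append(f"{len(bad)} implausible birth weight(s)")

    # Consistency: flag duplicate visits for the same family on the same day.
    dupes = int(df.duplicated(subset=["family_id", "visit_date"]).sum())
    if dupes:
        warnings.append(f"{dupes} duplicate visit record(s)")

    return warnings

records = pd.DataFrame({
    "family_id": ["F01", "F02", None],
    "visit_date": ["2021-03-01", "2021-03-01", "2021-03-02"],
    "home_visitor": ["HV-A", "HV-B", "HV-A"],
    "birth_weight_grams": [3200, 250, 3400],
})
for w in check_visit_records(records):
    print("WARNING:", w)
```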

How To Navigate and Use the Toolkit

Introduction to the Data Collection Toolkit

The purpose of the toolkit, its intended audiences, and how to use it. Download the Introduction in Microsoft Word format (.docx, 872 KB).

Module 1: Understanding the Value of Data Collection

Training staff on the basics of data and how they can collect and use data. Download Module 1 in Microsoft Word format (.docx, 919 KB).

Module 2: Preparing for Data Collection

Planning and building a foundation to collect quality data. Download Module 2 in Microsoft Word format (.docx, 919 KB) and additional resources used in Module 2: Activity 2.2 Jeopardy Game (PowerPoint) and Tool 2.4 Data Collection Scheduler (Excel).

Module 3: Collecting High-Quality Data

Supervising data collection and implementing quality assurance. Download Module 3 in Microsoft Word format (.docx, 919 KB) and an additional resource used in Module 3: Tool 3.11 Inter-Rater Agreement (Excel).
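Tool 3.11 packages inter-rater agreement calculations in Excel. For readers who prefer to see the arithmetic, here is a standalone sketch (not the tool itself) computing two common agreement statistics, simple percent agreement and Cohen's kappa, on hypothetical ratings from two raters.

```python
# Illustration of the arithmetic behind inter-rater agreement.
# The ratings below are hypothetical.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Share of items on which the two raters gave the same rating."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two raters coding the same ten home visits.
a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "yes", "yes"]
print(f"Percent agreement: {percent_agreement(a, b):.0%}")  # 80%
print(f"Cohen's kappa:     {cohens_kappa(a, b):.2f}")       # 0.47
```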

Toolkit Modules Representing Stages of Data Collection

[Graphic: TEI toolkit modules]

Intended Audiences

Program managers

Program managers may deal with data collection from planning and oversight to data entry and analysis. They typically make decisions and ensure that staff understand their role in data collection and are trained and supported. Open communication between program managers and staff is crucial for troubleshooting challenges. Program managers are often asked to present data to stakeholders and funders, so they must have a solid understanding of why data collection is important and how it works.

Data coordinators

Data coordinators (also called data managers) play a critical role in collecting, entering, managing, and reporting data. Data coordinators help home visitors keep track of which forms need to be filled out and when. Having a data coordinator to focus on data-related tasks maximizes the time home visitors and program managers can spend serving families.

Evaluators

Like program managers, evaluators ensure that staff appropriately use, interpret, and store data. They develop and implement guidelines for administering and interpreting evaluation instruments. Examples include writing data collection protocols, establishing consent processes, identifying and reviewing instruments, and selecting data systems. Evaluators may support data entry and analysis, data quality reviews, and reporting. They also help promote collaborative community-based evaluation practices.

Home visitors

Home visitors are the faces of the home visiting program in the community, and they are typically responsible for collecting data from program participants. Home visitors help ensure that the program collects high-quality data in a way that is comfortable for the families the program serves. They are often tasked with explaining data collection to families, administering data collection instruments, entering data into databases, and communicating assessment results to the families served by the program.



Using PICO To Build an Evaluation Question


PICO[1] is a framework that can help evaluators and programs develop a concise but rigorous evaluation question. A PICO question can tell you in just a few words what you aim to learn from an evaluation.
PICO stands for—

  • POPULATION: the target population that will participate in the intervention and evaluation
  • INTERVENTION: the intervention to be evaluated
  • COMPARISON: the comparison that will be used to see if the intervention makes a difference
  • OUTCOMES: the outcomes you expect the intervention to achieve


Why the PICO Framework Is Helpful

The PICO framework can help your team develop an evaluation question that contains the key components of a rigorous evaluation. One of these key components is a strong theory behind what your program is trying to achieve. By incorporating the Population, Intervention, Comparison, and Outcomes into the evaluation question, PICO can help your team think through the following questions:

  • Is the Intervention a good fit for the target Population?
  • Is the Intervention likely to produce these Outcomes?
  • Will the Comparison help us understand whether it was the Intervention, or possibly something else, that produced the Outcomes?

PICO helps teams develop an evaluation question with standard components and identify an appropriate evaluation design by determining the comparison that will be used. A PICO question includes key information about your evaluation in a short summary, making it a useful format to share with others.

EXAMPLES

Do families participating in home visiting (P) that meet regularly with parent mentors (I) keep more home visiting appointments and stay in the program longer (O) than families who do not meet with parent mentors (C)?

Population: Families participating in home visiting services
Intervention: Home visiting services that include meeting regularly with parent mentors
Comparison: Families that receive home visiting services but don’t meet with parent mentors
Outcomes: Increased retention and dosage (i.e., families stay in the program longer and keep more appointments)

Do women who are pregnant with their first child (P) who receive home visiting services (I) experience better birth outcomes (O) compared with pregnant women who gave birth at the clinic before home visiting was implemented (C)?

Population: Women pregnant with their first child
Intervention: Home visiting services
Comparison: Pregnant women who gave birth at the clinic before the program was implemented
Outcomes: Birth outcomes (e.g., birth weight, gestational age)
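Because a PICO question always has the same four components, it lends itself to a simple structured representation. The sketch below is hypothetical (not a TEI tool); it stores the components of the first example above and assembles them into a one-sentence question.

```python
# A tiny sketch showing how PICO components can be kept as a structured
# record and assembled into a one-sentence evaluation question.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P: who participates in the intervention and evaluation
    intervention: str  # I: what is being evaluated
    comparison: str    # C: what the intervention is compared against
    outcomes: str      # O: what the intervention is expected to achieve

    def as_question(self) -> str:
        return (f"Do {self.population} (P) who receive {self.intervention} (I) "
                f"show {self.outcomes} (O) compared with {self.comparison} (C)?")

q = PicoQuestion(
    population="families participating in home visiting",
    intervention="regular meetings with parent mentors",
    comparison="families who do not meet with parent mentors",
    outcomes="higher retention and dosage",
)
print(q.as_question())
```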


How TEI Supports Grantees in Using the PICO Framework

TEI has initial discussions with each Tribal Home Visiting Program grantee about the PICO format during the program planning phase. The discussions typically include program staff, evaluators, advisory board members, and other program partners. TEI often helps facilitate a discussion about what the team wants the program to do, whom it should serve, and what it can accomplish. The team also begins to think about what type of comparison might work for their evaluation and be appropriate for their community.

Later, grantees refine their thinking until they have a feasible evaluation question using the PICO format that reflects the interests of the community and meets the grant requirements. This process typically involves gathering input from a community advisory board, elders, or tribal leaders. Grantees then develop a one-page summary of the evaluation design, measures, data collection plan, and analysis. Next, they move on to develop a full evaluation plan. TEI supports grantees throughout this process as determined by local need and interest.


RESOURCES

Learn how the PICO approach has been applied in the Children’s Bureau’s Permanency Innovation Initiative: The PII Approach: Building Implementation and Evaluation Capacity in Child Welfare (PDF, 1.2 MB)

View materials from a presentation on how TEI has used PICO to help grantees develop evaluation questions.

Develop a PICO question for two evaluation scenarios in this exercise: TEI Exercise: Developing a PICO Question (Word, 22 KB)


FOOTNOTE

[1] Testa, M., & Poertner, J. (Eds.). (2010). Fostering accountability: Using evidence to guide and improve child welfare policy. New York, NY: Oxford University Press.


Evaluating Tribal Home Visiting Using Single Case Design

Single case design (SCD) is a scientifically rigorous research method used to measure the impact of an independent variable (or intervention) on single “cases” of study. A basic SCD usually has the following key features:[1]

  • The unit of intervention and analysis includes individual cases, which can be a single participant (e.g., an adult or child) or a cluster of participants (e.g., classroom, community).
  • Each case in the study serves as its own comparison, so that dependent variables (or targeted behaviors) are measured repeatedly on the same case prior to the intervention and compared with measurements taken during and after the intervention.
  • The dependent variable is measured repeatedly within and across different phases or levels of the intervention to allow for identification of patterns.

Data points for each case are graphed to compare an individual behavior across intervention phases and analyze the relationship between the independent and dependent variables.

Example: Single Case Design Study Results

[Chart: example results from a single case design study]
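A chart like the one above can be produced with a few lines of plotting code. In the illustrative sketch below, the data are invented and follow a basic A-B design: a baseline (A) phase before the intervention and an intervention (B) phase during it.

```python
# A minimal plotting sketch with made-up data for one case: repeated
# measurements of a target behavior across a baseline (A) phase and an
# intervention (B) phase, in the style of a basic A-B single case design.
import matplotlib.pyplot as plt

baseline = [2, 3, 2, 3, 2]         # sessions 1-5, before the intervention
intervention = [4, 5, 6, 6, 7, 7]  # sessions 6-11, during the intervention

sessions_a = list(range(1, len(baseline) + 1))
sessions_b = list(range(len(baseline) + 1, len(baseline) + len(intervention) + 1))

plt.plot(sessions_a, baseline, "o-", label="Baseline (A)")
plt.plot(sessions_b, intervention, "s-", label="Intervention (B)")
plt.axvline(len(baseline) + 0.5, linestyle="--", color="gray")  # phase change
plt.xlabel("Session")
plt.ylabel("Target behavior (count)")
plt.title("Single case design: one case, two phases")
plt.legend()
plt.show()
```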

Why Choose SCD?

SCD is an appropriate method when the targeted behavior (i.e., the dependent variable) is sensitive to change and defined precisely enough to allow consistent, repeated measurement. It is also appropriate for studies with samples too small to give a statistical analysis the power to detect an effect when there is one. SCD works well for some Tribal Home Visiting Program grantees that serve a limited number of families.

In addition to being a good fit for small sample sizes, SCD is an alternative to traditional experimental comparison designs, which require one group to receive an intervention and another “control” group to not receive it. Some grantees feel that withholding a service from families for research purposes is not appropriate, so they avoid experimental designs unless a naturally occurring control group is available in the community. In many ways, SCD aligns well with the inclusive cultural beliefs of tribal communities, because each participant receives the intervention and serves as his or her own comparison.
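Because each case serves as its own comparison, SCD results are often summarized with simple within-case statistics. One widely used metric is the percentage of non-overlapping data (PND): the share of intervention-phase points that exceed the highest baseline point (for a behavior expected to increase). The sketch below uses invented data; the source does not specify which analyses grantees used.

```python
# Percentage of non-overlapping data (PND) for one case in an A-B design:
# the share of intervention-phase points above the highest baseline point.
# Data are hypothetical.
def pnd(baseline: list[float], intervention: list[float]) -> float:
    ceiling = max(baseline)
    above = sum(x > ceiling for x in intervention)
    return above / len(intervention)

baseline = [2, 3, 2, 3, 2]
intervention = [4, 5, 6, 6, 7, 7]
print(f"PND: {pnd(baseline, intervention):.0%}")  # 100% here
```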

SCD is most common in the fields of psychology and education, where it is typically used in school settings with observational measures. Tribal Home Visiting Program grantees have used SCD in innovative ways to evaluate home visiting in tribal communities and to evaluate cultural enhancements to home visiting models.


How TEI Supports Grantees Using SCD

TEI provides technical assistance to grantees using SCD for their local evaluations in a variety of ways:

  • Facilitating introductory webinars on SCD with examples specific to home visiting
  • Connecting grantees with leading researchers in the field of SCD for assistance with their evaluation plans
  • Coordinating SCD learning circles for peer sharing and discussions on the analysis and reporting of SCD findings





FOOTNOTE

[1] Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case design technical documentation. Retrieved from the What Works Clearinghouse.


Scientifically and Culturally Rigorous Evaluation

Evaluations of tribal programs are strongest when they have both scientific and cultural rigor. Together, these types of rigor help to make sure results are valid, or accurate, for the research community and the community the program serves.[i][ii][iii]

Scientific rigor requires evaluations to use an appropriate evaluation design and systematic methods to answer evaluation questions. Cultural rigor requires evaluations to be inclusive of and responsive to local cultural practices. It attempts to ensure that information is gathered in appropriate and meaningful ways.[iii] For example, evaluators in a tribal community may get input from elders to develop the evaluation plan or use oral traditions, such as storytelling, to collect information. Evaluations without cultural rigor may fail to recognize and appreciate the strengths of the community and tribal program.[iv]

How TEI Supports Rigorous Local Evaluations

TEI builds the capacity of Tribal Home Visiting Program grantees to evaluate their programs in ways that are both scientifically and culturally rigorous.

Support for scientific rigor in tribal communities may include—

  • Translating research terms into everyday language so that program staff, advisory board members, and others who may not be familiar with research can provide input into the evaluation plan
  • Exploring evaluation designs that provide an alternative to random assignment, such as historical comparisons, naturally occurring comparison groups, and within-person comparisons (a within-person comparison is sketched below)
  • Providing training materials and resources for home visitors and other program staff to support high-quality data collection
  • Supporting the development of systematic data collection plans
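As an illustration of the within-person comparisons mentioned above, the sketch below tests hypothetical pre-post scores with a paired t-test. It is not a prescribed TEI analysis; real analyses would be specified in a grantee's evaluation plan.

```python
# The simplest within-person comparison: each participant's outcome
# before the program vs. after, tested with a paired t-test.
# Scores are hypothetical.
from scipy import stats

pre  = [10, 12, 9, 14, 11, 13, 10, 12]   # score before home visiting
post = [13, 14, 10, 17, 13, 15, 12, 14]  # same participants, after

t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean change = {mean_change:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```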

Support for cultural rigor in tribal communities may include—

  • Using a community-engaged technical assistance process that encourages and allows time for gaining input from advisory councils, tribal leadership, staff, and community members
  • Encouraging grantees to develop evaluation questions that reflect the interests of their tribal organizations and communities
  • Honoring local cultural protocols and incorporating these into evaluation planning and methods
  • Exploring ways of evaluating cultural activities and measuring outcomes that are important to the community and local culture


RESOURCES

Read more about merging scientific and cultural rigor: A Roadmap for Collaborative and Effective Evaluation in Tribal Communities (PDF, 1.11 MB)

Learn more about how scientific rigor is defined by the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) Program: Design Options for Home Visiting Evaluation: Evaluation Technical Assistance Brief (PDF, 267 KB)


FOOTNOTES

[i] Coryn, C. (2007). The holy trinity of methodological rigor: A skeptical view. Journal of MultiDisciplinary Evaluation, 4(7), 26–31.

[ii] Kirkhart, K. E. (2005). Through a cultural lens: Reflections on validity and theory in evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), The role of culture and cultural context: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 21–39). Greenwich, CT: Information Age Publishing.

[iii] Tribal Evaluation Workgroup. (2013). A roadmap for collaborative and effective evaluation in tribal communities. Washington, DC: Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/sites/default/files/cb/tribal_roadmap.pdf

[iv] LaFrance, J., & Nichols, R. (2010). Reframing evaluation: Defining an indigenous evaluation framework. Canadian Journal of Program Evaluation, 23(2), 13–31.