Index of Conferences

When program staff and evaluators attend conferences, they can draw knowledge and inspiration from others doing similar work. They can also hear about new research. Finally, they can build a professional network that helps them learn from their peers and address common challenges.

Developed for Tribal Home Visiting Program staff and evaluators, this is a comprehensive, user-friendly list of conferences relevant to home visiting and tribal programs. You can click on each column topic below to alphabetically sort that column.

Conference lists from partner organizations:

  • National Head Start Association – Events Calendar
  • Head Start – Events and Conferences
  • National Association for the Education of Young Children – Affiliate Conferences
  • National Institute of Child Health and Human Development – Scientific Meetings, Conferences & Events

Tribal Home Visiting Dissemination Toolkit


The Tribal Home Visiting Dissemination Toolkit is a set of tools designed to support Tribal Home Visiting Program grantees in disseminating information about their programs. Grantees may use these tools to share program findings, lessons learned, and success stories with their communities and other interested stakeholders. The toolkit also provides grantees with style guides and talking points to facilitate effective communication when conversing with different target audiences.


The goals of the dissemination toolkit are—

  • To strengthen Tribal Home Visiting Program grantees’ ability to raise awareness of their programs
  • To ensure that multiple audiences understand what has been accomplished by grantees
  • To influence the field surrounding evidence-based interventions targeting Native populations
  • To share information that may be useful for policy and program purposes


Examples of tools featured in the dissemination toolkit are below.

Presentation Template – Evaluation

A template you can use to communicate about your evaluation plan, process, and findings (PowerPoint, 205kb)

Presentation Template on Performance Measurement

A template you can use to share what you’ve learned through your performance measurement (benchmarks) process (PowerPoint, 185kb)

Program Pitch Template

A template that can help you communicate a clear and concise message about your program to multiple audiences (PDF, 190kb)

Engaging Media

A document to support your team in developing and executing an effective strategy for engaging media (PDF, 660KB)

Social Media Guide

A document you can use to strategically and successfully incorporate social media into your program’s dissemination efforts (PDF, 2.38MB)

Digital Story Creation

A document that introduces digital storytelling as a compelling and emotionally engaging way to share program successes (PDF, 425KB)

Using Dissemination To Reach Families and Professionals in Your Community

This document can support your program recruitment efforts by providing a strategy for thinking about how to share information about your program with providers and families throughout your community (PDF, 548KB).

Data Collection in the Home: A TEI Toolkit

The data collection toolkit was developed to support data collection with AIAN families in their homes. Guided by years of work providing technical assistance to Tribal Home Visiting Program grantees, the toolkit addresses common grantee needs and challenges. Although it was designed for grantees—including program managers, evaluators, home visitors, and other staff—it also may be useful for early childhood programs and others who serve AIAN communities. The data collection toolkit supports culturally rigorous data collection.

The toolkit was designed to help programs—

  • Understand the value of data collection
  • Prepare for data collection
  • Collect high-quality data
  • Use tools to develop data collection processes, collect data, and implement quality assurance

Why Data Collection Is Important

Good decisions are driven by good data—information that is consistent, accurate, and complete. Quality data help programs tell stories about participating families, services, and outcomes that they can rely on to inform decision making.

Data collection has never been more important. Programs need data to apply for increasingly competitive funding opportunities, meet ambitious grant reporting requirements, and address participants’ needs with evidence-based strategies. Tribal Home Visiting Program grantees are required to collect data for continuous quality improvement, performance measurement, and program evaluation.

How To Navigate and Use the Toolkit

Introduction to the Data Collection Toolkit

The purpose of the toolkit, its intended audiences, and how to use it. Download Introduction in Microsoft Word format (.docx, 872kb).

Module 1: Understanding the Value of Data Collection

Training staff on the basics of data and how they can collect and use data. Download Module 1 in Microsoft Word format (.docx, 919kb).

Module 2: Preparing for Data Collection

Planning and building a foundation to collect quality data. Download Module 2 in Microsoft Word format (.docx, 919kb) and additional resources used in Module 2: Activity 2.2 Jeopardy Game (PowerPoint) and Tool 2.4 Data Collection Scheduler (Excel).

Module 3: Collecting High-Quality Data

Supervising data collection and implementing quality assurance. Download Module 3 in Microsoft Word format (.docx, 919kb) and an additional resource used in Module 3: Tool 3.11 Inter-Rater Agreement (Excel).
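Tool 3.11 itself is an Excel workbook, but the simplest inter-rater agreement statistic it relates to, percent agreement, can be sketched in a few lines of Python. This is an illustration of the general idea only; the function name and sample ratings below are hypothetical, not part of the TEI tool.

```python
def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters assigned the same rating."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and the same length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two hypothetical raters scoring the same 8 home-visit observations
a = [1, 2, 2, 3, 1, 2, 3, 3]
b = [1, 2, 3, 3, 1, 2, 3, 2]
print(percent_agreement(a, b))  # 0.75 (6 of 8 items match)
```

Percent agreement is easy to compute and explain to staff, though more formal statistics (such as Cohen's kappa) also correct for chance agreement.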

Toolkit Modules Representing Stages of Data Collection

[Graphic: TEI toolkit modules representing the stages of data collection]

Intended Audiences

Program managers

Program managers may deal with data collection at every stage, from planning and oversight to data entry and analysis. They typically make decisions about data collection and ensure that staff understand their role in it and are trained and supported. Open communication between program managers and staff is crucial for troubleshooting challenges. Program managers are often asked to present data to stakeholders and funders, so they must have a solid understanding of why data collection is important and how it works.

Data coordinators

Data coordinators (also called data managers) play a critical role in collecting, entering, managing, and reporting data. Data coordinators help home visitors keep track of which forms need to be filled out and when. Having a data coordinator to focus on data-related tasks maximizes the time home visitors and program managers can spend serving families.


Evaluators

Like program managers, evaluators ensure that staff appropriately use, interpret, and store data. They develop and implement guidelines for administering and interpreting evaluation instruments. Examples include writing data collection protocols, establishing consent processes, identifying and reviewing instruments, and selecting data systems. Evaluators may support data entry and analysis, data quality reviews, and reporting. They also help promote collaborative community-based evaluation practices.

Home visitors

Home visitors are the faces of the home visiting program in the community, and they are typically responsible for collecting data from program participants. Home visitors help ensure that the program collects high-quality data in a way that is comfortable for the families the program serves. They are often tasked with explaining data collection to families, administering data collection instruments, entering data into databases, and communicating assessment results to the families served by the program.

Glossary of Terms Used by TEI

  • Continuous quality improvement (CQI) – A systematic, intentional process to improve services.
  • Cultural rigor – Consideration and respect for the local culture in the evaluation, to ensure knowledge is gathered in an appropriate and meaningful way.
  • Dependent variable – Observable target behavior that an intervention seeks to change.
  • Evaluation design – Plan for conducting a study (or evaluation) to assess and explain change enacted through program implementation.
  • Independent variable – The intervention, practice, or treatment being tested that should produce observable changes in a dependent variable.
  • Plan-Do-Study-Act (PDSA) – A four-step, repeatable problem-solving model for improving a process or carrying out change.
  • Power – The probability that a study will detect an effect when one exists; larger sample sizes increase power, making a study more sensitive to detecting true effects.
  • Scientific rigor – Use of an appropriate study design to answer evaluation questions, with systematic methods that enhance validity.
  • Valid – Accurately, truthfully, and correctly representing the phenomenon the study intended to describe.
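The glossary's point that larger samples increase power can be illustrated with the textbook power formula for a one-sided z-test. This is a generic statistical sketch, not a TEI tool; it assumes a known standard deviation and a standardized effect size, and the function name is hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ztest_power(effect_size: float, n: int, alpha: float = 0.05) -> float:
    """Power of a one-sided, one-sample z-test with a standardized
    effect size (mean shift in standard deviation units)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)  # critical value for alpha
    return NormalDist().cdf(effect_size * sqrt(n) - z_crit)

# The same medium effect (0.5 SD) is easier to detect as n grows.
for n in (10, 20, 50):
    print(n, round(ztest_power(0.5, n), 3))
```

Running the loop shows power rising steadily with sample size, which is why small programs often need alternative designs (such as the single case designs discussed later) rather than underpowered group comparisons.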

Using PICO To Build an Evaluation Question



PICO[1] is a framework that can help evaluators and programs develop a concise but rigorous evaluation question. A PICO question can tell you in just a few words what you aim to learn from an evaluation.
PICO stands for—

Target POPULATION that will participate in the intervention and evaluation

INTERVENTION to be evaluated

COMPARISON that will be used to see if the intervention makes a difference

OUTCOMES you expect the intervention to achieve

Why the PICO Framework Is Helpful

The PICO framework can help your team develop an evaluation question that contains the key components of a rigorous evaluation. One of these key components is having a strong theory behind what your program is trying to achieve. By including the Population, Intervention, Comparison, and Outcomes in the evaluation question, PICO can help your team think through the following questions:

  • Is the Intervention a good fit for the target Population?
  • Is the Intervention likely to produce these Outcomes?
  • Will the Comparison help us understand whether it was the Intervention, or possibly something else, that produced the Outcomes?

PICO helps teams develop an evaluation question with standard components and identify an appropriate evaluation design by determining the comparison that will be used. A PICO question includes key information about your evaluation in a short summary, making it a useful format to share with others.


Examples of PICO Questions

Do families participating in home visiting (P) who meet regularly with parent mentors (I) keep more home visiting appointments and stay in the program longer (O) than families who do not meet with parent mentors (C)?

Population: Families participating in home visiting services
Intervention: Home visiting services that include meeting regularly with parent mentors
Comparison: Families that receive home visiting services but don’t meet with parent mentors
Outcomes: Increased retention and dosage (i.e., families stay in the program longer and keep more appointments)

Do women who are pregnant with their first child (P) who receive home visiting services (I) experience better birth outcomes (O) compared with pregnant women who gave birth at the clinic before home visiting was implemented (C)?

Population: Women pregnant with their first child
Intervention: Home visiting services
Comparison: Pregnant women who gave birth at the clinic before the program was implemented
Outcomes: Birth outcomes (e.g., birth weight, gestational age)
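As an illustration only, the four PICO components can be captured in a small template that assembles a question like the ones above. The class, field names, and wording pattern below are hypothetical conveniences, not part of the PICO framework itself.

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    """Holds the four PICO components of an evaluation question."""
    population: str
    intervention: str
    comparison: str
    outcomes: str

    def summary(self) -> str:
        # Assemble the components into a single evaluation question.
        return (f"Among {self.population} (P), does {self.intervention} (I), "
                f"compared with {self.comparison} (C), "
                f"lead to {self.outcomes} (O)?")

# The parent-mentor example, expressed with the template
q = PicoQuestion(
    population="families participating in home visiting",
    intervention="meeting regularly with parent mentors",
    comparison="families who do not meet with parent mentors",
    outcomes="more kept appointments and longer program retention",
)
print(q.summary())
```

Writing the components down separately like this makes it easy to check each one with the team before combining them into the final question.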

How TEI Supports Grantees in Using the PICO Framework

TEI has initial discussions with each Tribal Home Visiting Program grantee about the PICO format during the program planning phase. The discussions typically include program staff, evaluators, advisory board members, and other program partners. TEI often helps facilitate a discussion about what the team wants the program to do, whom it should serve, and what it can accomplish. The team also begins to think about what type of comparison might work for their evaluation and be appropriate for their community.

Later, grantees refine their thinking until they have a feasible evaluation question using the PICO format that reflects the interests of the community and meets the grant requirements. This process typically involves gathering input from a community advisory board, elders, or tribal leaders. Grantees then develop a one-page summary of the evaluation design, measures, data collection plan, and analysis. Next, they move on to develop a full evaluation plan. TEI supports grantees throughout this process as determined by local need and interest.


Learn how the PICO approach has been applied in the Children’s Bureau’s Permanency Innovation Initiative: The PII Approach: Building Implementation and Evaluation Capacity in Child Welfare – (PDF, 1.2mb)

View materials from a presentation on how TEI has used PICO to help grantees develop evaluation questions:

Develop a PICO question for two evaluation scenarios in this exercise: TEI Exercise: Developing a PICO Question – (Word, 22kb)


[1] Testa, M., & Poertner, J. (Eds.). (2010). Fostering accountability: Using evidence to guide and improve child welfare policy. New York, NY: Oxford University Press.

Evaluating Tribal Home Visiting Using Single Case Design

Single case design (SCD) is a scientifically rigorous research method used to measure the impact of an independent variable (or intervention) on single “cases” of study. A basic SCD usually has the following key features:[1]

  • The unit of intervention and analysis includes individual cases, which can be a single participant (e.g., an adult or child) or a cluster of participants (e.g., classroom, community).
  • Each case in the study serves as its own comparison, so that dependent variables (or targeted behaviors) are measured repeatedly on the same case prior to the intervention and compared with measurements taken during and after the intervention.
  • The dependent variable is measured repeatedly within and across different phases or levels of the intervention to allow for identification of patterns.

Data points for each case are graphed to compare an individual behavior across intervention phases and analyze the relationship between the independent and dependent variables.
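The repeated-measurement logic above can be sketched in a few lines of Python: hypothetical scores for one case are grouped by phase, and the baseline ("A") phase mean is compared with the intervention ("B") phase mean, so the case serves as its own comparison. The data and function name are invented for illustration.

```python
from statistics import mean

# Hypothetical repeated measures of one target behavior for a single
# case: five observations before the intervention (phase "A") and
# five during the intervention (phase "B").
observations = [
    ("A", 4), ("A", 5), ("A", 4), ("A", 3), ("A", 5),
    ("B", 7), ("B", 8), ("B", 7), ("B", 9), ("B", 8),
]

def phase_means(obs):
    """Group observations by phase and average each phase."""
    phases = {}
    for phase, value in obs:
        phases.setdefault(phase, []).append(value)
    return {phase: mean(values) for phase, values in phases.items()}

means = phase_means(observations)
print(means)  # {'A': 4.2, 'B': 7.8} — the case is its own comparison
```

In practice, SCD analysis goes beyond phase means (visual analysis of graphed data points, level, trend, and variability), but the core idea is the same within-case comparison shown here.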

Example: Single Case Design Study Results

[Chart: example results from a single case design study]

Why Choose SCD?

SCD is an appropriate method when the targeted behavior (i.e., dependent variable) is sensitive to change and defined precisely to allow for consistent, repeated measurement. It’s also an appropriate design for studies with small sample sizes that may not have the desired power for the statistical analysis to detect an effect when there is one. SCD works well for some Tribal Home Visiting Program grantees that are serving a limited number of families.

In addition to being a good fit for small sample sizes, SCD is an alternative to traditional experimental comparison designs, which require one group to receive an intervention and another “control” group to not receive it. Some grantees feel that withholding a service from families for research purposes is not appropriate, so they avoid experimental designs unless a naturally occurring control group is available in the community. In many ways, SCD aligns well with the inclusive cultural beliefs of tribal communities, because each participant receives the intervention and serves as his or her own comparison.

SCD is most common in the fields of psychology and education, where it is typically applied in school settings with observational measures. Tribal Home Visiting Program grantees have used SCD in innovative ways to evaluate home visiting in tribal communities and to evaluate cultural enhancements to home visiting models.


How TEI Supports Grantees Using SCD

TEI provides technical assistance to grantees using SCD for their local evaluations in a variety of ways:

  • Facilitating introductory webinars on SCD with examples specific to home visiting
  • Connecting grantees with leading researchers in the field of SCD for assistance with their evaluation plans
  • Coordinating SCD learning circles for peer sharing and discussions on the analysis and reporting of SCD findings




1. Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case design technical documentation. Retrieved from What Works Clearinghouse.

Scientifically and Culturally Rigorous Evaluation

Scientifically and Culturally Rigorous Evaluation

Evaluations of tribal programs are strongest when they have both scientific and cultural rigor. Together, these types of rigor help to make sure results are valid, or accurate, for the research community and the community the program serves.[i] [ii] [iii]

Scientific rigor requires evaluations to use an appropriate evaluation design and systematic methods to answer evaluation questions. Cultural rigor requires evaluations to be inclusive of and responsive to local cultural practices. It attempts to ensure that information is gathered in appropriate and meaningful ways.[iii] For example, evaluators in a tribal community may get input from elders to develop the evaluation plan or use oral traditions, such as storytelling, to collect information. Evaluations without cultural rigor may fail to recognize and appreciate the strengths of the community and tribal program.[iv]

How TEI Supports Rigorous Local Evaluations

TEI builds the capacity of Tribal Home Visiting Program grantees to evaluate their programs in ways that are both scientifically and culturally rigorous.

Support for scientific rigor in tribal communities may include—

  • Translating research terms into everyday language so that program staff, advisory board members, and others who may not be familiar with research can provide input into the evaluation plan
  • Exploring evaluation designs that provide an alternative to random assignment, such as historical comparisons, naturally occurring comparison groups, and within-person comparisons
  • Providing training materials and resources for home visitors and other program staff to support high-quality data collection
  • Supporting the development of systematic data collection plans

Support for cultural rigor in tribal communities may include—

  • Using a community-engaged technical assistance process that encourages and allows time for gaining input from advisory councils, tribal leadership, staff, and community members
  • Encouraging grantees to develop evaluation questions that reflect the interests of their tribal organizations and communities
  • Honoring local cultural protocols and incorporating these into evaluation planning and methods
  • Exploring ways of evaluating cultural activities and measuring outcomes that are important to the community and local culture


Read more about merging scientific and cultural rigor: A Roadmap for Collaborative and Effective Evaluation in Tribal Communities (PDF, 1.11 MB)

Learn more about how scientific rigor is defined by the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) Program: Design Options for Home Visiting Evaluation: Evaluation Technical Assistance Brief (PDF, 267 KB)


[i] Coryn, C. (2007). The holy trinity of methodological rigor: A skeptical view. Journal of MultiDisciplinary Evaluation, 4(7), 26–31.

[ii] Kirkhart, K. E. (2005). Through a cultural lens: Reflections on validity and theory in evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.). The role of culture and cultural context: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 21–39). Greenwich, CT: Information Age Publishing.

[iii] Tribal Evaluation Workgroup. (2013). A roadmap for collaborative and effective evaluation in tribal communities. Washington, DC: Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/sites/default/files/cb/tribal_roadmap.pdf

[iv] LaFrance, J., & Nichols, R. (2010). Reframing evaluation: Defining an indigenous evaluation framework. Canadian Journal of Program Evaluation, 23(2), 13–31.