Grant Report Guidelines and Forms

Are you a current grantee of the New York Health Foundation? Review and download the required guidelines for your interim and final grant reports here.

Interim Report Guidelines

Download PDF

Final Report Guidelines

Download PDF

Interim and Final Financial Report Form

Download Excel File

Resources for Applicants and Grantees

NYHealth wants to ensure that new grantees feel connected and have access to information that can help make their projects a success.

Quick Guide for Grantees

We are always proud to welcome new organizations to the growing community of NYHealth grantees. This welcome sheet provides helpful information for new grantees and additional resources to keep in mind over the duration of an NYHealth-supported project.

Download PDF

Communications Tipsheet for Grantees

We want to help spread the word about your work. These tips will help guide you as you think about communications related to your NYHealth-supported project.

Download PDF

Sustaining Improved Outcomes: Toolkit

The NYHealth-supported toolkit is designed to get both grantees and funders to consider sustainability from the outset of their grant projects. It provides resources and tools to help integrate sustainability strategies and practices into all stages of the grant process.

Download PDF

Tools and Guidelines for Planning Effective Project Evaluations

This set of guidelines, developed by the Center for Health Care Strategies for NYHealth, is designed to help grant applicants think about and formulate a program evaluation as part of an NYHealth grant proposal.

Learn more about project evaluation

NYHealth Grantee Portal Assistance

This video resource will help you navigate the NYHealth Grantee Portal, including step-by-step guides for logging in, managing applications, and updating contact information.

Get assistance with the Grantee Portal

Please submit all reports through the NYHealth Grantee Portal.

GRANTEE PORTAL

Tools and Guidelines for Planning Effective Project Evaluations

Introduction

NYHealth is committed to supporting projects that have a measurable, meaningful impact on New York’s health care system and on the health of New Yorkers. Most successful grant proposals to the Foundation include plans for some type of evaluation to assess whether the project has achieved its stated goals, to quantify the impact of the project, or to capture some other aspect of the project’s activities and outcomes.

We recognize that many grant applicants could benefit from some information and tools about evaluation as they prepare their grant proposals. The Foundation believes that applicants who think ahead about the various parts of a program evaluation will be better able to carry out and sustain their projects effectively.

NYHealth understands the real-world constraints that applicants face in preparing a program evaluation, such as limited time, resources, and experience. There is no one exact blueprint for a program evaluation; different applicants will have various ways of measuring the success and outcomes of their proposed project.

The guidelines below, developed by the Center for Health Care Strategies for NYHealth grant applicants, offer practical and realistic strategies to consider in planning, designing, and implementing an evaluation plan that can improve the outcomes and impact of a project.

Must-Read Resources for Evaluation Planning

A network of primary health care practices in New York State secured grant funding to start a new program to improve care for its patients with diabetes. The funding was for an education program for health care providers about best practices for diabetes care, with an emphasis on the need for regular patient visits and blood sugar testing. Early on, the program manager had several questions about how the project was going, including what worked and what did not. She was also interested in gathering information that could showcase her program’s impact to the community and to her funder. To get this valuable information, the program manager planned a program evaluation.

What is evaluation?

Evaluation is a way of assessing the effectiveness of a project or program and of measuring whether its proposed goals have been met. Program managers and staff often use evaluations to improve program effectiveness. Program evaluation and related activities can answer questions on whether a program is working or how it can be improved.

What kinds of questions can an evaluation ask?

An evaluation can be guided by several types of questions:

1) Implementation: Were the program’s activities put into place as originally intended? What adaptations were made and why?

2) Effectiveness: Is the program achieving the goals and objectives it was intended to accomplish?

3) Efficiency: Are the program’s activities being produced with appropriate use of resources, such as budget and staff time?

4) Cost-Effectiveness: Does the value or benefit of achieving the program’s goals and objectives exceed the cost of producing them?

5) Attribution: Is the progress on goals and objectives a result of the program, rather than of other things that are going on at the same time?

What are the benefits of conducting an evaluation?

There are many benefits to conducting an evaluation. First, designing an evaluation opens up communication among the leaders of an organization, the managers, and the staff. Doing so can encourage analytical thinking and allow for honest discussions about the program among everyone involved with the project. In addition, conducting an evaluation provides an opportunity to revisit the goals of an existing program, and to bridge any gaps that may exist between the vision of the program and the reality of the program operation.

What are the challenges of conducting an evaluation?

Conducting an evaluation is not without its challenges, such as operating with limited resources (e.g., time, money, and/or expertise) and limited experience with the evaluation process (e.g., uncertainty about the difference between terms like outputs and outcomes). The goal of this NYHealth evaluation planning guide is to provide information and resources that minimize some of these challenges.

Below are some questions to consider when writing the evaluation section of your NYHealth grant proposal. It is not necessary to address each of these questions directly in your proposal—these are simply suggestions to help you frame your thinking and guide an evaluation plan that makes sense for your proposed project.

1. What kind of project have you developed?

The type of project you propose can help determine which kind of program evaluation is most appropriate. NYHealth supports the following types of projects:

Policy/Advocacy-Oriented Projects: Policy/advocacy projects focus on activities to influence health care policymakers, including elected officials, health systems, payers, and providers.

Direct Service-Oriented Projects: Direct-service projects typically aim to address identified needs of a community, organization, or population through specific activities, services, or actions. For more on evaluating these types of activities, continue to Question 2.

2. Will you do a process evaluation, an outcome evaluation, or both?

Decide which type of program evaluation best fits your project or the specific results you hope to achieve.

Process Evaluation assesses how the program is implemented, and focuses on program resources, activities, and outputs (the direct products of program activities). Process evaluation allows applicants to take the pulse of a program’s implementation, and can answer questions about program operations and service delivery.

Outcome Evaluation measures the program’s outcomes and assesses program effectiveness. Outcomes are the actual changes resulting from program activities, and can include short-term outcomes and long-term outcomes. It is important to ensure that the outcomes measured are realistic and reasonable within the duration and scope of the evaluation—often, for grants of one or two years, evaluations focus on short-term rather than long-term outcomes.

It is often useful to examine both process and outcomes when evaluating a program. Combining the two demonstrates both whether a program was implemented as intended and whether it had the intended effects.

Imagine that a health care agency decides to conduct a process evaluation of an education program for health care providers on best practices for diabetes care. Questions that could guide the process evaluation include:

  • What are the components of the program?
  • How satisfied were health care providers with the program?
  • What motivated providers to enroll in the program?
  • What (if any) barriers were there to providers’ participation in the program?
  • What is the average cost per provider participant?

If the same agency decides to conduct an outcome evaluation, questions would focus on the actual changes resulting from the program activities and might ask:

  • Did provider knowledge about best practices increase because of the program?
  • Did providers who participated improve their monitoring of patients with diabetes?
  • Was there an improvement in patient care (e.g., an increase in the number of appointments or in the number of nutrition consultations ordered)?
  • Did the program have any unexpected effects?

3. What will your evaluation measure?

Measures (or indicators) are the information that will be collected and/or analyzed during the evaluation. Measures can be drawn from existing data or may need to be collected specifically for the evaluation.

Guided by the process evaluation questions listed above in Question 2, the health care agency might develop the following measures:

  • Number of providers who attended the program.
  • Number of providers who completed the program (e.g., CME credit awarded).
  • Number of training sessions that providers received.
  • Level of providers’ satisfaction with program.
  • Cost per provider participant.

And it might develop the following outcome evaluation measures:

  • Number and percentage of providers with an increased understanding of best practices, or average amount of improvement in knowledge.
  • Number and percentage of providers who increased the number of nutrition consultations ordered for their patients, or average increase in the number of consultations ordered.
  • Number and percentage of patients with diabetes who attended at least one nutrition consultation during the year.
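
As an illustration only, here is a minimal sketch in Python of how the first outcome measure above might be computed; the provider IDs and knowledge test scores are made up.

# Hypothetical pre- and post-training knowledge test scores, keyed by provider ID.
pre_scores = {"p01": 62, "p02": 75, "p03": 80, "p04": 55, "p05": 70}
post_scores = {"p01": 78, "p02": 74, "p03": 90, "p04": 72, "p05": 85}

# Number and percentage of providers whose understanding increased.
improved = [p for p in pre_scores if post_scores[p] > pre_scores[p]]
percent_improved = 100 * len(improved) / len(pre_scores)

# Average amount of improvement in knowledge across all providers.
average_gain = sum(post_scores[p] - pre_scores[p] for p in pre_scores) / len(pre_scores)

print(f"{len(improved)} of {len(pre_scores)} providers improved ({percent_improved:.0f}%)")
print(f"Average change in knowledge score: {average_gain:+.1f} points")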

4. What is your research design and sampling plan?

The Research Design is the glue that holds a project together—it is a plan for outlining how information is to be gathered for an evaluation. Research designs can be complicated or simple, and describing all the options is beyond the scope of this guide. The most appropriate design will depend on an evaluation’s purpose, need for scientific rigor, resources, and more.

Imagine that the health care agency wants to explore whether its education program (the intervention) results in providers ordering nutrition consultations for more patients with diabetes. For providers who complete the program, the evaluation might compare the percentage of nutrition consultations ordered for patients with diabetes during the year before the program and for the year after. This is a simple pre- and post-intervention design.

This design can be strengthened by the addition of a comparison group: the agency might also collect data on the percentage of patients for whom a nutrition consultation was ordered by providers who never attended the program. Including a comparison group gives added confidence that any differences seen are, in fact, a result of the program and not just a random occurrence. Not all evaluations will have the need or resources for a comparison group. For more information on constructing a comparison group, see Research Design.
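
As an illustration only, the minimal sketch below (in Python) works through this pre- and post-intervention comparison with a comparison group; all patient and consultation counts are made up.

def pct(consults_ordered, patients):
    """Percentage of patients with a nutrition consultation ordered."""
    return 100 * consults_ordered / patients

# Providers who completed the education program (hypothetical counts).
program_pre = pct(consults_ordered=40, patients=200)   # year before the program
program_post = pct(consults_ordered=90, patients=200)  # year after the program

# Comparison group: providers who never attended the program.
comparison_pre = pct(consults_ordered=42, patients=210)
comparison_post = pct(consults_ordered=48, patients=210)

print(f"Program providers: {program_pre:.0f}% -> {program_post:.0f}%")
print(f"Comparison group: {comparison_pre:.0f}% -> {comparison_post:.0f}%")

# The change beyond what the comparison group shows is the part more plausibly
# attributable to the program.
difference = (program_post - program_pre) - (comparison_post - comparison_pre)
print(f"Change beyond the comparison group: {difference:.0f} percentage points")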

The research design should also outline a Sampling plan, which describes who will be included in the evaluation. Sometimes, an evaluation will use data from the entire population. Other times, this is not necessary or feasible, especially if the evaluation involves collecting new data. For example, if the health care agency wanted to learn what motivated providers to enroll in its education program, it might carry out in-depth interviews with the participants. But in-depth interviews take a lot of time and resources, and if the program is large, it is probably not necessary to interview every participant. Instead, the evaluation might choose a random sample of 20 provider participants to interview.
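
As an illustration only, here is a minimal sketch in Python of drawing such a random sample; the roster of 120 participants is made up, and the sample size of 20 mirrors the example above.

import random

# Hypothetical roster of providers who participated in the education program.
participants = [f"provider_{n:03d}" for n in range(1, 121)]

random.seed(2024)  # fixed seed so the same sample can be drawn again
interview_sample = random.sample(participants, k=20)  # simple random sample of 20
print(interview_sample)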

5. What data will you use to evaluate your program?

There are often many sources of data that can be used when carrying out an evaluation, including financial records, program-monitoring documents, internal reports, administrative records, and client databases. Primary data, or data collected specifically for the purpose of the evaluation, can include surveys, interviews, or observations. Secondary data are any data that have been collected for another purpose, but might be useful for the evaluation. This might include patient chart reviews, minutes from board meetings, or local census data.

To collect data for a process evaluation of the provider education program, the health care agency might use the following data sources:

  • Number of providers who attended the program: attendance records
  • Number of training sessions administered: training materials
  • Level of satisfaction among providers: training evaluation forms

To collect data for an outcome evaluation, the health care agency might use the following data sources:

  • Number and percentage of providers with an increased understanding of best practices: pre- and post-intervention surveys (written)
  • Number and percentage of patients with diabetes who attended at least one nutrition consultation during the year: chart review

6. How will you analyze your evaluation data and information?

Qualitative Research uses non-numerical data to provide rich, nuanced narratives about experiences and behaviors. It is the stories behind the numbers. Qualitative data (e.g., focus groups, interviews, patient notes) are analyzed to find themes or patterns that help to explain key relationships.

Quantitative Research yields numerical data (e.g., Body Mass Index, cholesterol levels, blood pressure measurements), usually analyzed through statistics. Descriptive statistics do just that—describe the data, and include totals, percentages, and averages. Inferential statistics help to determine whether differences are statistically significant (i.e., whether differences are unlikely to have occurred by chance). Inferential statistics can be used to draw conclusions beyond the sample group to the larger population.
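
As an illustration only, the minimal sketch below (in Python, with made-up blood pressure readings) computes a descriptive statistic with the standard library and an inferential one (an independent-samples t-test) with the third-party SciPy package.

from statistics import mean

from scipy import stats  # third-party package (pip install scipy)

# Hypothetical systolic blood pressure readings for two groups of patients.
program_group = [128, 131, 125, 140, 122, 130, 127, 133]
comparison_group = [138, 142, 129, 145, 136, 139, 134, 141]

# Descriptive statistics: describe the data (here, the group averages).
print(f"Program mean: {mean(program_group):.1f}")
print(f"Comparison mean: {mean(comparison_group):.1f}")

# Inferential statistics: is the difference unlikely to have occurred by chance?
t_stat, p_value = stats.ttest_ind(program_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p below 0.05 is a common threshold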

Evaluations can generate a great deal of data and information. How that data and information are analyzed should be driven by the evaluation questions and the type and amount of data collected. The approach to data analysis can affect a research design and sampling plan, so it is important to develop the data analysis plan upfront.

7. Will you carry out an internal evaluation or hire an external evaluator (or some combination of the two)?

There are many considerations to take into account when deciding whether to conduct an in-house evaluation or to seek out outside help. If an evaluation requires independence and objectivity or is highly technical, or if staff time and resources are limited, an external evaluator might be the better choice. Alternatively, if a program has little money to spend on evaluation and has internal expertise, an internal evaluation might be preferable. Hybrid models (internal and external team members) can combine the best of both worlds.

See more on Internal versus External Evaluation for guidance on the decision whether to hire outside expertise. For help locating an external evaluator, see Finding and Choosing an External Evaluator.

What is policy/advocacy evaluation?

Policy and advocacy projects focus on activities to influence health care policymakers, including elected officials, health systems, payers, and providers. Activities may include public education, capacity building, network formation, relationship building, communications, and leadership development.

Evaluation of advocacy and policy projects measures changes in outputs, short-term outcomes, and long-term outcomes, as well as the impact that can be attributed to the specific policy advocacy strategy or activity.

What types of policy and advocacy outcomes can an evaluation measure?

Policy/advocacy evaluation questions can address:

Outputs: Specific advocacy activities and/or tactics, such as electronic outreach; earned media; paid media; coalition and network building; grassroots organizing and mobilization; rallies and marches; public education; public service announcements; briefing and presentations; polling; issue/policy analysis and research; policymaker education; and relationship-building with decision makers. Examples:

  • For briefings and presentations: organization held 10 briefings or presentations for 300 community members.
  • For public education activities: organization developed and disseminated educational materials to 5,000 community members.

Short-term outcomes: Changes in organizational capacity, collaboration, organizational visibility, awareness, attitude, and public support. Examples:

  • For organizational capacity: staff increased their ability to get and use data; staff improved their media skills and contacts.
  • For organizational visibility: organization received an increased number of requests for reports or information (including downloads or page views); received an increased number of invitations to speak as an expert on a particular policy issue.

Long-term outcomes: Changes in policy development; placement on policy agenda; policy adoption; policy blocking; policy implementation; and policy monitoring and evaluation. Examples:

  • For placement on policy agenda: policies formally introduced as bills, regulations, or administrative policies.
  • For policy implementation: policies implemented or administered in accordance with requirements.

Impacts: Improved services, systems, and social and physical conditions. Examples:

  • For improved services and systems: more programs offered; easier access to programs or services; higher quality services.
  • For improved social and physical conditions: more community members have a regular source for health care services; improved cholesterol and blood pressure levels among patients with diabetes; fewer emergency department visits.

Note: It is not always feasible to measure long-term outcomes and impacts, which may extend beyond the time frame of the program or be influenced by factors beyond the control of the program.

What is a logic model?

A logic model is a visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to do, and the changes or results you hope to achieve. While the term “program” is often used, a logic model can prove equally useful for describing group work, community-based collaboratives, and other complex organizational processes.

What are the key components of a logic model?

The logic model may be used as a tool for planning, implementation, evaluation, and communications. Six key components of a logic model are:

  1. Inputs: Resources, contributions, and investments that go into the program (e.g., money, staff, volunteers, facilities).
  2. Activities: The processes, tools, events, technology, and actions that are part of program implementation (e.g., health education workshop, staff training, one-on-one counseling).
  3. Outputs: The direct products of program activities, which may include the types, levels, and targets of services to be delivered by the program (e.g., number of participants attending a training, number of counseling sessions).
  4. Outcomes: The more immediate results or changes in program participants’ behavior, knowledge, skills, status, or level of functioning. Outcomes can be either short-term (change in knowledge and skills) or long-term (behavioral change).
  5. Impact: Longer-term, broader, or cumulative changes in organizations, communities, or systems (e.g., decision-making, action, behavior, practice, policies, social action).
  6. External Influences: Factors in the program’s environment or surroundings that interact with and influence the program.
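
Purely as an illustration, the minimal sketch below (in Python) arranges these six components into a simple data structure, filled in with the hypothetical diabetes education program used earlier in this guide.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    impact: list = field(default_factory=list)
    external_influences: list = field(default_factory=list)

diabetes_program = LogicModel(
    inputs=["grant funding", "program manager", "clinical trainers"],
    activities=["provider education sessions on diabetes best practices"],
    outputs=["number of providers trained", "number of sessions delivered"],
    outcomes=["increased provider knowledge", "more nutrition consultations ordered"],
    impact=["improved blood sugar control among patients with diabetes"],
    external_influences=["changes in insurance coverage", "staff turnover"],
)
print(diabetes_program.outputs)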

View a sample Logic Model.

Note: Much of the above information is based on the W.K. Kellogg Foundation’s Logic Model Development Guide.

What is process evaluation?

Process evaluation is a method of assessing how a program is being implemented. Process evaluation focuses on the program’s operations, implementation, and service delivery, whereas outcome evaluation focuses on the effectiveness of the program and its outcomes (see: Outcome Evaluation). According to one scholar, “When the cook tastes the soup, that’s process; when the guests taste the soup, that’s outcome.”

What questions can a process evaluation ask?

Process evaluation questions address the who, what, when, and how many of a program’s activities and outputs. Examples of process evaluation questions include:

Reach: Who did the program reach? This may include the number of people, whether the people it reached were in the target audience (according to demographic characteristics), and what proportion of the target audience was reached.

Sample questions: Were new HIV policies disseminated to all school districts during the past school year? Did all students identified with asthma receive the Open Airways curriculum?

Quality of implementation: How well was the program delivered?

Sample questions: How did the activities or components of the event go? What aspects of the event worked well? Was the activity implemented properly, according to standards or protocol? What aspects did not work so well?

Satisfaction: How satisfied were the people involved in the program? This seeks feedback from the event participants, partner organizations, and program staff.

Sample questions: Was the venue convenient? Was the timing of the event appropriate? Were the different parts of the event easy to navigate? Was the program staff friendly and/or helpful?

Barriers: What got in the way of success? This attempts to understand why something didn’t happen, and may identify key environmental variables.

Sample questions: Were there any challenges to program participation? What lessons have been learned that might be useful if this event happened again?

Note: The above information has been adapted from “Evaluating Socio Economic Development, SOURCEBOOK 2: Methods & Techniques for Formative Evaluation,” December 2003.

What is outcome evaluation?

Outcomes are the actual changes resulting from program activities: changes in participant knowledge, attitude, behavior, health status, health care utilization, and disease incidence. Outcomes are often confused with program outputs, which are the products of a program’s activities (e.g., the number of people served by a program, the number of classes taught). Outcome evaluation measures the program’s outcomes and assesses program effectiveness, whereas process evaluation measures how a program is implemented (see: Process Evaluation).

What types of outcomes can an evaluation measure?

Outcome evaluation questions can address:

Short-term outcomes: These include the awareness, knowledge, opinions, attitudes, and skills gained by participants.

  • For a nutrition class aimed at patients with diabetes: patients increased their understanding of good eating habits; patients were motivated to change their eating habits.
  • For at-risk mothers receiving educational home visits: participating expectant mothers understood the importance of prenatal medical care.

Intermediate outcomes: These include changes in behavior.

  • For a nutrition class aimed at patients with diabetes: patients’ eating habits improved.
  • For at-risk mothers receiving educational home visits: participating expectant mothers attended all recommended prenatal doctor visits (after receiving educational home visits).

Long-term outcomes*: These include changes in a participant’s behavior, condition, or status.

  • For a nutrition class aimed at patients with diabetes: patients’ health improved.
  • For at-risk mothers receiving educational home visits: participants’ babies were born healthy.

*Note: It is not always feasible to measure long-term outcomes. Expected outcomes might be beyond the time frame of the program, or may be influenced by many factors outside of the program.

What is a measure?

A measure is an explicit, specific, clear, observable, and realistic piece of evidence.

What makes a good measure?

Measures can sometimes be difficult to develop, because we are often going from an intangible concept (such as the outcome: healthy newborns) to something that is specific and measurable (the number and percentage of newborns born weighing at least 5.5 lbs). Sometimes, a concept (or outcome) will require multiple measures, as is the case with healthy newborns. In this case, possible measures could be newborn weight, responsiveness, or survival.

Most importantly, measures need to be measurable and clear. In the example above, newborn responsiveness at birth would be a vague measure that would be interpreted differently by different people. A better measure would be the number and percentage of newborns with an Apgar score of 7 or higher, which is unambiguous and well defined.

Additionally, measures need to be realistic and reasonable within the time span of an evaluation. It is important not to hold a program accountable for measures beyond its control. For example, if a program has a two-year grant to improve diabetes care in a clinic, using the number of diabetes deaths in that city as a measure of program success is probably not a good idea because 1) many of the diabetes deaths will be for people who have never had any contact with the program’s clinic, and 2) it might not be reasonable to expect the program to have a large impact on diabetes deaths during the short time frame of the grant. In this case, it is probably more appropriate to focus on shorter-term outcomes for those patients served by the clinic. Measures might include: the number and percentage of patients with diabetes who received a nutrition referral, the number and percentage of patients with HbA1c levels at or below 7, and the change in patient eating habits as measured by a standardized instrument.

What is Research Design?

A research design provides the structure of any scientific work. It is a plan outlining how information is to be gathered for an assessment or evaluation. Typical tasks include identifying the data-gathering method (or methods), the tools to be used for gathering information, the group or population from whom the data will be collected, and how the information will be organized and analyzed. A critical decision is the choice of a counterfactual, which measures what the possible outcomes or impact would have been if there hadn’t been an intervention. This can be done using a pre-test, a comparison or control group, or both.

What are the types of Research Design?

The choice of design should be influenced by the resources (e.g., money, skill level) available as well as the degree of scientific rigor desired. Typically, an outcome evaluation uses one of three designs: a randomized controlled trial, a quasi-experimental design with a comparison group, or a pre- and post-intervention comparison.

  • Randomized Controlled Trial (RCT): This design option uses two or more groups of participants who are randomly assigned either to the treatment in question or to a control group that is not exposed to the treatment. Members of both groups receive the same pre-treatment and post-treatment assessments. The RCT is considered the gold standard of research design, but is not required for most program evaluations. (A brief sketch of random assignment follows this list.)
  • Quasi-Experiment with Comparison Group: This design option is similar to the RCT design, except that the comparison group is not randomly assigned. Comparison groups are chosen so that participants are as similar as possible to those in the treatment service or system being evaluated.
  • Pre- and post-intervention comparison: This design assesses participants on the same variables and over the same time period, before and after they complete treatment. This design option is not as rigorous as the two previous designs, but it is often more realistic for organizations with limited evaluation experience and/or resources. In this design, the only participants who are measured or assessed are those who received the treatment. Although pre- and post-intervention designs are less scientifically rigorous, they can produce useful results for purposes of accountability and program improvement.
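
As an illustration only, here is a minimal sketch in Python of the random assignment step that distinguishes an RCT: each (hypothetical) participant has an equal chance of ending up in the treatment group or the control group.

import random

# Hypothetical participant roster.
participants = [f"patient_{n:03d}" for n in range(1, 61)]

random.seed(7)  # fixed seed so the assignment can be reproduced
random.shuffle(participants)
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # receives the intervention
control_group = participants[midpoint:]    # does not receive the intervention

print(len(treatment_group), "assigned to treatment;", len(control_group), "assigned to control")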

What are the key features of Research Design?

  • Measures or observations can refer to a single measure (e.g., a measure of body weight), a single tool with multiple items (e.g., a 10-item self-esteem scale), a complex multipart tool (e.g., a survey), or a whole battery of tests or measures given out on one occasion.
  • Treatments or programs can refer to a simple intervention (e.g., a one-time treatment) or to a complex program (e.g., an employment training program).
  • Sampling is the process of selecting units (e.g., people) from a population of interest so that by studying the sample one can fairly generalize the results back to the population from which they were chosen.

What is Sampling?

Sampling is the process of selecting units (e.g., people) from a population of interest so that by studying the sample, one can fairly generalize the results back to the population from which they were chosen.

What kind of Sampling Frame can I select?

Projects can employ non-probability or probability sampling. The difference between the two is that probability sampling involves some type of random selection, while non-probability sampling does not.

Probability sampling techniques:

  • Random sampling occurs when every member of the population has an equal chance of being selected for the evaluation group. The key to random selection is that there is no bias involved in choosing participants; participants are chosen completely at random.
  • A stratified sample is a mini-reproduction of the population. Before sampling, the population is divided into groups based on characteristics of importance for the research (e.g., gender, social class, education level, or religion). Then the population is randomly sampled within each category or level; for example, if 38% of the population is college-educated, then 38% of the sample is randomly selected from the college-educated population.
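
As an illustration only, the minimal sketch below (in Python, with a made-up population) mirrors the 38% example: the population is divided by education level, and each stratum is randomly sampled in proportion to its share of the population.

import random

# Hypothetical population of 1,000 people, 38% of whom are college-educated.
population = (
    [(f"person_{n}", "college") for n in range(380)]
    + [(f"person_{n}", "no_college") for n in range(380, 1000)]
)

random.seed(11)
sample_size = 100
sample = []
for level in ("college", "no_college"):
    stratum = [p for p in population if p[1] == level]
    share = len(stratum) / len(population)  # stratum's share of the population
    sample += random.sample(stratum, k=round(sample_size * share))

college_in_sample = sum(1 for _, level in sample if level == "college")
print(f"{len(sample)} people sampled; {college_in_sample} are college-educated")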

Non-probability sampling techniques:

  • Convenience sampling occurs when evaluation subjects are selected because they are easiest to access, not because they are representative of the entire population. For example, an evaluator might interview all the nurses on one shift or in one department because they are easy to reach for the study.
  • Purposive samples are non-representative subsets of some larger population that serve a very specific need or purpose. For example, an evaluation might deliberately target a specific group, such as high-level hospital administrators.

What is qualitative research?

Qualitative research refers to investigation methods that use non-numerical data to tell the stories behind the numbers. For example, a researcher might know from a survey that 60% of patients return for follow-up care, but using qualitative methods like focus groups allows the researcher to examine why patients returned, and perhaps why they did not.

The aim of qualitative research is to gather an in-depth understanding of behavior and the reasons or rationale that govern this behavior. Qualitative methods investigate the why and the how of decision making, not just the what, the where, or the when. Typically, smaller, focused samples are used in qualitative research, rather than the large samples seen in quantitative research.

How are qualitative data collected?

Qualitative data can be collected in several ways. Typical methods include in-depth or semi-structured interviews, focus groups, or observational techniques.

  • In-depth interviewing involves conducting intensive individual interviews with a small number of people to explore their perspectives on a particular idea, program, or situation. For example, one could ask clients or staff who attended a program about their experiences and expectations related to the program or their thoughts about program operations, processes, and outcomes.
  • The focus group is a form of group interview and can be used to guide, focus, and inform planning and implementation of program activities. Most groups contain six to eight people, are audio-recorded, have a flexible question guide, are led by an experienced moderator, and are held in a location that is convenient to participants. Focus groups can be especially useful for pilot-testing intervention strategies, building organizational capacity, or generating best practices among key stakeholders.
  • Observational techniques are typically an immersive experience in which the evaluator observes and documents key program activities (e.g., staff trainings or meetings), either as an outside observer or as a participant observer. This method is useful for assessing the physical layout, visual atmosphere, and overall feel of a program site.

How are qualitative data analyzed?

Typically qualitative data are analyzed in three phases: organize, reduce, and describe.

  • Organize refers to activities such as creating transcripts and reading the data.
  • Reduce refers to identifying themes to classify the data or exploring relationships between the classifications of data.
  • Describe refers to activities such as writing a final report filled with rich descriptions in the words of the participants.

What is quantitative research?

Quantitative research uses numerical data (from surveys, administrative data, etc.) that can be analyzed using statistics. Findings can be generalized to the population that is being examined. Most outcome evaluations use quantitative data (and may use qualitative data as well).

How are quantitative data collected?

Quantitative data can come from a variety of different sources, including surveys, data from tests (e.g., tests of knowledge, Body Mass Index), and internal documents (including client databases, financial records, administrative records, and medical records). Sometimes, qualitative data are described in a quantitative way (for example, counting the number of times that interviewees mentioned health care cost as a barrier to care). Once the qualitative data are converted into numbers, they can be analyzed numerically.
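
As an illustration only, the minimal sketch below (in Python, with invented interview excerpts) shows one way to convert qualitative data into a number: counting how many interviewees mentioned cost as a barrier to care.

# Hypothetical interview excerpts, keyed by interview ID.
transcripts = {
    "interview_01": "Getting time off work is hard, and the cost of each visit adds up.",
    "interview_02": "The clinic is close to home, so getting there is not a problem.",
    "interview_03": "I skipped a follow-up because I could not afford the copay.",
}

cost_terms = ("cost", "afford", "copay", "expensive")  # simple keyword list
mentions = sum(
    1 for text in transcripts.values()
    if any(term in text.lower() for term in cost_terms)
)
print(f"{mentions} of {len(transcripts)} interviewees mentioned cost as a barrier")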

How are quantitative data analyzed?

The goal of analyzing quantitative data is to summarize and distill the data into something comprehensible. Analysis of quantitative data can be simple (e.g., averages, totals), or it can involve complicated statistics. How the data are analyzed will depend on the program’s evaluation goals.

There are entire books written on carrying out quantitative analyses. Most organizations can carry out a simple quantitative research project without outside help. If a more complex project is planned, it can be a good idea to engage the expertise of an evaluator, statistician, or other professional (see Internal Versus External Evaluator).

Internal Versus External Evaluator

There are many considerations to take into account when deciding whether to conduct an in-house evaluation or to seek out outside help. If an evaluation requires independence and objectivity or is highly technical, or if staff time and resources are limited, an external evaluator might be the better choice. Alternatively, if a program has little money to spend on evaluation and has internal expertise, an internal evaluation might be preferable. Hybrid models (internal and external team members) can combine the best of both worlds. Read more on deciding between an internal and an external evaluation.

For information on locating an external evaluator, see: Finding and Choosing an External Evaluator.

Web resources for external evaluators/expertise:

American Evaluation Association (AEA). Under Find an Evaluator (on the top navigation bar): Search Evaluator Listing searches for an external evaluator by keyword or state, while Browse All Listings produces a list of all evaluators in the database.

New York University Wagner Capstone Program. Organizations can apply for assistance from graduate students to carry out planning and evaluation activities, at a small cost to the organization. Projects must fit within an academic year.

Qualitative Research Consultants Association (QRCA). Under Find a Researcher (on the top navigation bar): search by geographic location, language, specialty, industry, and more.

Other potential sources for external evaluators/expertise:

Local universities/colleges: Universities and colleges often have staff and graduate students who carry out evaluation activities. Evaluation expertise is sometimes found in departments of social work or psychology, or in university medical or research centers.

Professional evaluators/evaluation firms: There are many professional evaluators and evaluation firms, both nonprofit and for-profit, which can be found by asking for recommendations from funders or other agencies, or through Web searches.

Please note, this is not a comprehensive list, and inclusion on this list does not suggest a recommendation or endorsement by the New York Health Foundation, or its staff or consultants.

Key criteria for selecting an external evaluation consultant:

  • Formal academic preparation/specialized training
  • Evaluation expertise and experience
  • Evaluation philosophy or approach
  • Topical or substantive experience/knowledge: are they familiar with the program area?
  • Capacity: Do they have sufficient staff and time to do the work?
  • Track record: Check references and ask to see samples of work products
  • Geography: Will a consultant 1,000 miles away be as responsive as a local one?
  • Personal style: Can you work with this evaluator/group?
  • Cost: Is the cost reasonable and/or competitive? Is it affordable?

Policy or Advocacy Evaluation

Alliance for Justice, advocacy evaluation tool, https://allianceforsafetyandjustice.org/. This tool is designed to help organizations identify and describe their specific advocacy achievements, both for pre-grant and post-grant information.

Coffman, Julia. “A User’s Guide to Advocacy Evaluation Planning,” Harvard Family Research Project, 2009. Available at http://hfrp.org/evaluation/publications-resources/a-user-s-guide-to-advocacy-evaluation-planning, accessed February 8, 2019.

Coffman, Julia. “Framing Paper: Current Advocacy Evaluation Practice,” presented at the National Convening on Advocacy and Policy Change Evaluation, 2009. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/overview_current_eval_practice.pdf, accessed February 8, 2019.

Guthrie, Kendall, Justin Louie, Tom David, and Catherine Crystal Foster. “The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach,” Blueprint Research & Design, prepared for The California Endowment, October 2005. Available at http://www.calendow.org/uploadedFiles/Publications/Evaluation/challenge_assessing_policy_advocacy.pdf, accessed February 8, 2019.

Innovation Network, Advocacy Evaluation Resource Center. Available at http://www.innonet.org/index.php?section_id=4&content_id=465, accessed February 8, 2019.

Raynor, Jared, Peter York, and Shao-Chee Sim. “What Makes an Effective Advocacy Organization? A Framework for Determining Advocacy Capacity,” TCC Group, prepared for The California Endowment, January 2009. Available at  https://nyhealthfoundation.org/wp-content/uploads/2019/02/What-Makes-an-Effective-Advocacy-Organization.pdf, accessed February 8, 2019.

Reisman, Jane, Anne Gienapp, and Sarah Stachowiak, “A Guide to Measuring Advocacy and Policy,” Organizational Research Services, prepared for the Annie E. Casey Foundation, 2007. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/aecf-aguidetomeasuringpolicyandadvocacy-2007.pdf, accessed February 8, 2019.

Logic Model

University of Wisconsin Extension, “Program Development and Evaluation.” Available at https://fyi.extension.wisc.edu/programdevelopment/logic-models/, accessed February 8, 2019. A variety of templates that can be downloaded.

W. K. Kellogg Foundation, “Logic Model Development Guide.” Available at https://wkkf.issuelab.org/resource/logic-model-development-guide.html, accessed February 8, 2019.

Stanford University, “Empowerment Evaluation: Principles in Practice.” Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/fetterman.pdf, accessed February 8, 2019. Collaborative, participatory, and empowerment evaluation document for planning, implementation, evaluation, and sustainability.

Process Evaluation

Rossi, Peter H., Mark W. Lipsey, and Howard E. Freeman. Evaluation: A Systematic Approach, Seventh Edition. Thousand Oaks, CA: Sage Publications, 2004. A comprehensive and standard evaluation textbook. Somewhat wordy but an easy read.

Weiss, Carol H. Evaluation: Methods for Studying Programs & Policies, Second Edition. New Jersey: Prentice Hall Inc., 1998. An excellent and very concise evaluation textbook covering the entire range of evaluation activities.

Wholey, Joseph, Harry Hatry, and Kathryn Newcomer. Handbook of Practical Program Evaluation, Third Edition. Jossey-Bass, 2010. A practical book covering a number of specific evaluation topics.

W. K. Kellogg Foundation. Evaluation Handbook. Available at https://www.wkkf.org/~/media/62EF77BD5792454B807085B1AD044FE7.ashx, accessed February 8, 2019. Can be downloaded as pdf document or ordered online. Free. Covers many evaluation topics, from evaluation planning through to utilizing evaluation results. Spanish version also available.

Outcome Evaluation

McNamara, Carter, “Basic Guide to Program Evaluation (Including Outcomes Evaluation),” Authenticity Consulting, LLC. Available at www.managementhelp.org/evaluatn/fnl_eval.htm, accessed February 8, 2019. Good, concise, and easy-to-understand primer on evaluation.

United Way of America, “Measuring Program Outcomes: A Practical Approach,” 1996. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/Meausing-Program-Outcomes_-A-Practical-Approach.pdf, accessed January 27, 2012. Good primer on outcome evaluation, with a wealth of tools.

Trochim, William M. “Research Methods Knowledge Base,” Web Center for Social Research Methods, Cornell University, 2006. Available at www.socialresearchmethods.net/kb, accessed February 8, 2019. Free online (hardcopy available for purchase). Searchable information on a number of research topics, including research design, sampling, measurement, data analysis, and reporting.

Weiss, Carol H. Evaluation: Methods for Studying Programs & Policies, Second Edition. New Jersey: Prentice Hall Inc., 1998. An excellent and very concise evaluation textbook covering the entire range of evaluation activities.

Measures

McNamara, Carter. “Basic Guide to Program Evaluation (Including Outcomes Evaluation),” Authenticity Consulting, LLC. Available at www.managementhelp.org/evaluatn/fnl_eval.htm, accessed January 27, 2012. Good, concise, and easy-to-understand primer on evaluation.

Trochim, William M. “Research Methods Knowledge Base,” Web Center for Social Research Methods, Cornell University, 2006. Available at http://www.socialresearchmethods.net/kb/measure.php, accessed January 27, 2012.

United Way of America. “Measuring Program Outcomes: A Practical Approach.” Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/Meausing-Program-Outcomes_-A-Practical-Approach.pdf, accessed February 8, 2019. Good primer on outcome evaluation, with a wealth of tools.

Weiss, Carol H. Evaluation: Methods for Studying Programs & Policies, Second Edition. New Jersey: Prentice Hall Inc., 1998. An excellent and very concise evaluation textbook covering the entire range of evaluation activities.

Research Design and Sampling

Columbia University Center for New Media Teaching and Learning, “Types of Sampling,” Quantitative Methods in Social Sciences. Available at http://ccnmtl.columbia.edu/projects/qmss/samples_and_sampling/types_of_sampling.html, accessed February 8, 2019.

New York University Blackboard Learning and Community, “What is Research Design?” Available at www.nyu.edu/classes/bkg/methods/005847ch1.pdf, accessed February 8, 2019.

Trochim, William M. “Research Methods Knowledge Base,” Web Center for Social Research Methods, Cornell University, 2006. Available at www.socialresearchmethods.net/kb/design.php, accessed February 8, 2019.

World Health Organization, “Outcome Evaluations,” 2000. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/7_outcome_evaluations.pdf, accessed February 8, 2019.

Qualitative Research

Boyce, Carolyn and Palena Neale. “Conducting In-Depth Interviews: A Guide for Designing and Conducting In-Depth Interviews for Evaluation Input,” Pathfinder International, May 2006. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/m_e_tool_series_indepth_interviews-1.pdf, accessed February 8, 2019.

Guion, Lisa, David Diehl, and Debra McDonald, “Conducting an In-Depth Interview,” University of Florida, available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/Conducting-An-In-Depth-Interview.pdf, accessed February 8, 2019.

Miles, Matthew and A. Huberman. Qualitative Data Analysis: An Expanded Sourcebook, Second Edition. Thousand Oaks, CA: Sage Publications, 1994.

Minichiello, Victor, Rosalie Aroni, and Terrence Neville Hays. In-Depth Interviewing: Principles, Techniques, Analysis. Sydney, Australia: Pearson Education, 2008.

Patton, Michael Quinn. Qualitative Research and Evaluation Methods, Third Edition. Thousand Oaks, CA: Sage Publications, 2002.

Wolcott, Harry F. Writing Up Qualitative Research, Third Edition. Thousand Oaks, CA: Sage Publications, 2008.

Quantitative Research

McCaston, M. Katherine. “Tips for Collecting, Reviewing, and Analyzing Secondary Data,” CARE, June 2005. Available at https://nyhealthfoundation.org/wp-content/uploads/2019/02/Tips_for_Collecting_Reviewing_and_Analyz.pdf, accessed February 8, 2019.

Trochim, William M. “Research Methods Knowledge Base, Second Edition,” Web Center for Social Research Methods, October 2006. Available at www.socialresearchmethods.net/kb, accessed February 8, 2019. Free online (hardcopy available for purchase). Searchable information on a number of research topics, including: research design, sampling, measurement, data analysis, and reporting.

Weiss, Carol H. Evaluation: Methods for Studying Programs & Policies, Second Edition. New Jersey: Prentice Hall Inc., 1998. An excellent and very concise evaluation textbook covering the entire range of evaluation activities.