Alabama Integrated Pest Management (IPM) Center

Evaluation Types and Techniques

Disclaimer: The description of evaluation principles and applications has been modified to suit the functional interests of the ACES Programmatic Areas.

Impact assessment in progress in Baldwin County - 2010

Evaluation Training Step 1

In 1994, Donald Kirkpatrick proposed the four-level Learning Evaluation Model (LEM) in his book Evaluating Training Programs (Berrett-Koehler), which summarized many of his earlier publications. ACES agricultural outreach teams use the LEM and the Transformational Educational Model (TEM) with rigorous documentation of outcomes and impacts.

Basic steps in Extension evaluation:

  1. Reactions (satisfaction from services and products):
    • This is the most basic form of evaluation and should be conducted at every Extension event because it gauges the audience's satisfaction level immediately after an educational event.
    • These surveys are inexpensive and quick to develop and administer.
    • Survey instruments for documenting reactions are simple and short, unless they are combined with one or more of the higher assessment levels. Typically, reactions are gathered via rapid surveys with multiple-choice questions instead of long, probing questionnaires.
    • Survey instruments can be paper based or electronic (clicker technology), and group assessments are fine.
    • If you have difficulty designing satisfaction-rating surveys, please contact your program Team Leader immediately.
  2. Learning (change in knowledge, attitudes, and skills):
    • Evaluations at this level assess simple changes in audience attitudes, knowledge, and skills that are elements of learning. A survey of learning is a critical component of the overall impact assessment for an outreach program because it can indicate the desire to change and the need to adopt new technologies.
    • In its simplest form, pre- and post-test surveys can be conducted at educational meetings to assess the gain in new knowledge among the audience. A combination of open-ended and multiple-choice questions can be used in the survey instrument. Interviews can also be conducted to assess learning.
    • The survey instrument can combine paper-based (open-ended) and electronic techniques (multiple-choice questions).
    • Change in capability and skills is more difficult to document but can be done with carefully designed "hands-on" type instruments.
    • If you need assistance designing survey instruments for the assessment of learning, contact your program Team Leader immediately.
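A pre-/post-test knowledge gain can be summarized with a simple score comparison. The sketch below is illustrative only: the question keys, answer key, and responses are hypothetical, not from an actual ACES survey instrument.

```python
# Hypothetical sketch: scoring paired pre- and post-test surveys to
# estimate average knowledge gain at an educational meeting.
# Question IDs (q1..q3) and responses are illustrative placeholders.

def percent_correct(responses, answer_key):
    """Score one participant: percent of questions answered correctly."""
    correct = sum(1 for q, a in answer_key.items() if responses.get(q) == a)
    return 100.0 * correct / len(answer_key)

def mean_gain(pre, post, answer_key):
    """Average point gain (post minus pre) across paired participants."""
    gains = [percent_correct(po, answer_key) - percent_correct(pr, answer_key)
             for pr, po in zip(pre, post)]
    return sum(gains) / len(gains)

answer_key = {"q1": "b", "q2": "a", "q3": "c"}
pre_tests = [{"q1": "b", "q2": "c", "q3": "a"},   # 1 of 3 correct
             {"q1": "a", "q2": "a", "q3": "c"}]   # 2 of 3 correct
post_tests = [{"q1": "b", "q2": "a", "q3": "c"},  # all correct
              {"q1": "b", "q2": "a", "q3": "c"}]  # all correct

print(mean_gain(pre_tests, post_tests, answer_key))
```

With these placeholder responses, the two participants gain roughly 67 and 33 percentage points, for an average gain of 50 points.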
  3. Transfer (technology adoption):
    • This is a high level of evaluation that is more rigorous and time consuming to conduct compared to reaction and learning assessments.
    • A technology transfer survey evaluates the application or adoption of new scientific advancements. The focus is on the implementation of improved practices as an indicator of learning (outcome).
    • Transfer surveys may require assistance and cooperation from colleagues because these surveys are detailed and time-consuming. It is wise to secure specific funds from outreach grants for these high-level technology transfer surveys.
    • The survey instrument may consist of a combination of paper-based surveys, interviews, testimonials, observations, and problem analysis.
    • Feel free to consult colleagues when developing complicated surveys, and draft questions carefully. It may be worthwhile to conduct a few "preliminary surveys" with cooperative clients to improve the evaluation instrument before full execution.
  4. Results (impact assessment):
    • This is the highest level of evaluation: its goals are broad, and it is the most complicated and resource-intensive. Results or impact assessment is a critical component for documenting the overall success of an educational program.
    • There are two general ways results or impact evaluations can be conducted:
      • Process A. Impact statements can be compiled loosely from previous surveys of reaction, learning, and technology transfer (adoption). This process may be necessary for projects that had little or no outreach component when they were initiated. Projects with limited human and capital resources can also use past assessments to their advantage. Request expert advice on how to extract results or impacts from Levels 1, 2, and 3 evaluations.
      • Process B. Ideally, impact statements should be compiled from direct observation or assessment of clientele via various forms of communication. A study of Advisory Panel members can also be a cost-effective technique for estimating impacts. Survey instruments are carefully developed to include various levels of assessment that focus on the social, economic, and environmental benefits of a transformational educational program. The audience should be given an appropriate amount of time to respond to the questionnaire, and it may be necessary to administer the evaluation instrument on-site to motivate responses.
    • Impact assessments can be conducted via interviews, direct farm visits, observation of adoption behavior, comparison of technology adopters versus nonadopters (or early versus late adopters), study of farm logs and journals, photographs, etc. Remember, impact assessment may require additional funds for designing the system and for the needed travel.
    • If you need assistance in selecting impact assessment techniques applicable to your program, please contact your Team Leader immediately.
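One of the comparisons mentioned above, adopters versus nonadopters, can be sketched as a simple difference in adoption rates. The farm records below are hypothetical placeholders, and a rate difference alone does not establish that the program caused the change.

```python
# Hypothetical sketch: comparing adoption rates between program
# participants and non-participants as a rough impact signal.
# Farm names and adoption flags are illustrative placeholders.

def adoption_rate(farms):
    """Fraction of farms that adopted the recommended practice."""
    adopted = sum(1 for f in farms if f["adopted"])
    return adopted / len(farms)

participants = [{"farm": "A", "adopted": True},
                {"farm": "B", "adopted": True},
                {"farm": "C", "adopted": False}]
nonparticipants = [{"farm": "D", "adopted": False},
                   {"farm": "E", "adopted": True},
                   {"farm": "F", "adopted": False},
                   {"farm": "G", "adopted": False}]

# The rate difference is one simple indicator; "background noise"
# (weather, prices, other programs) can also drive adoption.
difference = adoption_rate(participants) - adoption_rate(nonparticipants)
print(round(difference, 2))
```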

Types of evaluation:

(Based on the Logic Model for Program Design and Implementation)

  1. Needs assessment: This type of assessment is the starting point toward the development of need-based outreach programs. Types of needs that can be documented via surveys include felt needs (audience demand for change) and unfelt needs (needs the audience is not yet aware of). Unfelt needs can be converted into felt needs through education that raises awareness of the underlying problem.

    Usefulness of needs assessment: Results of needs assessment may also be used to justify continuation of active educational programs and in establishing program priorities.

    When to conduct these surveys: At the beginning of a program, during the program, and at the end of an old program (to set future directions).

    Questions to ask: What are the characteristics of the audience? What are the needs of the audience? Where does the audience find information (i.e., information-searching behavior and patterns)? What are the best educational approaches for a specific audience? What are the barriers to technology adoption?

    Tips for conducting needs assessment surveys: These surveys can be conducted with general audiences (random samples) or with Advisory Panel members (nonrandom sample). Paper-based questionnaires are excellent for deep, probing surveys. Make sure you give your audience adequate time to respond to open-ended questions. Include the needs survey in the event agenda to motivate participation from the audience. Take plenty of printed surveys, pens, and pencils if you want your audience to provide written answers. Electronic surveys using clickers can also be conducted for short questionnaires (response time can be adjusted easily).

  2. Process evaluation: Using process evaluation surveys, one can determine the satisfaction rating of clientele, the quality of services provided, the beneficiaries of an educational effort, and barriers to technology adoption. This is the most basic form of evaluation and can be used to support all higher levels of evaluation. All Extension personnel are urged to conduct event assessment surveys as a tool for documenting the quality of an outreach event, its target audience, and learning processes.

    Usefulness of process evaluation survey: This type of evaluation survey is especially useful for documenting success in program implementation and for determining the future needs of a diverse clientele.

    When to conduct these surveys: At each regional or multi-county event, and during client visits to Extension offices for consultation.

    Questions to ask: What is your major occupation (target audience)? Were you satisfied with delivery methods? How would you rate the overall quality of presentations at this event? How would you rate the quality of Extension publications distributed today? Was there too much information? Will you recommend the educational training to your colleagues?
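Rating-style questions like those above are often summarized as simple averages per item. The sketch below is illustrative; the item names and the 5-point ratings are hypothetical, not from an actual event survey.

```python
# Hypothetical sketch: summarizing 5-point satisfaction ratings
# collected at an Extension event. Item names and scores are
# illustrative placeholders.

ratings = {
    "presentation_quality": [5, 4, 4, 3, 5],
    "publication_quality":  [4, 4, 5, 5],
}

def average_rating(scores):
    """Mean rating for one survey item, rounded to two decimals."""
    return round(sum(scores) / len(scores), 2)

summary = {item: average_rating(scores) for item, scores in ratings.items()}
for item, avg in summary.items():
    print(f"{item}: {avg} / 5")
```

Averages like these make it easy to compare satisfaction across events and to spot items (e.g., publication quality) that need attention.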

  3. Outcome evaluation: This step in evaluation assesses the relationship between the products and services (outputs) delivered and the extent of behavioral change that has occurred due to an educational program.

    Usefulness of outcome evaluation: The goal of these evaluations is to understand the overall effectiveness of program inputs and outputs in producing the desired changes in behavior. Thus, outcome evaluations are useful tools for documenting the extent of learning as a self-motivated process that can modify or increase existing knowledge, attitudes, and skills.

    When to conduct these surveys: At crop production meetings, small group meetings on farm (e.g., pre- and post-test surveys or learning assessment)

    Questions to ask and tips for surveyors: Document short-term changes in audience behavior by developing survey questions that include key words like "awareness," "knowledge," "opinion," and "motivation" within your survey instrument. You can document medium-term changes in audience behavior by using key words like "change in practices," "technology adoption decision," "actions taken," etc. Also document unintended outcomes. You may have to incorporate multiple tactics (see below) to conduct sophisticated surveys that can provide a reliable assessment of program impacts. For on-farm visits, do not forget to take a reliable digital camera that can record high-quality videos and still images of grower actions, interviews, and success stories.

  4. Impact evaluation: This is the final step in documenting program success. This type of evaluation is easier to conduct for long-term outreach programs but more difficult for short-term educational projects. Grant-funded programs, which are generally focused on resolving a specific issue or client need, are easier to evaluate than broad programs catering to a diverse range of audiences.

    Usefulness of impact evaluation: These evaluations can be used to develop success stories for programs based on significant environmental, economic and/or social improvements. Program managers for long-term educational initiatives should allocate substantial time, effort, and resources to impact evaluations that can benefit future initiatives.

    Questions to ask: Try to track economic, environmental, and/or societal changes occurring after complete execution of the program. This may require extensive traveling and interaction with clients at their locations. A variety of survey techniques may be needed for impact assessments. Try to identify and reduce "background noise" during evaluation in order to increase the reliability of findings. Focus on new products and services that could be causing a shift in practices. Sudden major changes in technology adoption or rejection should be recorded. Try to assess client motivation toward technology adoption in the absence of the educational program (e.g., compare adopters versus non-adopters, or early adopters versus late adopters).

    Use of "Performance Indicators" to measure progress: Performance indicators can be used to measure the progress of an educational program. Indicators are developed on the basis of program objectives and include specific measures such as participation rate, change in client satisfaction, decrease or increase in the usage of agricultural inputs, number of meetings or workshops, number of publications, etc. Many of the new Integrated Research and Extension grants require clear mention of indicators as systematized measures of progress/success. Multiple indicators are generally needed in an evaluation plan in order to demonstrate change in behavior as a result of program implementation. The major advantage of using indicators in an evaluation plan is that success can be measured continuously instead of waiting for year-end impact assessments. Testimonials collected via phone calls, workshops, and interviews can be part of a performance indicator. Disadvantages of indicators include:

    1. Educators may get distracted from the overall program goals.
    2. Indicators by themselves could lead to wrong conclusions.
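Continuous measurement against indicator targets can be sketched as a small progress report. The indicator names and target values below are hypothetical placeholders, not requirements from any actual grant.

```python
# Hypothetical sketch: checking performance indicators against yearly
# targets so progress is visible continuously, rather than waiting for
# a year-end impact assessment. Names and numbers are illustrative.

indicators = {
    "workshops_held":        {"target": 12,  "actual": 9},
    "participants_reached":  {"target": 400, "actual": 430},
    "publications_released": {"target": 6,   "actual": 4},
}

def progress_report(indicators):
    """Percent-of-target achieved for each indicator, to one decimal."""
    return {name: round(100.0 * v["actual"] / v["target"], 1)
            for name, v in indicators.items()}

report = progress_report(indicators)
for name, pct in report.items():
    print(f"{name}: {pct}% of target")
```

Using several indicators together, as the text advises, guards against any single measure (e.g., workshop count alone) leading to a wrong conclusion about program success.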


Go to information on Evaluation Techniques (STEP 2).

Go back to Evaluation Toolkit Main Page.

For feedback on this page, please email azm0024@auburn.edu.

