
Alabama Integrated Pest Management (IPM) Center

Evaluation Training Step 2

Myth: Evaluations are unplanned, informal, and irrelevant.

Fact: Extension evaluations are PLANNED and COORDINATED activities that use time, human, and/or capital resources to provide high-quality feedback. Evaluation is "like a good story, it needs some (qualitative) evidence to put the lesson in context, and some (quantitative) facts and figures that reinforce the message" (Allen 2009). According to the University of Wisconsin Program Development and Evaluation team, every evaluation must have: 1.) utility - satisfy the information needs of specific users; 2.) feasibility - take a realistic approach to assessment; 3.) propriety - surveys should be ethical and legal and should respect the privacy of others; and 4.) accuracy - evaluations should provide reliable information that can be archived and used in the future (Taylor-Powell and Henert, 2008; Milstein and Wetterhall, 1999).

Myth: One evaluation technique fits all requirements for collecting useful feedback.

Fact: The choice of evaluation technique depends on the educational program, the resources available, and the purpose of the evaluation. Knowing the purpose of the evaluation is critical to simplifying the assessment process. Ideally, educators should develop an evaluation plan based on specific needs, i.e., result monitoring (achievement of goals) and process monitoring (how efficiently goals are being met). The plan should then be populated with techniques based on the type of assessment to be completed (Allen 2009).

Myth: There is no one who can help me design and implement simple survey instruments.

Fact: ACES has plenty of resources at hand to help field agents and specialists develop effective surveys. This website is loaded with useful experience-based information that is easy to understand and quick to adopt. For assistance with additional evaluation issues, please contact Dr. Ayanava Majumdar by email or telephone.

Techniques for collecting information:

Below is a general list of evaluation tools commonly used in Extension programs (Taylor-Powell, 2002; McNamara, 2006). This list may change as new techniques become available. Please review the information on the fundamentals of evaluation Web page to correctly understand the description of each technique, and use this chart to select evaluation techniques that suit your program, audience, and resources.

Technique • Description • When to use the technique

Survey
  • Useful for collecting standardized information at low cost.
  • Can be conducted electronically (onsite using clickers, offsite using Websites), in person, or by mail.
  • Surveys can be done quickly in groups if desired.
  • Longer surveys can take the form of formal interviews.
  1. Needs assessment*
  2. Process evaluations

Excellent tool for group evaluations and performance indicator surveys.

Observation
  • Behavioral changes due to technology training are recorded by direct visual observation on the farm or in classrooms.
  • Observations may be systematized or recorded randomly.
  • A time-consuming technique when conducted on the farm, but less so in classroom settings.
  • The cost of assessment can be high in some cases.
  1. Process evaluations
  2. Outcome evaluations*

Excellent tool for individual and small group assessments.

Case Study
  • In-depth evaluation of certain individuals, best done in the client's natural environment.
  • Can be highly resource intensive, so plan ahead when developing grants.
  • Can be done as a performance indicator during program implementation to check who is benefiting and who is being left out.
  • Excellent tool for impact evaluation with a limited sample size.
  1. Outcome evaluations
  2. Impact evaluations*

Excellent tool for individual assessments.

Interview
  • A very popular assessment technique for Extension needs and impact assessments in developing nations.
  • Information can be collected during conversations (unstructured interview), or a survey instrument may be used (structured interview).
  • Interviews can be part of case studies.
  1. Needs assessments
  2. Impact evaluations*
Group Assessment
  • The nominal group technique involves brainstorming possible solutions to a specific problem, listing all solutions (generally on paper or dry-erase boards), and then eliminating duplicate responses. The audience then ranks the listed items and conclusions are drawn. A moderator is needed to guide participants.
  • Focus groups can be used to measure program impacts, assisted by a moderator designated from within the client group (to reduce bias).
  • Many other techniques are available, but their application is limited.
  1. Needs assessment
  2. Impact assessment for Advisory Panels*
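For readers who tally nominal group results electronically, the pool-deduplicate-rank workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the source material: it assumes a Borda-style scoring rule (first place earns the most points), and all idea names are hypothetical.

```python
from collections import Counter

def nominal_group_tally(idea_lists, rankings):
    """Pool brainstormed ideas, drop duplicates, and aggregate
    participant rankings with a Borda-style count."""
    # Step 1: pool ideas from all participants, removing duplicates
    # (case-insensitive, keeping the first spelling seen)
    ideas = []
    for lst in idea_lists:
        for idea in lst:
            if idea.lower() not in (i.lower() for i in ideas):
                ideas.append(idea)
    # Step 2: score each participant's ranking
    # (first place = len(ranking) points, last place = 1 point)
    scores = Counter()
    for ranking in rankings:
        for place, idea in enumerate(ranking):
            scores[idea] += len(ranking) - place
    # Step 3: return the deduplicated list and ideas ordered by score
    return ideas, scores.most_common()

# Hypothetical session: two participants brainstorm pest-control ideas,
# then each ranks their preferred solutions
pooled, ranked = nominal_group_tally(
    [["crop rotation", "pheromone traps"], ["Crop rotation", "row covers"]],
    [["crop rotation", "pheromone traps"], ["crop rotation", "row covers"]],
)
```

In this sketch the moderator's duplicate-elimination step becomes the case-insensitive pooling loop, and drawing conclusions corresponds to reading the top of the sorted tally.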
Expert Review
  • Assessment is done by a panel of experts or consultants.
  • Advisory Panels can also be effective in providing review of program effectiveness.
  1. Useful for performance indicator surveys*
Testimonial
  • These are statements by individuals that document primary reactions and changes in awareness among clientele.
  • Can be part of interviews and surveys.
  1. Outcome evaluations
  2. Impact assessments*

* recommended technique

Click here for information about how to select an appropriate evaluation technique (STEP 3).

Other sources of information:

References cited:

Click here to go back to Main Page.

For feedback on this page, please email azm0024@auburn.edu.

