ON THE AGENDA | SEPTEMBER 27TH, 2016 | MATT LEIGHNINGER and Tina Nabatchi

Evaluating Participation

Evaluation does not have to be overwhelming or insurmountable. With a little bit of advance planning, evaluation can produce useful information, and can even be fun.

Key Talents for Better Public Participation, Part 15

Evaluation – the process of collecting, analyzing, and using information to understand how a program operates and/or the outcomes and impacts it has – is important for many reasons.

First, evaluation can help improve program implementation and management, for example by identifying what works, what does not, and where improvements can be made.

Second, evaluation can help verify and strengthen accountability structures, for example by keeping the program within the scope of a project or decision statement, ensuring that money and resources are used appropriately and efficiently, and monitoring quality control.

Within the context of public participation, evaluation may be necessary to help determine whether the participation opportunity is complying with relevant laws, rules and mandates, and whether it is adhering to and upholding objectives and values such as diverse representation, fairness and participant understanding about how their contributions will be used. In turn, this can increase the perceived legitimacy and importance of public participation.

Finally, more and better evaluation can improve the study and practice of public participation. It can even challenge the notion that official, conventional participation processes are static, predetermined and impossible to change. If you ask a participant to rate the school board meeting or public hearing she attended, it may plant the seed in her mind that the process is not immutable and can in fact be improved. While critical, evaluating public participation can be challenging:

  1. Public participation is inherently complex and value-laden. There are no widely held criteria for judging its success and failure, and evaluating across all possible areas of interest is impractical.
  2. Evaluation results are likely to be important and of interest to a number of audiences, but various audiences may value different criteria and information.
  3. Evaluation can be a daunting task. The technical issues involved can be intimidating, as can the prospect of assessing one’s peers, colleagues and one’s own professional work.
  4. Time, money, personnel and other valuable resources are often in short supply.

Despite these challenges, evaluation does not have to be an overwhelming or insurmountable task. With a little bit of advance thinking and planning, evaluation can produce useful information, and can even be fun. The basic steps of evaluation are listed below.

Basic Steps of Evaluation


1) Pre-Design Planning and Preparation

  • Determine evaluation goals and objectives
  • Decide issues of timing and expense
  • Identify audience(s) for the evaluation
  • Select evaluator(s)

2) Evaluation Design

  • Determine the evaluation focus
  • Develop research questions and indicators/measures
  • Decide how to collect data

3) Evaluation Implementation

  • Collect and store high-quality data

4) Data Analysis and Interpretation

  • Analyze data and interpret results

5) Writing and Distributing Results

  • Decide how to best communicate the results
  • Prepare and disseminate materials about results

We cannot do each of these steps justice in a short section of one module – indeed, entire books have been written about evaluation. Here, we simply wish to provide some basic information to help participation leaders make wise choices about evaluation. In particular, we focus on two skills: understanding process and impact evaluation, and data collection.

Understanding Process and Impact Evaluation

Participation leaders should consider both process evaluation and impact evaluation.

In general, the purpose of a process evaluation is to better understand program implementation and management and to determine whether the program is operating as intended.

Process evaluations focus on the “what” questions: What was the program intended to be and do? What has it done in reality? What are the staff and management issues involved in the program?

Process evaluations can be particularly helpful for program managers. For example, a process evaluation can help determine what management practices did and did not work; whether a program is being managed well and efficiently; whether staff are appropriately trained and have developed effective working relationships; and areas for administrative development and improvement.

The general purpose of an impact evaluation is to assess whether a program achieved its goals and produced its intended effects.

Impact evaluations focus on the “so what” questions: What are the impacts on participants, communities, and public policy or public action? Do the impacts vary among groups? Is the program worth the costs?

Impact evaluations are useful not just for program managers, but can also produce useful information for public officials, civic leaders, funders, scholars, practitioners, participants and the general public. For example, an impact evaluation can help determine the outcomes of a program; whether the program generated any positive, negative or unintended consequences or effects; and whether the program is effective in comparison to alternatives.

Given this information, impact evaluations can help prioritize actions and inform decisions about program modification, expansion, replication or elimination.

Data Collection

Once participation leaders decide which type of evaluation to conduct, they must figure out how to collect data. Data collection techniques can be used individually or in combination, but decisions on data collection should be made in light of considerations about budget and time constraints and privacy and confidentiality issues.

Three sources of data are likely to be most useful:

Observational Data: Information collected by direct observation of public participation. This method is straightforward and comparatively simple, but it can be time-consuming and produce data that is challenging to analyze and verify.

Archival Data: Existing information about public participation, including meeting minutes, forms and records, program documentation and guides, etc. This method is comparatively fast, easy and inexpensive. However, archival data can vary considerably in completeness, quality and content.

Data from Program Personnel, Participants and Stakeholders: Information can be collected through interviews, surveys and similar techniques. However, close attention must be given to the design of such instruments to ensure high quality and analyzable data.
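For participation leaders who gather survey data electronically, even a small script can speed up tallying. The sketch below (in Python, using hypothetical responses to a 1–5 rating question) shows one simple way to summarize participant feedback: a distribution of ratings, the average, and the share of favorable responses.

```python
from collections import Counter

# Hypothetical 1-5 ratings for a question such as
# "How fairly was the meeting run?"
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)                      # distribution of ratings
average = sum(responses) / len(responses)        # mean rating
# share of participants rating 4 or 5 ("top box")
top_box = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Distribution: {dict(sorted(counts.items()))}")
print(f"Average rating: {average:.1f}")
print(f"Share rating 4+: {top_box:.0%}")
```

This kind of summary is no substitute for careful instrument design, but it lets small teams check response patterns quickly and consistently across events.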

Numerous resources exist to help with the evaluation of participation. Here are a few to check out:

Public Pathways: A Guide to Online Engagement Tools for Local Governments

A Manager’s Guide to Assessing the Impact of Government Social Media Interactions

A Manager’s Guide to Evaluating Citizen Participation

Making a Difference: A Guide to Evaluating Public Participation in Central Government

Evaluating Deliberative Public Events and Projects

National Collaborating Centre for Healthy Public Policy

The Pell Institute Evaluation Toolkit

Participedia

National Coalition for Dialogue and Deliberation assessment tools

Everyday Democracy

Next week, we'll finish out the series with more resources to help you in your public participation effort!


Read other blogs in this series:

Part 1: Ten Key Talents for Better Public Participation

Part 2: Building Coalitions and Networks

Part 3: Cultural Competence and Engaging Youth

Part 4: Recruiting Participants

Part 5: Communicating About Participation

Part 6: Managing Conflict

Part 7: Providing Information and Options: Issue Framing

Part 8: Providing Information and Options: Sequencing Discussions and Writing Discussion Materials

Part 9: Managing Discussions, Blog 1 of 3: Facilitating Face-to-Face Groups

Part 10: Managing Discussions, Blog 2 of 3: Recording and Online Moderation

Part 11: Managing Discussions, Blog 3 of 3: Ground Rules and Feedback

Part 12: Helping Participants Generate and Evaluate Ideas

Part 13: Helping Participants Make Group Decisions

Part 14: Supporting Action Efforts

Portions of this post were excerpted with permission of the publisher, John Wiley & Sons, Inc., from Public Participation for 21st Century Democracy by Tina Nabatchi and Matt Leighninger. Copyright © 2015 by John Wiley & Sons, Inc. All rights reserved. This book is available at all bookstores and online booksellers.

