How can public engagement evolve in order to meet the needs and goals of citizens today? My previous post explored how public institutions may collaborate in their efforts to support engagement so that it becomes more efficient, systemic and sustained. For this final installment in the series, I’ll address the need for better ways to measure the perceptions, processes and outcomes of engagement, so that people know how to continually improve it.
Measuring engagement, especially in quantifiable ways, has always been difficult. There are a number of challenges, including:
- Difficulty in defining engagement. Many leaders understand engagement to mean the one-way dissemination of “correct” information to the community, in order to disprove “incorrect” information. Some see it as consisting solely of face-to-face meetings, while others focus mainly on online interactions.
- Differing forms of intensity. Engagement varies in intensity, from “thick” forms that are deliberative, labor-intensive and action-oriented, to “thin” forms that are fast, easy and potentially viral. Both are valuable, but for different reasons. Counting website hits or social media impressions may overemphasize the thin forms, while counting participation in meetings may overemphasize the thick forms.
- Just counting heads may give you the wrong impression. Counting participants in any setting may be deceptive because in places where conventional forms of engagement are the only ones being used, people tend to mostly engage when they are angry or fearful about decisions being made by government. In this sense, higher numbers of people “engaging” can be a sign that governments are failing to practice more proactive, productive forms of engagement.
- Inexperienced engagement staff. Counting staff positions dedicated to engagement as an indicator of a government’s commitment can be misleading: because engagement is often defined in limited ways, these “engagement” positions are frequently devoted to traditional PR or stakeholder relations. These jobs are often given by public officials to people who were particularly active campaign volunteers, but who have only a narrow and limited understanding of what engagement can do for governance and problem solving, and of the many forms it can take.
- Inability to measure impact. One of the most critical measures of engagement, especially to citizens, is whether public input has some kind of meaningful influence on public policies and practices. This is a particularly difficult thing to assess; it defies quantitative measurement and is subject to many different variables.
Despite these challenges, it is possible—and, in fact, critically important—to assess public engagement, including quantitative measures of both processes and outcomes. (Leighninger and Nabatchi, “How Can We Quantify Democracy?” Dispute Resolution, Fall 2015). Engagement practitioners have been able to measure how many and what kinds of people are participating. They’ve also been able to examine if people value the engagement, how the experience affects them, and whether engagement inspires and supports volunteerism, voting and other civic measures.
However, in most places, these kinds of measurement practices are carried out only sporadically and on a project-by-project basis. Leaders and practitioners are more likely to focus on the basics—how many people are participating, and the demographics of those participants—and have not begun assessing community members’ perceptions of engagement opportunities, or evaluating the impacts of engagement on volunteerism or policymaking. When measurement does occur, the findings are often not shared with the community, and community members are rarely asked to help gather, analyze or act on the data.
If we can measure engagement better and more regularly, we may be able to connect the findings with some of the high-level indicators being used to track community success. These include the Civic Index that the National Civic League has maintained for over 25 years, the Civic Health Index developed by the National Conference on Citizenship a decade ago, and the Soul of the Community research produced by the Knight Foundation. There are also specific community examples like the Wellbeing Index in Santa Monica, California. While these indexes are interesting and helpful for assessing where a community stands, it’s unclear whether and how a community’s engagement level affects the overall scores.
We probably need a family of measurement tools in order to bridge the gap between narrow evaluations and broad indicators. I’ve written about potential tools and have also been involved in creating others. One example is the Participatory Democracy Index, which the World Forum on Democracy in Europe is currently piloting. The more we can connect people who are building new tools, the more we can learn from one another and ensure that we are on the same page about fundamental questions, like how we define engagement. Public Agenda convened an online dialogue among people who are grappling with the measurement challenge, so that we could compare notes and see if there are common themes in our work. Later in the year, the Knight Foundation will release a white paper based on what we found.
By doing a better job of measuring engagement, we can help clear up some of the confusion about what engagement means and why it is important. Many public officials and other leaders use the rhetoric of community building, citizenship and democracy, but the language often serves mainly as window dressing, making it difficult for citizens to monitor progress or hold public officials accountable for their rhetoric. Finding new ways to measure these interactions can be a powerful way of making engagement more meaningful and productive.