Citizen science for social transformation: Measuring the impact of citizen science initiatives

ECS project
June 1, 2023

ABSTRACT

While impact assessment remains a complex issue, a great deal of experience and expertise has already been developed within the citizen science community of practice on how to approach and implement impact assessment in a holistic manner – taking into account the needs and expectations of citizen scientists, professional scientists, wider stakeholder communities, funding agencies, and policymakers. Progress has been made in elevating the profile of citizen-generated data for policymaking, while new impact metrics are being considered at all levels of the R&I system. Still, tensions arise when different impact logics meet. While these tensions must be addressed in contextually sensitive ways, ethical principles cannot be neglected simply to fulfil performance metrics – doing so might ultimately be harmful not just to engaged citizen communities, but to wider society. 

We explored this theme during the first thematic meeting of the ECS collaboration group, and we now share the different perspectives addressed on: 1. How to measure the positive influence on society and social groups? 2. How to convince public authorities that citizen science data are good for policy decisions? 3. How to initiate the potential for social impact? 4. How to use labels in CS in ethically appropriate ways? It is one step on a longer journey of continued reflection within the community.


How we worked

The first thematic session of the ECS Collaboration Group meeting was held on March 7, 2023, with a focus on the evaluation and impact assessment of citizen science initiatives. The session was prepared and hosted by Stefanie Schürz, Teresa Schäfer and Barbara Kieslinger (ZSI/ECS project), with support from Antonella Passani (IMPETUS), Usue Lorenz (YouCount) and Stephen Parkinson (MICS). 


In preparation for the session, the hosts sent out a grid to be filled in by the members of the group, outlining their most important impacts and the challenges they faced in achieving them. From these inputs, ZSI derived four recurring questions and prepared them for discussion in breakout rooms, each anchored by an interactive Jamboard. The main points of these conversations are summarised below. In total, 29 initiatives and 35 colleagues took part in the exercise.


Figure 1: Jamboard created in Group 1


1. How to measure the positive influence on society and social groups?

In the first group, the conversation revolved mainly around the question of how to measure long-term impact within the timeframe of a limited, short-term project. While outcomes are still within the scope of such interventions, larger impacts on, for example, the environment, behaviour, or policymaking usually unfold over time and are sometimes hard to quantify. More abstract impacts in particular – such as raising awareness of an issue, science literacy, community building, or empowerment – are not easily measured in a standardised, quantifiable manner. Some impacts might also be small and hard to scale, like the direct impact on participating citizens, and it might not be clear how these impacts affect wider society. The group suggested looking beyond the scope of singular projects and towards the combined efforts of a plethora of thematically and methodologically related initiatives on various geographic scales – from the regional to the global. This might be a way both of continuing to measure impact beyond the runtime of a project and of amplifying the impact achievable by individual, time-limited projects. Finally, dissemination and exploitation were also mentioned as important pillars of longer-term transformation, which ties into the role of social media outreach and community building as well as the role of traditional media, which has shown increasing attention to citizen science.


Figure 2: Jamboard created in Group 2


2. How to convince public authorities that citizen science data are good for policy decisions?

In this group, many best practice examples were shared in which data produced by citizen science was either picked up by policy or had its value inscribed into it. For instance, advocacy efforts by ECSA and citizen science initiatives via success stories led to the explicit recognition of citizen science and crowdsourcing as sources of environmental monitoring data in the reworked UNECE Aarhus Convention, namely in the recommendations on electronic information tools. Similarly, the European Network of Environmental Protection Agencies set up an Interest Group on Citizen Science (IGCS) to improve the understanding and use of citizen science within environmental policy and governance. Examples from concrete projects collecting data for policy that were shared in the group focused on odour pollution, the use of natural resources, biodiversity monitoring, and other topics. The field of environmental monitoring thus offers rich evidence of the potential policy impact of citizen science, while also yielding a number of lessons to bear in mind going forward:

First of all, it is important to understand that policymakers need to be made aware of the existence and strengths of citizen-generated data. They need to be shown the added value of such data, and their buy-in must be ensured. At the same time, such data must be made accessible to decision makers, and it needs to be translated in such a way that it is useful to both science and policy. This includes not just a contextual understanding of the data, but also data and metadata standards that allow for its use by ministries, agencies, scientists, and statistical offices. To this end, understanding the data and its requirements in terms of quality and make-up is key. In each case, the question must be answered in practical terms of how data can be translated between various target groups – citizen scientists, professional scientists, policymakers – and into legislation and policy action. In this context, participants also pointed to the importance of critically reflecting on how data and initiatives can be protected from being instrumentalised by any one political party. And, as always, the challenge of securing sufficient resources to create policy impact remains to be considered. 


Figure 3: Jamboard created in Group 3


3. How to initiate the potential for social impact?

The third group focused their discussion on two questions. First, the group discussed how to successfully deal with awareness raising, recruiting, motivation, behavioural change, and sustainability in order to initiate the potential for social impact. Regarding the recruitment process, participants touched on the difficulties brought on by the use of digital technologies (websites/apps), which raise many questions, from how to address the digital divide and budget for such tools to privacy concerns. A decentralised approach to recruitment via dissemination through various partner networks was named as beneficial, as was the use of different communication channels and strategies depending on the target group, but also on the country of recruitment (e.g., newspapers in Germany, TV in Italy, national radio in France). Schools can also be a target for CS, but specific educational materials are needed for this to be successful. The role of established networks, associations and other multiplier organisations was also stressed, although their role depends on whether citizens are already organised in the region and field of interest and, associated with this, on the already existing levels of mobilisation and motivation. It was also pointed out that it is always somewhat unpredictable how many citizen scientists a recruitment effort will mobilise. Organising regular feedback loops with consortium members about recruitment can be beneficial in adjusting a project’s approach. 

To keep citizen scientists motivated and achieve sustainable change, it can also be beneficial to involve them in the dissemination of the project and its results. To keep all involved stakeholders engaged, it is also important to balance the opinions represented in a consortium, including potential differences between the opinions of lay citizens and scientists on certain controversial topics. In any case, being aware of and addressing cultural and subcultural differences – including between different countries – is key.

Regarding the question of how to approach different languages in transdisciplinary projects, the group was united in the opinion that native languages work best for successful communication. The citizen science initiatives represented in the group employed translations produced both within the consortium team and by technological means to reach the different stakeholder groups. Some colleagues could draw on good experiences with (private) Facebook groups in different languages, which helped new participants get support and connect with each other. However, an important caveat is that this is not possible through the app for security reasons. 


Figure 4: Jamboard created in Group 4


4. How to use labels in CS in ethically appropriate ways?

The fourth group focused on the question of ethical classification. As certain standards are needed for accurate and comparable measurements, it is important to take ethical considerations into account when choosing labels. This is especially true when marginalised and underserved groups are engaged, where certain labels may be dangerous or reinforce existing inequalities. In the discussion, one consideration brought up was the question of whether people want to identify with certain categories such as old, poor, or disabled, and in turn how to approach possible negative connotations of certain labels. As a solution, it was proposed to use language in appropriate and respectful ways while not disregarding social and material realities. Another potential approach is to let people label themselves, i.e., to let them self-identify. This, however, might be easier in interactive activities than in situations where specific underserved groups are sought out for inclusion. At this step of a process, it might be helpful to draw on specific organisations already serving such communities, without having to label participants beyond that. In this way, participants can take part as individuals rather than as representatives of certain societal groups. Furthermore, organisational data might be used instead of asking people to label themselves, so that people are not put into situations where they might feel pressured to out themselves. 

Finally, it is also important to ask for whom the monitoring and reporting in certain categories is done – usually for the funders rather than the participants. The pressure to produce Key Performance Indicators (KPIs) might not be compatible with ethical considerations, especially when it tries to quantify people’s lifeworlds without capturing their quality and complexity. KPIs also often come back to what funders expect in terms of outcomes, instead of focusing on the needs and expectations of the engaged communities. It is therefore important to keep some flexibility in the definition of KPIs and leave space for the co-creation and co-definition of project goals. At the same time, the complexity this entails may also be overwhelming, depending at least in part on the methodology employed. In turn, existing and established labels may also be helpful, depending at least in part on the intended audience of an intervention. In any case, it is paramount to respect the contextuality of a project and be clear about its scope and limits. Having more time to reflect on and adjust an approach is also always beneficial. 



Text and image credit: Stefanie Schürz

Zentrum für Soziale Innovation
