Methods and measures to evaluate the impact of participatory model building on public policymakers: a scoping review protocol

Introduction

Participatory model building

Participatory model building—including group model building and community-based system dynamics—is a set of methods that uses a series of scripted activities to introduce stakeholders to principles of systems thinking (eg, feedback loops, change over time) and to describe and explore the structure and function of a complex system.1–3 The scripted activities engage participants to define the variables of an endogenous system, posit causal relationships between variables and describe how these variables and relationships work in combination (eg, via feedback loops) to produce the behaviours that define the system.4 Broadly, these methods can help participants to view a system from a feedback perspective and to make their mental models (ie, a cognitive representation of a real system) explicit. Because participatory model building sessions typically engage diverse stakeholders with different expertise or experiences, the methods can also help broaden understanding of the system, identify commonalities or differences in how a system is understood, work towards common understanding of the system that reflects a shared dynamic hypothesis, and develop consensus about how to change or intervene on the system.5

Participation is a key component of group model building that dynamically shifts over time as participants engage in the modelling process. Participation by stakeholders can range from helping design workshops and problem definitions to participating in elicitation of models and related data collection activities, identifying actionable leverage points and using the generated model to advocate for policy change. At its core, participation in group model building reflects a structured process in which participants inform the development (and subsequent use) of a model that reflects a dynamic hypothesis for the problem at hand.1

Among the benefits of participatory model building is that many of the scripted activities produce discrete outputs or ‘artefacts’ that, in and of themselves, convey information about a system’s structure or function. For example, artefacts include lists of variables that may be important within the system (ie, from the ‘graphs over time’ activity),4 causal loop diagrams that describe how relationships between variables may work in combination to form feedback loops6 and lists of action ideas that describe how the system might be changed to achieve collective goals.

An additional potential benefit of participatory model building—but one that is not typically linked to a set of artefacts—is the impact on participants themselves.7 Through the facilitated activities that occur during the model building process, participants may experience changes across multiple domains, such as shifting from a linear to a feedback perspective on the problem, modifying their mental model of how a complex system operates,8 updating their knowledge of and attitudes about individual components of the system and gaining confidence in their ability to intervene on aspects of the system.9 10 Such changes can be important because they may lead to changes in participants’ decision-making behaviour within the system.

Reviews by Rouwette et al (2002)11 and Scott et al (2013, 2016)9 10 have summarised early efforts to evaluate participatory model building. These reviews, however, focus predominantly on evaluation of the application of group model building as a process rather than the impact on participants. More recently, Felmingham and colleagues (2023)12 conducted a review to understand how the ‘success’ of community-based system dynamics processes is conceptualised and measured in public health. None of these reviews, however, distinguishes between different types of participants (eg, researchers, community members, direct service providers, policymakers) when describing evaluation effects or measures and methods of evaluation.9–12

Policymakers and participatory model building

Public policymakers (ie, elected officials, leaders of government agencies) are an important and unique group to consider in participatory modelling. A range of interventions have sought to affect public policymakers’ knowledge, attitudes and mental models with the goal of increasing the likelihood that they make decisions which promote policies that reduce health inequity and improve population health.13–16 Public policymaker engagement in participatory model building is an approach that could help achieve this goal.

In 2015, Atkinson and colleagues17 published a systematic review of the uses and impacts of system dynamics in health policymaking. The review identified six relevant articles, none of which evaluated the effects of policymaker participation in these processes (nor the outcomes of these policy-focused model building activities more broadly). A 2021 study by Haynes and colleagues18 interviewed 18 public healthcare system policymakers in Australia to understand their experiences participating in a 5-year initiative focused on cultivating capacity for systems thinking. The interviews indicated that the policymakers felt that systems thinking and participatory model building had utility in policy contexts, but that issues related to political feasibility needed to be addressed more directly. The article was accompanied by a series of commentaries about engagement of policymakers in participatory model building processes.19–21 A common theme across these commentaries was the need for a stronger evidence base about the impact of participatory model building on policymakers.

To fill this gap, we developed a protocol for a scoping review aimed at systematically identifying and characterising the literature, and the gaps within it, on the impacts of participatory model building on policymakers, including the methods and measures used to evaluate these impacts to date. Given the adaptability of group model building approaches across fields, we will scope the evidence across multiple fields and policy areas. Our goal is to provide methodological guidance for researchers and practitioners who engage policymakers in participatory model building.

The scoping review will be conducted in accordance with guidelines from the Joanna Briggs Institute Manual for Evidence Synthesis22 and the methodological framework outlined by Khalil et al,23 which builds on early scoping review methods.24 25 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist for scoping reviews will also be used and reported.26 We will use the five-stage process of Khalil et al23 to execute the scoping review. We will not evaluate study quality, given the lack of consensus on quality assessment for systems mapping studies.

Stage 1: identifying objectives and research questions

The objectives of the review are to: (a) scope studies that have evaluated the impacts of facilitated model building processes on public policymakers who participated in these processes; and (b) describe methods and measures used to evaluate impacts and the main findings of these evaluations. To achieve these objectives, the scoping review seeks to answer the following research questions (RQs):

  • RQ1: To what extent have the impacts of participatory model building on public policymakers been evaluated? (Objective A)

  • RQ2: What methods have been used to evaluate these impacts? (Objective B)

  • RQ3: What measures, survey items and interview questions have been used to evaluate these impacts? (Objective B)

  • RQ4: What are the key findings from evaluations of the impact of participatory model building on policymakers? (Objective B)

Stage 2: identifying relevant studies

After iterative consultations with an academic social sciences librarian and a health sciences librarian, we selected seven electronic databases to be searched in the scoping review: MEDLINE (Ovid), ProQuest Health and Medical, Scopus, Web of Science, Embase (Ovid), CINAHL Complete and PsycINFO. The Joanna Briggs Institute’s Population, Concept, Context (PCC) framework was used to formulate the search strategy.22 ‘Population’ terms sought to identify studies that involved public policymakers, ‘Concept’ terms sought to identify studies that used empirical methods to evaluate impacts and ‘Context’ terms sought to identify studies reporting on participatory model building processes, building on the terms used in a companion review.27

We first identified a set of candidate keywords from relevant papers known to the research team (eg, terms related to the evaluation of initiatives that involve public policymakers13 28; terms related to the evaluation of participatory model building9–11) and through multiple consultations with an academic librarian. Search terms and syntax were iteratively revised as two members of the project team reviewed the abstracts of articles retrieved from trial searches to improve sensitivity and specificity. Table 1 shows the MEDLINE search strategy, which will be adapted for the other six databases using the Polyglot Search Translator. The Polyglot translations will be verified for accuracy by a health sciences librarian. We also screened the articles included in the reviews by Rouwette et al (2002)11 and Scott et al (2013, 2016)9 10 on the effects of group model building generally, both to identify any studies focused on policymakers and to identify relevant terms. Subject headings from the selected databases (eg, MeSH) will be added to the finalised search strategies. The search was conducted on 28 April 2023.
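To illustrate how the three PCC blocks combine in Ovid syntax, a simplified, hypothetical fragment is sketched below. The terms and line structure are illustrative placeholders only, not the finalised strategy reported in table 1.

```text
1  (policymaker* or "policy maker*" or legislator* or "elected official*").tw.   [Population]
2  exp Policy Making/                                                            [Population]
3  1 or 2
4  ("group model building" or "community-based system dynamics").tw.             [Context]
5  3 and 4
```

In Ovid syntax, each numbered line defines a set; truncation (*), text-word searching (.tw.) and exploded subject headings (exp .../) are combined with Boolean operators across line numbers, which is how the Population, Concept and Context blocks are intersected in the full strategy.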

Table 1

Syntax of Ovid MEDLINE search strategy, in accordance with PCC framework

Stage 3: selecting studies to be included in the scoping review

Articles identified in the seven databases will be imported into EndNote. Duplicate citations will be removed and the remaining articles will be imported into Covidence, a systematic review data management platform,29 for review against the inclusion and exclusion criteria and for subsequent data extraction.

Inclusion criteria

Studies will be included if they meet all of the below criteria:

  • Describe a process whereby public policymakers—as operationalised by the population terms in table 1—are engaged in a facilitated, participatory model building process. Facilitation is defined here as the use of structured or scripted activities to engage policymakers in the model building process. Participation is defined here as engagement of policymakers in the elicitation of a systems model or map. This definition of participation allows for the inclusion of studies that reflect varying levels of participation along the continuum of participation in group model building processes

  • Describe empirical methods through which primary data were collected from policymakers before, during and/or after the participatory model building process for the purpose of evaluating the impacts of that process on the policymakers. Studies reporting before-only data cannot demonstrate impact, but will be included to describe the measures and methods used

  • Published in English and in a peer-reviewed journal or as a peer-reviewed conference proceeding paper.

Exclusion criteria

Studies will be excluded if they meet one or more of the criteria below:

  • Data were collected from public policymakers, but not for the purpose of evaluating the impacts of a facilitated participatory model building process (eg, studies that collect data from policymakers to inform the modeller-driven design of a causal loop diagram, Delphi studies with policymakers, boundary objects from participatory policymaking exercises);

  • Data were collected from policymakers evaluating the impact of an unstructured workshop or exercise that was not facilitated (according to the definition of facilitation above);

  • Detail is not provided about the methods used to collect data from policymakers;

  • Non-peer-reviewed book chapters and manuscripts (eg, dissertations);

  • Articles that do not present original data or data collection methods for evaluation of policymakers (eg, commentaries); or

  • Written in a language other than English.

Geography and publication year will not serve as exclusion criteria.

Article screening process

The citation screening process will be executed in Covidence. Two reviewers will independently screen the titles and abstracts of all articles identified through the search strategy and indicate whether each article might meet the inclusion criteria. Full texts of articles whose abstracts meet the inclusion criteria will be obtained, screened and discussed by two reviewers. Any outstanding disagreements in full-text inclusion/exclusion status will be resolved through discussion with the larger project team. The reasons for excluding papers during the full-text screening phase will be documented. We will also screen the titles of all references cited in the included articles to identify additional articles that may not have been captured by the search terms. The references of review articles that appear relevant to this scoping review will also be screened for additional articles. The results of the article screening process will be presented in a PRISMA flow diagram.30

Stage 4: reporting of extracted data

We will develop a data extraction table to capture key information about each article that is included in the review. This information will span three domains: (1) study characteristics (RQ1), (2) methods and measures (RQ2, RQ3) and (3) findings (RQ4). To capture relevant information about study characteristics, we will extract the title, publication year, sector(s) of focus (eg, health, environment, transportation), geography (eg, country as well as state/province and city as relevant), study purpose as described in the objectives or aims of the study, types of public policymakers participating in the model building process (eg, elected officials, administrative officials in government agencies) and details about the systems science methods informed by the participatory approach (eg, agent-based modelling or system dynamics). Articles will be coded according to multiple characteristics as appropriate (eg, more than one sector, more than one type of policymaker).

For methods and measures, we will extract information about the general methodological approach used to evaluate the impact of participation on policymakers (eg, quantitative, qualitative or mixed methods), design (eg, pre-post, post-only), data collection methods (eg, interviews, surveys, document analysis), constructs assessed (eg, knowledge, attitudes, behaviours, satisfaction) and measures (eg, specific interview questions asked, specific items and instruments used in surveys). When reported, information about the psychometric properties of measures will be extracted. In terms of findings, we will extract information about the reported results related to the impacts of the participatory model building process on policymaker participants. This information will be summarised qualitatively as the approaches and measures used across included studies are anticipated to vary widely.

The data extraction table will be piloted in Covidence by two independent reviewers on five full-text articles. The extractions will be compared and iteratively adjusted to ensure that the appropriate information is consistently collected by both reviewers. The pilot testing will also provide an opportunity to revise the data extraction table and response categories. After the extraction table has been finalised, two reviewers will extract the relevant information from all included studies using the Covidence platform and resolve any conflicts that may arise through discussion.

Stage 5: summarising and identifying the implications of findings

The extracted information will be summarised to characterise the scope of the literature on methods and measures used to evaluate the impact of participatory model building on public policymakers and, when applicable, the findings of these evaluations. We will report results using narrative text, tables and figures (eg, PRISMA diagram). One table will present findings related to study characteristics (RQ1), one related to methods (RQ2), one related to measures (RQ3) and one related to findings (RQ4). The PRISMA-ScR checklist will be completed to ensure the systematic rigour of the scoping review. The primary paper that presents results from the review will discuss findings within the context of the participatory model building literature, as well as broader literatures related to the measurement and impact of public participation in policymaking processes.31–33

Patient and public involvement

Members of the public were not involved in the design or dissemination plans of this protocol.

Ethics and dissemination

The scoping review produced using this protocol will generate an overview of how public policymaker engagement in participatory model building processes has been evaluated. The review will identify gaps in the literature and also provide concrete guidance about methods and measures that researchers might use, or adapt, when evaluating the impacts of participatory model building processes on policymaker participants. Given growing interest in policymaker-engaged modelling processes,18–21 34 35 the results will be of interest to researchers, participatory model building communities of practice, funders, research-to-policy intermediary organisations and other organisations that convene policymakers. The findings of the scoping review will be primarily disseminated through publications in peer-reviewed journals and through conference presentations. We will also disseminate our findings directly to communities of practice that convene policymakers in participatory model building processes across a range of public policy fields (eg, public health, healthcare, social work, urban planning). This review will not require ethics approval because it does not involve human subjects research.

This post was originally published on https://bmjopen.bmj.com