Introduction
To evaluate the impact of an intervention, investigators commonly assign participants to either an intervention or a control arm, the latter serving to establish the counterfactual for the former and thereby allow causal inference of intervention impact. If the participants in the two arms are assessed recurrently over a period, the influence of the assessments themselves (ie, testing effects) must be considered. Assessments at an earlier time can influence participants’ responses to assessments at later times in two ways.1 First, questionnaire assessments may influence how participants respond to the same questions later because of practice, familiarity or other forms of reactivity.1 Second, assessments done either by questionnaire or by objective methods (eg, weighing a child) may influence the behaviour of participants.1
A study conducted in Kilifi, Kenya, demonstrated that caregivers participating in an infant monitoring programme perceived benefits from participating in the data collection, which included the Developmental Milestones Checklist, measurements of infant height and weight, questions on maternal education and the Kilifi Developmental Inventory.2 Reported benefits included an increased awareness of the need to instruct the child and of the child’s nutritional needs.2 Caregivers reported trying the developmental measurements at home as play, indicating that participation in data collection could alter participant behaviour.2
These reports suggest that participating in assessments during data collection can influence participant behaviour, but little is known about such influence. Understanding the influence of data collection on participants is important because a large influence on the control arm may reduce the estimated impact of an intervention. Additionally, it is important for researchers to fully understand the potential benefits and harms that come with administering extensive and recurrent assessments.
The Shamba Maisha paediatric substudy in western Kenya provided an opportunity to investigate the influence of recurrent assessments on participants during data collection. The control arm was analysed because these participants received no intervention, so any reported effects could be attributed to the data collection itself. Exit interviews were conducted with participants who had been assigned to six control and six intervention health facilities, but the intervention arm interviews are not included in this analysis because those participants were exposed to both data collection and the intervention. We posed two research questions. First, how did participation in recurrent assessments during data collection influence caregivers in the control arm, their actions and their appraisal of their child’s development? Second, through which mechanisms did this influence occur?
Methods
Shamba Maisha paediatric substudy
Shamba Maisha was a cluster randomised controlled trial in western Kenya encompassing substudies on pregnancy, adults, caregivers and young children, land use, and more.3 Sixteen health facilities were randomly assigned within pairs to intervention or control arms to examine the effects of a multisectoral agriculture and livelihood intervention on changes from baseline to the end of follow-up in viral load suppression (primary outcome), clinic attendance, antiretroviral therapy adherence, food insecurity, depression, self-confidence and social support (secondary outcomes). All index participants of Shamba Maisha (366 intervention and 354 control) had HIV, were between 18 and 60 years of age, had access to farmland and were currently experiencing food insecurity.3 Participants assigned to the control arm were promised the intervention (agricultural teachings, microloans and an irrigation pump) following the conclusion of the study. Data were collected at the health facilities and at home visits between 2016 and 2019. After the completion of the baseline assessment, follow-up visits were conducted every 6 months over the course of 2 years. These assessments included questionnaires, physical assessments, observations and nutritional assessments. Each of the five visits included a survey, blood sample collection and medical chart abstraction.
Participants in the paediatric substudy (173 intervention and 186 control) were required to be a caregiver >18 years old to a child 6–36 months old who was resident in a household in the compound of an index participant in the Shamba Maisha study at its start. We excluded households in which no resident was related to the adult index participant because unrelated household members would not be expected, in Luo society, to share human, financial or food resources with the index participant. These situations were rare in the Nyanza region. We excluded (and referred to immediate care) children who had severe malnutrition (below −3 z-scores of the median WHO growth standards). The paediatric substudy collected data about children on somatic growth, dietary intake, psychomotor development, quality of home environment, caregiver–child interaction and morbidity (online supplemental appendix 1). The primary outcome for the paediatric substudy was somatic growth. Secondary outcomes were morbidity, neurobehavioural development, quality of the home environment and caregiver–child interaction.
Supplemental material
Data collection
After 2 years of participating in assessments, nearly half of the caregivers were asked to participate in an exit interview. For logistical reasons, the exit interviews were not conducted in four (two pairs of intervention and control) of the 16 health facilities. The exit interview and the questions asked were planned before the initiation of the Shamba Maisha paediatric substudy. Exit interviews were administered to 99 control participants in Luo and Swahili by trained Kenyan natives. Before beginning the interview, the interviewer explained its purpose and asked for consent to record it using a digital audio recorder. The interviewer continued only after the participant had signed the consent form. Interviewers used a standardised semistructured interview guide with eight questions, asking about what the caregiver did differently and how their food situation had changed since beginning the assessments (online supplemental appendix 2). Caregivers were asked to report on changes in their child’s play and learning and were probed on what they attributed these changes to. Caregivers were also asked how they thought differently about the child after watching the assessments and whether they felt the child had benefited from participating in the assessments. Finally, caregivers were asked how their own physical and mental health had changed since joining the study. On average, interviews lasted 1 hour. Incentives given were a small colouring book with crayons, other small items depending on age, and a bar of soap. Each caregiver received KES400 as reimbursement for travel to the health facilities for data collection.
Data analysis
The audio-recorded interviews were transcribed and translated by a qualified and experienced staff member at the Kenyan Medical Research Institute. This staff member did not conduct any of the qualitative interviews. Kenyan members of the Shamba Maisha research team coded text segments of all the deidentified exit interviews using Dedoose software. The codebook was developed initially a priori, and emergent codes were added during coding by study investigators and project staff. A total of 36 codes were identified.
The first two authors read 10% of the coded interviews from the control arm and discussed initial themes to establish inter-rater reliability. One author performed a thematic analysis of the codes corresponding to half of the interview questions, and the other author analysed the codes corresponding to the other half. The first author also conducted a final review of all transcripts to ensure data saturation and accuracy.
Matrices were created in Google Sheets, with columns for the code number, theme, caregiver identifying number, a quote that displayed the theme and questions for the field workers. When a concept or behaviour was noted at least three times with clear attribution to participating in the study, it became a theme, and each theme was assigned a row. Many themes were reported by both authors. The first two authors regularly checked each other’s work and debriefed with a senior author; when one author was unsure of a theme, the senior researcher was consulted. The analysis resulted in 20 themes in the matrices. After the matrices were entirely filled, the first author reviewed all the codes a second time to ensure no quotes were missed.
Throughout the coding process, the first two authors noted in the matrices when they did not fully understand the context behind a transcript segment. The Shamba Maisha field workers were then consulted for clarification.
After the interviews were analysed, the first two authors returned to Dedoose to investigate the mechanisms through which assessments influenced participants. Both authors analysed all the control arm codes a third time to understand mechanisms that may have led to behaviour change in the control arm. Quotes with strong attribution and explanation were noted.
Trustworthiness was assured by member checking with field staff, peer debriefing, having the two coders regularly check each other’s work and review by a senior author. Furthermore, the first author read each control arm transcript twice after completion of the matrices.
Patient and public involvement
The research questions for this study were informed by discussions held with caregivers throughout the paediatric substudy. Participants were not involved in the design, recruitment or conduct of the study. Results of the overall study were shared in community dissemination meetings; although not all participants could be contacted directly, these meetings were advertised. The burden of the intervention was assessed by participants through exit interviews in the intervention arm.
Discussion
Participation in recurrent assessments functioned as an unintended intervention that altered the knowledge and behaviour of caregivers and children. The control arm received home and clinic assessments. Through these questions and assessments, participants were introduced to issues they may not have previously considered, including food variety, sanitation and their child’s play. Interviewers were trained not to elicit responses by making certain foods seem good or bad, yet most caregivers reported beginning to serve their child some of the 64 listed foods. As the assessments highlighted these concepts, caregivers began to evaluate whether their behaviours should change to incorporate what they had learnt. Because children also participated in the assessments and were influenced by their caregivers, children’s behaviour changed as well. Incorporating new foods was a result of being asked about particular foods, and as the child’s diet was diversified, new nutrients likely were incorporated into it.
Caregivers came to regard play as more important than they had before the study, owing to being asked about many characteristics of play, such as ‘Do you or anyone else regularly structure the child’s playtime?’ As caregivers observed their child’s play more carefully and consistently, they noticed changes in energy level, complexity of play and frequency of play; such changes can be indicative of health status and development. In a population where regular health appointments and evidence-based parenting resources may be inaccessible, play served as a method of evaluation for caregivers. Overall, caregivers’ attentiveness to the value of their child’s play after recurrent assessment enabled them both to facilitate and to evaluate their child’s play.
Children incorporated new play through modelling. Children modelled activities in the physical assessments such as kicking a ball, standing on one foot and jumping off a platform. Almost all caregivers reported their child incorporating at least one of these ‘new’ activities into their daily play. Because some assessment questions varied by the age of the child, and all children in the study were under 5 years of age, the new activities were age-appropriate play. Children taught their playmates these ‘games’, indicating that the influence of assessments may extend beyond the participants directly being assessed.
Despite the control arm not receiving the intervention, caregivers reported that assessments generated a benefit. By participating in assessments, caregivers were given an objective measurement of their child’s health and obtained other knowledge.
Caregivers reported study staff as being a resource due to their kindness and willingness to listen to participant responses. Interviewers were trained to listen to the participant and probe when more information was required. To the participant, this inquiry may have been interpreted as having someone caring for them. Additionally, mental health benefited as study staff reassured some participants that, even with HIV, they could live a happy life. Thus, having easy access to study staff reduced the stress of study participants.
The influences of recurrent assessments from data collection in this study on participants are distinct from a placebo effect. The latter refers to a response to a therapeutically inert substance or non-specific intervention (ie, a placebo) that derives from the participant’s expectations or beliefs regarding the intervention; a participant experiences a placebo effect because they believe that the inert intervention provided is helping them.4 In this study, control participants knew that they did not receive an intervention, and they were influenced instead by the interactions with study staff during data collection.
This analysis made use of a large set of exit interviews that asked caregivers about their experiences and perspectives throughout the 2 years of participating in the Shamba Maisha paediatric substudy. During the interviews, caregivers were asked what changed during the study and why these changes occurred. Asking caregivers what changed could have conveyed an assumption that something should have changed, but this is unlikely to have biased responses given that their children were growing and developing over the 2-year period, and therefore change was expected. Because exit interviews were conducted with the control arm, the influence of assessments could be determined without exposure to the intervention and its possible impacts. Although participants assigned to the control arm were promised the intervention at the end of the study, this knowledge is unlikely to have influenced their responses: there was little-to-no mention of receiving the intervention in the interviews conducted with control participants, and the influences they described were attributed by them to time at the clinic and time with the data collection. One challenge of the study was differentiating between the influence of the study assessments and typical growth and development, as caregivers frequently reported advances in the child’s play, mind and growth. Attribution was given only when explicitly stated and explained; the two analysts erred on the side of not attributing behaviour changes to the study. The senior researcher assisted in this distinction as needed.
Implications
Caregivers in the control arm reported benefits from participating in study assessments in play, child courage, feeding and caregiver mental health, benefits that are expected to improve psychomotor development in children. Playing more frequently helps children master fundamental motor skills and self-regulation more quickly owing to increased physical activity and creativity.5 6 Children exercising courage likely leads to opportunities to improve self-confidence (eg, mastering play, making new friends and leading peers in play). A more diversified diet for children likely led to more nutrients and better nutritional status,7 supporting psychomotor development.8 9 Improved caregiver mental health enables positive parental practices because caregivers are able to focus more on developing their child’s psychomotor skills.10
Although assessments during data collection at an earlier time purportedly can influence participants’ responses to assessments at later times in two ways (ie, reactivity to questions and changes in behaviour),1 we have not identified any previous study that reported the influence of assessments on behaviour. Caregivers in Kenya previously reported unintended benefits of participating in assessments, but in that study, the influence that assessments had on participants’ behaviour was not analysed.2 The unintended benefits for participants in the control arm seen in the current study have six implications for future interventions and their evaluations.
First, the literature on child psychomotor development points to the importance of nutrition and stimulation, with stimulation being more consistently beneficial.8 9 The Shamba Maisha agricultural livelihood intervention did not have components that specifically focused on improving child nutrition or psychomotor development. The quantitative impacts on children that were hypothesised in the Shamba Maisha paediatric substudy were indirect and downstream. What was observed and reported from caregivers in the exit interviews revealed the strong interest caregivers had in their caregiving and their children’s development, stimulated by participating in the structured assessments during the data collection. Whereas the indirect and downstream theory of change of the agricultural livelihood intervention might improve caregiving practices slowly over time, having a more immediate and larger impact on child development would require complementary direct intervention components. Such an intervention directed at early child development would have had willing and interested participants in the communities involved in this study.
Second, if participation in data collection can result in unintended consequences, future interventions might be planned to capitalise intentionally on this potential. That is, asking questions in a survey could be seen as an intervention component that may complement other components.11
Third, if participation in data collection can have unintended benefits for participants, could the same occur for harms? For example, if asking about healthy foods prompts caregivers to provide such foods more often, theoretically asking about unhealthy foods could prompt caregivers to provide these foods more often. In a context in which participants have other knowledge about healthy and unhealthy foods, this risk may be low.
Fourth, the benefits of participation in assessments could be advertised to participants in future studies as an incentive to participate, in addition to the typical small gifts or monetary compensation as incentives. Doing so would require care to not provoke social desirability bias.
Fifth, as the control participants were influenced by the assessments, the estimates of effectiveness of the intervention from the cluster randomised quantitative evaluation may have been diluted. That is, because participants in data collection in both arms may have been influenced by the data collection itself, seeing the additional impact of the agricultural livelihood intervention on caregiving and psychological development may be more difficult. For example, differences between intervention and control index participants in mental health and depression scores for the Shamba Maisha study,3 and differences between arms in quality of the home environment and caregiver–child interaction in the paediatric substudy, may have been diluted.
Sixth, given that the behaviour of participants may be influenced by recurrent assessments, making inference of intervention impact more difficult, design choices that mitigate the influence of assessments should be considered. Shamba Maisha had five visits at which assessments were conducted over the course of 2 years. The frequent visits, compared with, for example, just two visits (baseline and end-line), provided greater statistical power.12 13 These frequent, recurrent assessments, however, may themselves have influenced participants’ behaviour. Thus, in designing studies with control and intervention arms, a trade-off between statistical power and avoidance of altering participants’ behaviour must be considered when deciding on the number of assessment visits.
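As a rough illustration of this trade-off, the simulation below compares the power to detect a between-arm difference in children’s growth trajectories when each participant is assessed at two visits versus five over 2 years. It is a minimal sketch with hypothetical values for the effect size, measurement noise and sample size (none of these come from the Shamba Maisha data), and it ignores clustering by health facility; it shows only the statistical side of the trade-off, not the assessment reactivity that extra visits may induce.

```python
import numpy as np

def simulated_power(n_visits, n_per_arm=60, effect=0.2, sd=1.0,
                    n_sim=3000, seed=0):
    """Approximate power to detect a between-arm difference in
    per-participant growth slopes over a 2-year follow-up.
    All parameter values are hypothetical, for illustration only."""
    rng = np.random.default_rng(seed)
    times = np.linspace(0.0, 2.0, n_visits)   # visit times in years
    t_c = times - times.mean()                # centred times for the OLS slope
    detected = 0
    for _ in range(n_sim):
        # Control arm: flat true trajectory; intervention arm: slope = effect.
        true_slopes = np.concatenate([np.zeros(n_per_arm),
                                      np.full(n_per_arm, effect)])
        # Observed outcomes at each visit, with measurement noise.
        y = (true_slopes[:, None] * times
             + rng.normal(0.0, sd, (2 * n_per_arm, n_visits)))
        # Per-participant ordinary-least-squares slope estimate.
        slopes_hat = (y * t_c).sum(axis=1) / (t_c ** 2).sum()
        ctrl, intv = slopes_hat[:n_per_arm], slopes_hat[n_per_arm:]
        # Two-sample z-test on the estimated slopes.
        se = np.sqrt(ctrl.var(ddof=1) / n_per_arm
                     + intv.var(ddof=1) / n_per_arm)
        detected += abs(intv.mean() - ctrl.mean()) / se > 1.96
    return detected / n_sim
```

With these hypothetical values, five visits yield somewhat higher power than two, because the intermediate visits tighten each participant’s slope estimate; a study team would weigh that gain against the greater scope for the assessment reactivity discussed above.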
This post was originally published on https://bmjopen.bmj.com