Quality measures in primary care skin cancer management: a qualitative study of the views of key informants

Introduction

Skin cancer accounts for the largest number of cancers diagnosed in Australia and contributes substantially to cancer mortality.1 Non-melanocytic skin cancers (NMSCs) consist mostly of basal cell carcinomas (BCCs) and squamous cell carcinomas (SCCs), the keratinocyte cancers, but also encompass a variety of rare cancers of other skin cells (eg, Merkel cell carcinoma).2 Keratinocyte cancers are more common than melanoma but their risk of metastasis is lower.2

Australia and New Zealand have the highest skin cancer incidence rates in the world, due to their largely light-skinned populations, excessive sun exposure and high detection rates.3 It has been estimated that in Australia, two in three individuals will be diagnosed with skin cancer in their lifetime.4 5 The Australian health system costs of skin cancer have been estimated at A$1.7 billion each year, making it the most expensive national cancer to manage.6 The number of NMSC and melanoma presentations is expected to continue growing, particularly among older age-groups,5 7 placing significant demand and economic burden on the care system.

General practitioners (GPs) play a central role in the diagnosis and treatment of skin cancer in Australia.8 Compared with other countries, Australian GPs have a greater role in melanoma management.9 10 Primary care skin cancer clinics have proliferated in Australia to meet increasing demand, staffed in large part by GPs with a special interest in skin cancer, with many having undertaken relevant additional training.11 12

Despite the practical necessity for the vast majority of skin cancer to be managed in primary care, and the significant health system costs associated with skin cancer management, there is little specific guidance delineating roles and responsibilities for GPs. The Royal Australian College of General Practitioners (RACGP) issues evidence-based guidance to GPs mainly around prevention.13 Cancer Council Australia’s Clinical Practice Guidelines (CPGs) provide little primary care-specific information on diagnosing and managing melanoma,14 and only a brief general chapter on keratinocyte cancer management in primary care.15

Potentially concerning variations in the quality of care for skin cancer have been noted, including variability in diagnostic accuracy16 17 and familiarity with relevant guidelines.18 19 There have been calls for greater attention to improving quality in skin cancer management in primary care settings, including the importance of more extensive training for GPs.10 20 21 It has been suggested that implementing quality indicators, that is, measurable elements of practice performance, would allow primary care practitioners to compare their performance with that of their peers and improve the quality of care they provide.22 23

In line with Donabedian’s model of healthcare quality, indicators or measures can relate to attributes of settings (ie, structure), the giving and receiving of care (ie, process) or the effects of care on health (ie, outcome); compliance with structural and process measures can support improved outcomes.24 25 Recently, Jobson and colleagues developed a set of quality indicators for melanoma, targeting care processes that can be conveniently measured using pathology test results in a variety of care settings.26 Although process measures (eg, lead time to treatment) are important and tend to be most frequently utilised,27 28 structural measures (eg, surgical checklists) and outcome measures (eg, patient experience) specific to the primary care setting are also potentially valuable,28 albeit more difficult to measure routinely.

A recent scoping review identified 13 groups of quality measures relevant to structure, process or outcomes measurement in primary care skin cancer management.28 The aim of this qualitative study was to add to that knowledge by gaining insights from expert informants in skin cancer treatment and care about potential quality measures and the barriers and facilitators associated with implementing them, including their perceived feasibility and acceptability in primary care settings.

Methods

Study design

Semistructured interviews and qualitative proforma surveys (QPSs) were conducted. QPSs comprise the same questions as interviews but enable participants to compose written answers rather than discuss issues face to face or virtually with a researcher.29 A tried and tested method, QPSs pose a small number of questions designed to encourage extensive answers, with ample space after each question for free-text responses.29 30 They also give participants a choice of response mode and can help circumvent difficulties with interview scheduling.

Patient and public involvement

Patients or the public were not involved in the design or conduct of this research.

Participants and recruitment

Potential participants (GPs and others working with and within the primary care arena linked to general practice) were identified via known networks of research team members or through reaching out to professionals who were experts in the field, such as senior clinicians, managers, policymakers and academics. Participants with expertise in skin cancer in primary care were contacted purposively, with the goal of recruiting participants covering different settings, including both urban and rural environments. The planned sample size was considered optimal for qualitative studies addressing complex research topics within the healthcare domain31 32 and sufficient to enable rich information to be derived from a broad range of perspectives.

Participants were approached by email; those who accepted (63% of those approached) provided informed consent and completed a brief demographic questionnaire, providing details about their age, gender and country of residence. Participants were given the option of participating in either the interview or the QPS, or both, each requiring a time commitment of around 45 min, with participation costs acknowledged by a payment of A$80.

Question development

Concurrent with a scoping review,28 literature relevant to the aims of the study was assessed and the research team developed a set of questions for use in both the QPS and interview. Participants were asked to provide their perspectives and experiences on skin cancer quality measurement, its importance, and barriers and facilitators of quality measurement. The questions were piloted with two participants to ensure relevance and clarity (online supplemental appendix 1). As little amendment was needed, the pilot data were incorporated into the study to support the analyses. The conduct and reporting of this study followed the Consolidated Criteria for Reporting Qualitative Research checklist (online supplemental appendix 2).33

Data collection

Interviews

Semistructured interviews were conducted by a Research Assistant (BIL, female, held a Master of Medical Science) following training on interview methods from FR. Interviews took place via phone or video (Zoom), during which detailed notes were taken by BIL. Participants knew that BIL was a member of the research team investigating primary care skin cancer management. BIL had no prior relationship with participants. Interviews were recorded and professionally transcribed. The average duration of interviews was 43 min, and on average 20 pages (1.5-line spaced) were transcribed per interview.

Qualitative proforma surveys

Links to a REDCap survey platform were emailed to participants electing to participate using this method. QPSs contained free text boxes for participants to respond in detail, with design based on previous research using the QPS method34 to collect data via online platforms and encourage extended participant responses. Participants who elected to complete a QPS written survey had at least 1 month to complete it, with up to 3 months for some, and were sent two reminder emails within this period. Using an intramethod approach,35 36 supplementary interviews were also offered to participants who completed QPSs, to clarify their responses and allow follow-up probes, where necessary.

Data synthesis

QPSs and interview transcripts were deidentified and assigned code numbers prior to analysis. The framework method of analysis37 was used to organise information about quality measures, implementation barriers and implementation facilitators. The data synthesis process according to the stages of framework analysis is shown in online supplemental appendix 3. First, a team of qualitative and mixed-method researchers familiarised themselves with the data to gain a broad overview of participant responses. QPS and interview data differed in depth and richness, requiring separate analysis prior to triangulation. QPS responses were briefer, with informants’ ideas more explicit and direct; interview transcripts were richer36 and required a longer period of familiarisation.

A deductive approach was used to identify themes, framed within the domains of the Donabedian framework (structure, process, outcome). This framework was chosen because of its influence in the field of health services research and the widely known organising concepts of structure, process and outcomes in healthcare quality measurement.38 SS, NS, and FR independently read a subset (25%) of interview transcripts, identifying and applying initial codes to themes and using inductive analysis to identify groups (ie, common groups of meaning within each theme). Groups interpreted from participant responses were examined alongside those identified in the concurrent scoping review to ensure that quality measures were being interpreted in a similar way across data sets. A thematic framework was created and used by SS and NS to code the remainder of the interview transcripts and QPSs. As new codes and groups were identified, transcripts and survey responses were reread, and groups were revisited, reinterpreted, and renamed or redefined accordingly.

Data were indexed and summarised into a purpose-designed Microsoft Excel spreadsheet, separately for structure, process and outcome. SS and NS met regularly throughout data coding and charting phases to reach agreement on categorisation and synthesis, consulting with FR regularly. Two team workshops39 were held to discuss interpretation and reach consensus on the final themes and subthemes identified in the transcripts and QPSs. Finally, findings from interviews and surveys were triangulated to identify common groups of quality measures and groups of barriers and facilitators.

Results

Characteristics of participants

Data from 15 participants were collected across 36 data capture events (15 demographic surveys, 12 interviews and 9 QPSs) between January and April 2022. Six participants had interviews only, two of whom had a follow-up interview to provide further information on themes discussed in the initial interview. Five participants completed a QPS only, and four participants completed both a QPS and follow-up interview to clarify their responses and provide further information. The 15 participants represented 75% of the 20 who originally consented to participate. Of the five individuals who originally consented but did not participate, three were informed that data collection had ceased and two failed to complete the QPS within the data collection period.

Two-thirds of the respondents were male, over half were aged 50–59 years, and 60% had been working in the skin cancer field for over 15 years. One-third of respondents were Australian GPs; the others were specialists and researchers. Characteristics of participants are displayed in online supplemental appendix 4.

Quality measures

For structural measures of quality, three key groups (table 1) emerged: diagnostic tools and equipment, education and training, and documentation and protocols. Diagnostic tools and equipment were seen as important by five participants, who regularly used these tools; three of these participants recommended confocal microscopy. Education and training encompassed skill and knowledge development (related to dermoscope use, use of CPGs and professional responsibilities), accreditation for skin cancer practitioners and clinics and continuing medical education programmes. Accreditation structures and certification were described as crucial in light of the “largely unregulated (ID5)” nature of skin cancer practice in primary care, where there are “no current official measures (ID7)” for monitoring.

Table 1

Quality measures triangulated from QPS and interview data

Documentation and protocols primarily related to audits of performance, record keeping (eg, photographing lesions and capturing clinical information and structuring consultation notes) and evidence-based guidelines and checklists to standardise local practice and provide medicolegal protection. Four participants highlighted the lack of guidance available for measuring skin cancer care quality, especially for GPs who have not undertaken specialised training in skin cancer medicine, with one participant stating: “there is no way to measure skin cancer care [quality] for occasional GP skin cancer care (ID7)”.

Four key groups emerged for process measures of quality (table 1). Quality measures related to diagnostic processes were most common, including biopsy performance (eg, rate of partial biopsies), diagnostic rates (eg, benign to malignant ratios), careful skin examination and measures to guide patient staging at diagnosis. Measures reflecting delays in treatment were highlighted by four participants as useful indicators of quality. Treatment process measures frequently related to surgical procedures (including type of surgical technique, number of lesions excised and re-excision rates) but non-surgical procedures were also mentioned (including cryotherapy, topical treatments and antibiotics). Post-treatment management was primarily related to appropriate patient follow-up. Interpersonal process quality measures related to patient communication, such as ensuring “attention to detail in conveying treatment plans and histopathology reports (ID2)”.

There were three key groups of outcome quality measures (table 1). Measures for poor treatment outcomes primarily related to treatment infections and complications through self-audits and benchmarking rates against others’ performance (eg, “biopsy complication rates [ID14]”) and skin cancer recurrence. Patient-reported measures were mostly advocated in relation to patient-reported experiences and outcomes and satisfaction with care, including “patient feedback (ID10)” and when patients “have recommended you to their family and friends (ID2)”. Long-term outcomes such as morbidity and mortality were highlighted by three participants as important quality measures at levels of aggregation higher than the individual clinician.

Barriers and facilitators of implementation

Three themes of barriers to implementing quality measures (table 2) were identified: clinician resistance, system inadequacies and external factors. Clinician resistance was discussed primarily in terms of time constraints and burnout, but there was also mention of discomfort with peer comparison and the related desire for clinical autonomy regarding “how they [GPs] improve their practice and where they spend their time (ID9)”. System inadequacies were described mostly in terms of insufficient training and feedback necessary for improvement, and limited access to resources, particularly in relation to equipment costs, especially for GPs in lower volume rural areas. Six participants spoke about the lack of clarity in GPs’ roles and expected competencies vis-à-vis the role and availability of specialists, and about how these expectations should vary with GPs’ differing skin cancer qualifications and accreditation. External factors related most commonly to variations in case-mix that can influence patient management and make comparison between GPs difficult. Defensive practices, such as “rely[ing] more on referrals in cases where in-depth diagnosis or tools are required (ID5)”, reportedly occurred due to the lack of clarity about GP and specialist roles, and commercial pressures could potentially “encourage doctors to artificially manipulate the results (ID6)”.

Table 2

Barriers triangulated from QPS and interview data

Four themes of facilitators of implementing quality measures (table 3) were identified: incentives, education, agreed and feasible indicators and support and guidance. Incentives primarily related to accumulating points as part of continuing professional development (CPD) programmes, to encourage appropriate training, and to acquire recognition as a subspecialist GP. Related to this, the availability of education was uniformly recognised as a key enabler. Agreed and feasible indicators referred to indicators developed via communication and consensus among doctors, that were time efficient, and gathered via appropriate methods, such as “a well collected random sample of patients (ID12)”; a related enabler was the need to respect the clinical perspective of GPs, giving them primary control over the process and the flexibility to opt in. Support and guidance related to appropriate feedback systems providing constructive and positive reinforcement, along with “access to specialised clinics with a multidisciplinary team of experts (ID3)” for best patient care.

Table 3

Facilitators triangulated from QPS and interview data

Discussion

This study sought to elicit key informants’ views about quality indicators for skin cancer management in primary care. Ten key groups of quality measures were identified, along with key barriers and facilitators associated with their implementation. These findings add detailed support to those identified from a recent scoping review.28 Most thematic elements of the two explorations matched well, but key differences were observed for education and training quality measures. In the scoping review, these measures reflected specific care and management processes, but in the current study, these measures additionally reflected accreditation requirements as well as broader professional responsibilities. The difference likely reflects a concern to ensure appropriate structural underpinnings for the primary healthcare challenge faced in Australia, which did not come through as strongly in the published literature. Other differences included a greater focus in the literature on quality measures for prevention, delays in care, and patient engagement in care (eg, measuring the proportion of patients who completed satisfaction questionnaires)28 when compared with the current qualitative data.

An overarching sentiment expressed by participants, derived across datasets, was that skin cancer management in primary care in Australia is largely unregulated and unsupported, and that quality measures could help to improve care provision. This sentiment echoes the interest in this area demonstrated by the recent publication of a set of recommended quality indicators for melanoma management, many of which are applicable to primary care settings.26

The process measures of quality identified in this qualitative study were most frequently related to diagnostic and treatment processes, but delays in care and interpersonal processes were also raised. Most process quality measures highlighted are those that can be assessed through the routine collection of histological data from pathology reports which, with appropriate systems in place, can be highly feasible to implement. The recently proposed melanoma indicators specifically targeted indicators that could be routinely collected from histology reports.26 The concurrent scoping review also identified important process indicators that are more difficult to assess routinely, but are nevertheless considered important, like quality of the interpersonal relationship with patients. This included communication during consultations, ensuring that patients understand diagnosis and treatment options, as well as taking measures to assess the patient care experience overall, through survey dissemination.

The key elements of structure quality measures related to diagnostic tools and equipment, education and training, and documentation and protocols. In particular, inspection aids such as the dermoscope were emphasised as critical to optimal skin cancer detection and thus to high-quality skin cancer care; dermoscopy was also a major element of skin cancer education and training. Systematic reviews have identified that dermoscopy and associated training can improve diagnostic accuracy for melanoma in primary care settings and reduce unnecessary excisions and referrals.40 41 It has recently been noted that even Australian GPs who subspecialise in skin cancer diagnosis and management vary in the extent to which they use dermoscopes when conducting skin assessments,10 underscoring earlier calls for promotion of dermoscope use in these settings.16 However, it is not clear to what extent dermoscope training and use is cost-effective across the range of GP settings, from lower through to higher incidence, volume, or both. Less commonly considered structures, such as documentation and protocols, were also emphasised as important for quality, including appropriate CPGs, medicolegal support systems and participation in clinical practice audits, for example, using the SCARD system42 to allow GPs to compare their performance with their peers.

Outcome measures of quality related to poor treatment outcomes, patient-reported outcomes and long-term outcomes. Participants acknowledged that patient-related and measurement-related factors confound the validity and reliability of externally recorded measures, which has also been widely acknowledged in other research,43 44 suggesting the preferential use of these measures at higher levels of aggregation. Patient-reported outcomes, which are increasingly a focus of quality measurement,45 and more common treatment outcomes such as the acceptability of visible excision scars may be amenable to monitoring at individual practitioner level, especially if systematically monitored and reported back by a practice.

Barriers and facilitators

The most commonly reported barrier to implementation of quality measures was time constraints, which prevent GPs from compiling and analysing performance data and from taking measures to place the patient at the centre of care. System inadequacies identified included a lack of standardised training, a lack of feedback about skin cancer management and a lack of clarity about GPs’ roles and competencies in skin cancer referral and treatment; the latter reflects the absence of clear CPGs specifying these responsibilities, which can in turn lead to the practice of defensive medicine.46 Clinician resistance related to discomfort with peer comparison partly reflects concern about controlling for confounders such as variations in case-mix, necessitating the tailoring of CPGs to address common contexts, but can also reflect perceived challenges to clinical autonomy.47 48 Participants also alluded to the possibility that GPs may face external pressure to frame performance data to ‘fit the standards’, potentially compromising patient care and invalidating comparisons.

Education was the most frequently reported facilitator, incentivised in part by the CPD points it generates. Recognised accreditation in skin cancer management would also incentivise GPs to upskill, enhancing their credibility with patients and other doctors. At the time interviews and surveys were conducted, for example, the Skin Cancer College Australasia (SCCA49) offered training, services and activities that contributed to SCCA accreditation but did not accrue CPD hours with the RACGP. By contrast, participating in the SCARD self-audit database was recognised as a CPD activity by both bodies.50

Measures need to be agreed (so they are considered relevant and credible) and feasible (time-efficient, collected as part of routine care and easily accessible), allowing GPs to focus on patient care. While some participants advocated mandatory quality measurement, others encouraged an opt-in approach, echoing the decades-old adage of ‘tools not rules’.51 Other options include internal audit and feedback, which removes the complications associated with transferring data to external bodies52 and generates trusted, context-specific local data.53 Regardless of the audit structure (internal vs external), advocated system characteristics included: involving doctors from the design stage, fostering open communication, providing constructive feedback, positive reinforcement and peer support rather than punitive action, and fostering multidisciplinary collaboration to enhance patient outcomes. Appointing clinical ‘champions’ was recommended to generate and sustain enthusiasm, a factor frequently highlighted as important in other quality improvement studies.54

Strengths and limitations

This study used two qualitative data collection methods to explore a range of potential quality measures for primary care skin cancer management and factors associated with their implementation. Offering open-ended QPSs and interviews provided flexibility to participants, and triangulating the two datasets gave greater strength to the findings than a single methodological technique would. The use of QPSs in this context is, to our knowledge, novel and has not been reported in other literature. Purposive recruitment was used to increase the range of clinical and academic perspectives on skin cancer management and primary care from individuals known for their experience and credibility in the field.

There are several limitations of this study. First, participants were from diverse professional backgrounds, as we wanted to represent a range of perspectives; the five GPs included were not intended to be representative, nor are their views generalisable. The small numbers sampled limit the implications that can be drawn, particularly for subgroups (eg, GPs in skin cancer clinics, GPs in traditional primary care settings, dermatologists). Second, the differences in data richness between the QPSs and interviews may have contributed to some perspectives being represented in greater depth than others. However, despite the diversity in professional backgrounds and the fact that interviews disclosed richer data than QPSs, consistent themes were identified across datasets. Third, as outlined in the methods, participants were purposively identified by members of the research team as experts. The authors who led this selection hold senior clinical and academic positions in skin cancer management, with extended networks, but this introduces a potential for sampling bias. We acknowledge that expertise was subjectively determined but note that the vast majority of participants had over 15 years of experience in the field. Relatedly, as participants were experienced professionals, we do not claim to have captured the personal views of their less experienced colleagues, except inasmuch as our informants reflected on their early experiences as part of their responses. Participants were selected for their expertise, and thus their views are not representative of the general population of skin cancer clinicians. Fourth, no patients were involved in this study, so our perspectives are limited to selected healthcare professionals’ and researchers’ views of good quality indicators. Exploring patient perspectives on quality skin cancer provision and the associated barriers and facilitators is an important focus for future research.
The use of healthcare quality frameworks that place a greater focus on patient preferences and person-centeredness will be key to understanding the factors most crucial to quality skin cancer care delivery in primary care.

This post was originally published on https://bmjopen.bmj.com