
CHAPTER 3. TO SCREEN OR NOT TO SCREEN?

Outline of Chapter 3

The previous chapter indicated that, while there are accepted screening criteria and studies have been conducted on the effectiveness of screening, for example for depression, there may still be no consensus on whether or not to screen. Screening criteria act as guidelines, but different components may be given different weightings. Ultimately, the decision whether or not to systematically screen or case-find will be directed by value judgements and the importance placed on various aspects, including consideration of the specific population in question and the availability of potential interventions.
This chapter addresses the question of how even meta-analyses with the same research question can result in opposing recommendations. I was the lead researcher of this descriptive study, with its publication co-authored by Professors Mieke van Driel, Bruce Arroll and Chris del Mar.216

Introduction to assessment of opposing meta-analyses

Meta-analysis of randomised controlled trials provides us with the highest level of evidence to inform guidelines and clinical practice. Therefore, it is important to get it right. Over the past 20 years much has been done to improve the methodology217-219 and reporting, resulting in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement (http://www.prisma-statement.org/). This has provided a guideline for standardised reporting of systematic reviews which has increased their rigour and transparency. However, it is not uncommon for meta-analyses addressing the same research question to arrive at conflicting conclusions or recommendations, and reasons for this inconsistency have been explored.220-223 For example, two author groups, in a series of meta-analyses of trials investigating the effectiveness of screening for depression, reached opposing recommendations, one supporting screening and the other questioning its usefulness.75,117,118,224 A preliminary review revealed that, in spite of identical research questions, choices about inclusion or exclusion of studies may have shaped the results and conclusions. Throughout the process of meta-analysis many decision moments occur – for example, which studies to include or exclude, how to assess risk of bias, and which data to extract. Even when following strict protocols, subjective decisions need to be made. Each choice can take us down a different path and lead off in another direction. Choices may not be value-free, and many of these decisions remain covert (not explicit), which makes it difficult for readers to interpret their impact.225
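Because each included trial enters the pooled estimate with an inverse-variance weight, a single inclusion or exclusion decision can move the result substantially. The following is a minimal sketch of generic fixed-effect pooling of risk ratios, not the method of any particular review discussed here, and the trial counts are hypothetical:

```python
import math

def pooled_log_rr(studies):
    """Fixed-effect (inverse-variance) pooling of log risk ratios.

    Each study is (events_screened, n_screened, events_control, n_control).
    Returns the pooled log risk ratio and each study's percentage weight.
    """
    log_rrs, weights = [], []
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of the log risk ratio
        log_rrs.append(log_rr)
        weights.append(1 / var)  # more precise studies get larger weights
    total = sum(weights)
    pooled = sum(w * x for w, x in zip(weights, log_rrs)) / total
    return pooled, [100 * w / total for w in weights]

# Three hypothetical trials: the third, largest trial dominates the estimate,
# so including or excluding it changes the pooled result materially.
pooled, pct = pooled_log_rr([(30, 100, 20, 100), (15, 60, 14, 60), (125, 227, 100, 227)])
print(round(math.exp(pooled), 2), [round(p, 1) for p in pct])  # → 1.26 [11.8, 7.1, 81.1]
```

Dropping the large third trial from the list and re-running shows how one inclusion decision can dominate the pooled estimate.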
Given the discrepancies in recommendations from different reviews on screening for depression, we explored the determinants of this divergence by examining the choices made by the authors in conducting their reviews and reported our own decision-points when conducting our analysis.

Methodology for assessing opposing meta-analyses

A search was conducted for all systematic reviews and meta-analyses on screening for depression in primary care using the databases MEDLINE, EMBASE, CINAHL, PsycLIT and the Cochrane Database of Systematic Reviews, and by hand-searching relevant reference lists.
The objectives, findings and conclusions of all accessed reviews were compared (Table 3-1). Two meta-analyses were selected for in-depth exploration of the review process. Subsequently, co-author MLVD and I applied a stepwise approach to unravel the review process followed by the authors of the selected meta-analyses. Each decision moment in the analysis process was recorded alongside an appreciation of the decisions reported by the authors of the selected meta-analyses. Discrepancies between the authors of this study, and the justification of the choices made, were recorded. The two other authors of this paper commented on the consistency and transparency of the recorded process and findings. The individual RCTs included in each review were identified, accessed and examined.
A table was constructed recording, for each RCT, the sample size of the trial, whether or not it favoured screening, and whether it was included and pooled in each of the reviews (Table 3-2). The various decisions the authors of the two meta-analyses had made regarding which outcomes to analyse and their data extraction from the original studies were explored.

Results of assessment of meta-analyses

The results of our explorative analysis are presented in the flowchart (Figure 3.1). Five systematic reviews (four with pooled data) were identified. Three meta-analyses were conducted by Gilbody and colleagues between 2001 and 2008, including one Cochrane review.75,117,118,224 None of these favoured screening. Two reviews (one meta-analysis) from another author group, the US Preventive Services Task Force (USPTF), in 2002119,226 and 2009121,226 favoured screening (Table 3-1).
The five reviews included a total of 26 RCTs227-240,242-245,247-254 with a total of 12,569 participants. None of these RCTs was common to all reviews (Table 3-2). For example, for the outcome of providing practitioners with feedback on screening (detection of possible depression) prior to initiation of treatment, Gilbody 2001118 pooled four RCTs228,232-234 whereas for the same outcome the USPTF pooled a completely different set of eight RCTs.231,237,238,241,242,244,245 All of these studies would have been available to both author groups with the exception of the study by Wells,244 which might not have been published when Gilbody et al conducted their search.
Each of the five reviews considered three different research questions (effectiveness on detection, treatment and patient outcomes) with different combinations of RCTs included for each. Again, none of these were common between reviews. This meant that there were 15 different combinations of RCTs for the five reviews considering the three research questions. For pragmatic reasons we decided to select two reviews with opposing recommendations which addressed the same research question to determine factors leading to discrepant findings.
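Once inclusion sets are tabulated, both the absence of any trial common to all reviews and the number of distinct combinations can be checked mechanically. The following sketch uses hypothetical trial identifiers only; the actual sets are those recorded in Table 3-2:

```python
# Hypothetical inclusion sets, keyed by (review, research question);
# purely illustrative -- the real sets are those in Table 3-2.
review_sets = {
    ("Review A", "detection"): frozenset({"RCT-01", "RCT-03", "RCT-07"}),
    ("Review A", "treatment"): frozenset({"RCT-02", "RCT-03", "RCT-08"}),
    ("Review B", "detection"): frozenset({"RCT-04", "RCT-05"}),
    ("Review B", "treatment"): frozenset({"RCT-02", "RCT-05", "RCT-09"}),
}

# No trial is shared by every review/question combination ...
common = frozenset.intersection(*review_sets.values())
# ... and each combination uses a different set of trials.
distinct = len(set(review_sets.values()))
print(len(common), distinct)  # → 0 4
```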
The two meta-analyses we selected for comparison, one favouring and the other not favouring screening, were the Cochrane review by Gilbody of 2005117 and the USPTF 2002 meta-analysis.189 These two meta-analyses contained the most information on both included and excluded trials, had the most overlapping studies, and both included pooled data. We decided to focus on only one of the three research questions addressed in the meta-analyses. The outcome of the effect of depression screening on treatment (i.e. whether the patient received treatment for depression) was selected because this is of clinical importance and also involved the largest number of studies used in the reviews. We identified RCTs included and pooled in either review and then examined these to determine which most influenced the results favouring screening or not screening. The studies pooled or not pooled in each review are outlined in Table 3-3.
We found that the opposing recommendations of the two reviews were largely determined by the Lewis study,238 pooled in the Cochrane but not the USPTF review, and the Wells trial,244 pooled in the USPTF but excluded from the Cochrane review.
On inspection of the forest plot in the Cochrane review for the outcome of management of depression following feedback (prescription of anti-depressants)117 (their Analysis 2.2, p 28), the Lewis study238 has the greatest weighting (37.5%). It can be seen clearly that this study shifts the plot from favouring screening to not favouring screening. The USPTF included this study in their review but did not pool it for this outcome because they report that the figures "cannot be calculated from available data". There were 227 patients in each of the control and screened arms. The Cochrane review entered the Lewis study in their forest plot as 100/227 for control and 125/227 for screening. It is unclear how they derived these numbers. The Cochrane review states that for the Lewis study they used published data only.117 The Lewis study reports that the mean number of psychotropic drug prescriptions was 0.44 (SD 1.58) for the control arm and 0.55 (SD 1.43) for the screened arm, with a p value of 0.6 (their Table 3.4).238 However, the mean number of drugs prescribed does not necessarily equate to the proportion of patients taking psychotropic drugs. Our own attempts to contact the authors of the Lewis paper to obtain their data have been unsuccessful to date.
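A rough back-of-envelope check illustrates why the 100/227 and 125/227 entries are hard to reconcile with the published means. If prescription counts were Poisson-distributed with the reported means (an assumption the data themselves contradict, since the SDs of 1.58 and 1.43 exceed the means), the implied numbers of patients with at least one prescription would be about 81 and 96, not 100 and 125:

```python
import math

def patients_with_any_prescription(mean_count, n):
    """Expected number of patients with at least one prescription, assuming
    counts follow a Poisson distribution with the reported mean.  This is a
    back-of-envelope assumption only: the reported SDs (1.58, 1.43) exceed
    the means, so the true counts are overdispersed and not Poisson."""
    return n * (1 - math.exp(-mean_count))

print(round(patients_with_any_prescription(0.44, 227)))  # control arm → 81
print(round(patients_with_any_prescription(0.55, 227)))  # screened arm → 96
```

Under this admittedly crude model neither arm reproduces the figures entered in the Cochrane forest plot, underlining that the source of those figures is unclear.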
The RCT in the USPTF review189 with the greatest weighting, which clearly influences the finding in favour of screening, is the Wells study.244 This study enrolled 1356 patients who screened as depressed using the "stem" items for major depressive and dysthymic disorders from the CIDI.244 Randomisation was by clinic, which either provided usual care (providers not informed that their patients were in the trial) or a quality improvement programme with either psychotropic medication or psychological intervention (providers notified that their patients had screened positive for depression). The quality of care, mental health outcomes and retention of employment of depressed patients improved in the intervention group. The Wells study was excluded from the Cochrane review because it is a "Complex quality improvement programme" (Characteristics of excluded studies, p 22).117


Discussion on assessment of meta-analyses

What initially presented as a straightforward task revealed itself to be increasingly complex when we discovered that, across the five reviews each considering three outcomes, there were 15 different combinations of RCTs. Our in-depth analysis of the process of two meta-analyses that address the same research question but reach contradictory conclusions demonstrates how decisions in the meta-analysis process can shape the conclusion. This is an important finding, as evidence-based clinical guidelines and practice recommendations rely on evidence from systematic reviews and meta-analyses.
Two questions come to mind: first, "who is right?" and second, "what drove the decisions?" The second question is the more essential one and requires full attention from meta-analysts. Addressing the fundamental issue of human choices in a methodologically rigorous process might even make an answer to the first, and most intuitive, question superfluous.
There is ample literature on the impact of publication bias, referring to an overrepresentation of trials with a 'positive' outcome in searches, on the conclusions of meta-analyses.220,255 This type of bias can be addressed by searching for unpublished data or extending the search to languages other than English,218 although it is not clear if this is worth the effort.256
Discrepancies in outcomes of meta-analyses have been documented and are often attributed to selective inclusion of studies.221,257,258 Felson describes a model for bias in meta-analytic research, identifying three stages at which bias can be introduced: finding studies, selection of studies to include, and extraction of data.259 He argues that "selection bias of studies [as opposed to selection bias of individuals within studies] is probably the central reason for discrepant results in meta-analyses." Cook et al determined that discordant meta-analyses could be attributed to "incomplete identification of relevant studies, differential inclusion of non-English language and nonrandomised trials, different definitions .., provision of additional information through direct correspondence with authors, and different statistical methods."260 Another study of eight meta-analyses found "many errors in both application of eligibility criteria and dichotomous data extraction".261
While selection bias and differing data extraction may contribute to discrepancy, our study suggests that the bias begins before these steps. Across three research questions in five different reviews, we found 15 different sets of included RCTs, yet one author group consistently found against screening while the other consistently found for it. Deciding which studies to include, and which data from those studies to use, involves numerous decisions. To our knowledge, the issue of choices and decision making in the process of meta-analysis has not been studied empirically before.
The methodology of meta-analysis is well developed and is continuously being refined to address identified threats of bias. The process is well documented in numerous textbooks, of which the Cochrane Collaboration Reviewers' Handbook218 may be the most widely used. The Cochrane Collaboration, the largest database of systematic reviews and meta-analyses of clinical trials in medicine, requires its authors to produce a protocol describing the intended process of the review before embarking on it. A strength of Cochrane reviews is that they justify their decisions. Each step is peer reviewed and monitored by editorial groups, ensuring methodological rigour. Yet no matter how rigorously we describe each step in the process, human decisions based on judgement are being made all the time. When documenting each decision we made in our exploration, we ourselves, although experienced reviewers, were astonished by the number of decision moments that occurred. Moreover, some of these decisions could be traced to 'subjective' inclinations. For example, our choice to explore the question of the effect of screening on the number of patients receiving treatment was a compromise between the desire to study a clinically relevant question and the need to have enough material for further study. Documenting each of these decisions and the rationale for the choices could add transparency to the process.
However, there might be an even more fundamental, unintentional source of "bias" embedded in the review process. The consistent findings of the two author groups, despite the different combinations of RCTs in each of their reviews, suggest as much. Authors might have a "hunch" about the outcome of their meta-analysis before they even start. It is likely that this "hidden bias" guides the choices that are made along the way. It could be called "hunch bias".
The main limitations of this study are that we chose to compare only two meta-analyses from the many options available, and that we introduced subjectivity through the choices we made. However, making these choices and their potential subjectivity explicit is the main strength of the study.
Meta-analysis is a process, and no meta-analysis is value-free. PRISMA involves a 27-item checklist (http://www.prisma-statement.org/). We can never standardise everything, especially author bias, so adding another 27 items is not the answer. An additional step of recognising each decision point and being explicit about these choices and their rationale would greatly increase the transparency of the meta-analysis process. But perhaps the greatest improvement in the transparency of meta-analysis can be achieved by asking authors to declare their "hunch" about the outcome before they embark on the review process. This step could easily be built into the review process of the Cochrane Collaboration, where the review protocol precedes publication of the full review. The implicit "subjectivity" of the seemingly "objective" meta-analysis process deserves attention in all published reviews and is an important part of well-informed evidence-based practice.


Summary of Chapter 3: Science is never value-free

This analysis explored the decisions that were made in the different meta-analyses but did not attempt to answer the question of whether or not screening for depression is justified. While scientific enquiry extends empirical knowledge and directs evidence-based practice, research findings are neither complete nor immutable, and will always rest on the questions we ask and the interpretations we make.262 Application of generic findings needs to be contextualised. Decisions about health promotion, prevention and clinical management require understanding and application of contextual knowledge, relating to specific populations or individuals and local legal, social and policy circumstances. Whether "to screen or not to screen" will depend on many regional, context-specific factors, particularly the availability and accessibility of effective interventions.
