Clinical Practice Guidelines and their transition to Computer Interpretable Guidelines


Clinical Practice Guidelines (CPGs) are defined as explicit statements that model and summarize current evidence and clinical judgment, following Evidence-Based Medicine (EBM) principles to standardize decision-making and promote best-practice healthcare quality (Lobach and Hammond 1997). Implementing CPGs has proven valuable for supporting clinicians in their decision-making process: CPGs provide educational help for less experienced practitioners, improve the quality of clinical care by assessing the evidence behind the recommended treatments, ensure that best clinical practice is followed, and help to avoid negligent medical practice and to reduce biases in the reported evidence (Silberstein 2005).
When implementing CPGs, several characteristics from the clinical and development points of view must be considered to ensure good healthcare quality levels and clinicians’ satisfaction. Assuring the validity and reliability of their clinical content, along with their clinical applicability in real clinical settings, may help to engage clinicians in their systematic application. Moreover, they must be clear when defining the procedures to be followed and allow some clinical flexibility, being developed in a representative manner to coexist with the current clinical performance procedures within a healthcare system (Sackett et al. 1996; Thomas 1999).
Nevertheless, several clinician adherence barriers make the dissemination of guidelines tedious and difficult. These barriers are mainly caused by (i) lack of awareness, (ii) lack of familiarity with the recommendations provided by the guideline, (iii) lack of agreement due to differing clinical interpretations, the simplification of the clinical knowledge reported in the guidelines, or the standardization of clinical cases, (iv) lack of self-efficacy, (v) lack of outcome expectancy, (vi) inertia of previous practice, and (vii) other external barriers arising from the patients or from environmental factors outside the clinicians’ control (Cabana et al. 1999). In conclusion, barriers related to clinicians’ knowledge of the guidelines, their attitudes towards them, or their trust in them can affect guideline implementation, compliance, and adherence in real clinical settings (Tunis 1994).
Several methods for guideline integration in clinical settings have been explored, but many barriers still persist. Some studies propose that timely feedback on performance, and on how clinical behavior changes with CPG usage, could increase clinicians’ likelihood of adhering to CPGs (Dykes et al. 2005). Including clinicians in the CPG formalization process and engaging them with clinical performance analysis and study is highly recommended (Hysong, Best, and Pugh 2006).
Current trends move towards highly interactive computerized systems that intuitively present complex clinical cases, where clinicians can access and check computerized clinical data and draw insights from all of this information in a more natural and intuitive way (Liem et al. 1995). These systems are good candidates for accommodating a digital implementation of CPGs that provides evidence-based decision support (Garg et al. 2005). Another objective is to achieve a correct, good-quality formalization of guidelines into computerized languages, following a consistent and adequate methodological development of the clinical processes and objectives represented in the guidelines. Since CPGs are living documents that report the latest clinical evidence, maintaining and updating them frequently becomes mandatory. However, CPGs are expressed as textual documents, which means their contents lag behind current knowledge and require new versions whenever the reported clinical knowledge is updated (Wang et al. 2002). Furthermore, CPGs are designed to support the most common, evidence-backed clinical cases, which makes them standard and assures their quality for usual cases, but leaves them insufficient for patients in gray areas, where evidence is lacking (e.g. clinical cases excluded from Randomized Controlled Trials, RCTs) or the patient differs from the canon (Bates et al. 2003). In some cases there are no CPGs formalizing the appropriate scientific evidence on which to base clinical practice, but data reflecting the opinions of experienced physicians when making therapeutic decisions is available. Data mining techniques have proven helpful in identifying practice-based decision rules that go beyond the formalized evidence, both for completing the evidence reported in guidelines (e.g. detailing the duration of a treatment administration, which is currently not defined in the guidelines but proven to influence patient outcomes) and for updating it when needed (Canavero et al. 2017; Toussi et al. 2009).
To address implementation-, dissemination-, and maintenance-related barriers, over the last decade the clinical knowledge contained in CPGs has been translated into computerized implementations known as Computer-Interpretable Guidelines (CIGs). CIGs allow computerized clinical data from patients’ electronic medical records to be analyzed and contrasted with the guidelines automatically, providing more personalized and reliable advice or treatment recommendations. The following characteristics are key to a CIG’s success and aid its dissemination and implementation throughout healthcare systems:
(i) the use of standardized clinical terminology that facilitates the understanding and univocal interpretation of the clinical data to be analyzed and of the clinical knowledge formalized in CIGs, (ii) a model for easily updating the guidelines and facilitating their dissemination across the clinical community, and (iii) the promotion of quality-testing tools for assessing the strength of CIG recommendations as a whole and of each individual recommendation. These characteristics help provide optimal, personalized, guideline-based recommendations at a reasonable cost and implementation effort (Latoszek-Berendsen et al. 2010).
Although several proposals for CIG representation have been made, there is no leading standardization language that fully satisfies the requirements for representing the logic of CPGs (Votruba, Miksch, and Kosara 2004; Kaiser and Miksch 2005; Tu and Musen 1999; Wang et al. 2002). One of these approaches formalizes clinical knowledge as “Task-Network Models” (TNMs), i.e. models that represent the dependencies among actions, structured as hierarchical networks which, when fulfilled satisfactorily, provide recommendations (Peleg 2013). Several proposals following this approach address different clinical modeling challenges, such as GLIF (Boxwala et al. 2004), PROforma (Sutton and Fox 2003), and Asbru (Miksch 1999). Moreover, given the vast amount of digitized data from electronic health records to be evaluated against the clinical processes formalized in CIGs, it is highly recommended to apply Semantic Web Technologies (SWTs) (Blomqvist 2014) in order to process the data more effectively and efficiently, create a proper framework for interoperability between systems, and integrate data from various sources (Argüello et al. 2009; Pruski, Bonacin, and Da Silveira 2011). In addition, along with SWTs, the use of standardized terminologies is strongly promoted, since representing biomedical concepts with stable and unique codes guarantees the interoperability of the implemented knowledge and its univocal interpretation (Ahmadian, Cornet, and de Keizer 2010). Two of the most widely used terminologies in cancer-domain applications are SNOMED CT and the NCI Thesaurus (Bodenreider 2008; Sioutos et al. 2007; Kumar and Smith 2005).
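To make the TNM idea concrete, the hierarchical task structure described above can be sketched as a small data model. This is a hypothetical minimal sketch, not the actual schema of GLIF, PROforma, or Asbru; the guideline fragment, task names, and thresholds are invented for illustration and carry no clinical meaning.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Task:
    """One node of a task-network model (TNM): an action or decision step."""
    name: str
    condition: Optional[Callable[[dict], bool]] = None  # eligibility test over patient data
    recommendation: Optional[str] = None                # emitted when this node applies
    subtasks: list["Task"] = field(default_factory=list)

def evaluate(task: Task, patient: dict) -> list[str]:
    """Walk the network depth-first, collecting recommendations whose conditions hold."""
    if task.condition is not None and not task.condition(patient):
        return []
    recs = [task.recommendation] if task.recommendation else []
    for sub in task.subtasks:
        recs.extend(evaluate(sub, patient))
    return recs

# Toy guideline fragment (hypothetical thresholds, for illustration only)
guideline = Task(
    name="hypertension management",
    subtasks=[
        Task(name="lifestyle advice",
             condition=lambda p: p["systolic_bp"] >= 130,
             recommendation="advise lifestyle modification"),
        Task(name="pharmacological step",
             condition=lambda p: p["systolic_bp"] >= 140,
             recommendation="consider antihypertensive therapy"),
    ],
)

print(evaluate(guideline, {"systolic_bp": 145}))
```

The point of the hierarchy is that higher-level tasks can gate entire subnetworks: when a parent condition fails, none of its subtasks are evaluated, which mirrors how TNM languages structure guideline logic.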
Applying these kinds of approaches during the data acquisition and requirements-definition process could alleviate missing or poor-quality data gathering, which otherwise results in poor CIG support and lower guideline compliance (Lanzola et al. 2014).
In conclusion, formalizing CPGs into CIGs enables decision-support systems that provide patient-specific advice at the point of care. Computerizing guidelines permits the analysis of all patient information, not only the latest clinical results but all relevant medical records, in a reliable and efficient manner and in minimal time, which helps in applying data mining techniques to identify relationships between patient-specific data, execution paths, process goals, and achieved clinical results (de Clercq et al. 2004; Peleg, Soffer, and Ghattas 2008; Ghattas, Soffer, and Peleg 2014). Moreover, it facilitates CPG adherence and the measurement of clinical outcomes and performance-related results, such as CPG compliance and the impact of the decisions made on patients’ healthcare, identifying the guidelines’ gray areas (Sim et al. 2001; Terenziani et al. 2008; Bragaglia et al. 2015; Hommersom and Lucas 2015; Lucas and Orihuela-Espina 2015; Panzarasa et al. 2010; Lanzola et al. 2014).

Clinical Decision Support Systems

Clinical Decision Support Systems (CDSSs) are computerized systems or software developments that aim to aid healthcare professionals in the diagnostic and therapeutic decision-making process (Payne 2000). When these CDSSs are CPG-based, the knowledge-driven clinical guidance they provide rests on the clinical knowledge implemented in CIGs. These systems analyze the relevant clinical characteristics of an individual patient in order to provide patient-specific assessments or recommendations for the best decision-making. In the last decade, CDSSs have proven to be powerful tools for improving clinicians’ CPG adherence and for supporting ambulatory patients (Sim et al. 2001; Peleg 2013; Quaglini et al. 2013). Moreover, these systems can analyze considerable amounts of structured information from patients’ electronic medical records in a very short time, achieving an overall improvement in healthcare practice and decreasing medical errors and variability while promoting guideline compliance (Sim et al. 2001; Berner and Lande 2016).
CDSSs must fulfill a list of design requirements in order to successfully support the clinical practitioner during the decision-making process and to assure both their acceptance and adherence to the CPGs (Isern and Moreno 2008; Bates et al. 2003; Sittig et al. 2008). Some of these requirements are: (i) providing a guideline repository that contains the latest available medical evidence for a given clinical domain and keeping this knowledge base updated, (ii) being able to feed the CDSS directly from electronic medical records and to process the relevant information for each case, dealing efficiently with missing data, (iii) evaluating the clinical data in the least time possible while avoiding the inclusion of excessive information, which may be overwhelming during the decision-making process, and (iv) fitting within the clinical reasoning workflow and tracking the impact of its implementation and use by analyzing guideline compliance and the decisions made over time (Lanzola et al. 2014).
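As an illustration of requirement (ii), the snippet below sketches how a CDSS might extract the variables a rule needs from an electronic medical record and surface missing data explicitly instead of issuing unreliable advice. The field names and the rule itself are hypothetical and are not drawn from any of the cited systems.

```python
# Hypothetical fields a breast-cancer rule might require (illustrative only).
REQUIRED_FIELDS = ["age", "tumor_size_mm", "er_status"]

def extract(record: dict) -> tuple[dict, list[str]]:
    """Return the available values and the list of fields still missing."""
    found = {f: record[f] for f in REQUIRED_FIELDS if record.get(f) is not None}
    missing = [f for f in REQUIRED_FIELDS if f not in found]
    return found, missing

def recommend(record: dict) -> str:
    values, missing = extract(record)
    if missing:
        # Surface the gap to the clinician rather than guessing silently.
        return f"insufficient data: missing {', '.join(missing)}"
    if values["er_status"] == "positive":
        return "consider endocrine therapy (illustrative rule, not clinical advice)"
    return "refer to multidisciplinary review"

print(recommend({"age": 54, "tumor_size_mm": 18, "er_status": None}))
```

Making missing data a first-class output, rather than a silent failure, is one way a CDSS can satisfy requirement (ii) while keeping the clinician in control of borderline cases.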
From the development point of view, providing tools that help keep the knowledge base updated, and following a standardized language when defining the CIGs to be implemented, is highly recommended, since this is an important constraint when deploying these systems to support medical teams in real clinical settings. For example, in the breast cancer domain, different CDSS prototypes that aid in managing care for breast cancer patients have been developed. The success of these prototypes during routine breast unit meetings, however, depends on periodic updates and constant maintenance of the knowledge base in order to upgrade them from purely supportive research tools (Séroussi et al. 2017).
In summary, studies have reported that CDSSs do improve care quality and decrease medical errors (Berner and Lande 2016), having a positive impact on the quality of medical practice; however, they are quite constraining, since they depend on a priori defined domain knowledge, and including new knowledge or updating the implemented guidelines is still not an easy task. Therefore, providing tools that facilitate the implementation, updating, and evaluation of computerized guidelines is crucial for providing the best-quality, latest-evidence-based clinical support through CDSSs.

Limits of guideline compliance

Even though CPGs have proven to enhance clinical practice, several causes limit their effectiveness and, consequently, clinicians’ adherence and compliance with CPGs (Grimshaw and Russell 1994; Davis and Taylor-Vaisey 1997).
The complexity of the medical domain makes the formalization of CPGs a difficult task to achieve successfully. First, formalizing evidence is not straightforward and may not reach the level of correctness and knowledge definition that clinicians expect, since opinion and interpretation still have a huge influence on healthcare management. On the one hand, CPG development procedures are quite constraining: guidelines are developed for population-level healthcare management, assuming that a “standard” patient exists, an assumption that may be inaccurate or even wrong for particular patients in real populations (Hurwitz 1999). The opposite can happen as well, when small randomized clinical trials or controlled observational studies are used to report evidence that then needs to be generalized, resulting in poorer outcomes when treating larger populations (Shekelle et al. 1999). Personalization of the guidelines is therefore imperative in order to improve adherence and compliance rates. Bouaud and Seroussi (2002) report that, for breast cancer management guidelines, 66% of 127 patients were correctly covered by the standard guidelines, whereas in 39% of cases there was a discrepancy between the guideline recommendation and the treatment administered.
A closely related issue is the guideline development process, i.e. how CPG development working groups are composed. These teams usually comprise quality auditors or managers who are guided by their own opinions, interests, and experience, and who intend to formalize evidence seeking the appropriateness of the provided recommendations but ignore the iterative and causal reasoning of clinicians (Woolf et al. 1999). Depending on the clinical context, and according to the approaches followed for development and dissemination as well as the implementation methods applied, CPGs can be more or less successful at reporting the latest clinical evidence (Grimshaw and Russell 1993). Even if CPGs are audited to rate their quality of evidence and strength of recommendations, replicating the clinical reasoning process is difficult and translates into simplified, generalized, and in some cases ambiguous vocabulary, which may lack supporting evidence and requires the clinicians’ own opinions for its interpretation. Giving way to interpretation and providing purely judgment-based recommendations is very susceptible to bias, to direct non-compliance with CPG-based recommendations, and to the pursuit of one’s own interests (Shekelle et al. 1999). Defining the reasoning process to be followed as precisely as possible would help to track and identify the causes of these evidence gaps and to analyze the reasons behind deviations from the guidelines.
Another point to take into account is that current clinical care is moving towards patient-clinician shared decision-making, since patient involvement can provide insights into the best health states or outcomes in each case, apart from establishing a partnership that will help clinicians understand their patients’ preferences (Say and Thomson 2003). There are particularly complicated cases in which making a clinical decision is difficult due to the trade-off between the level of observed symptoms and the impact those symptoms could have on the patient’s life, especially when the expected medical outcomes are similar for different clinical procedures (a situation referred to as “equipoise”), requiring an individualized, personalized healthcare process and close interaction with the patient to reach the best decision (Hlatky 1995). Nevertheless, CPGs do not include evidence on patient preferences, and many barriers must be overcome to do so successfully (Chong et al. 2009):
(i) considering patient preferences as population-level knowledge that follows general trends, and not only as individual one-off cases or subjective and variable factors, including them as part of the clinical evidence reported in studies in order to identify “preference-sensitive” decisions (e.g. decisions with lifelong implications, an uncertain benefit to the patient, unclear or conflicting evidence, a risk of side effects, or a negative effect on the patient’s quality of life) with high levels of uncertainty about the best clinical procedure to follow (Krahn and Naglie 2008),
(ii) creating a clear taxonomy (i.e. a systematic categorization) of patients’ preferences that serves as a standardization across all the disciplines involved (analysts, economists, clinical psychologists, etc.), which have different points of view on the measurement of patients’ preferences, so as to label and extract this information in a processable and understandable way (Bastemeijer et al. 2017; Luckmann 2001), and (iii) building a methodology to synthesize the current evidence on preferences and to describe preference-based evidence alongside clinically based evidence, as it has been proven to strongly influence the decision-making process (Noble et al. 2015; Froberg and Kane 1989).
In conclusion, even if CPGs aim to improve healthcare outcomes through standardized clinical procedures, their generalized approach lacks significant relevant information, causing low clinical adherence and considerable non-compliance rates in real clinical practice. To overcome these issues, the implementation of more flexible guidelines that facilitate shared decision-making and take patients’ preferences into account is being promoted (van der Weijden et al. 2010). Understanding the limitations of CPGs, identifying “gray” areas (i.e. complex cases for which guidelines cannot provide satisfactory support), and giving clinicians timely feedback about compliance rates, guideline deviations, and outcomes could significantly improve clinicians’ adherence to CPGs and the quality of the healthcare provided.


Rating the Strength of Recommendation and Quality of Evidence of CPGs

CPGs rely on the latest EBM to guide clinicians in the decision-making process. Scales such as the Appraisal of Guidelines Research and Evaluation (AGREE) assess the quality of the guideline development process, without focusing on the clinical content or on the quality of the evidence behind the provided recommendations (AGREE Collaboration 2003). Hence: to what extent are the recommendations provided in CPGs based on high-quality evidence? What is considered high-quality evidence? How can clinicians and CPG developers be confident about those recommendations?
In the last decade, several approaches have been developed in an attempt to answer these questions and formalize evidence-grading systems. The Agency for Healthcare Research and Quality (AHRQ) reviewed the ongoing efforts of different medical groups and reported that there are currently over 100 proposals for grading the evidence of guideline recommendations (West et al. 2002). Since many of these approaches were complex and difficult to integrate into daily clinical practice, the AHRQ stated three key elements to be covered by any evidence-grading system in order to facilitate its dissemination throughout the clinical community (Clair 2005): (i) quality, referring to the validity of a study or the minimal opportunity for bias it could have, (ii) quantity, the number of studies taken into account to formalize the evidence and the number of subjects studied within them, and (iii) consistency among other comparable studies on the same topic. Some of the approaches that meet these criteria are the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence, the Cochrane Collaboration, the US Preventive Services Task Force (USPSTF), the Strength of Recommendations Taxonomy (SORT), and the Grading of Recommendations, Assessment, Development and Evaluation (GRADE). Several of these approaches focus on reporting evidence based on patient-oriented outcomes, which may disagree with disease-oriented outcomes. For example, for a treatment such as Doxazosin for hypertension (high blood pressure), a disease-oriented outcome would be that it reduces the patient’s blood pressure, helping to prevent a stroke, whereas a patient-oriented outcome reports that the same treatment increases mortality in people of African ancestry. Patient-oriented-outcome approaches are deliberately simpler in order to facilitate their implementation throughout CPGs, and are mainly developed for specific clinical domains or illnesses (Ebell et al. 2004a).
To provide reliable measurements of the quality of the provided recommendations, the Quality of Evidence (QoE) and Strength of Recommendation (SoR) are defined. The QoE reflects how confident we are in the provided recommendation(s), while the SoR reflects the evidence supporting that recommendation and the benefit/risk tradeoff of following it. Focusing on SORT, this scale provides a uniform rating system, simple and easy to use, for rating QoE and SoR based on patient-oriented outcomes. The system comprises three levels of SoR for a body of evidence (A, for recommendations based on consistent, good-quality patient-oriented evidence; B, for recommendations based on inconsistent or limited-quality patient-oriented evidence; and C, for recommendations based on consensus, usual practice, opinion, or disease-oriented evidence) and three levels of QoE (1, good-quality patient-oriented evidence; 2, limited-quality patient-oriented evidence; and 3, other kinds of evidence such as consensus, usual practice, opinion, or disease-oriented evidence) (Ebell et al. 2004a). Since this approach is based on patient-oriented rather than disease-oriented evidence, it is still insufficient when used on its own. Moreover, the rating system cannot effectively manage certain qualitative results, since SORT does not address these types of recommendations (Ebell et al. 2004b).
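The SORT levels summarized above can be written down as a simple lookup table. This is only a sketch of the taxonomy’s labels as described in this section, not an implementation of the full SORT appraisal process; the `describe` helper is hypothetical.

```python
# SORT strength-of-recommendation (SoR) and quality-of-evidence (QoE) labels,
# as summarized in the text (Ebell et al. 2004a).
SORT_STRENGTH = {
    "A": "consistent, good-quality patient-oriented evidence",
    "B": "inconsistent or limited-quality patient-oriented evidence",
    "C": "consensus, usual practice, opinion, or disease-oriented evidence",
}
SORT_QUALITY = {
    1: "good-quality patient-oriented evidence",
    2: "limited-quality patient-oriented evidence",
    3: "other evidence (consensus, usual practice, opinion, disease-oriented)",
}

def describe(strength: str, quality: int) -> str:
    """Render a SORT rating pair as a human-readable sentence."""
    return (f"SoR {strength} ({SORT_STRENGTH[strength]}); "
            f"QoE {quality} ({SORT_QUALITY[quality]})")

print(describe("B", 2))
```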
GRADE, on the other hand, has been adopted by over 65 organizations worldwide and is becoming the international benchmark for rating QoE and SoR in a transparent and explicit way (Guyatt et al. 2013). The primary key points of this rating system are (i) the clear separation between QoE and SoR, meaning that a particular QoE does not necessarily imply a particular SoR, (ii) the inclusion of patient outcomes, (iii) the explicit identification of the factors that downgrade the QoE of a recommendation (limitations in study design, inconsistency or imprecision of the results, indirectness of the evidence, publication bias) or upgrade it (a large magnitude of effect, or plausible biases that would underestimate the true treatment effect),
(iv) the transparency of the process of formalizing evidence into recommendations: first proposing the clinical question or recommendation to be studied, then reporting the treatment effects and critical outcomes from the available evidence, assessing confidence in it by evaluating the evidence-reporting method followed, and finally analyzing the tradeoff between the benefits and risks of following that recommendation, (v) grading the quality of the available evidence on diagnostic strategies, (vi) explicit advice and guidance on values and assumed preferences when making a recommendation, even in cases of scarce evidence, (vii) a clear and pragmatic interpretation of SoR levels as “Strong” when the benefits outweigh the risks of following the recommendation, “Strong against” when the risks outweigh the benefits, and “Weak” when risks and benefits are balanced, and (viii) a simple but methodologically comprehensive approach for rating QoE in four grades: “High” when further research is unlikely to change confidence in the estimated treatment effect, “Moderate” when further research is likely to have an important impact on that confidence, “Low” when further research is very likely to affect the confidence estimate, and “Very Low” when the estimate of the effect of the analyzed treatments is unclear (Brożek et al. 2009; Balshem et al. 2011; Maymone, Gan, and Bigby 2014).
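The downgrading and upgrading logic described above can be sketched as simple bookkeeping over evidence levels: evidence starts at a level set by the study design and moves down or up one step per factor. This is a deliberately simplified illustration of the idea, not the GRADE method itself: real GRADE judgments are qualitative, and a single factor may lower the quality by one or two levels rather than exactly one, as assumed here.

```python
# Simplified bookkeeping for the GRADE factors listed in the text.
LEVELS = ["very low", "low", "moderate", "high"]

DOWNGRADE = {"risk_of_bias", "inconsistency", "indirectness",
             "imprecision", "publication_bias"}
UPGRADE = {"large_effect", "plausible_bias_underestimates_effect"}

def grade_quality(study_design: str, factors: set[str]) -> str:
    """Start high for RCTs, low for observational studies; shift one level per factor."""
    start = 3 if study_design == "rct" else 1
    shift = sum(-1 for f in factors if f in DOWNGRADE) \
          + sum(+1 for f in factors if f in UPGRADE)
    return LEVELS[max(0, min(3, start + shift))]

print(grade_quality("rct", {"imprecision"}))             # moderate
print(grade_quality("observational", {"large_effect"}))  # moderate
```

This makes key point (i) visible in code: the function rates only the quality of evidence; the separate benefit/risk judgment that yields “Strong”, “Strong against”, or “Weak” is not derivable from it.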
In conclusion, GRADE is the most frequently implemented SoR and QoE grading system because of its comprehensive, explicit, and transparent methodology for rating a recommendation to treat a patient. It guides clinicians, aiming to provide the best healthcare with the most recent evidence and information available in the most objective way. Nevertheless, the assessment of QoE remains dependent on subjective opinion, since each step requires clinical judgment and cannot be determined completely objectively, which does not ensure consistency across assessments.
Hence, it is crucial to measure clinical performance and the deviations from the latest evidence reported in CPGs, and to gather the outcomes of the patients who followed those treatments, in order to keep the evidence as up to date as possible. Nevertheless, accomplishing each of these tasks in a consistent and objective way is still a challenge.

Visual analytics in healthcare

The amount of available clinical data has grown enormously in recent years with the digitization of healthcare systems. Exploiting these large amounts of heterogeneous data may provide insights for improving healthcare effectiveness and efficiency, but due to the magnitude and complexity of the datasets, such conclusions are difficult to obtain and demonstrate in real clinical settings (Sun and Reddy 2013). Clinicians are overwhelmed by the large amounts of heterogeneous and scattered information they receive, which requires extensive effort to interpret. Reaching conclusions about the implicit relationships in the data that could influence patients’ health conditions is not straightforward. Because of this information overload, crucial variables and relationships may be ignored, misinterpreted, or missed, with a negative impact on patient outcomes and clinical performance (Vaitsis, Nilsson, and Zary 2014). To overcome these issues, visual analytics, the science of displaying information through easy-to-use interactive interfaces focused on analytical reasoning, has been proposed (May et al. 2010). Visual analytics offers timely information in an intuitive and interactive format, facilitating hypothesis generation, reasoning, and the interpretation of complex data for a given population (Caban and Gotz 2015). Moreover, it permits the discovery of unknown, hidden, implicit information patterns by highlighting connections among the analyzed variables within a dataset, customizing the queries to be carried out depending on the hypothesis formalized in each case, and allowing the visualization of complex ideas in a clear and precise way that is not possible with other approaches (Simpao et al. 2014).
One of the most widespread techniques for visualizing complex multidimensional datasets and discovering patterns in data is the Parallel Coordinates Plot (PCP) (Inselberg and Dimsdale 1990; Cuzzocrea and Zall 2013) (see Figure 2).
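The geometry behind a PCP can be sketched without a plotting library: each record becomes a polyline whose y-value on axis i is the record’s i-th variable rescaled to [0, 1], and a plotting library then draws one line per record and one vertical axis per variable. The patient variables below are illustrative, not drawn from any dataset in this work.

```python
def pcp_polylines(records: list[dict], axes: list[str]) -> list[list[tuple[int, float]]]:
    """Map each record to the (axis_index, normalized_value) vertices of its polyline."""
    lo = {a: min(r[a] for r in records) for a in axes}
    hi = {a: max(r[a] for r in records) for a in axes}

    def scale(a, v):
        # Rescale value v on axis a to [0, 1]; constant axes map to the midpoint.
        return 0.5 if hi[a] == lo[a] else (v - lo[a]) / (hi[a] - lo[a])

    return [[(i, scale(a, r[a])) for i, a in enumerate(axes)] for r in records]

# Illustrative records: one patient at every minimum, one at every maximum.
patients = [
    {"age": 40, "tumor_size_mm": 10, "nodes_positive": 0},
    {"age": 60, "tumor_size_mm": 30, "nodes_positive": 4},
]
lines = pcp_polylines(patients, ["age", "tumor_size_mm", "nodes_positive"])
print(lines[0])  # [(0, 0.0), (1, 0.0), (2, 0.0)]
print(lines[1])  # [(0, 1.0), (1, 1.0), (2, 1.0)]
```

Per-axis normalization is what lets variables with very different units share one plot; patterns then appear as bundles of polylines following similar paths across the axes.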

Table of contents :

1. Introduction
1.1 Background
1.2 Problem Analysis Context
1.3 Objectives and Research Questions
1.4 Research Scope
1.5 Approach
1.6 Structure
2. State-of-the-Art
2.1 Clinical Practice Guidelines and their transition to Computer Interpretable Guidelines
2.2 Clinical Decision Support Systems
2.3 Limits of guideline compliance
2.4 Rating the Strength of Recommendation and Quality of Evidence of CPGs
2.5 Visual analytics in healthcare
2.6 Conclusions
3. Research Design Approach
3.1 Research Design Concepts
3.1.1 Clinical Knowledge Formalization
3.1.2 CIG Formalization Module
3.1.3 Ontology-based semantic validation
3.2 Methodology for an evolutive CDSS
3.2.1 Domain-independent CIG formalization
3.2.2 Authoring tool
3.2.3 Augmenting the clinical knowledge using experience
3.3 Visual Analytics
3.3.1 Decisional Events visualization
3.3.2 Real-World Data visualization
4. Use Case: Breast Cancer
4.1 Breast Cancer Knowledge Model (BCKM)
4.2 Breast Cancer Clinical Practice Guidelines
4.3 Non-compliance criteria definition
4.4 Breast cancer outcomes
4.5 Interaction of the guideline-based CDSS and the experience-based CDSS within DESIREE project
5. Validation
5.1 Technical Assessment
5.1.1 Unit test design and implementation
5.1.2 Integration test design and implementation
5.2 Clinical assessment
5.2.1 Experience generation from retrospective data in simulated BUs
5.2.2 Performance and clinical validation with prospective data in real BUs
6. Conclusions
7. Research contributions
7.1 International Conferences
7.2 Journals
7.3 Awards
7.4 Intellectual Property
8. Discussion and Future research
