Using the SQUIRE Explanation and Elaboration Document

This Explanation and Elaboration document is designed to support authors in the use of the revised SQUIRE guidelines by providing representative examples of high-quality reporting of SQUIRE 2.0 content items, followed by analysis of each item, and consideration of the features of the chosen example that are consistent with the item’s intent. Each sequential sub-subsection of this E&E document was written by a contributing author or authors, chosen for their expertise in that area. Contributors are from a variety of disciplines and professional backgrounds, reflecting a wide range of knowledge and experience in healthcare systems in Sweden, the United Kingdom, Canada, and the United States. SQUIRE 2.0 is intended for reports that describe systematic work to improve the quality, safety, and value of healthcare, using a range of methods to establish the association between observed outcomes and intervention(s). SQUIRE 2.0 applies to the reporting of qualitative and quantitative evaluations of the nature and impact of interventions intended to improve healthcare, with the understanding that the guidelines may be adapted as needed for specific situations.

Title and Abstract

1.  Title

Indicate that the manuscript concerns an initiative to improve healthcare (broadly defined to include the quality, safety, effectiveness, patient-centeredness, timeliness, cost, efficiency, and equity of healthcare, or access to it).

Example 1

“Reducing post-caesarean surgical wound infection rate: an improvement project in a Norwegian maternity clinic.” [15]

Example 2

“Large scale organizational intervention to improve patient safety in four UK hospitals: mixed method evaluation.”[16]


The title of a healthcare improvement report should indicate that it is about an initiative to improve safety, value and/or quality in healthcare, and should describe the aim of the project and the context in which it occurred.  Because the title of a paper provides the first introduction of the work, it should be both descriptive and simply written to invite the reader to learn more about the project.  Both examples given above do this well.

Authors should consider using terms which allow the reader to identify easily that the project is within the field of healthcare improvement, and/or state this explicitly as in the examples above.  This information also facilitates the correct assignment of medical subject headings (MeSH) in the National Library of Medicine’s Medline database.  In 2015, healthcare improvement-related MeSH terms include:  Health Care Quality Access and Evaluation; Quality Assurance; Quality Improvement; Outcome and Process Assessment (Healthcare); Quality Indicators, Health Care; Total Quality Management; Safety Management (http://www.nlm.nih.gov/mesh/MBrowser.html).  Sample key words which might be used in connection with improvement work include Quality, Safety, Evidence, Efficacy, Effectiveness, Theory, Interventions, Improvement, Outcomes, Processes, and Value.

2.  Abstract

a. Provide adequate information to aid in searching and indexing

b. Summarize all key information from various sections of the text using the abstract format of the intended publication or a structured summary such as: background, local problem, methods, interventions, results, conclusions


“BACKGROUND: Pain assessment documentation was inadequate because of the use of a subjective pain assessment strategy in a tertiary level IV neonatal intensive care unit (NICU). The aim of this study was to improve consistency of pain assessment documentation through implementation of a multidimensional neonatal pain and sedation assessment tool. The study was set in a 60-bed level IV NICU within an urban children's hospital. Participants included NICU staff, including registered nurses, neonatal nurse practitioners, clinical nurse specialists, pharmacists, neonatal fellows, and neonatologists.

METHODS: The Plan Do Study Act method of quality improvement was used for this project. Baseline assessment included review of patient medical records 6 months before the intervention. Documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment were reviewed. Literature review and listserv query were conducted to identify neonatal pain tools.

INTERVENTION: Survey of staff was conducted to evaluate knowledge of neonatal pain and also to determine current healthcare providers' practice as related to identification and treatment of neonatal pain. A multidimensional neonatal pain tool, the Neonatal Pain, Agitation, and Sedation Scale (N-PASS), was chosen by the staff for implementation.

RESULTS: Six months and 2 years following education on the use of the N-PASS and implementation in the NICU, a chart review of all hospitalized patients was conducted to evaluate documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment in the medical progress note. Documentation of pain scores improved from 60% to 100% at 6 months and remained at 99% 2 years following implementation of the N-PASS. Pain score documentation with ongoing nursing assessment improved from 55% to greater than 90% at 6 months and 2 years following the intervention. Pain assessment documentation following intervention of an elevated pain score was 0% before implementation of the N-PASS and improved slightly to 30% 6 months and 47% 2 years following implementation.

CONCLUSIONS: Identification and implementation of a multidimensional neonatal pain assessment tool, the N-PASS, improved documentation of pain in our unit. Although improvement in all quality improvement monitors was noted, additional work is needed in several key areas, specifically documentation of reassessment of pain following an intervention for an elevated pain score.

Keywords: N-PASS, neonatal pain, pain scores, quality improvement,” [17]


The purpose of an abstract is two-fold: first, to summarize all key information from various sections of the text using the abstract format of the intended publication or a structured summary of the background, specific problem to be addressed, methods, interventions, results, and conclusions; and second, to provide adequate information to aid in searching and indexing.

The abstract is meant to be both descriptive, indicating the purpose, methods, and scope of the initiative, and informative, including the results, conclusions, and recommendations. It needs to contain sufficient information about the article to allow readers to decide quickly whether it is relevant to their work and whether they wish to read the full-length article. Additionally, many online databases such as Ovid and CINAHL use abstracts to index the article, so it is important to include key words and phrases that will allow for quick retrieval in a literature search.  The example given includes these.

Journals have varying requirements for the format, content, length, and structure of an abstract. The above example illustrates how the important components of an abstract can be effectively incorporated in a structured abstract.  It is clear that it is a healthcare improvement project.  Some background information is provided, including a brief description of the setting and the participants, and the aim/objective is clearly stated.  The methods section describes the strategies used for the interventions, and the results section includes data that delineate the impact of the changes. The conclusion section provides a succinct summary of the project, what led to its success, and lessons learned.  This abstract is descriptive and informative, allowing readers to determine whether they wish to investigate the article further.


Why did you start?

3. Problem Description

Nature and significance of the local problem

4. Available Knowledge

Summary of what is currently known about the problem, including relevant previous studies


“Central venous access devices place patients at risk for bacterial entry into the bloodstream, facilitate systemic spread, and contribute to the development of sepsis.  Rapid recognition and antibiotic intervention in these patients, when febrile, are critical. Delays in time to antibiotic (TTA) delivery have been correlated with poor outcomes in febrile neutropenic patients.2 TTA was identified as a measure of quality of care in pediatric oncology centers, and a survey reported that most centers used a benchmark of <60 minutes after arrival, with >75% of pediatric cancer clinics having a mean TTA of <60 minutes…

The University of North Carolina (UNC) Hospitals ED provides care for ∼65 000 patients annually, including 14 000 pediatric patients aged ≤19 years. Acute management of ambulatory patients who have central lines and fever often occurs in the ED. Examination of a 10-month sample revealed that only 63% of patients received antibiotics within 60 minutes of arrival … “[18]


The introduction section of a quality improvement article clearly identifies the current relevant evidence, the best practice standard based on that evidence, and the gap in quality. A quality gap describes the difference between practice at the local level and the achievable evidence-based standard. The authors of this article describe the problem and identify the quality gap by stating that “examination of a 10-month sample revealed that only 63% of patients received antibiotics within 60 minutes of arrival,”[18] falling short of the <60-minute benchmark, and by noting that delays in delivering antibiotics led to poorer outcomes. The timing of antibiotic administration at the national level compared to the local level provides an achievable standard of care, which helped the authors determine the goal for their antibiotic administration improvement project.

Providing a summary of the relevant evidence and what is known about the problem provides background and support for the improvement project and increases the likelihood of sustainable success.  The contextual information provided by describing the local system clarifies the project and shows how suboptimal antibiotic administration negatively impacts quality. Missed diagnoses, delayed treatments, increased morbidity, and increased costs are associated with a lack of quality, with relevance and implications at both the local and national level.

Improvement work can also be done on a national or regional level.  In this case, the term “local” in the SQUIRE guidelines should be interpreted more generally as the specific problem to be addressed.  For example, Murphy and colleagues describe a national initiative addressing a healthcare quality issue.[19]   The introduction section in this article also illuminates current relevant evidence, best practice based on the current evidence, and the gap in quality. However, the quality gap reported here is the difference in knowledge of statin use for patients at high risk of cardiovascular morbidity and mortality in Ireland compared with European clinical guidelines:  “Despite strong evidence and clinical guidelines recommending the use of statins for secondary prevention, a gap exists between guidelines and practice … A policy response that strengthens secondary prevention, and improves risk assessment and shared decision-making in the primary prevention of CVD [cardiovascular disease] is required.”  [19]

Improvement work can also address a gap in knowledge, rather than quality.  For example, work might be done to develop tools to assess patient experience for quality improvement purposes. [20]  Interventions to improve patient experience, or to enhance team communication about patient safety,[21] may also address quality problems, but in the absence of an established, evidence-based standard.

5. Rationale

Informal or formal frameworks, models, concepts, and/or theories used to explain the problem, any reasons or assumptions that were used to develop the intervention(s), and reasons why the intervention(s) was expected to work.

Example 1 

“The team used a variety of qualitative methods …to understand sociotechnical barriers. At each step of collection, we categorised data according to the FITT [‘Fit between Individuals, Task, and Technology’] model criteria … Each component of the activity system (ie, user, task and technology) was clearly defined and each interface between components was explored by drawing from several epistemological disciplines including the social and cognitive sciences. The team designed interventions to address each identified FITT barrier… By striving to understand the barriers affecting activity system components and the interfaces between them, we were able to develop a plan that addressed user needs, implement an intervention that articulated with workflow, study the contextual determinants of performance, and act in alignment with stakeholder expectations.”[22]

Example 2

“…We describe the development of an intervention to improve medication management in multimorbidity by GPs, in which we applied the steps of the BCW [Behaviour Change Wheel] [23] to enable a more transparent implementation of the MRC [Medical Research Council] framework for design and evaluation of complex interventions ....

…we used the COM-B (capability, opportunity, motivation—behaviour) model to develop a theoretical understanding of the target behaviour and guide our choice of intervention functions. We used the COM-B model to frame our qualitative behavioural analysis of the qualitative synthesis and interview data. We coded empirical data relevant to GPs’ …capabilities, …opportunities and …motivations to highlight why GPs were or were not engaging in the target behaviour and what needed to change for the target behaviour to be achieved.

The BCW incorporates a comprehensive panel of nine intervention functions, shown in Fig. 1, which were drawn from a synthesis of 19 frameworks of behavioural-intervention strategies. We determined which intervention functions would be most likely to effect behavioural change in our intervention by mapping the individual components of the COM-B behavioural analysis onto the published BCW linkage matrices…”[24]


The label “rationale” for this guideline item refers to the reasons the authors have for expecting that an intervention will ‘work.’  A rationale is always present in the heads of researchers; however, it is important to make this explicit and communicate it in healthcare quality improvement work.  Without this, learning from empirical studies may be limited and opportunities for accumulating and synthesising knowledge across studies restricted. [8]

Authors can express a rationale in a variety of ways, and in more than one way in a specific paper. These include providing an explanation, specifying underlying principles, hypothesising processes or mechanisms of change, or producing a logic model (often in the form of a diagram) or a programme theory.  The rationale may draw on a specific theory with clear causal links between constructs or on a general framework which indicates potential mechanisms of change that an intervention could target.

A well-developed rationale allows the possibility of evaluating not just whether the intervention had an effect, but how it had that effect.  This provides a basis for understanding the mechanisms of action of the intervention, and how it is likely to vary across, for example, populations, settings and targets.  An explicit rationale leads to specific hypotheses about mechanisms and/or variation, and testing these hypotheses provides valuable new knowledge, whether or not they are supported. This knowledge lays the foundation for optimizing the intervention, accumulating evidence about mechanisms and variation, and advancing theoretical understanding of interventions in general.

The first example shows how a theory (the ‘Fit between Individuals, Task and Technology’ framework; FITT) can identify and clarify the social and technological barriers to healthcare improvement work.  The study investigated engagement with a computerised system to support decisions about post-operative DVT prophylaxis: use of the framework led to 11 distinct barriers being identified, each associated with a clearly specified intervention which was undertaken.

The second example illustrates the use of an integrative theoretical framework for intervention development.[25]  The authors used an integrative framework rather than a specific theory/model/framework in order to start with as comprehensive a framework as possible, since many theories of behaviour change are partial.[18]  This example provides a clear description of the framework and how analysing the target behaviour using an integrative theoretical model informed the selection of intervention content.

Interventions may be effective without the effects being brought about by changes identified in the hypothesised mechanisms; on the other hand, they may activate the hypothesised mechanisms without changing behaviour. The knowledge gained through a theory-based evaluation is essential for understanding processes of change and, hence, for developing more effective interventions. This paper also cited evidence for, and examples of, the utility of the framework in other contexts.

6. Specific Aims

Purpose of the project and of this report


“The collaborative quality improvement (QI) project described in this article was conducted to determine whether care to prevent postoperative respiratory failure as addressed by PSI 11 [Patient Safety Indicator #11, a national quality indicator] could be improved in a Virtual Breakthrough Series (VBTS) collaborative…” [26]


The specific aim of a project describes why it was conducted and the goal of the report.  It is essential to state the aims of improvement work clearly, completely, and precisely.  Specific aims should align with the nature and significance of the problem and the gap in quality, safety, and value identified in the introduction, and should reflect the rationale for the intervention(s).  The example given makes it clear that the goal of this multisite initiative was to reduce postoperative respiratory failure by using a Virtual Breakthrough Series (VBTS).

When appropriate, the specific aims section of a report about healthcare improvement work should state that both process and outcomes will be assessed.  Focusing only on assessment of outcomes ignores the possibility that clinicians may not have adopted the desired practice, or did not adopt it effectively, during the study period. Changing care delivery is the foundation of improvement work and should also be measured and reported.  In the subsequent methods section, the example presented here also describes the process measures used to evaluate the VBTS.


What did you do?

7. Context

Contextual elements considered important at the outset of introducing the intervention(s)

Example 1

“CCHMC [Cincinnati Children’s Hospital Medical Center] is a large, urban pediatric medical center and the Bone Marrow Transplant (BMT) team performs 100 to 110 transplants per year.  The BMT unit contains 24 beds and 60-70% of the patients on the floor are on cardiac monitors…The clinical providers…include 14 BMT attending physicians, 15 fellows, 7 NPs [nurse practitioners], and 6 hospitalists…The BMT unit employs ~130 bedside RNs [registered nurses] and 30 PCAs [patient care assistants]. Family members take an active role…”[27]

Example 2 

“Pediatric primary care practices were recruited through the AAP QuIIN [American Academy of Pediatrics Quality Improvement Innovation Network] and the Academic Pediatric Association’s Continuity Research Network.  Applicants were told that Maintenance of Certification (MOC) Part 4 had been applied for, but was not assured.  Applicant practices provided information on their location, size, practice type, practice setting, patient population and experience with quality improvement (QI) and identified a 3-member physician-led core improvement team. …  Practices were selected to represent diversity in practice types, practice settings, and patient populations.  In each selected practice the lead core team physician and in some cases the whole practice had previous QI experience…Table 1 summarizes practice characteristics for the 21 project teams.”  [28]


Context is known to affect the process and outcome of interventions to improve the quality of healthcare.[14]  This section of a report should describe the contextual factors that authors considered important at the outset of the improvement initiative.  The goal of including information on context is two-fold.  First, describing the context in which the initiative took place is necessary to assist readers in understanding whether the intervention is likely to “work” in their local environment and, more broadly, the generalizability of the findings.  Second, it enables the researchers to examine the role of context as a moderator of successful intervention(s).  Specific and relevant elements of context thought to optimize the likelihood of success should be addressed in the design of the intervention, and plans should be made a priori to measure these factors and examine how they interact with the success of the intervention.

Describing the context within the methods section orients the reader to where the initiative occurred.  In single-center studies, this description usually includes information about the location, patient population, size, staffing, practice type, teaching status, system affiliation, and relevant processes in place at the start of the intervention, as is demonstrated in the first example by Dandoy et al.[27] reporting a QI effort to reduce monitor alarms. Similar information is also provided in aggregate for multi-centre studies.  In the second example by Duncan et al.,[28] a table is used to describe the practice characteristics of the 21 participating pediatric primary care practices, and includes information on practice type, practice setting, practice size, patient characteristics, and use of an electronic health record. This information can be used by readers to assess whether their own practice setting is similar enough to the practices included in this report to enable extrapolation of the results. The authors state that they selected practices to achieve diversity in these key contextual factors.  This was likely done so that the team could assess the effectiveness of the interventions in a range of settings and increase the generalizability of the findings.

Any contextual factors believed a priori to impact the success of the intervention should be specifically discussed in this section.  Although the authors’ rationale is not explicitly stated, the example suggests that they had specific hypotheses about key aspects of a practice’s context that would impact implementation of the interventions.  They addressed these contextual factors in the design of their study in order to increase the likelihood that the intervention would be successful.  For example, they stated specifically that they selected practices with previous healthcare improvement experience and strong physician leadership.  In addition, the authors noted that practices were recruited through an existing research consortium, indicating their belief that project sponsorship by an established external network could impact success of the initiative.  They also noted that practices were made aware that American Board of Pediatrics Maintenance of Certification Part 4 credit had been applied for but not assured, implying that the authors believed incentives could impact project success.  While addressing context in the design of the intervention may increase the likelihood of success, these choices limit the generalizability of the findings to other similar practices with prior healthcare improvement experience, strong physician leadership, and available incentives.

This example could have been strengthened by using a published framework such as the Model for Understanding Success in Quality (MUSIQ),[10] Consolidated Framework for Implementation Research (CFIR),[29] or the Promoting Action on Research Implementation in Health Services (PARiHS) model [30] to identify the subset of relevant contextual factors that would be examined.[11, 17] The use of such frameworks is not a requirement but a helpful option for approaching the issue of context. The relevance of any particular framework can be determined by authors based on the focus of their work—MUSIQ was developed specifically for microsystem or organizational QI efforts, whereas CFIR and PARiHS were developed more broadly to examine implementation of evidence or other innovations.

If elements of context are hypothesized to be important, but are not going to be addressed specifically in the design of the intervention, plans to measure these contextual factors prospectively should be made during the study design phase.  In these cases, measurement of contextual factors should be clearly described in the methods section, data about how contextual factors interacted with the interventions should be included in the results section, and the implications of these findings should be explored in the discussion.  For example, if the authors of the examples above had chosen this approach, they would have measured participating teams’ prior healthcare improvement experience and looked for differences in successful implementation based on whether practices had prior experience or not. In cases where context was not addressed prospectively, authors are still encouraged to explore the impact of context on the results of intervention(s) in the discussion section.

8.  Intervention(s)

a.  Description of the intervention(s) in sufficient detail that others could reproduce it

b.  Specifics of the team involved in the work

Example 1

“We developed the I-PASS Handoff Bundle through an iterative process based on the best evidence from the literature, our previous experience, and our previously published conceptual model. The I-PASS Handoff Bundle included the following seven elements: the I-PASS mnemonic, which served as an anchoring component for oral and written handoffs and all aspects of the curriculum; a 2-hour workshop (to teach TeamSTEPPS teamwork and communication skills, as well as I-PASS handoff techniques), which was highly rated; a 1-hour role-playing and simulation session for practicing skills from the workshop; a computer module to allow for independent learning; a faculty development program; direct-observation tools used by faculty to provide feedback to residents; and a process-change and culture-change campaign, which included a logo, posters, and other materials to ensure program adoption and sustainability. A detailed description of all curricular elements and the I-PASS mnemonic have been published elsewhere and are provided in Table S1 in the Supplementary Appendix, available with the full text of this article at NEJM.org. I-PASS is copyrighted by Boston Children’s Hospital, but all materials are freely available.

Each site integrated the I-PASS structure into oral and written handoff processes; an oral handoff and a written handoff were expected for every patient. Written handoff tools with a standardized I-PASS format were built into the electronic medical record programs (at seven sites) or word-processing programs (at two sites). Each site also maintained an implementation log that was reviewed regularly to ensure adherence to each component of the handoff program.”[21]

Example 2

 “All HCWs [healthcare workers] on the study units, including physicians, nurses and allied health professionals, were invited to participate in the overall study of the RTLS [real-time location system] through presentations by study personnel. Posters describing the RTLS and the study were also displayed on the participating units… Auditors wore white lab coats as per usual hospital practice and were not specifically identified as auditors but may have been recognisable to some HCWs. Auditors were blinded to the study hypothesis and conducted audits in accordance with the Ontario Just Clean Your Hands programme.”[31]


In the same way that reports of basic science experiments provide precise details about the quantity, specifications and usage of reagents, equipment, chemicals and materials needed to run an experiment, so too should the description of the healthcare improvement intervention include or reference enough detail that others could reproduce it.  Improvement efforts are rarely unimodal, and descriptions of each component of the intervention should be included.  For additional guidance regarding the reporting of interventions, readers are encouraged to review the TIDieR guidelines: http://www.ncbi.nlm.nih.gov/pubmed/24609605.
In the first example above[21] about the multisite I-PASS study to improve paediatric handoff safety, the authors describe seven different elements of the intervention, including a standardized mnemonic, several educational programs, a faculty development program, observation/feedback tools, and even the publicity materials used to promote the intervention. Every change that could have contributed to the observed outcome is noted. Each element is briefly described, and a reference to a more detailed description is provided so that interested readers can seek more information. In this fashion, complete information about the intervention is made available, yet the full details do not overwhelm this report. Note that not all references are to peer-reviewed literature: some are to curricular materials on the MedEdPORTAL website (https://www.mededportal.org), and others are to online materials.

The supplementary appendix available with this report summarizes key elements of each component, which is another option for making details available to readers. The authors were careful to note situations in which the intervention differed across sites: at two sites the written handoff tool was built into word-processing programs, not the electronic medical record. Since interventions are often unevenly applied or taken up, variation in the application of intervention components across units, sites, or clinicians is reported in this section where applicable.

The characteristics of the team that conducted the intervention (for instance, type and level of training, degree of experience, and administrative and/or academic position of the personnel leading workshops) and/or the personnel to whom the intervention was applied should be specified. Often the influence of the people involved in the project is as great as the project components themselves. The second example above,[31] from an elegant study of the Hawthorne effect on hand hygiene rates, succinctly describes both the staff that were being studied and characteristics of the intervention personnel: the auditors tracking hand hygiene rates.

9. Study of the Intervention(s)

a.  Approach chosen for assessing the impact of the intervention(s)

b.  Approach used to establish whether the observed outcomes were due to the intervention(s)

Example 1

 “The nonparametric Wilcoxon-Mann-Whitney test was used to determine differences in OR use among Radboud UMC [University Medical Centre] and the six control UMC's together as a group. To measure the influence of the implementation of new regulations about cross functional teams in May 2012 in Radboud UMC, a (quasi-experimental) time-series design was applied and multiple time periods before and after this intervention were evaluated.” [32]

Example 2

“To measure the perceptions of the intervention on patients and families and its effect on transition outcomes, a survey was administered in the paediatric cystic fibrosis clinic at the start of the quality improvement intervention and 18 months after the rollout process. The survey included closed questions on demographics and the transition materials (usefulness of guide and notebook, actual use of notebook and guide, which specific notebook components were used in clinic and at home). We also elicited open-ended feedback…

“A retrospective chart review assessed the ways patients transferred from the paediatric to adult clinic before and after the transition programme started.  In addition, we evaluated differences in BMI [body mass index] and hospitalizations 1 year after transfer to the adult centre.”[33]


Broadly, the study of the intervention is the reflection upon the work that was done, its effects on the systems and people involved, and an assessment of the internal and external validity of the intervention. Addressing this item will be greatly facilitated by the presence of a strong rationale, because when authors are clear about why they thought an intervention should work, the path to assessing the what, when, why and how of success or failure becomes easier. 

The study of the intervention may be accomplished at least in part (but not solely) through the study design used. For example, a stepped wedge design or a comparison control group can be used to study the effects of the intervention. Other ways to study the intervention include, but are not limited to, stakeholder satisfaction surveys about the intervention, focus groups or interviews with involved personnel, evaluations of the fidelity of implementation of an intervention, or estimation of unintended effects through specific analyses. The aims and methods for this portion of the work should be clearly specified. The authors should indicate whether these evaluative techniques were performed by the authors themselves or by an outside team, and what the relationship was between the authors and the evaluators. The timing of the ‘study of the intervention’ activities relative to the intervention should also be indicated.

In the first example,[32] the cross functional team study, the goal was to improve utilization of operating room time by having a multidisciplinary, inter-professional group proactively manage the operating room schedule. This project used a pre-specified study design, with an intervention group and a control group, to study the intervention. The authors assessed whether the observed outcomes were due to the intervention or some other cause (internal validity) by comparing operating room utilization over time at the intervention site with utilization at the control sites. They understood the possible confounding effects of system-wide changes to operating room policies, and planned their analysis to account for this by using a quasi-experimental time series design. The authors used statistical results to determine the validity of their findings, suggesting that the decrease in variation in use was indicative of organizational learning.

In a subsequent section of this report, the authors also outlined an evaluation they performed to make sure that improved efficiency of operating room use was not associated with adverse changes in operative mortality or complication rates. This is an example of how an assessment of the unintended impact of the intervention - an important component of studying the intervention - might be completed. An additional way to assess impact in this particular study might have been to obtain information from staff on their impressions of the program, or to assess how cross functional teams were implemented at this particular site.

In the second example,[33] a program to improve the transition from paediatric to adult cystic fibrosis care was implemented and evaluated.  The authors used a robust theoretical framework to help develop their work in this area, and its presence supported their evaluative design by showing whose feedback would be needed in order to determine success: healthcare providers, patients, and their families. In this paper, the development of the intervention incorporated the principle of studying it through PDSA cycles, which were briefly reported to give the reader a sense of the validity of the intervention.  Outcomes of the intervention were assessed by testing how patients’ physical parameters changed over time before and after the intervention.  To test whether these changes were likely to be related to the implementation of the new transition program, patients and families were asked to complete a survey, which demonstrated the overall utility of the intervention to the target audience of families and patients.  The survey also helped support the assertion that the intervention was the reason patient outcomes improved by testing whether people actually used the intervention materials as intended.

10. Measures

a.  Measures chosen for studying processes and outcomes of the intervention(s), including rationale for choosing them, their operational definitions, and their validity and reliability

b.  Description of the approach to the ongoing assessment of contextual elements that contributed to the success, failure, efficiency, and cost

c.  Methods employed for assessing completeness and accuracy of data


“Improvement in culture of safety and ‘transformative’ effects—Before and after surveys of staff attitudes in control and SPI1 [the Safer Patients Initiative, phase 1] hospitals were conducted by means of a validated questionnaire to assess staff morale, attitudes, and aspects of culture (the NHS National Staff Survey)…

Impact on processes of clinical care—To identify any improvements, we measured error rates in control and SPI1 hospitals by means of explicit (criterion based) and separate holistic reviews of case notes. The study group comprised patients aged 65 or over who had been admitted with acute respiratory disease: this is a high risk group to whom many evidence based guidelines apply and hence where significant effects were plausible.

Improving outcomes of care—We reviewed case notes to identify adverse events and mortality and assessed any improvement in patients’ experiences by using a validated measure of patients’ satisfaction (the NHS patient survey)…

To control for any learning or fatigue effects, or both, in reviewers, case notes were scrambled to ensure that they were not reviewed entirely in series. Agreement on prescribing error between observers[ref.] was evaluated by assigning one in 10 sets of case notes to both reviewers, who assessed cases in batches, blinded to each other’s assessments, but compared and discussed results after each batch.” [16]


Studies of healthcare improvement should document both planned and actual changes to the structure and/or process of care, and the resulting intended and/or unintended (desired or undesired) changes in the outcome(s) of interest.[34] While measurement is inherently reductionistic, those evaluating the work can provide a rich view by combining multiple perspectives through measures of clinical, functional, experiential, and cost outcome dimensions.[35-37]

Measures may be routinely used to assess healthcare processes or designed specifically to characterize the application of the intervention in the clinical process. Either way, evaluators also need to consider the influence of contextual factors on the improvement effort and its outcomes.[7, 38, 39] This can be accomplished through a mixed method design which combines data from quantitative measurement, qualitative interviews, and ethnographic observation.[40-43] In the study described above, triangulation of complementary data sources offers a rich picture of the phenomena under study, and strengthens confidence in the inferences drawn.

The choice of measures and type of data used will depend on the particular nature of the initiative under study, on data availability, feasibility considerations, and resource constraints. The trustworthiness of the study will benefit from insightful reporting of the choice of measures and the rationale for choosing them.  For example, in assessing “staff morale, attitudes, and aspects of ‘culture’ that might be affected” by the SPI1, the evaluators selected the 11 most relevant of the 28 survey questions in the NHS Staff Survey questionnaire and provided references to detailed documentation for that instrument. To assess patient safety, the authors’ approach to reviewing case notes “was both explicit (criterion based) and implicit (holistic) because each method identifies a different spectrum of errors.” [16]

Ideally, measures would be perfectly valid and reliable, and employed in research with complete and accurate data. In practice, such perfection is impossible.[42] Readers will benefit from reports of the methods employed for assessing the completeness and accuracy of data, so they can critically appraise the data and the inferences drawn from them.

11. Analysis

a.  Qualitative and quantitative methods used to draw inferences from the data

b.  Methods for understanding variation within the data, including the effects of time as a variable   

Example 1

“We used statistical process control with our primary process measure of family activated METs [Medical Emergency Teams] displayed on a u-chart. We used established rules for differentiating special versus common cause variation for this chart. We next calculated the proportion of family-activated versus clinician-activated METs which was associated with transfer to the ICU within 4 h of activation. We compared these proportions using χ2 tests.” [44]

Example 2

 “The CDMC [Saskatchewan Chronic Disease Management Collaborative] did not establish a stable baseline upon which to test improvement; therefore, we used line graphs to examine variation occurring at the aggregate level (data for all practices combined) and linear regression analysis to test for statistically significant slope (alpha=0.05). We used small multiples, rational ordering and rational subgrouping to examine differences in the level and rate of improvement between practices.

We examined line graphs for each measure at the practice level using a graphical analysis technique called small multiples. Small multiples repeat the same graphical design structure for each ‘slice’ of the data; in this case, we examined the same measure, plotted on the same scale, for all 33 practices simultaneously in one graphic. The constant design allowed us to focus on patterns in the data, rather than the details of the graphs. Analysis of this chart was subjective; the authors examined it visually and noted, as a group, any qualitative differences and unusual patterns.

To examine these patterns quantitatively, we used a rational subgrouping chart to plot the average month to month improvement for each practice on an Xbar-S chart.” [45]

Example 3          

 “Key informant interviews were conducted with staff from 12 community hospital ICUs that participated in a cluster randomized control trial (RCT) of a QI intervention using a collaborative approach. Data analysis followed the standard procedure for grounded theory. Analyses were conducted using a constant comparative approach.22 A coding framework was developed by the lead investigator and compared with a secondary analysis by a coinvestigator to ensure logic and breadth. As there was close agreement for the basic themes and coding decisions, all interviews were then coded to determine recurrent themes and the relationships between themes. In addition, ‘deviant’ or ‘negative’ cases (events or themes that ran counter to emerging propositions) were noted. To ensure that the analyses were systematic and valid, several common qualitative techniques were employed including consistent use of the interview guide, audiotaping and independent transcription of the interview data, double coding and analysis of the data and triangulation of investigator memos to track the course of analytic decisions.”[46]


Various types of problems addressed by healthcare improvement efforts may make certain types of solutions more or less effective. Not every problem can be solved with one method, yet a problem often suggests its own best solution strategy. Similarly, the analytic strategy described in a report should align with the rationale, project aims, and data constraints. Many approaches are available to help analyse healthcare improvement, including qualitative approaches (e.g., fishbone diagrams in root cause analysis, structured interviews with patients/families, Gemba walks) and quantitative approaches (e.g., time series analysis, traditional parametric and non-parametric testing between groups, logistic regression). Often the most effective analysis occurs when quantitative and qualitative data are used together. Examples include value stream mapping, in which a process is graphically outlined with quantitative cycle times denoted; a spaghetti map linking geography to quantitative physical movements; or annotations on a statistical process control (SPC) chart to allow for temporal insights between time series data and changes in system contexts.

In the first example, by Brady et al,[44] family-activated Medical Emergency Teams (METs) are evaluated. The combination of three methods – statistical process control, a Pareto chart, and χ2 testing – makes for an effective and efficient analysis. The choice of analytic methods is described clearly and concisely. The reader knows what to expect in the results section and why these methods were chosen. The selection of control charts gives statistically sound control limits that capture variation over time. The control limits delineate the expected range of natural variation, while statistically based rules make clear any special cause variation. This analytic methodology is well suited both to the prospective monitoring of healthcare improvement work and to the subsequent reporting of that work as a scientific paper. Depending on the type of intervention under scrutiny, complementary types of analyses may be used, including qualitative methods.
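For readers unfamiliar with control chart mechanics, the centre line and 3-sigma limits of a u-chart (events per unit of exposure) can be sketched in a few lines of code. This is a minimal illustration of the general technique, not the authors' actual analysis; the monthly event counts and exposure denominators below are hypothetical.

```python
import math

def u_chart_limits(counts, denominators):
    """Centre line and per-point 3-sigma limits for a u-chart
    (event counts divided by exposure, e.g. METs per 100 admissions)."""
    u_bar = sum(counts) / sum(denominators)  # centre line
    limits = []
    for n in denominators:
        sigma = math.sqrt(u_bar / n)  # Poisson-based standard error
        limits.append((max(0.0, u_bar - 3 * sigma), u_bar + 3 * sigma))
    return u_bar, limits

def special_cause_points(counts, denominators):
    """Flag the indices of points falling outside the 3-sigma limits,
    the simplest of the established special-cause rules."""
    u_bar, limits = u_chart_limits(counts, denominators)
    flagged = []
    for i, (c, n) in enumerate(zip(counts, denominators)):
        rate = c / n
        lcl, ucl = limits[i]
        if rate < lcl or rate > ucl:
            flagged.append(i)
    return flagged

# Hypothetical: monthly event counts over six months, 100 admissions each
flagged = special_cause_points([2, 3, 2, 3, 2, 12], [100] * 6)
```

In practice the full Western Electric–style rule set (runs, trends, and points near the limits) is also applied, which is why established rules, rather than ad hoc judgment, are cited in the example.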

The MET analysis also uses a Pareto chart to analyse differences in characteristics between clinician-initiated and family-initiated MET activations. Finally, specific comparisons between subgroups, where time is not an essential variable, are augmented with traditional biostatistical approaches, such as χ2 testing. This example, with its one-paragraph description of analytic methods (control charts, Pareto charts, and basic biostatistics), is easily understandable and clearly written, making it accessible to frontline healthcare professionals who might wish to use similar techniques in their own work.
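The χ2 comparison of two proportions can likewise be sketched directly from a 2x2 table of counts; with 1 degree of freedom, a statistic above 3.841 corresponds to p < 0.05. The counts below are hypothetical, not data from the MET study.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. rows = family- vs clinician-activated
    calls, columns = ICU transfer within 4 h vs no transfer."""
    n = a + b + c + d
    # Shortcut form of sum((observed - expected)**2 / expected)
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 10/100 vs 30/100 activations led to ICU transfer
statistic = chi_square_2x2(10, 90, 30, 70)  # 12.5, well above 3.841
```

Statistical packages add a continuity correction and exact tests for small cells, but the statistic above is the one a reader would recognize from an introductory course.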

Every analytic method has constraints, and authors should explain the reason for choosing each method. The second example, by Timmerman et al,[45] presents a more complex analysis of the data processes involved in a multicenter improvement collaborative. The authors provide a clear rationale for selecting each of their chosen approaches. Principles of healthcare improvement analytics are turned inward to understand more deeply the strengths and weaknesses of the way in which the primary data were obtained, rather than to interpret the clinical data themselves. In this example,[45] rational subgrouping of participating sites is undertaken to understand how individual sites contribute to variation in the process and outcome measures of the collaborative. Control charts have inherent constraints, such as the requisite number of baseline data points needed to establish preliminary control limits. Recognizing this, Timmerman et al used linear regression to test for statistically significant slopes in the aggregate data, and used run charts for graphical representation of the data to enhance understanding.
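When no stable baseline exists for control limits, testing whether the slope of a fitted regression line differs from zero is a serviceable fallback. A minimal ordinary least squares sketch, with time coded 0, 1, 2, … and hypothetical monthly values, is:

```python
import math

def slope_t_statistic(y):
    """OLS slope of y against time (0, 1, 2, ...) and its t statistic.
    |t| is compared with the critical value of the t distribution with
    n - 2 degrees of freedom (about 2.09 for 21 points at alpha = 0.05)."""
    n = len(y)
    x = list(range(n))
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    return slope, slope / se
```

Standard statistical software reports the same slope together with an exact p-value; the point of the sketch is only to show what "testing for a statistically significant slope" computes.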

Donabedian said, “Measurement in the classical sense—implying precision in quantification—cannot reasonably be expected for such a complex and abstract object as quality.”[47] In contrast to the what, when, and how much of quantitative, empirical approaches to data, qualitative analytic methods strive to illuminate the how and why of behavior and decision making – be it of individuals or of complex systems. In the third example, by Dainty et al,[46] grounded theory is applied to improvement work: data from structured interviews are used to gain insight into, and generate hypotheses about, the causative or moderating forces in multicenter quality improvement collaboratives, including how they contribute to actual improvement. Themes were elicited using multiple qualitative methods, including a structured interview process, audiotaping with independent transcription, comparison of analyses by multiple investigators, and recurrence frequencies of constructs.[46]

In all three example papers, the analytic methods selected are clearly described and appropriately cited, affording readers the ability to understand them in greater detail if desired.   In the first two, statistical process control methods are employed in divergent ways that are instructive regarding the versatility of this analytic method. All three examples provide a level of detail which further supports replication.

12. Ethical Considerations

Ethical aspects of implementing and studying the intervention(s) and how they were addressed, including, but not limited to, formal ethics review and potential conflict(s) of interest


 “Close monitoring of (vital) signs increases the chance of early detection of patient deterioration, and when followed by prompt action has the potential to reduce mortality, morbidity, hospital length of stay and costs. Despite this, the frequency of vital signs monitoring in hospital often appears to be inadequate…Therefore we used our hospital’s large vital signs database to study the pattern of the recording of vital signs observations throughout the day and examine its relationship with the monitoring frequency component of the clinical escalation protocol...The large study demonstrates that the pattern of recorded vital signs observations in the study hospital was not uniform across a 24 h period… [the study led to] identification of the failure of our staff in our study to follow a clinical vital signs monitoring protocol…

Acknowledgements The authors would like to acknowledge the cooperation of the nursing and medical staff in the study hospital.

Competing interests  VitalPAC is a collaborative development of The Learning Clinic Ltd (TLC) and Portsmouth Hospitals NHS Trust (PHT). PHT has a royalty agreement with TLC to pay for the use of PHT intellectual property within the VitalPAC product. Professor Prytherch and Drs Schmidt, Featherstone and Meredith are employed by PHT. Professor Smith was an employee of PHT until 31 March 2011. Dr Schmidt and the wives of Professors Smith and Prytherch are shareholders in TLC. Professors Smith and Prytherch and Dr Schmidt are unpaid research advisors to TLC. Professors Smith and Prytherch have received reimbursement of travel expenses from TLC for attending symposia in the UK.

Ethics approval Local research ethics committee approval was obtained for this study from the Isle of Wight, Portsmouth and South East Hampshire Research Ethics Committee (study ref. 08/02/1394).“ [48]


SQUIRE 2.0 provides guidance to authors of improvement activities in reporting on the ethical implications of their work.  Those reading published improvement reports should be assured that potential ethics issues have been considered in the design, implementation and dissemination of the activity.  The example given highlights key ethical issues that may be reported by authors, including whether or not independent review occurred, and any potential conflicts of interest.[49-56]  These issues are directly described in the quoted sections. 

Expectations for the ethical review of research and improvement work vary between countries,[57] and may also vary between institutions. At some institutions, both quality improvement and human subjects research are reviewed using the same mechanism. Other institutions designate separate review mechanisms for human subjects research and quality improvement work.[56] In the example above, from the United Kingdom, Hands and colleagues[48] report that the improvement activity described was reviewed and approved by a regional research ethics committee. In another example, from the United States, the authors of a report describing a hospital-wide improvement activity to increase the rate of flu vaccinations indicate that their work was reviewed by the facility’s Quality Management office.[58]

Avoiding potential conflict of interest is as important in improvement work as it is in research. The authors in the example paper indicate the presence or absence of potential conflicts of interest under the heading “Competing Interests.” Here, the authors provide the reader with clear and detailed information concerning any potential conflict of interest.

Both the original and SQUIRE 2.0 guidelines stipulate that reports of interventions to improve the safety, value or quality of healthcare should explicitly describe how potential ethical concerns were reviewed and addressed in development and implementation of the intervention.  This is an essential step for ensuring the integrity of efforts to improve healthcare, and should therefore be explicitly described in published reports. 


What did you find?

13. Results

Results:  Evolution of the intervention and details of process measures

a.  Initial steps of the intervention(s) and their evolution over time (e.g., time-line diagram, flow chart, or table), including modifications made to the intervention during the project

b.  Details of the process measures and outcome


“Over the course of this initiative, 479 patient encounters that met criteria took place (Table 1). TTA [Time to antibiotic] delivery was tracked, and the percentage of patients receiving antibiotics within 60 minutes of arrival increased from 63% to 99% after 8 months, exceeding our goal of 90% (Fig 1)… Control charts demonstrated that antibiotic administration was reliably <1 hour by phase III and has been sustained for 24 months since our initiative goal was first met in June 2011.

Key improvement areas and specific interventions for the initiative are listed in Table 2 [figure 2]. During phase I, the existing processes for identifying and managing febrile patients with central lines were mapped and analyzed. Key interventions that were tested and implemented included revision of the greeter role to include identification of patients with central lines presenting with fever and notification of the triage nurse, designation of chief complaint as “fever/central line,” re-education and re-emphasis of triage acuity as 2 for these patients, and routine stocking of the Pyxis machine ….

In phase II, strategies focused on improving performance by providing data and other information for learning, using a monthly newsletter, public sharing of aggregate compliance data tracking, individual reports of personal performance, personal coaching of noncompliant staff, and rewards for compliance...

In phase III, a management guideline with key decision elements was developed and implemented (Fig 3). A new patient identification and initial management process was designed based on the steps, weaknesses, and challenges identified in the existing process map developed in phase I. This process benefited from feedback from frontline ED staff and the results of multiple PDSA cycles during phases I and II….

During the sustainability phase, data continued to be collected and reported to monitor ongoing performance and detect any performance declines should they occur… “ [18]


Healthcare improvement work is based on a rationale, or hypothesis, as to what intervention will have the desired outcome(s) in a given context.   Over time, as a result of the interaction between interventions and context, these hypotheses are re-evaluated, resulting in modifications or changes to the interventions. Although the mechanism by which this occurs should be included in the methods section of a report, the resulting transformation of the intervention over time rightfully belongs under results.   The results section should therefore describe both this evolution and its associated outcomes.

When publishing this work, it is important that the reader has specific information about the initial interventions and how they evolved. This can be provided in the form of tables and figures in addition to text. In the example above, interventions are described in phases: I, II, III, and a sustainability phase, and information is provided as to why they evolved and how various roles were affected (figure 2). This level of detail allows readers to imagine how these interventions and staff roles might be adapted in the context of their own institutions, as an intervention which is successful in one organization may not be in another.

It is important to report the degree of success achieved in implementing an intervention in order to assess its fidelity, that is, the proportion of the time that the intervention actually occurred as intended. In the example above, the goal of delivering antibiotics within an hour of arrival, a process measure, is expressed in terms of the percentage of total patients for whom it was achieved. The first chart (figure 3) shows the sustained improvement in this measure over time. The second chart (figure 4) illustrates the resulting decrease in variation as the interventions evolved and took hold. The charts are annotated to show the phases of evolution of the project, enabling readers to see where each intervention fits in relation to project results over time.

Results:  Contextual elements and unexpected consequences

c.  Contextual elements that interacted with the intervention(s)

d.  Observed associations between outcomes, interventions, and relevant contextual elements 

e.  Unintended consequences such as unexpected benefits, problems, failures, or costs associated with the intervention(s).


Quantitative Results

“In terms of QI efforts, two-thirds of the 76 practices (67%) focused on diabetes and the rest focused on asthma. Forty-two percent of practices were family medicine practices, 26% were pediatrics, and 13% were internal medicine. The median percent of patients covered by Medicaid and with no insurance was 20% and 4%, respectively. One-half of the practices were located in rural settings and one-half used electronic health records. For each diabetes or asthma measure, between 50% and 78% of practices showed improvement (i.e. a positive trend) in the first year.

Tables 2 and 3 show the associations of leadership with clinical measures and with practice change scores for implementation of various tools, respectively. Leadership was significantly associated with only 1 clinical measure, the proportion of patients having nephropathy screening (odds ratio [OR] = 1.37: 95% CI, 1.08- 1.74). Inclusion of practice engagement reduced these odds, but the association remained significant. The odds of making practice changes were greater for practices with higher leadership scores at any given time (ORs = 1.92-6.78). Inclusion of practice engagement, which was also significantly associated with making practice changes, reduced these odds (ORs = 2.41 to 4.20), but the association remained significant for all changes except for registry implementation.

Qualitative Results

Among the 12 practices interviewed, 5 practices had 3 or fewer clinicians and 7 had 4 or more (range = 1-32). Seven practices had high ratings of practice change by the coach. One-half were NCQA [National Committee for Quality Assurance] certified as a patient-centered medical home. These practices were similar to the quantitative analysis sample except for higher rates of electronic health record use and Community Care of North Carolina Medicaid membership…

Leadership-related themes from the focus groups included having (1) someone with a vision about the importance of the work, (2) a middle manager who implemented the vision, and (3) a team who believed in and were engaged in the work.…Although the practice management provided the vision for change, patterns emerged among the practices that suggested leaders with a vision are a necessary, but not sufficient condition for successful implementation.

Leading From the Middle

All practices had leaders who initiated the change, but practices with high and low practice change ratings reported very different “operational” leaders. Operational leaders in practices with low practice change ratings were generally the same clinicians, practice managers, or both who introduced the change. In contrast, in practices with high practice change ratings, implementation was led by someone other than the lead physician or top manager..” [59]


One of the challenges in reporting healthcare improvement studies is the effect of context on the success or failure of the intervention(s). The most commonly reported contextual elements that may interact with interventions are structural variables, including organizational/practice type, volume, payer mix, electronic health record use, and geographical location. Other contextual elements associated with healthcare improvement success include top management leadership, organizational structure, data infrastructure/information technology, physician involvement in activities, motivation to change, and team leadership.[8, 23, 48] In this example, the authors provided descriptive information about the structural elements of the individual practices, including type of practice, payer mix, geographical setting, and use of electronic health records. The authors noted variability in improvement in diabetes and asthma measures across the practices, and examined how characteristics of practice leadership affected the change process for an initiative to improve diabetes and asthma care. Practice leadership was measured monthly by the community based practice coach at each site. For analyses, these scores were reduced into low (0-1) and high (2-3) groups. Practice change ratings were also assigned by the practice coaches, indicating the degree of implementation and use of patient registries, care templates, protocols, and patient self-management support tools. Local leadership showed no association with most of the clinical measures; however, local leadership involvement was significantly associated with implementation of the process tools used to improve outcomes. The authors use tables to display these associations clearly to the reader.
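For readers less familiar with how associations of this kind are quantified, an odds ratio and its Wald 95% confidence interval can be derived from a 2x2 table of counts; the interval excludes 1 when the association is statistically significant at the 5% level. The cell counts below are hypothetical, not data from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] (e.g. rows =
    high/low leadership score, columns = practice change made / not made),
    with a Wald 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical: 40/100 high-leadership vs 20/100 low-leadership practices
# implemented a registry
or_, lower, upper = odds_ratio_ci(40, 60, 20, 80)
```

The regression models in the example additionally adjust for practice engagement, which is why the reported odds shrink when that covariate is included; the unadjusted calculation above shows only the basic arithmetic behind an odds ratio.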

In addition, the authors use the information from the coaches’ ratings to further explore this concept of practice leadership. The authors conducted semi-structured focus group interviews in a sample of 12 of the 76 practices, selected on the basis of improvement in clinical measures and improvement in practice change score. Two focus groups were conducted in each practice: one with practice clinicians and administrators and one with front line staff. Three themes emerged from these interviews that explicated the concept of practice leadership in these groups. While two of the themes reflect contextual elements that are often cited in the literature (visionary leader and engaged team), the authors addressed an unexpected theme about the role of the middle (operational) manager. This operational leader was often reported to be a nurse or nurse practitioner with daily interactions with physicians and staff, who appeared to be influential in facilitating change. The level of detail provided about the specifics of practice leadership can be useful to readers who are engaged in their own improvement work. Although no harms or failures related to the work were described, transparent reporting of negative results is as important as reporting of successful ones.

In this example, the authors used a mixed methods approach in which practice leadership and engagement were rated quantitatively by improvement coaches as well as evaluated qualitatively using focus groups.  The use of qualitative methods enhanced understanding of the context of practice leadership.  This mixed methods approach is not a requirement for healthcare improvement studies, as the influence of contextual elements can be assessed in many ways.  For example, Cohen and colleagues simply describe the probable impact of the 2009 H1N1 pandemic on their work to increase influenza vaccination rates in hospitalized patients,[58] providing important contextual information to assist the reader’s understanding of the results.

Results: Missing data

f.  Details about missing data

Example 1

“We successfully contacted 69% (122/178) of patients/families in the postimplementation group…Among the remaining 56 patients (31%) for whom no phone or E-mail follow-up was obtained, 34 had another encounter in our hospital on serial reviews of their medical record. Nine patients were evaluated in a cardiology clinic and 7 in a neurology clinic. As a result of these encounters, there were no patients ultimately diagnosed with a cardiac or neurologic condition.”[60]

Example 2

“We identified 328 patients as under-immunized between September 2009 and September 2010. We fully immunized 194 (59%) of these patients by September 2010…We failed to recover missing outside immunization records on 15 patients (5%). The remaining 99 patients (30%) refused vaccines, transferred care, or were unreachable by phone or mail. For the 194 patients we fully immunized, we made 504 (mean 2.6) total outreach attempts for care coordination. We immunized 176 (91%) of these patients by age 24 months. For the 20 patients who remained under-immunized, we made 113 (mean 5.7) total outreach attempts for care coordination. We continued attempting outreach to immunize these patients even after their second birthday.”[46]


Whenever possible, the results section of a healthcare improvement paper should account for missing data.  Doing so enables the reader to understand potential biases in the analysis, and may add important context to the study findings.  It is important for authors to clearly state why data are missing, such as technical problems or errors in data entry, attrition of participants from an improvement initiative over time, or patients lost to follow-up.  Efforts made by the team to recover the data should be described, and any available details about the missing data provided.

In the first example,[60] the improvement team was unable to contact 56 patients for phone or E-mail follow-up (i.e. why the data are missing).  To account for these missing data, the team performed serial reviews of medical records.  In doing so, they were able to report patient information relevant to the study outcomes.  In the second example,[46] the authors also clearly state the reasons for missing data (failure to recover outside records, transfers of care, unreachable by phone or mail).  In addition, they give details about the number of outreach attempts made for specific patient groups.  Providing a detailed description of missing data allows for a more accurate interpretation of study findings.


What does it mean?

14. Summary

a.  Key findings, including relevance to the rationale and specific aims

b.  Particular strengths of the project


“In our 6-year experience with family-activated METs [Medical Emergency Teams], families uncommonly activated METs. In the most recent and highest-volume year, families called 2.3 times per month on average. As a way of comparison, the hospital had an average of 8.7 accidental code team activations per month over this time. This required an urgent response from the larger team. Family activation less commonly resulted in ICU transfer than clinician activated METs, although 24% of calls did result in transfers. This represents a subset of deteriorating patients that the clinical team may have missed. In both family-activated and clinician-activated MET calls, clinical deterioration was a common cause of MET calls. Families more consistently identified their fear that the child’s safety was at risk, a lack of response from the clinical team, and that the interaction between team and family had become dismissive. To our knowledge, this study is the largest study of family-activated METs to date, both in terms of count of calls and length of time observed. It is also the first to compare reasons for MET calls from families with matched clinician-activated calls."[44]


Although often not called out with a specific subheading, the “summary” of a report on healthcare improvement most often introduces and frames the “discussion” section.  While the first paragraph should be a focused summary of the most critical findings, the majority of the project’s results should be contained in the results section.  The goal of the summary is to capture the major findings and bridge the discussion to a more nuanced exploration of those findings.  Exactly where the summary section ends is far less important than how it sets up the reader to explore and reflect on the ensuing discussion.

The example above gives a clear and concise statement of the study’s strengths and distinctive features.  This summary recaps quantitative findings (families called METs relatively infrequently and fewer of their calls resulted in ICU transfers), and introduces a subsequent discussion of concerns identified by families which might not be visible to clinicians, including ways in which “family activation of an MET may improve care without reducing MET-preventable codes outside of the ICU.”[44]  This conveys an important message and bridges to a discussion of the existing literature and terminology.  Providing a focused summary in place of an exhaustive re-statement of project results appropriately introduces the reader to the discussion section and a more thorough description of the study’s findings and implications.  

The authors go on to relate these main findings back to the nature and significance of the problem and the specific aims previously outlined in the introduction section, specifically (emphasis added): “To evaluate the burden of family activation on the clinicians involved…to better understand the outcome of METs, and to begin to understand why families call METs.”[44]

Another approach to structuring the summary component of the discussion is to succinctly link results to the relevant processes in the development of the associated interventions.  This approach is illustrated by Beckett et al. in a recent paper about decreasing cardiac arrests in the acute hospital setting: “Key to this success has been the development of a structured response to the deteriorating patient. Following the implementation of reliable EWS [early warning systems] across the AAU [Acute Admissions Unit] and ED [Emergency Department], and the recognition and response checklists, plus weekly safety meetings in the AAU at SRI [Stirling Royal Infirmary], there was an immediate fall in the number of cardiac arrests, which was sustained thereafter.”[58]  This linkage serves to re-introduce the reader to some of the relevant contextual elements, which can subsequently be discussed in more detail as appropriate.  Importantly, it also serves to frame the interpretive section of the discussion, which focuses on comparing results with findings from other publications and further evaluating the project’s impact.

15. Interpretation

a.  Nature of the association between the intervention(s) and the outcomes

b.  Comparison of results with findings from other publications

c.  Impact of the project on people and systems

d.  Reasons for any differences between observed and anticipated outcomes, including the influence of context

e.  Costs and strategic trade-offs, including opportunity costs

Example 1

(a) “After QI interventions, the percentage of patients attending four or more clinic visits significantly improved, and in 2012 we met our goal of 90% of patients attending four or more times a year.  A systematic approach to scheduling processes, timely rescheduling of patients who missed appointments and monitoring of attendance resulted in a significant increase in the number of patients who met the CFF national recommendation of four or more visits per year.” [61]

(b) “Although the increase in the percentage of patients with greater than 25th centile for BMI/W-L from 80% to 82% might seem small, it represents a positive impact on a few more patients and provides more opportunities for improvement.  Our data are in agreement with Johnson et al. (2003), who reported that frequent monitoring among other interventions made possible due to patients being seen more in clinic was associated with improved outcomes in CF.”[61].

(c) “We learned that families are eager to have input and be involved... participation in the [learning and leadership collaborative] resulted in a positive culture change at the ACH CF Care Center regarding the use of QI methods.” [61]

(d) “We noticed our clinic attendance started to improve before the [intervention] processes were fully implemented.  We speculate this was due to the heightened awareness of our efforts by patients, families and our CF team.” [61]

(e) “Replication of these processes could be hindered by lack of personnel, lack of buy-in by the hospital administration and lack of patient/family involvement....barriers to attendance included rising fuel costs, transportation limitations, child care issues, missed workdays by caregivers and average low-income population.” [61]

Example 2

“The direct involvement of patients and families ... allowed us to address the social and medical barriers to adherence.  Their input was invaluable since they live with the treatment burden that is a daily part of CF care... the in-clinic patient demonstration gave staff the ability to upgrade or replace equipment that was not functioning.”[62]

“We found that following a simple algorithm helped to maintain consistency in our program... the simplicity of this program makes it easily incorporated into routine CF clinic visits.”[62]


In the first example, Berlinski et al.[61] describe the implications of their improvement efforts by highlighting that they increased the proportion of CF patients receiving four clinic visits a year and achieved secondary improvements in a nutritional outcome and in the culture of their context.  The authors also offer alternative explanations for outcomes, including factors that might have confounded the asserted relationship between intervention and outcome, namely that performance on the primary outcome began to improve well before implementation of the intervention.  This provides insight into what the actual drivers of the outcome might have been, and can be very helpful to others seeking to replicate or modify the intervention.  Finally, their comparison of their results with those of a similar study provides a basis for considering the feasibility, sustainability, spread, and replication of the intervention.

In the second example, Zanni et al.[62] found that the simplicity of their intervention could maximize ease of implementation, suggesting that costs and trade-offs are likely to be minimal for replication in similar contexts.  Conversely, Berlinski et al.[61] cite barriers to replicating and sustaining their work, including staffing, leadership, population socioeconomic characteristics, and informatics issues, each of which presents cost or trade-off considerations that leadership will need to weigh to support implementation and sustainability.  Additionally, both Berlinski et al. and Zanni et al. observe that patient and family involvement in the planning and intervention process simultaneously improved the context and effectiveness of the intervention.

16. Limitations

a.  Limits to the generalizability of the work

b.  Factors that might have limited internal validity such as confounding, bias, or imprecision in the design, methods, measurement, or analysis

c.  Efforts made to minimize and adjust for limitations

Example 1                

“Our study had several limitations. Our study of family MET activations compared performance with our historical controls, and we were unable to adjust for secular trends or unmeasured confounders. Our improvement team included leaders of our MET committee and patient safety, and we are not aware of any ongoing improvement work or systems change that might have affected family MET calls. We performed our interventions in a large tertiary care children’s hospital with a history of improvement in patient safety and patient-centred and family-centred care.

Additionally, it is uncertain and likely very context-dependent as to what is the ‘correct’ level of family-activated METs. This may limit generalizability to other centres, although the consistently low rate of family MET calls in the literature in a variety of contexts should reduce concerns related to responding team workload. We do not have process measures of how often MET education occurred for families and of how often families understood this information or felt empowered to call. This results in a limited understanding of the next best steps to improve family calling. Our data were collected in the course of clinical care with chart abstraction from structured clinical notes. Given this, it is possible that notes were not written for family MET calls that were judged ‘nonclinical.’  From our knowledge of the MET system, we are confident such calls are quite few, but we lack the data to quantify this. Our chart review for the reasons families called did not use a validated classification tool as we do not believe one exists. This is somewhat mitigated by our double independent reviews that demonstrated the reliability of our classification scheme.” [44]

Example 2

“Our study has a number of important limitations. Our ethnographic visits to units were not longitudinal, but rather snapshots in time; changes in response to the program could have occurred after our visits. We did not conduct a systematic audit of culture and practices, and thus some inaccuracies in our assessments may be present. We did not evaluate possible modifiers of effect of factors such as size of unit, number of consultants and nurses, and other environmental features. We had access to ICUs’ reported infection rates only if they provided them directly to us; for information governance reasons, these rates could not be verified. It is possible that we have offered too pessimistic an interpretation of whether Matching Michigan ‘worked’: the quantitative evaluation may have underestimated the effects of the program (or over-estimated the secular trend), since the ‘waiting’ clusters were not true controls that were unexposed to the interventions. …”[63]


The limitations section offers an opportunity to present potential weaknesses of the study, explain the choice of methods, measures and intervention, and examine why results may not be generalizable beyond the context in which the work occurred.  In the first example, a study of family-activated medical emergency teams (METs), Brady and colleagues identified a number of issues that might influence internal validity and the extent to which their findings are generalizable to other hospitals.  The success of medical emergency teams, and the participation of family members in calling these teams, may depend on contextual attributes such as leadership involvement.  Although few hospitals have implemented family-activated METs, the growing interest in patient and family engagement may also contribute to a broader use of this intervention.  There are no data available to assess the secular trends in these practices that might suggest the observed changes resulted from external factors.

There were few family-activated MET calls.  This positive result may stem from family education, but the authors report that they had limited data on such education.  The lack of a validated tool to capture chart review information is noted as a potential weakness, since some non-clinical MET calls might not have been recorded in the chart.  The authors also note that the observed levels of family-activated MET calls are consistent with other literature.

The impact of improvement interventions often varies with context, but the large number of potential factors to consider requires that researchers focus on a limited set of contextual measures they believe may influence success and future adaptation and spread.  In the second example, Dixon-Woods and colleagues assessed variation in results of the implementation of the central line bundle to reduce catheter-related bloodstream infections in English ICUs (intensive care units).[63]  While English units made improvements, the results were not as impressive as in the earlier US experience.  The researchers point to the prior experiences of staff in the English ICUs with several infection control campaigns as contributing to this difference: many English clinicians viewed the new program as redundant, believing this was a problem already solved.  The research team also notes that some of the English ICUs did not have an organizational culture that supported consistent implementation of the required changes.

Dixon-Woods and colleagues relied on quantitative data on clinical outcomes as well as observation and qualitative interviews with staff. However, as they report, their study had several limitations.  Their visits to the units were not longitudinal, so changes could have been made in some units after the researchers’ observations. They did not carry out systematic audits of culture and practices that might have revealed additional information, nor did they assess the impact of local factors including the size of the unit, the number of doctors and nurses, and other factors that might have affected the capability of the unit to implement new practices. Moreover, while the study included controls, there was considerable public and professional interest in these issues, which may have influenced performance and reduced the relative impact of the intervention.  The authors’ report [63] of the context and limitations is crucial to assist the reader in assessing their results, and in identifying factors that might influence results of similar interventions elsewhere.

17. Conclusions

a.  Usefulness of the work

b.  Sustainability

c.  Potential for spread to other contexts

d.  Implications for practice and for further study in the field 

e.  Suggested next steps


 “We have found that average paediatric nurse staffing ratios are significantly associated with hospital readmission for children with common medical and surgical conditions. To our knowledge, this study is the first to explicitly examine and find an association between staffing ratios and hospital readmission in paediatrics… Our findings have implications for hospital administrators given the national emphasis on reduction of readmissions by payers. The role of nursing care in reducing readmissions has traditionally focused on nurse-led discharge planning programmes in the inpatient setting and nurse-directed home care for patients with complex or chronic conditions.

While these nurse-oriented interventions have been shown to significantly reduce readmissions, our findings suggest that hospitals might also achieve reductions in readmission by focusing on the number of patients assigned to nurses. In paediatrics, limiting nurses’ workloads to four or fewer patients appears to have benefits in reducing readmissions.

Further, hospitals are earnestly examining their discharge processes and implementing quality improvement programmes aimed at preparing patients and families to manage health condition(s) beyond the hospital. Quality improvement initiatives to improve inpatient care delivery often depend upon the sustained efforts of front-line workers, particularly nurses. Prior research shows that hospitals with better nurse staffing ratios deliver Joint Commission-recommended care for key conditions more reliably, highlighting the inter-relationship of nurse staffing levels and quality improvement success.

The sustainability of quality improvement initiatives related to paediatric readmission may ultimately depend on nurses’ ability to direct meaningful time and attention to such efforts.”[64]


The conclusion of a healthcare improvement paper should address the overall usefulness of the work, including its potential for dissemination and implications for the field, both in terms of practice and policy.  It may be included as a separate section in or after the discussion section, or these components may be incorporated within a single overall discussion section.

The authors of this report highlight the usefulness of their research with reference to “the national [US] emphasis on reduction of readmission by payers.”  They also refer throughout the paper to the debates and research around appropriate nurse staffing levels, and the impact of nurse staffing levels on the sustainability of quality improvement initiatives in general, with reference to the key role of nurses in improving care and evidence that nurse staffing levels are associated with delivery of high-quality care.  Although the authors do not refer directly to the potential for spread to other contexts, the generalizability of their findings is discussed in a separate section of the discussion (not included here).

In this example, the authors refer to “implications for hospital administrators” because their findings “suggest that hospitals might also achieve reduction in readmissions by focusing on the number of patients assigned to nurses.” They also observe that these findings speak to “the validity of the California minimum staffing ratio for paediatric care.” Perhaps they could have suggested more in terms of implications for policy, for example what their findings might mean for the potential of payer organizations to influence nurse staffing levels through their contracts, or for broader government legislation on nurse-patient ratios.  However, in their discussion they also recognize the limitations of a single study to inform policy decisions.

The need for further study is emphasized in the wider discussion section.  The authors note that “more research is needed to better understand the reasons for children’s readmissions and thus identify which ones are potentially preventable,” calling for “additional research on both paediatric readmission measures and the relationship between nursing care delivery, nurse staffing levels and readmissions.”  In writing about healthcare improvement, it is important that the authors’ conclusions are appropriately related to their findings, reflecting their validity and generalizability, and their potential to inform practice.  In this case, direct recommendations to change practice are appropriately withheld given the need for further research.

Other Information


18. Funding

Sources of funding that supported this work. Role, if any, of the funding organization in the design, implementation, interpretation, and reporting


"Funding/Support: This research was funded by the Canadian Institutes of Health Research, the Ontario Ministry of Health and Long-Term Care, the Green Shield Canada Foundation, the University of Toronto Department of Medicine, and the Academic Funding Plan Innovation Fund. 

Role of the Funder/Sponsor: None of the funder or sponsors had any role in the design of the study, the conduct of the study, the collection, management, analysis or interpretation of the data, or the preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication." [65]


Sources of funding for quality improvement should be clearly stated at the end of a manuscript in similar fashion to other scholarly reports.  Any organization, institution or agency that contributed financially to any part of the project should be listed.  In this example, funding was received from multiple sources including government, university, and foundation granting agencies.  

Due to their financial interest in the quality improvement project, funding sources have the potential to introduce bias in favour of exaggerated effect size.  The role of each funding source should be described in sufficient detail, as in the example above, to allow readers to assess whether these external parties may have influenced the reporting of improvement outcomes.   A recent paper by Trautner and colleagues provides a similar approach.[66]

Summary and Conclusions Regarding the Explanations and Elaborations Document

The SQUIRE 2.0 Explanation and Elaboration (E&E) document is intended to help authors “operationalize” SQUIRE in their reports of systematic efforts to improve the quality, safety, and value of healthcare.  Given the rapid growth in healthcare improvement over the past two decades, it is imperative to promote the sharing of successes and failures to inform further development of the field.  The E&E provides guidance about how to utilize SQUIRE as a structure for writing and can be a starting point for ongoing dialogue about key concepts that are addressed in the guidelines.  We hope that SQUIRE 2.0 will challenge authors both to write better and to think more clearly about the role of formal and informal theory, interaction between context, interventions, and outcomes, and methods for studying improvement work.  Due to space considerations, we have been able to cite a few of many possible examples from the literature for each guideline section. To further explore these key concepts in healthcare improvement, we recommend both the complete articles cited by the authors of this E&E as well as their secondary references.  To promote the spread and sustainability of SQUIRE 2.0, the Guidelines, this E&E, and the accompanying glossary are accessible on the SQUIRE website (www.squire-statement.org).  The website also links the viewer to resources such as screencasts and opportunities to discuss key concepts through an interactive forum.   

Since the publication of SQUIRE 1.0[1] in 2008, there has been an enormous increase in the number and complexity of published reports about healthcare improvement.  We hope that the time spent in the evaluation and careful development of SQUIRE 2.0 and this E&E will contribute to a new chapter in scholarly writing about healthcare improvement.  We look forward to the continued growth of the field and the further evolution of SQUIRE as we deepen our understanding of how to improve the quality, safety, and value of healthcare.


1. Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Quality and Safety in Health Care 2008;17(Suppl 1):i13-i32 doi: 10.1136/qshc.2008.029058[published Online First: Epub Date]|.

2. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348(mar07 3):g1687-g87 doi: 10.1136/bmj.g1687[published Online First: Epub Date]|.

3. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340(mar23 1):c869-c69 doi: 10.1136/bmj.c869[published Online First: Epub Date]|.

4. von Elm E. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies. Annals of Internal Medicine 2007;147(8):573 doi: 10.7326/0003-4819-147-8-200710160-00010[published Online First: Epub Date]|.

5. Bossuyt PM. Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative. Clinical Chemistry 2003;49(1):1-6 doi: 10.1373/49.1.1[published Online First: Epub Date]|.

6. Moss F, Thompson R. A new structure for quality improvement reports. Quality and Safety in Health Care 1999;8(2):76-76 doi: 10.1136/qshc.8.2.76[published Online First: Epub Date]|.

7. Harvey G, Jas P, Walshe K, Skelcher C. Analysing organisational context: case studies on the contribution of absorptive capacity theory to understanding inter-organisational variation in performance improvement. BMJ Quality & Safety 2014;24(1):48-55 doi: 10.1136/bmjqs-2014-002928[published Online First: Epub Date]|.

8. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Quality & Safety 2015;24(3):228-38 doi: 10.1136/bmjqs-2014-003627[published Online First: Epub Date]|.

9. Portela MC, Pronovost PJ, Woodcock T, Carter P, Dixon-Woods M. How to study improvement interventions: a brief overview of possible study types: Table 1. BMJ Qual Saf 2015;24(5):325-36 doi: 10.1136/bmjqs-2014-003620[published Online First: Epub Date]|.

10. Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Quality & Safety 2011;21(1):13-20 doi: 10.1136/bmjqs-2011-000010[published Online First: Epub Date]|.

11. Bate P. Organizing for Quality. Radcliffe Publishing 2008

12. Dixon-Woods M. The problem of appraising qualitative research. Quality and Safety in Health Care 2004;13(3):223-25 doi: 10.1136/qhc.13.3.223[published Online First: Epub Date]|.

13. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2015 doi: 10.1136/bmjqs-2015-004411[published Online First: Epub Date]|.

14. Davies L, Donnelly KZ, Goodman DJ, Ogrinc G. Findings from a novel approach to publication guideline revision: user road testing of a draft version of SQUIRE 2.0. BMJ Qual Saf 2015 doi: 10.1136/bmjqs-2015-004117[published Online First: Epub Date]|.

15. Dyrkorn OA, Kristoffersen M, Walberg M. Reducing post-caesarean surgical wound infection rate: an improvement project in a Norwegian maternity clinic. BMJ Quality & Safety 2012;21(3):206-10. doi: 10.1136/bmjqs-2011-000316

16. Benning A, Ghaleb M, Suokas A, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ 2011;342:d195. doi: 10.1136/bmj.d195

17. Bamberger R. Perspectives on Context. The Health Foundation, 2014.

18. Jobson M, Sandrof M, Valeriote T, Liberty AL, Walsh-Kelly C, Jackson C. Decreasing Time to Antibiotics in Febrile Patients With Central Lines in the Emergency Department. Pediatrics 2014;135(1):e187-95. doi: 10.1542/peds.2014-1192

19. Murphy C, Bennett K, Fahey T, Shelley E, Graham I, Kenny RA. Statin use in adults at high risk of cardiovascular disease mortality: cross-sectional analysis of baseline data from The Irish Longitudinal Study on Ageing (TILDA). BMJ Open 2015;5(7):e008017. doi: 10.1136/bmjopen-2015-008017

20. Homa K, Sabadosa KA, Nelson EC, Rogers WH, Marshall BC. Development and validation of a cystic fibrosis patient and family member experience of care survey. Qual Manag Health Care 2013;22(2):100-16. doi: 10.1097/QMH.0b013e31828bc3bc

21. Starmer AJ, Spector ND, Srivastava R, et al. Changes in Medical Errors after Implementation of a Handoff Program. New England Journal of Medicine 2014;371(19):1803-12. doi: 10.1056/nejmsa1405556

22. Lesselroth BJ, Yang J, McConnachie J, Brenk T, Winterbottom L. Addressing the sociotechnical drivers of quality improvement: a case study of post-operative DVT prophylaxis computerised decision support. BMJ Quality & Safety 2011;20(5):381-9. doi: 10.1136/bmjqs.2010.042689

23. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science 2011;6:42. doi: 10.1186/1748-5908-6-42

24. Sinnott C, Mercer SW, Payne RA, Duerden M, Bradley CP, Byrne M. Improving medication management in multimorbidity: development of the MultimorbiditY COllaborative Medication Review And DEcision Making (MY COMRADE) intervention using the Behaviour Change Wheel. Implementation Science 2015;10(1):132. doi: 10.1186/s13012-015-0322-1

25. Michie S, Atkins L, West R. The Behaviour Change Wheel: A Guide to Designing Interventions. London: Silverback Publishing, 2014.

26. Zubkoff L, Neily J, Mills PD, et al. Using a virtual breakthrough series collaborative to reduce postoperative respiratory failure in 16 Veterans Health Administration hospitals. Joint Commission Journal on Quality and Patient Safety 2014;40(1):11-20.

27. Dandoy CE, Davies SM, Flesch L, et al. A team-based approach to reducing cardiac monitor alarms. Pediatrics 2014;134(6):e1686-94. doi: 10.1542/peds.2014-1162

28. Duncan PM, Pirretti A, Earls MF, et al. Improving Delivery of Bright Futures Preventive Services at the 9- and 24-Month Well Child Visit. Pediatrics 2014;135(1):e178-86. doi: 10.1542/peds.2013-3119

29. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 2009;4(1):50. doi: 10.1186/1748-5908-4-50

30. Rycroft-Malone J, Seers K, Chandler J, et al. The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implementation Science 2013;8(1):28. doi: 10.1186/1748-5908-8-28

31. Srigley JA, Furness CD, Baker GR, Gardam M. Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: a retrospective cohort study. BMJ Quality & Safety 2014;23(12):974-80. doi: 10.1136/bmjqs-2014-003080

32. van Veen-Berkx E, Bitter J, Kazemier G, Scheffer GJ, Gooszen HG. Multidisciplinary Teamwork Improves Use of the Operating Room: A Multicenter Study. Journal of the American College of Surgeons 2015;220(6):1070-76. doi: 10.1016/j.jamcollsurg.2015.02.012

33. Okumura MJ, Ong T, Dawson D, et al. Improving transition from paediatric to adult cystic fibrosis care: programme implementation and evaluation. BMJ Quality & Safety 2014;23 Suppl 1:i64-i72. doi: 10.1136/bmjqs-2013-002364

34. Donabedian A. Evaluating the Quality of Medical Care. Milbank Quarterly 2005;83(4):691-729. doi: 10.1111/j.1468-0009.2005.00397.x

35. Provost L, Murray S. The health care data guide: learning from data for improvement. 1st ed. San Francisco, CA: Jossey-Bass, 2011.

36. Nelson EC, Mohr JJ, Batalden PB, Plume SK. Improving health care, Part 1: The clinical value compass. The Joint Commission Journal on Quality Improvement 1996;22(4):243-58.

37. Lloyd RC. Quality health care: a guide to developing and using indicators. 1st ed. Sudbury, MA: Jones and Bartlett Publishers, 2004.

38. Pettigrew A, Whipp R. Managing change for competitive success. Cambridge, MA: Blackwell, 1991.

39. Pawson R, Tilley N. Realistic evaluation. London: SAGE, 1997.

40. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs - principles and practices. Health Services Research 2013;48(6 Pt 2):2134-56. doi: 10.1111/1475-6773.12117

41. Yin RK. Case study research: design and methods. 4th ed. Los Angeles, CA: SAGE, 2009.

42. Robson C. Real world research: a resource for users of social research methods in applied settings. 3rd ed. Chichester: Wiley, 2011.

43. Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. 3rd ed. Thousand Oaks, CA: SAGE, 2009.

44. Brady PW, Zix J, Brilli R, et al. Developing and evaluating the success of a family activated medical emergency team: a quality improvement report. BMJ Quality & Safety 2014;24(3):203-11. doi: 10.1136/bmjqs-2014-003001

45. Timmerman T, Verrall T, Clatney L, Klomp H, Teare G. Taking a closer look: using statistical process control to identify patterns of improvement in a quality-improvement collaborative. BMJ Quality & Safety 2010;19(6):e19. doi: 10.1136/qshc.2008.029025

46. Dainty KN, Scales DC, Sinuff T, Zwarenstein M. Competition in collaborative clothing: a qualitative case study of influences on collaborative quality improvement in the ICU. BMJ Quality & Safety 2013;22(4):317-23. doi: 10.1136/bmjqs-2012-001166

47. Donabedian A. Explorations in quality assessment and monitoring. Ann Arbor, MI: Health Administration Press, 1980.

48. Hands C, Reid E, Meredith P, et al. Patterns in the recording of vital signs and early warning scores: compliance with a clinical escalation protocol. BMJ Quality & Safety 2013;22(9):719-26. doi: 10.1136/bmjqs-2013-001954

49. Baily MA, Bottrell MM, Lynn J, Jennings B. Special Report: The Ethics of Using QI Methods to Improve Health Care Quality and Safety. Hastings Center Report 2006;36(4):S1-S40. doi: 10.1353/hcr.2006.0054

50. Bellin E, Dubler NN. The Quality Improvement–Research Divide and the Need for External Oversight. American Journal of Public Health 2001;91(9):1512-17. doi: 10.2105/ajph.91.9.1512

51. Lynn J. The Ethics of Using Quality Improvement Methods in Health Care. Annals of Internal Medicine 2007;146(9):666. doi: 10.7326/0003-4819-146-9-200705010-00155

52. Perneger TV. Why we need ethical oversight of quality improvement projects. International Journal for Quality in Health Care 2004;16(5):343-44. doi: 10.1093/intqhc/mzh075

53. Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Quality and Safety in Health Care 2010;19(4):271-74. doi: 10.1136/qshc.2009.038034

54. Fox E, Tulsky JA. Recommendations for the ethical conduct of quality improvement. The Journal of clinical ethics 2005;16(1):61-71

55. O’Kane M. Do patients need to be protected from quality improvement? In: Jennings B, Baily MA, Bottrell M, Lynn J, eds. Health Care Quality Improvement: Ethical and Regulatory Issues. Garrison, NY: The Hastings Center, 2007:89–99.

56. Dixon N. Ethics and Clinical Audit and Quality Improvement (QI): A Guide for NHS Organisations. London: Healthcare Quality Improvement Partnership, 2011.

57. Dixon N. Proposed standards for the design and conduct of a national clinical audit or quality improvement study. International Journal for Quality in Health Care 2013;25(4):357-65

58. Cohen ES, Ogrinc G, Taylor T, Brown C, Geiling J. Influenza vaccination rates for hospitalised patients: a multiyear quality improvement effort. BMJ Quality & Safety 2015;24(3):221-7. doi: 10.1136/bmjqs-2014-003556

59. Donahue KE, Halladay JR, Wise A, et al. Facilitators of Transforming Primary Care: A Look Under the Hood at Practice Leadership. The Annals of Family Medicine 2013;11(Suppl 1):S27-S33. doi: 10.1370/afm.1492

60. Guse SE, Neuman MI, O'Brien M, et al. Implementing a Guideline to Improve Management of Syncope in the Emergency Department. Pediatrics 2014;134(5):e1413-21. doi: 10.1542/peds.2013-3833

61. Berlinski A, Chambers MJ, Willis L, Homa K, Com G. Redesigning care to meet national recommendation of four or more yearly clinic visits in patients with cystic fibrosis. BMJ Quality & Safety 2014;23(Suppl 1):i42-i49. doi: 10.1136/bmjqs-2013-002345

62. Zanni RL, Sembrano EU, Du DT, Marra B, Bantang R. The impact of re-education of airway clearance techniques (REACT) on adherence and pulmonary function in patients with cystic fibrosis. BMJ Quality & Safety 2014;23 Suppl 1:i50-5. doi: 10.1136/bmjqs-2013-002352

63. Dixon-Woods M, Leslie M, Tarrant C, Bion J. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implementation Science 2013;8:70. doi: 10.1186/1748-5908-8-70

64. Tubbs-Cooley HL, Cimiotti JP, Silber JH, Sloane DM, Aiken LH. An observational study of nurse staffing ratios and hospital readmission among children admitted for common conditions. BMJ Quality & Safety 2013;22(9):735-42. doi: 10.1136/bmjqs-2012-001610

65. Dhalla IA, O’Brien T, Morra D, et al. Effect of a Postdischarge Virtual Ward on Readmission or Death for High-Risk Patients. JAMA 2014;312(13):1305. doi: 10.1001/jama.2014.11492

66. Trautner BW, Grigoryan L, Petersen NJ, et al. Effectiveness of an Antimicrobial Stewardship Approach for Urinary Catheter–Associated Asymptomatic Bacteriuria. JAMA Internal Medicine 2015;175(7):1120. doi: 10.1001/jamainternmed.2015.1878