National Nurses United

National Nurse Magazine October 2010

design for testing cause-and-effect relationships. Quantitative study is ideal for testing hypotheses and for hard sciences trying to answer specific questions. On the other hand, the lowest level on the hierarchical study design totem pole is a Level VII study, which cites "evidence from the opinion" of the authors or reports of "expert" committees. Qualitative study is a much more subjective form of research in which researchers often allow themselves to introduce their own bias to help form a "more complete" picture. Qualitative research may yield stories or descriptions of feelings and emotions, and the interpretations of research subjects are given weight; there is no attempt to limit their bias. Scripting, rounding, and patient satisfaction researchers have an apparent preference for, and attachment to, this "less-than-rigorous" form of qualitative study, so their own bias often plays heavily into the results. While qualitative studies have their place, researchers must have the integrity to disclose their biases and to be forthcoming about the weaknesses and limitations of their studies when reporting findings and making recommendations.

A cursory review of the literature demonstrates that many of the recommendations to implement scripting and rounding schemes are not supported by the so-called "evidence" presented by the author(s). For example, the findings in the 2006 study by Meade et al. were listed as follows: "Of the 46 units in 22 hospitals that participated in the (Rounds/Patient Safety) study, data from 19 units in 8 hospitals were excluded from the analysis because of poor reliability and validity of data collection."

Melnyk's commentary on the study by Meade and associates, with implications for action in clinical practice and future research, includes the following statements: "When assessing whether findings from an intervention study are valid (i.e., as close to the truth as possible), it is important to answer some key questions, including whether: (1) random assignment to study groups was used, (2) the study groups were equal at baseline on key demographic and clinical variables, and (3) all of the subjects were accounted for at the end of the study. This research used a quasi-experimental design that did not randomly assign hospital units to one of the three intervention groups, which resulted in non-equivalent groups at the beginning of the experiment (e.g., patient satisfaction and falls were not equal among the three groups at the beginning of the study). In addition, there was a high attrition rate in this research (i.e., data from several units were excluded from the analysis), which also threatens the internal validity of this study." Melnyk generously rates this study as a Level III in terms of strength of design. The question remains whether it measures what it intends to measure in terms of a relationship between safety and satisfaction.

HCAHPS is purposely misread and misapplied by the industry

The HCAHPS survey is a standardized tool that asks discharged patients about their recent hospital stay for purposes of measuring customer satisfaction. However, it is but one of the indicators currently used by CMS that purportedly allow the public to compare hospital quality. The prestigious Institute of Medicine reports have recommended a healthcare culture "that is transparent, open, safe, and honest about its defects and its performance."
And the IOM warns against "toxic financing schemes" and has recommended that CMS establish service-area experiments in payment reform as a way to encourage improvement.

Although a greater focus on patient safety has been a trend since the Institute of Medicine's landmark 1999 report estimating that 44,000 to 98,000 people die yearly as a result of medical errors, several recent studies have turned the spotlight on nursing as a safety net. When nurses' workloads are too heavy, safety can too easily become compromised. Can we expect nurses caring for too many patients or working too many hours, and burdened with tasks and data collection schemes, to continue to catch the 86 percent of medication errors made by physicians and pharmacists that they usually intercept before those errors reach the patient?

It has become apparent to many of the critical thinkers among direct-care RNs that some of the presumably well-intended "change agents" out there, in their rush to win their employer's coveted "early adopter" and "champion of change" badges, are guilty of not using well-constructed pre- and post-implementation studies of therapeutic outcomes to determine whether a change in practice and care delivery model is even justified in the first place.

According to Dr. Berwick, "the overall strength of the IOM's 'Quality Chasm' report lies foremost in its systems view. Rooted in the experiences of patients as the fundamental source of the definition of quality, the report shows clearly that we should judge the quality of professional work, delivery systems, organizations, and policies, first and only by the cascade of effects back to the individual patient and to the relief of suffering, the reduction of disability, and the maintenance of health."

As the principal caregivers in any healthcare system, nurses are critical to the quality of care patients receive, a fact that is well documented in multiple well-designed studies. The research on patient morbidity and mortality in relation to RN-to-patient staffing has been published in respected, peer-reviewed scientific journals over the course of many years. The evidence is clear and convincing that safe staffing saves lives, but a bottom-line, profit-seeking mentality leads most healthcare employers to ignore the preponderance of evidence. Nurses are fed up as their employers waste time and money on unproven tactics and rounding schemes that nibble around the edges of the problem while patients' lives hang in the balance and the careers of direct-care nurses are threatened.

National Quality Forum (NQF) patient-centered outcome measures include: death among surgical patients with treatable serious complications (failure to rescue); central line catheter-associated bloodstream infections and the rate of septicemia; ventilator-associated pneumonia for ICU and high-risk nursery patients; urinary catheter-associated urinary tract infections; hospital-acquired pressure ulcers; and falls associated with serious injury. NQF system-centered outcome measures include skill mix and percentage of RNs and LVNs, the number of nurse staffing hours, staffing and resource adequacy, voluntary turnover of staff, and the collegiality of nurse-physician relations. Basic rounding and satisfaction study designs usually tout the success of their programs by casually addressing only one outcome goal, a reduction in the number of reported falls, but the results so far are spotty and inconclusive.
Because of the researchers' failure to address or control for significant confounding variables, the results are neither applicable nor generalizable. It's scientifically
