National Nurses United

National Nurse Magazine October 2010

Issue link: https://nnumagazine.uberflip.com/i/197820

Page 13 of 27

infection, sepsis, patient fall/injury, and med/drug events) were associated with increased costs. For example, the cost of care for patients who developed pneumonia while in the hospital rose by 84 percent, the length of stay increased by 5.1-5.4 days, and the probability of death rose 4.67-5.5 percent (Cho, Ketefian, Barkauskas, et al., 2003).

One strategy hospitals have pursued to limit costs and increase revenue is reducing their RN staff and replacing them with unlicensed assistive personnel. McCue, Mark, and Harless conducted a study, published in 2003, that examined the relationship between nurse staffing, quality of care, and hospital financial performance. The researchers found a statistically significant increase in operating costs when hospitals increased their staffing of RNs, but no statistically significant decrease in hospital profit, suggesting that the cost-benefit of reduced complications and shorter lengths of stay offsets the additional cost incurred by increasing RN staffing.

The Institute of Medicine: From Safety to Quality. How did we get here from there?

In 1999, the IOM Committee on Quality of Healthcare in America published its landmark report, "To Err is Human." The IOM reframed medical error as a chronic threat to public health. One of the report's main conclusions is that the majority of medical errors do not result from individual recklessness or the actions of a particular group; this is not a "bad apple" problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them. For example, stocking patient-care units in hospitals with certain full-strength drugs, even though they are toxic unless diluted, has resulted in deadly mistakes.
Thus, mistakes can best be prevented by designing the health system at all levels to make it safer: to make it harder for people to do something wrong and easier for them to do it right. Of course, this does not mean that individuals can be careless. People still must be vigilant and held responsible for their actions. But when an error occurs, blaming an individual does little to make the system safer or to prevent someone else from committing the same error.

The IOM committee followed its initial report 18 months later, in 2001, with a second report titled "Crossing the Quality Chasm." The Quality Chasm report broadly implied that patient safety is only part of a larger picture. Indeed, in a theoretical opinion article titled "A User's Manual for the IOM's 'Quality Chasm' Report," Dr. Donald Berwick stated that the second report was "even more important because it deals with the entire terrain of concerns about healthcare quality." He further opined that "to the serious student of healthcare quality and the serious leader of needed change, it signals the possible dawning of a new and persistent sense that the U.S. healthcare system's performance in many dimensions, not just safety, is unacceptably far from what it should be." In bold print under the title of the article, Berwick asserts, "Patients' experiences should be the fundamental source of the definition of quality."

How reliable is a checklist?

In 2002, the Centers for Medicare and Medicaid Services (CMS) formed a partnership with the Agency for Healthcare Research and Quality (AHRQ) to develop, test, and seek endorsement of a nationally standardized survey tool and methodology for data collection that would allow "valid" and credible practical comparisons to be made among hospitals locally, regionally, and nationally. Over the years, many hospitals have collected information on patient satisfaction for their own proprietary use for quality control, marketing, and advertising purposes.
Although many hospitals administered their own surveys or were already working with survey vendors to design and administer a patient satisfaction survey as part of their own internal quality improvement efforts, the questions and methodologies were customized and did not allow comparison across hospitals. AHRQ published a Federal Register notice on July 24, 2002, soliciting the submission of existing instruments measuring patients' perspectives on care. The notice of request for measures closed on September 23, 2002. The seven submissions received were reviewed rigorously by the Consumer Assessment of Healthcare Providers and Systems (CAHPS) II Grantees (AIR, Rand, and Harvard). Three criteria were considered in reviewing the submissions: 1) Does the instrument capture the patients' perspectives on care in acute-care and/or hospital settings? 2) Does the instrument demonstrate a high degree of reliability and validity? 3) Has the instrument been widely used, not just in one or two research studies or local hospital settings?

In January 2003, AHRQ submitted to CMS a draft HCAHPS instrument that consisted of 66 questions. AHRQ drew upon the seven surveys submitted by vendors, a comprehensive literature review, and earlier CAHPS work to develop the HCAHPS instrument. Most reviewed studies of hospital patient satisfaction used institution-specific measures rather than a standard instrument. The instruments reviewed included the HCA Patient Judgments System Questionnaire/Nashville Consulting Group Survey; the Comprehensive Assessment of Satisfaction with Care Instrument; the SERVQUAL; the Press Ganey Survey; and several privately prepared instruments. In instances when AHRQ drew upon items in existing surveys from vendors, it made material changes, modifying wording and changing the response sets. The instrument that was developed to meet the need for publicly reporting patient perspectives on care information is called Hospital CAHPS, or HCAHPS.
In 2003, CAHPS II investigators and AHRQ performed an empirical analysis of the HCAHPS pilot data on hospital patients' perspectives of care to evaluate the degree to which these experiences corresponded with the IOM's nine domains of care: respect for patients' values, preferences, and expressed needs; coordination and integration of care; information, communication, and education; physical comfort; emotional support; involvement of family and friends; transition and continuity; and access to care. While some of the survey items correlated strongly with their hypothesized domains or composites in pilot studies, it became clear that the general hypothesized structure was inconsistent with the observed data. Based on analyses of the data and stakeholder suggestions, a revised HCAHPS survey was produced that consists of questions assessing seven internally developed domains of care: (1) nurse communication; (2) nursing services; (3) doctor communication; (4) physical environment; (5) pain control; (6) communication about medicines; and (7) discharge information. The revised survey also includes global rating items for nursing
