The federal government has earmarked $1.1 billion to create a process for comparing medications, devices and treatments. The goal: research focusing on "real people" in the "real world."
The $1.1 billion in new federal funding for comparative effectiveness research presents a host of opportunities but also some challenges for hospitals.
Both the Institute of Medicine and the Federal Coordinating Council for Comparative Effectiveness Research in June issued reports on their priorities as required by the American Recovery and Reinvestment Act of 2009, which authorized the new funding. The documents make it clear that the organizations see a major role for hospitals in conducting the research and ensuring that the findings make their way into clinical practice.
The term "medical research" typically brings to mind strictly controlled, randomized clinical trials conducted at major academic medical centers. But federal officials are placing an emphasis on studies involving "real patients" in "real settings" when it comes to comparative effectiveness evaluations, which compare the effectiveness of drugs, devices and other interventions for the same condition. That means more chances for community hospitals to get involved, several experts say.
This shift will correct a long-standing limitation of traditional medical research, says Peter J. Pronovost, M.D., professor of anesthesiology and critical care medicine, and health policy and management at Johns Hopkins University. "One of the things that is such a missed opportunity is that health care is probably the only industry that doesn't formally learn from its daily work. We find out what works and what doesn't in health care in clinical studies." The problem is the studies are so strict about who can enroll that their findings don't always pan out when applied in other health care settings, where the patient population is more diverse, he notes.
"What we need to also do is say, 'How can I be sure that this therapy is also going to work when I try it in a community hospital or outside of the study?' " Pronovost says. "The way you do that is by doing an effectiveness study. You loosen up the inclusion criteria and say, 'Let's just see if I try to do this therapy, what do I get in the real world?' "
The Agency for Healthcare Research and Quality received $300 million of the $1.1 billion in comparative effectiveness research funding. In its operating plan for the use of this money, AHRQ lays out its intention to finance different types of studies with looser criteria for patient participation than is typical in randomized clinical trials. For example, the agency plans to spend $100 million next year on up to 10 Clinical and Health Outcomes Initiative in Comparative Effectiveness, or CHOICE, studies, which it describes as "pragmatic studies" focused on comparing the benefit of treatments in routine clinical practice on real-world populations.
Another $48 million will fund up to five registry studies, which use databases that collect clinical data on patients with a specific disease or that track outcomes associated with specific medical tests, devices or surgical procedures. AHRQ already has issued a request for applications for an organization to conduct a registry study on orthopedic drugs, devices and procedures. Data must be collected from at least five institutions performing high volumes of hip and knee replacements.
Because few hospitals have people on staff who are experts in designing comparative effectiveness evaluations, most facilities interested in taking part would do so by teaming up with a university or independent research group, says Arnold Milstein, M.D., associate clinical professor at the University of California, San Francisco Medical Center, medical director of the Pacific Business Group on Health and health care thought leader at Mercer Health & Benefits.
AHRQ is seeking hospital participation, says Jean R. Slutsky, director of the agency's Center for Outcomes and Evidence. "Hospitals of all types absolutely should be involved, if they want to be, with researchers who are analyzing data or want to do patient recruitment or sites of research."
In their reports, the coordinating council, IOM and AHRQ point out that they want the research to involve patients from segments of the population that have historically been underrepresented in medical research, including women, the elderly, people with disabilities, and racial and ethnic minorities.
Hospitals should look at the priorities listed by AHRQ and the IOM to learn which of these "subpopulations" they target, says Clifford Goodman, acting director of the Lewin Group's Center for Comparative Effectiveness Research. "Institutions that serve those populations, particularly in community settings, may be well positioned to be part of that research."
The goal is to learn how drugs, devices and procedures affect different patient populations differently. "People accuse comparative effectiveness research of being one size fits all," says Brian Strom, M.D., chair of the department of biostatistics and epidemiology and vice dean of the medical school at the University of Pennsylvania. "It's actually the opposite. The goal is heterogeneity, looking within the general population at who is going to respond positively and who is not."
Engaging in comparative effectiveness research offers several advantages for hospitals, Pronovost says. The studies are very meaningful for physicians, they help elevate the quality of care because they use the best evidence, and their prestige offers a brand benefit, he explains. "Then undoubtedly there is the social good. We treat a lot of patients with medicines, and we don't really know how well they work or in whom they work. We have an awful lot of need to generate new knowledge, and hospitals could be participating in this knowledge generation."
Once the research findings start coming in, hospitals will be on the front lines of making sure they're actually put into practice, Strom predicts. "They should be the ones saying, 'We don't want to pay for this expensive drug anymore because it isn't any better' or 'We have to pay for this expensive drug now because it is better.' "
The push for more comparative effectiveness research coincides with a societal trend toward holding health care providers financially and publicly accountable for offering the best care, Milstein notes. The findings will increase the pressure on hospitals to make sure that each patient gets the most effective care every time.
The result, Milstein and others say, will be greater attention to hospital protocols.
"There is a huge national opportunity to deploy some of the comparative effectiveness research money, not in comparing treatment options, but rather in comparing different treatment application methods," Milstein says. "That's where you get into the question of now that you've figured out the right treatment, how do you make sure it's implemented effectively, safely, patient-pleasingly and without wasting resources?"
Half of the suggested IOM priorities involve a comparison of some aspect of the health care delivery system, the institute notes in its report. These topics focus on comparing how or where services are provided, as opposed to which services are provided.
To adapt, health care providers will have to start thinking at the "system level," Pronovost says, "to ensure that all patients reliably get what the evidence is, not just the ones who are being treated by the doctor who happened to read the comparative effectiveness report."
In light of the expected increase in findings, hospitals should establish a process to look at the data as it emerges and change their practices accordingly, Strom says. The University of Pennsylvania Health System has a Center for Evidence-based Practice to handle this task. The center performs assessments of pharmaceuticals, medical devices and processes of care by reviewing the evidence. It then works with stakeholders to produce reports used to guide decisions ranging from formulary and purchasing choices to medical practice.
Getting information on what treatment works best on which patients into physicians' hands will require electronic clinical decision-support tools, Milstein says. "It's never going to work as the number of treatment selection rules increases for doctors to sort of carry this around in their brains."
Computerized decision-support tools could be designed to prompt a physician when the results of a comparative effectiveness study are relevant to a patient, note the treatment option the findings suggest, and allow the doctor to click on a box to order that treatment, Milstein explains.
The federal government, too, is interested in making sure hospitals and other providers use the new evidence. For example, AHRQ plans to spend $34.5 million on projects aimed at implementing innovative approaches to integrating comparative effectiveness research findings into clinical practice and health care decision-making.
Although the new comparative effectiveness research funding is a boon compared with AHRQ's fiscal year 2008 funding of $30 million, many health care experts view it as a long-overdue down payment that will help fill knowledge gaps.
Past research has shown whether pharmaceuticals, for example, work better than a placebo. Now what's needed is research to determine how drugs, procedures and devices used to treat the same condition compare with one another, the experts say.
A steady, long-term funding stream is needed, Goodman says, as is an ongoing system to identify national priorities for comparative effectiveness research, conduct the studies and report the findings.
Comparative effectiveness research provisions are included in the major health reform bills being debated in Washington. At the same time, some lawmakers have attacked the research as a cover for health care rationing.
Proponents acknowledge that public and private payers may use the findings to guide coverage decisions. However, they note, comparative effectiveness research will enable payers to base those policies on science, as opposed to price or other considerations.
"Comparative effectiveness research is not the same as policy, it is not the same as a [clinical] guideline, it is not the same as a coverage decision," says Goodman. "It is a way to supplement existing bodies of evidence on how well health care works."
The research effort could be buffered from political attacks because the government will be issuing findings, rather than clinical guidelines, Strom says. It will be up to professional medical societies and quality experts to develop guidelines. If a payer, such as the Centers for Medicare & Medicaid Services, decides not to pay for a treatment or product based on the evidence, the political pressure will be on CMS, he says. "Keep in mind we're talking about comparative stuff, so it's not going to be research that says, 'Don't do it,' it's going to be research that says how to do it better."
Because health care is consuming more of the nation's economy each year, the comparative effectiveness research effort ultimately will succeed politically even if there are fits and starts along the way, Milstein predicts. National health expenditures are expected to grow from an estimated $2.5 trillion this year to $4.4 trillion in 2018, or 20.3 percent of the gross domestic product, according to CMS.
Comparative effectiveness research is not going to solve the nation's health care spending problem, several experts say. In some cases, the studies will find that the more expensive treatment is best, Strom points out. But over time, it should save money by preventing wasteful spending on treatments that are less effective.
Says Goodman: "Politics notwithstanding, doctors, patients, hospitals and other decision-makers and policymakers are going to have increased demand for the kind of evidence that is produced by comparative effectiveness research. That is not going to go away. We need that information."
Geri Aston is a freelance health care writer living in Chicago.
This article first appeared in the November 2009 issue of H&HN magazine.