Nonprofit Hospitals Raise $8.26 Billion in Donations in 2010

Despite the poor economy, U.S. nonprofit hospitals and health care systems managed an 8 percent increase in philanthropic donations last year, to more than $8 billion, with individual donors contributing almost 60 percent of that total. But fundraising costs climbed and return on investment dipped, according to the fiscal year 2010 Report on Giving USA issued today by the Association for Healthcare Philanthropy.

The AHP's annual survey showed that donations and grants to health care institutions in the not-for-profit sector totaled $8.264 billion in fiscal year 2010, up $620 million over the $7.644 billion raised in fiscal year 2009. While last year's total was still short of the $8.588 billion raised in FY 2008 and the FY 2007 level of $8.347 billion, the 8 percent growth rate was the healthiest rate of advance since FY 2006.
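The growth figures above can be checked against the dollar totals with quick arithmetic; a minimal sketch using only the figures reported in the AHP survey (rounding is mine):

```python
fy2009 = 7.644  # billions of dollars raised in FY 2009 (from the AHP survey)
fy2010 = 8.264  # billions of dollars raised in FY 2010

increase = fy2010 - fy2009             # about 0.620 billion, the $620 million cited
growth_pct = increase / fy2009 * 100   # about 8.1, matching the ~8 percent figure

print(f"increase: ${increase:.3f}B, growth: {growth_pct:.1f}%")
# increase: $0.620B, growth: 8.1%
```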

Annual giving was the largest source of funds raised in FY 2010, the AHP report noted, accounting for 20 percent of all funds raised, followed by major gifts (17.1 percent), capital campaigns (15.4 percent) and special events (14.8 percent). Planned giving, which includes bequests, charitable gift annuities, charitable remainder trusts and similar long-term philanthropic arrangements, accounted for 9.5 percent of donations last year, similar to pre-recession levels.

"These outcomes for fiscal 2010 were not unexpected. They reflect the slow pace of our economic recovery and the resulting shifts in giving priorities," said William C. McGinly, president and chief executive officer of AHP. "Earlier studies AHP released this year also showed signs of progress beginning in 2010, but far from a full recovery from the recession."

The donated funds were used to support a range of programs and functions. In FY 2010, as in previous years, health care organizations directed the largest single share of their donated dollars to construction and renovation projects, though that share declined from 27.3 percent in FY 2009 to 22 percent. New and upgraded equipment purchases constituted the second largest category, at 20.6 percent, followed by general operations at 17.6 percent; both categories were up slightly from FY 2009. Community benefit programs remained constant at about 10.7 percent.

Over the past three years of the recession, foundations have seen falling returns on investment while the cost to raise each dollar has climbed. At 33 cents in FY 2010, the cost to raise a dollar through philanthropy remained stubbornly above 30 cents for the third year in a row, and return on investment declined, on average, more than 4 percent to just $3.05 raised for every dollar spent on fundraising. Taken together, these metrics indicate increased expense to raise the same amount of money as in previous years, or for some foundations, less. The bottom line: Fundraising has become more challenging and, therefore, more expensive.
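The two metrics above are two views of the same ratio: the cost to raise a dollar is simply the reciprocal of return on investment. A quick sketch using the figures from the report:

```python
roi = 3.05                  # dollars raised per dollar spent on fundraising
cost_per_dollar = 1 / roi   # fundraising cost per dollar raised

print(round(cost_per_dollar, 2))  # 0.33, i.e. the ~33-cent cost cited
```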

A copy of the AHP Report on Giving Fact Sheet is available for free on the AHP website at www.ahp.org/reportongiving.


Rating Hospital Quality Means Asking the Right Questions

With an increased emphasis on grading hospitals and a push to withhold payments from hospitals that don't meet certain standards, two Johns Hopkins researchers argue that more attention needs to be paid to the quality of the measurement tools used to praise and punish.

The science of outcomes reporting is young and lags behind the desire to publicly report adverse medical outcomes, write Elliott R. Haut, M.D., an associate professor of surgery at the Johns Hopkins University School of Medicine, and Peter J. Pronovost, M.D., a Johns Hopkins professor of anesthesiology and critical care medicine, in the June 15 Journal of the American Medical Association.

"Everyone wants to know, 'What is the best hospital?' 'Where should I have my surgery?'" Haut says. "People want to compare hospitals, but if the science can't keep up, maybe we're doing more harm than good when we report certain kinds of data. It raises a different question: Are the numbers being reported meaningful?"

The researchers say an important source of error in some currently reported outcome measures is something called "surveillance bias," which essentially means that "the more you look, the more you find." Take the problem of deep venous thrombosis, a clot deep inside a part of the body that can block blood flow and cause swelling and pain. If the clot breaks off, it becomes a pulmonary embolism and can get stuck in the heart or lungs and kill the patient.

One key to stopping DVT from becoming deadly is to prevent it or find it early and treat it. So the more tests done for DVT, the higher the DVT rate for a hospital. If a hospital has a high DVT rate, Haut says, is it a place a patient should avoid? Or is it a place that looks for DVT more aggressively—before any symptoms appear—and prevents DVT from progressing to a much more serious complication? Therefore, reporting a DVT rate, he says, doesn't tell much about hospital quality, since it doesn't delineate whether the hospital is ignoring a potential complication or successfully preventing one.
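Surveillance bias can be illustrated with a deliberately simplified sketch (all numbers hypothetical, and the model assumes clots are only counted when a test finds them): two hospitals with identical true clot incidence report very different DVT rates purely because they screen different fractions of patients.

```python
def observed_dvt_rate(true_incidence, screening_fraction):
    """Reported DVT rate when clots are counted only if a test finds them.

    Assumes, for illustration, that screening always detects a clot
    and that clots in unscreened patients go unrecorded.
    """
    return true_incidence * screening_fraction

TRUE_INCIDENCE = 0.05  # hypothetical: 5% of trauma patients develop a clot

# Hospital A screens aggressively; Hospital B rarely screens.
rate_a = observed_dvt_rate(TRUE_INCIDENCE, screening_fraction=0.90)
rate_b = observed_dvt_rate(TRUE_INCIDENCE, screening_fraction=0.20)

print(f"Hospital A reports a {rate_a:.1%} DVT rate")  # 4.5%
print(f"Hospital B reports a {rate_b:.1%} DVT rate")  # 1.0%
# Same underlying care quality, yet A looks far worse on paper --
# "the more you look, the more you find."
```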

"Without a standard way for looking for these complications, the numbers people are looking at—and making major decisions based on—may be worthless," Haut says.

The Johns Hopkins Hospital actively looks for DVT in most trauma patients who are at high risk for these potentially life-threatening clots. These patients are given the blood thinner heparin, their legs are wrapped in automatic compression devices to keep blood moving and they are given regular ultrasounds to look for clots—before symptoms can even appear.

Not long ago, Haut remembers, Maryland state regulators asked Johns Hopkins why it had the highest rate of DVT in Maryland. "The question was, essentially, why are you doing such a bad job?" he recalls. Hopkins officials went back and realized the high rates were likely because of surveillance bias. "If you look more you may find more, but you can also treat DVT early before it becomes a major problem and kills you," he says. The lesson: "It might be OK to have a higher rate."

The issue isn't simply one of giving misleading information to the public about hospital quality, Haut says. The Centers for Medicare & Medicaid Services has said it will no longer pay the expenses associated with treating patients who develop DVT or PE after certain orthopedic surgeries, calling such complications "never events."

Haut calls it a "perverse incentive." A hospital that doesn't look for a DVT, and therefore doesn't find one, still gets paid; a hospital that aggressively screens, finds the clot and treats it early does not.

"There is broad bipartisan and public support for measuring outcomes, yet these measurements must be made accurately, guided by principles of measurement from clinical research. To do otherwise would be reckless and unjust," Haut and Pronovost write. "Which outcomes to evaluate must be determined and then they must be measured accurately, rather than squandering resources on measuring many outcomes inaccurately."

Haut and Pronovost argue that several steps can help reduce the errors caused by surveillance bias. First, those developing and reviewing outcome measures should ensure that surveillance methods are made explicit and standardized across hospitals, so that comparisons are apples to apples. Second, a cost-benefit analysis should determine which specific measures are worth mandating. Third, they suggest that some of the measured outcomes may not tell the whole story about preventable harm; it might be better to examine the processes involved in reaching an outcome.

For example, instead of reporting how many of a hospital's trauma patients get a DVT, a better way to judge the quality of care at the hospital might be to ask if a patient got the proper DVT prophylaxis, the right medicines and/or therapies to prevent the adverse event. That, they say, would paint a clearer picture of the quality of a hospital.

"You have to make sure your measures are fair and that the benefits of reporting adverse outcomes outweigh the risks of unfairly harming hospitals, because these measures have unintended consequences," Haut says.


Ambulance Diversion Linked to Increased Mortality of Heart Attack Patients

Lengthy periods of ambulance diversion are associated with higher mortality rates among patients with time-sensitive conditions, such as acute myocardial infarction, commonly known as a heart attack. When a patient's nearest emergency department was on diversion for 12 or more hours, mortality rates at 30 days, 90 days, nine months and one year were higher than when it was not on diversion, according to a study in the Journal of the American Medical Association.

The study of nearly 14,000 elderly patients indicates that ambulance diversion is a signal of a larger access problem in the health care system. Study authors said that ambulance diversion affects not only the patients who are diverted, but also non-diverted patients within the hospital that was on diversion.

Emergency rooms go on diversion when they are crowded with patients waiting for an inpatient bed, and crowded ERs may struggle to care for all their patients.

"A task force of emergency physician experts has proposed solutions to crowding and ambulance diversion, and hospitals can implement them now," said Sandra Schneider, M.D., president of the American College of Emergency Physicians. "The key is increasing flow through emergency departments by moving patients who have been admitted to the hospital out of the emergency department to inpatient areas."