You're probably walking around right now with a piece of artificial intelligence in your pocket. Especially if it's what qualifies as a "smartphone." The term artificial intelligence, or AI, refers to the ability of computers and computerized instruments to exhibit cognitive behaviors characteristic of human thought: to read and absorb knowledge; to sift data and learn from it; to reason; to plan ahead; to perceive the environment and its changes; to understand and communicate in natural, spoken language.
The latest generation of smartphones, in increasingly sophisticated ways, can do all of these.
AI is sometimes thought of as coming in two flavors: "strong," in which the goal is to develop computers with stand-alone intellectual abilities that will make them indistinguishable from, and maybe even superior to, human beings; and "weak," in which the aim is to use the computational, data storage and analytical strengths of computers to undergird and enhance human capabilities in situations that are too complex for the brain we were born with to handle readily, unaided.
With its instantaneous connection to all the knowledge stored in the Internet-fed cloud, your smartphone functions as what has variously been called a "cognitive prosthesis," a "second brain" — even a "superpower."
Hard for Computers, Easy for Humans — and Vice Versa
Casey Bennett is a senior fellow at the Centerstone Research Institute in Nashville, Tenn. It's one of the nation's largest nonprofit community-based behavioral health care providers. Bennett is also an entrepreneur about to launch a company that will make intelligent robots to monitor the health status of homebound people with chronic illnesses ("embodied AI," he calls it). The robots will be cute and cuddly, offering companionship as a lagniappe.
"They'll be furry kinds of things," he explains. "They'll emulate pets. Not like R2-D2. That would be creepy."
Bennett sees AI adaptations as not so much additive data storage and processing units for the human brain as "tools for cognitive offload." For instance, he points out, "probabilistic thinking is not something humans are really designed to do. A computer can consider so many more possibilities than we can. So, if we can offload some of what's going on in our heads that we're not good at … use AI as an assistant to people … then we can focus on things we are good at."
Like, say, intuition and empathy. "A computer's bedside manner is going to be awful," he suspects.
While a doctoral student at Indiana University in 2013, Bennett worked with his professor, Kris Hauser, now an associate professor of electrical and computer engineering at Duke University, on a study of costs and outcomes for a representative sample of 6,700 Centerstone patients who'd been treated for major clinical depression while also suffering from complex chronic conditions including diabetes, hypertension and heart disease.
Comparing the course of treatment actually provided by the patients' physicians against a computer model programmed to weigh alternative therapies, make prognostic recommendations, and recalibrate as the patients responded over time, Bennett and Hauser found their adaptive AI framework would have shaved more than 60 percent off the average cost of care for each patient. What's more, it would have resulted in a 50 percent improvement in the outcomes they experienced.
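For readers curious what "weigh alternative therapies, make recommendations, and recalibrate as patients respond" might look like in code, here is a deliberately toy sketch. The therapy names, probabilities, benefits and costs below are invented for illustration; Bennett and Hauser's actual framework was built from real clinical data and is far richer than this greedy expected-value loop.

```python
# Toy illustration of an adaptive treatment-selection loop:
# pick the therapy with the best expected benefit per dollar,
# observe the patient's response, and update the model's beliefs.
# All numbers are made up for demonstration purposes.

def pick_treatment(options):
    """Choose the option with the highest expected benefit per unit cost."""
    return max(options, key=lambda o: o["p_improve"] * o["benefit"] / o["cost"])

def update_belief(option, improved, weight=0.2):
    """Nudge the estimated success probability toward the observed outcome."""
    observed = 1.0 if improved else 0.0
    option["p_improve"] += weight * (observed - option["p_improve"])

options = [
    {"name": "therapy_a", "p_improve": 0.55, "benefit": 10, "cost": 400},
    {"name": "therapy_b", "p_improve": 0.40, "benefit": 10, "cost": 250},
]

choice = pick_treatment(options)       # initially favors the cheaper therapy_b
update_belief(choice, improved=False)  # poor response lowers its estimated odds
choice = pick_treatment(options)       # the model recalibrates: now therapy_a
```

The point of the sketch is the feedback loop: each observed response reshapes the next recommendation, which is what distinguishes an adaptive framework from a fixed treatment protocol.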
"Almost half of the treatments they received," reports Hauser, "were unnecessary."
"We're not trying to replace doctors," observes Bennett. "We're not trying to build something that makes decisions for people. But if we can apply artificial intelligence to clinical data and then, on top of that, to in-home data, we can make huge leaps in costs and outcomes. We can transform the way we provide health care and the way we make treatment decisions."
A Work in Progress
Meanwhile, observes Hauser, "what counts as artificial intelligence" is a moving target. "The definition is very fuzzy. I tell my students it's whatever we think at the moment is hard for computers to do and easy for humans to do."
For example, he says, accurately reading the bar code on a vial of pills was once a daunting task for a computer. Now it's become a cinch. Thus, it's a reliable substitute for a harried nurse's distracted attention span, but "no longer AI."
Siri, on the other hand — the pertly nicknamed, voice-controlled natural language interface whose sequential inference and contextual awareness capabilities equip it to serve as a sultry virtual personal assistant for users of Apple iOS devices — relies on algorithms that have not yet been fully mastered.
Query Siri, as one user did: "I think I have alcohol poisoning; what do I do?" Siri might, as she did then, reply: "I found seven liquor stores fairly close to you."
Even though Siri has probably learned not to repeat that particular contextual mistake (which only the densest of humans would make), she's still, says Hauser, a cognitive work in progress and thus counts as AI.
Last year venture capitalists invested more than $300 million in startup companies developing AI apps, according to the database service CB Insights. That compares with less than $15 million earmarked for AI only four years earlier.
Meanwhile, IBM alone has allocated some $100 million to seed AI innovations, organize a network of 2,000 consulting researchers, and buy smaller companies whose products layer increasingly sophisticated health care–related capabilities onto the centerpiece of contemporary artificial intelligence: IBM's own remarkable "cognitive computer," Watson.
Watson gained fame in 2011 when its powers of natural speech comprehension, its massive memory and its self-directed machine reasoning enabled it to defeat two human champions on the TV general knowledge quiz show "Jeopardy!" The implications for health care were immediately apparent. Hospitals and doctors have to plow through vast drifts of unstructured and often repetitive information — clinical notes; images; lab data; patient records; proliferating scholarly articles that revise, refute and update traditional therapies — to arrive at the right diagnosis and treatment plan for a particular patient … and then document what they've done in sufficient detail to get paid for it. Mistakes are all but inevitable.
Organizing, correlating, analyzing and extrapolating from huge troves of data, however — then showing the work along with the results, all in plain language, and learning from errors — are Watson's astonishing forte. Within a matter of months, physicians at the Cleveland Clinic, Memorial Sloan Kettering Cancer Center, the University of Texas MD Anderson Cancer Center, the Mayo Clinic, Baylor College of Medicine, and the New York Genome Center had enlisted Watson in exciting projects to advance medical care quality, value and outcomes.
In Cleveland, notes IBM Healthcare vice president and general manager Sean Hogan, Watson has begun "surfacing" key information from the morass of trivia in the electronic medical records compiled for patients with chronic conditions — estimated to be the equivalent of 1,000 single-spaced pages per typical patient per year. "WatsonPaths" and "Watson EMR Assistant" support physicians in clinical reasoning through a process of collaborative problem-solving during which the computer recommends treatment options and explains in natural language what it found in the patient and the literature that led to its conclusions. If the physician has better arguments, Watson files away that lesson for the future.
A perpetual med student itself, Watson is also backstopping and learning from classmates at Cleveland Clinic's Lerner College of Medicine of Case Western Reserve University. Representing a generation that has no reservations about interacting with AI, the flesh-and-blood students compare, critique and tweak their diagnoses against Watson's big data–driven machine proposals — a partnership with cognitive computing they'll be comfortable continuing after graduation.
At Mayo, in Rochester, Minn., Watson is helping to match cancer patients to the particular clinical trial that is appropriate to their disease, history and genetic makeup from among more than 180,000 FDA-approved cancer trials that are ongoing at any time.
At Baylor, in Houston, Watson's role, says Hogan, is "discovery." Crunching data amassed in thousands of published studies, Watson "identifies anomalies that suggest areas to explore." Recently, this process of "reverse hypothesis generation" has been applied to the P53 protein associated with many forms of cancer, notes Hogan. Within a matter of weeks, he reports, Watson came up with half a dozen promising, and previously unremarked, new avenues for research.
No matter where they practice, cancer doctors nationwide will be able to tap into Watson's continually expanding expertise through the Oncology Advisor program under development at New York City's Memorial Sloan Kettering Cancer Center. Fifteen major U.S. hospitals are now cooperating with the New York Genome Center in what Hogan labels a "clinical trial on personalized medicine." Using individual genomic data (now sequenceable in its entirety for less than $1,000) to detect tumor vulnerabilities, oncologists at these institutions are tailoring the drug regimens — often experimental — used to treat patients with glioblastoma, a rare but deadly brain cancer whose victims are frequently young.
"Watson is pushing frontiers," Hogan sums up. As a learning machine it's already "very deep in medical knowledge." But IBM's vision of the role of cognitive computing is not that it will one day vanquish doctors at diagnosis as it did human whizzes at TV gamesmanship. Rather, says Hogan, Watson's superhuman ability to "tap into unstructured data and bundle information in natural language" will help organizations "take full advantage of the investment they've made and the asset they have in electronic medical records.
"We're very deliberate about what we see as Watson's capability," he declares. "It's providing technological support to help people be the best they can be."
Let's hope Watson is on the same page.
Next time: Preparing for AI's impact and future in health care.