Within a few weeks, "about 2,800 cars, trucks and buses will start talking to each other on the streets of Ann Arbor, Mich.," as Tom Krisher writes for the Associated Press. The vehicles will warn their drivers of dangers, and if the experiment succeeds, the devices could end up installed in every car. According to Transportation Secretary Ray LaHood, Krisher writes, "80 percent of crashes in which the drivers aren't impaired by drugs or alcohol could be prevented — or the severity reduced — if cars could talk to each other."
 
Another recent example of artificial intelligence: The delivery vans of a U.K. online grocery store contain chips that wirelessly send air temperature readings to a central computer. This amounts to "a stream of consciousness" among machines, writes Kevin O'Brien in The New York Times. The result has been fewer groceries spoiled in transit.
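The pattern O'Brien describes, sensors streaming readings to a central computer that acts on them, can be sketched in a few lines. Everything here is hypothetical (the class names, the 5 °C threshold, the van IDs), and a real fleet would send readings over a wireless link rather than a method call:

```python
from dataclasses import dataclass

# Hypothetical spoilage threshold: chilled groceries should stay below 5 °C.
MAX_SAFE_TEMP_C = 5.0

@dataclass
class Reading:
    van_id: str
    temp_c: float

class FleetMonitor:
    """Central computer that collects readings and flags at-risk vans."""
    def __init__(self):
        self.alerts = []

    def ingest(self, reading: Reading) -> None:
        # In a real deployment this reading would arrive over the air;
        # here the "machine talk" is just an in-process call.
        if reading.temp_c > MAX_SAFE_TEMP_C:
            self.alerts.append(reading.van_id)

monitor = FleetMonitor()
monitor.ingest(Reading("van-1", 3.2))   # within range, no alert
monitor.ingest(Reading("van-2", 7.8))   # too warm: flagged
print(monitor.alerts)                   # ['van-2']
```

The point is only that no human sits between the thermometer and the decision; the machines notice the problem among themselves.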
Other examples abound. Japan is considering a larger network of seismic sensors to detect earthquakes. Wireless sensors monitor temperature and humidity in Italian cheese production. Telekom Austria wirelessly connects more than 500,000 machines in eight countries: providing traffic and weather updates for navigation systems, linking Otis elevators in Slovenia to emergency breakdown centers, and connecting 900 bank ATMs in Belarus. And in Germany, robots install solar panels in large arrays.

The list goes on and on, and these examples probably come as no surprise to you.

Growth of Machine Talk

Citing industry sources, O'Brien notes that the number of devices communicating with each other over the world's wireless networks reached 108 million in 2011. That number will at least triple by 2017 and could reach 50 billion by 2020. Only about 20 percent of those devices, such as cell phones and tablet computers, will have a human somewhere in the loop.

A third of today's machine-machine communication traffic, he reports, consists of automated electric and gas meter readings. Another third is for car and truck fleet management and for emergency accident, repair and location services such as OnStar, now installed in a quarter of new GM vehicles. Beginning in 2015, Europe will require all new cars to have wireless transmitters that automatically report accident location and other data, such as airbag deployment, to emergency responders.

But O'Brien's numbers count only physical devices — chips and computers and gizmos we tend to think of as machines. They do not include the probably far vaster number of virtual machines, variously known as programs, softbots and algorithms, which, judging by what follows, are somewhat less benign and less under control.

High-Frequency Trading Algorithms

Knight Capital recently lost $440 million in the space of five minutes during which a machine — a stock-trading computer algorithm — went haywire. As a U.K. government adviser told the BBC: "It is possible to programme a computer system that can do the job of a human trader, and indeed do it much, much faster."

How much faster? "High-frequency trading" algorithms can complete 165,000 separate trades in a second or so, according to the BBC's Tim Harford, writing about the Knight Capital incident. He tells of high-frequency algorithms that cut big trades into small pieces that attract less scrutiny from competing algorithms, of algorithms that search for buyers and sellers with a little margin between them, and of algorithms that seek and exploit fleeting statistical aberrations in the relationships between different shares or bonds.
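The first trick Harford describes, cutting big trades into small pieces, reduces to simple arithmetic. This toy sketch shows only the slicing step; the function name and sizes are illustrative, and real execution algorithms also randomize the timing and size of each slice:

```python
def slice_order(total_shares: int, max_child: int) -> list[int]:
    """Split a large parent order into child orders no bigger than max_child,
    so each slice attracts less attention from competing algorithms."""
    full, remainder = divmod(total_shares, max_child)
    slices = [max_child] * full
    if remainder:
        slices.append(remainder)
    return slices

# A 100,000-share order becomes 333 slices of 300 shares plus one of 100.
children = slice_order(100_000, max_child=300)
print(len(children), sum(children))
```

Each child order looks like routine retail-sized flow, which is exactly what makes it hard for rival algorithms to detect the whale behind it.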

Then there are ethically and legally questionable predatory algorithms called "algo-sniffers," which spy on slower algorithms and exploit the informational advantage that gives them. There are also "spoofers," algo-sniffing variants that make fake offers to induce dumber algorithms to show their hands.

The Knight Capital incident was not the first of its kind; there have been many. Most notably, on May 6, 2010, the Dow dropped 600 points in five minutes. Panicked human traders pulled the plug on their individual high-frequency trading algorithms, causing instant illiquidity in the market, which made things worse. It took another algorithm, a monitor installed by the stock exchange, to avert disaster by halting all trading for just five seconds, buying enough time to break the downward spiral, wrote Harford. The entire incident was over 10 minutes later.
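The safeguard that ended the 2010 plunge was a monitor of essentially this shape. A minimal sketch follows; the 5 percent threshold, the tick window and the class name are invented for illustration, not the exchange's actual rules:

```python
class CircuitBreaker:
    """Toy exchange-side monitor: halt trading when the price falls more
    than `max_drop_pct` percent within a sliding window of recent ticks."""
    def __init__(self, max_drop_pct: float, window: int):
        self.max_drop_pct = max_drop_pct
        self.window = window
        self.ticks = []
        self.halted = False

    def on_tick(self, price: float) -> None:
        self.ticks.append(price)
        recent = self.ticks[-self.window:]
        drop_pct = (max(recent) - price) / max(recent) * 100
        if drop_pct > self.max_drop_pct:
            # A real exchange pauses for seconds or minutes, letting
            # liquidity return before trading resumes.
            self.halted = True

breaker = CircuitBreaker(max_drop_pct=5.0, window=10)
for price in [100, 99.5, 99, 98, 96, 94]:   # a fast 6 percent slide
    breaker.on_tick(price)
print(breaker.halted)  # True
```

Note the irony Harford points to: the fix for runaway algorithms was yet another algorithm, one fast enough to act while the humans were still reaching for the plug.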

It was gone, but not forgotten. Andrew Haldane, the executive director for financial stability at the Bank of England, told Harford: "What we have out there now is this complex array of multiple, mutating interacting machines, algorithms. It's constantly developing and travelling at ever-higher velocities. And it's just difficult to know what will pop out next. And that's not an accident waiting to happen, that's an accident that has been happening with increasing frequency over the last few years. We shouldn't wait for the equivalent of the space shuttle disaster before remedying the situation. We already have enough light on the dashboard flashing red to want to do something differently."

Reliance on Machines

The problem is, the genie is out of the bottle, and it grants enough wishes that we seem prepared to accept its occasional tantrums — if we even have a choice any more. Algorithms that talk to one another are just as beneficial as the physical devices described earlier, perhaps even more so (beneficial, that is, when they work according to a plan not concocted by incompetent, greedy or malevolent humans).

The inestimably beneficial Wikipedia, for example, is published and maintained in all languages by an army of editors but "is so vast, and its maintenance so labour-intensive that it defies the capability of its human administrators and editors to keep it in order," wrote Daniel Nasaw for the BBC recently. So guess what: Algorithms do a growing amount of the work for them and are even beginning to encroach on the territory of the knowledge contributor. Some Wikipedia articles, like some sports reports in the media, are now researched and written by algorithms.

That we have no choice but to embrace and trust intelligent, intercommunicating algorithms and other machines, warts and all, is suggested by a European plan to build a €1 billion ($1.2 billion) "living earth simulator" that will monitor, via a vast network of intercommunicating machines, the global environment, societies, economies, financial markets, power grids and other complex systems essential to modern life. According to Tom Simonite writing in Technology Review, the simulator will be "an oracle" to which academics and governments will turn for advice, in the same way we check the weather forecast before leaving for a day at the beach.

But experience with the stock market shows that eventually the earth simulator won't just advise: It will govern. It will have to, because we will not be able to handle the volume of data and the velocity of change.

The Smart EHR

What does all this have to do, other than peripherally, with health care?

In a recent issue of H&HN Daily, John Glaser suggests that the adoption of electronic health records is accelerating and that the next step will be to make them more intelligent. While "the core focus of today's EHR remains the transaction," writes Glaser, under accountable care the focus will instead be on reimbursement tied to the practice of evidence-based medicine. The problem is, there is too much evidence for physicians to master. "We must," therefore, "make the shift from a transaction-oriented record to an intelligence-oriented record [EHR]."

In other words, health care needs smart algorithms conferring among themselves and with the various monitors and other systems connected to the EHR. We are indeed moving, as we must, to EHRs replete with individual genomes, proteomes, metabolomes and 'omes as yet undiscovered, and this information alone exceeds the capacity of human doctors to remember and apply in delivering care.

Glaser suggests that an intelligent EHR will incorporate "big data" analytics algorithms "to measure quality and process performance and assess guideline adherence, financial performance, and provider treatment and outcome variations" in real time from standard EHR data plus the mountains of data amassing via imaging, molecular medicine, patient-provided data, and insurance claim systems. The transition from the transaction-based EHR to the intelligence-based EHR, he concludes, will not just "support post-market surveillance, comparative effectiveness and clinical trial hypothesis framing," but "may become one of the most critical undertakings in our journey toward more accountable, cost-effective care."
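One of the analytics Glaser names, assessing guideline adherence, can be illustrated with a toy computation over structured EHR data. The guideline (an HbA1c test for diabetic patients), the field names and the records below are all hypothetical:

```python
# Hypothetical, simplified EHR extracts: one dict per patient encounter.
records = [
    {"patient": "A", "diagnosis": "diabetes", "hba1c_ordered": True},
    {"patient": "B", "diagnosis": "diabetes", "hba1c_ordered": False},
    {"patient": "C", "diagnosis": "asthma",   "hba1c_ordered": False},
]

def adherence_rate(records) -> float:
    """Share of diabetic patients who received the guideline-recommended
    HbA1c test; an intelligent EHR would compute this continuously."""
    eligible = [r for r in records if r["diagnosis"] == "diabetes"]
    adherent = [r for r in eligible if r["hba1c_ordered"]]
    return len(adherent) / len(eligible)

print(adherence_rate(records))  # 0.5
```

At national scale the same query runs over millions of records and feeds back into reimbursement, which is exactly the transaction-to-intelligence shift Glaser describes.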

Implicitly, Glaser recognizes the importance of machine-machine communications in reaching the desirable future he paints. However, wearing futurist goggles that focus on a future a bit beyond the point where practical minds care to look, I would go further. I would say that "our journey toward more accountable, cost-effective care" will lead us to a time when algorithms communicating with one another, in and around the EHR, will run health care.

The End of Work

The downside implications of growing machine autonomy, amplified by orders of magnitude as the machines learn to collaborate, include not just the risk of going haywire and of having evil programmed into them, but also the replacement of human jobs.

Within the next decade or two, the only remaining human jobs will likely be to train robots to replace humans. In fact, it's already happening (see this series of slides posted by Technology Review for examples). A friend of mine just sent me an email that makes the point passionately and personally. She wrote:

"I'm somewhat adrift, career-wise, at the moment. I worked for over 14 years as a medical transcriptionist; both in-office and home-based, where I learned more subspecialties. I LOVED it with a passion! Now, everything is either Dragon Speak [Dragon NaturallySpeaking, the software program that recognizes and to some extent understands human speech and automatically transcribes it] or worse — forms on the computer. I wonder who is catching the errors we transcriptionists used to find, though." 

Don't Be Surprised

Provided that the global economy does not implode before something like the living earth simulator comes along to govern it for us, and before mad machines and starving masses (finding themselves somewhat adrift, career-wise, with nothing else to do except riot) instigate a general breakdown of civilization, hospitals can expect to see increasing automation in every aspect of their operation, from ordering supplies to performing surgical procedures, with all these technologies talking to one another and taking action at a speed and with a complexity beyond human capacity for oversight or control.

It won't happen overnight, but it will happen. What Haldane said of financial networks will start to apply to hospitals and health networks as the next wave of advances, well described by Glaser, starts to take hold. Haldane's words bear repeating:

"What we have out there now is this complex array of multiple, mutating interacting machines, algorithms. It's constantly developing and travelling at ever-higher velocities. And it's just difficult to know what will pop out next. And that's not an accident waiting to happen, that's an accident that has been happening with increasing frequency over the last few years. We shouldn't wait for the equivalent of the space shuttle disaster before remedying the situation. We already have enough light on the dashboard flashing red to want to do something differently."

David Ellis is a futurist, author, consultant and publisher of Health Futures Digest, a monthly online discursive digest of news and commentary on long-range, leading-edge technological innovations and their consequences and implications for health care policy and practice. Mr. Ellis is also a regular contributor to H&HN Daily and a member of Speakers Express.