Will Covid-19 be seen, in hindsight, as a technological tipping point for telecommuting and other forms of remote interaction, including of the medical kind? Has the technology improved enough, given all that has happened in tech since 2009—an Internet century ago? Consider some bellwether indicators.

Only 200 million people were using Facebook in 2009; 2.5 billion do now. The trend is similar for all the other social media networks—Twitter use has risen from 20 million to more than 350 million people. Desktop and handheld video are now easy to use. Compared with a decade ago, a single smartphone can receive, send, and process 1,000 times more data in a day. The most powerful supercomputer in the world has more digital horsepower than the combined capacity of the top 100 supercomputers in 2009. And drones are delivering medications in rural Africa, less than a decade after the first consumer drone was sold.

Everyone knows that the age of ubiquitous connectivity has led to the hyper-amplification of rumors and fears and propagation of outright “fake news” through social media. But the same networks have enabled researchers and physicians to share clinical insights more rapidly and enabled the World Health Organization to quickly train 80,000 health workers in 85 countries. They also now make “social distancing” far easier.

And as we all now know, the expert advice—not least from the National Institutes of Health’s chief immunologist, Anthony Fauci—is for people to engage in social distancing and “telework” in order to slow the spread of Covid-19. As I write, more than 300 million students around the world are out of school; many are telecommuting to classes. And millions of adults are working from home.

The last pandemic, the H1N1 flu, which struck in 2009, hospitalized nearly 300,000 Americans and led to more than 12,000 deaths. It’s too early to tell how Covid-19 will play out. It is a different, non-flu disease and appears to have a higher ratio of patients requiring hospitalization (particularly among the elderly). It is the disease’s potential to stress health-care systems that moves experts to recommend (or require) social distancing to “flatten the curve” of peak demand on hospitals. We can expect similar advice next time around—but we’ll have better tools by then to make it work more effectively.
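The logic behind “flattening the curve” can be sketched with a toy SIR (susceptible-infected-recovered) epidemic model. The transmission and recovery rates below are illustrative round numbers, not values fitted to Covid-19:

```python
# Toy SIR model (simple Euler integration). "Flattening the curve" means
# that cutting the contact rate (beta) lowers and delays the peak share
# of people sick at the same time, easing peak demand on hospitals.
def sir_peak(beta, gamma=0.1, days=400):
    s, i, r = 0.999, 0.001, 0.0   # fractions: susceptible, infected, recovered
    peak = i
    for _ in range(days):
        new_infections = beta * s * i   # contacts between susceptible and infected
        new_recoveries = gamma * i      # infected recover at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

print(f"no distancing (beta=0.30): peak {sir_peak(0.30):.0%} infected at once")
print(f"distancing    (beta=0.15): peak {sir_peak(0.15):.0%} infected at once")
```

Halving the contact rate shrinks the peak several-fold, which is exactly what keeps simultaneous hospitalizations below a health system’s capacity, even if many people are eventually infected either way.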

A NASA engineer coined the word “telecommute” in the 1970s to describe his vision of employees working from home in order to alleviate road congestion, reduce fuel use, and improve air quality. Ever since, analysts and pundits have serially overestimated the share of people who would actually do so. The Internet accelerated these enthusiasms. Research in the mid-1990s claimed that telecommuting employees were far more productive than their stay-at-work counterparts; if that were true, businesses would have long ago required this arrangement.

While episodic telecommuting did increase from 1996 through 2009, its early growth largely stalled out. The share of those who telecommute at least half the time is still just 3 percent of the workforce. Compared with those in “professional” jobs, only half as many people in hands-on work (such as construction) say that they do some of their work from home. Everyone who has used teleconferencing software understands why telecommuting remains in its infancy. Aside from just how clunky even the best apps are, surveys show that huge percentages of people on a teleconference surreptitiously send e-mails, engage in social media conversations, shop, or sleep (among other things), instead of paying attention. Remote working remains limited in value for much of what people need to do—and especially for replicating the intangible productivity of personal interactions.

Then there are those whose jobs involve hands-on tasks like drawing a blood sample, delivering supplies, or operating machinery. Engineers have for decades worked to make remote telemedicine and tele-operation more effective—a far more technologically difficult task than creating videoconferencing software. But technology is finally at a tipping point, and practical telework will soon be possible.

When it comes to weighing a teleconference against in-person interactions, even psychologists now recognize the limitations of such tools as Google Hangouts, Zoom, Slack, FaceTime, and similar software. Everyone knows what’s really needed: virtual reality (VR), a near-perfect simulation of an in-person meeting. Science-fiction movies have captured what we want, and at least one product, Microsoft’s HoloLens, comes close. Users wearing a headset can see both the local scene and an immersive 3-D virtualization of the remote reality. (This is distinctly different from, and far harder to achieve than, VR games.) VR-enabled virtual clinics would be a major advance over staring at a small, planar screen with tiny icons or video links. Though still expensive, the HoloLens is particularly popular in medical settings and is starting to gain industrial traction.

Now that an immersive system is possible, it’s only a matter of time before the costs drop. Digital progress also means that the hardware becomes more convenient and lightweight. The huge goggles currently required need to become no bigger than eyeglasses. Prototypes in that size already exist.

Another near-term reality: conformal video wallpaper, a product idea that has gone from theoretical to conceivable with paper-thin organic-display technology that keeps getting cheaper. Would people want such a product? In 1995, it would have been hard to imagine homes filled with 60-inch flat-panel TVs. Today, wall-size, tiled displays already enable room-scale VR “caves.” The currently prohibitive price will be eroded by the dramatic cost reductions that we continue to see in digital domains.

Teleconferencing is only marginally useful for many critical industries and infrastructures. We still need to run machines, deliver food, and distribute medical supplies. Robots can do some of that, but the reality is that tele-operation and tele-medicine are far easier—and less expensive—than full automation. But for hands-on tasks, remote-controlled systems require not only hyperreal virtual vision but also tactile feedback. The latter is possible with the technology of haptics, which is undergoing a quiet revolution.

Remotely operating a robotic hand is far more effective if the operator wears a (comfortable) glove that lets him feel what the robot is touching. Such technologies have evolved more from advances in materials science than from communications tech per se. Engineers now talk in terms of the “tactile Internet.” Early uses for such technologies have always targeted places considered dangerous for humans; in a pandemic, that means everywhere.

Meantime, in our immediate future, acquiring medicines by drone (both automated and remotely operated) instead of visiting a pharmacy is already under way in remote areas around the world, including parts of Africa. That model is easily adoptable here, and the FAA recently promulgated rules to permit such low-flying urban systems. Late last year, UPS drones in North Carolina delivered medications for CVS in two trial flights. The cost curves for such deliveries have already bent down to practical ranges. One can easily imagine (again, for the next pandemic) deliveries of both medicine and food in quarantined areas. Drones would also allow delivery of self-administered diagnostic tests, minimizing gratuitous travel to clinics and doctors’ offices. The Gates Foundation recently offered home test kits for coronavirus in Seattle: patients complete an online questionnaire, and, if the symptoms align, couriers deliver test kits within hours and pick up the results, too. It would be much more efficient to use drones.

During a crisis, it’s also vital to ensure that supply chains continue to function. Warehouses, which already faced labor shortages in the pre-Covid-19 low-unemployment economy, inevitably see greater staffing challenges during epidemics; in mid-March, deluged with orders, Amazon announced that it would hire 100,000 new workers. While the fully automated warehouse is years away, labor-saving robotic assistance with materials handling is around the corner. Years ago, Amazon led the way, buying a company that made turtle-like robots that move pallets of goods. Now robot-maker Boston Dynamics has demonstrated a robot that can find, pick up, and place boxes on those robotic pallets.

But all the resilience and defensive measures, while vital, are only half the battle. The first line of defense in an epidemic is a rapid and accurate diagnosis. Here the urgent need is for a diagnostic tool that can be used easily in a doctor’s office or clinic, or by a frontline responder (including the proliferating drive-through coronavirus swabbing systems)—and then, ultimately, diagnostic tech that anyone can use at home. The former will come faster than the latter. Since the last pandemic hit the world, technology has allowed the development of desktop-size, gene-sequencing diagnostic machines. Reports out of China claim that physicians there used powerful AI software to help them more rapidly identify Covid-19 cases from CT images. Such AI tools, combined with easily portable CT scanners (already in production, if limited in supply), will push fast diagnostics into neighborhoods.

While physicians worry about misinformation and misuses of self-diagnostic technology, consumers will clamor for useful tools and apps once they know they’re possible. We are well on the way to turning smartphones and smartwatches into diagnostic tools. Direct measurement of vitals like temperature, along with the FDA-cleared ECG capability on the Apple Watch, is already available; noninvasive blood-oxygen measurement is rumored for the next Apple Watch. Here, too, radical advances in tech are on the way. Lost in the fog of coronavirus coverage is the news that University of Cincinnati researchers have created a credit-card-size “lab” that allows home detection of viruses present in saliva. It uses a tiny, single-use plastic chip that plugs into a smartphone-like machine that connects you with your doctor. The hype and overpromises behind Elizabeth Holmes and Theranos (for blood tests) by no means indicate that the dream of fast, cheap lab analysis is dead.

An entire subterranean industry of innovators is making new kinds of wearable—and even ingestible—smart biosensors. The fusion of easy-to-use, bandage-like sensors, some with microneedles for painless blood extraction, and smartphone connectivity is no longer the stuff of fiction. A new bioelectronics industry is forming; we can expect, before long, consumable smart sensors embedded in a pill. The FDA has already approved many of the enabling materials for ingestion. Evidence of that personal diagnostics future: an FDA-cleared screening app for smartphones that detects urinary-tract infections. Regulators have recently clarified and modified confidentiality rules to allow easier use of consumer-facing apps and smartphone diagnostics. Combined with AI, this all portends the democratization of laboratory-quality personal diagnostics.

The holy grail when it comes to a disease, of course, is defeating it, not just identifying and hiding from it. Accelerating drug discovery with supercomputing AI is becoming feasible using powerful hyperrealistic models of pathogens. This past February, scientists announced the discovery of a novel antibiotic, thanks to a machine-learning algorithm. Using Summit, the world’s most powerful supercomputer, University of Tennessee researchers believe that they have found a compound that stops the coronavirus. (They weren’t alone in that pursuit, or in the discovery of potential therapies.) In one day, Summit can model and simulate biological and chemical processes that formerly took months. For those who fear that the machines are taking over, the reality is, as AI researcher Regina Barzilay at MIT recently said, “It’s not the machine that invented the molecule. It’s that the machine helped humans to scan the huge space of possibilities and zoom in on the fruitful set of hypotheses that they tested.” This is where the productivity of drug discovery finally starts to improve, after decades of slow, brute-force, and expensive progress.

While supercomputing AI can speed up discovery, clinical trials are necessary to ensure safety. Until now, the only way to move faster has been to let the FDA accelerate patient access to new drugs still in the trial phase, as it did under President Ronald Reagan (with the advent of the AIDS epidemic), and as is now possible under new rules for patients in extremis. (See “Ronald Reagan’s Quiet War on AIDS,” Autumn 2016.) But it would be far better if one could also accelerate preliminary testing inside computer models—that is, with in silico clinical trials rather than in-human ones.

With sufficiently powerful computing, clinicians imagine simulating not just how a pathogen operates but also how human biology operates. That will yield more accurate trials, especially when combined with data from hyper-specific drug testing using “organ on a chip” technology, which borrows tools and techniques from the microprocessor industry. The combination will yield radically improved predictive accuracy for therapeutics. All this will require computers 100 times more powerful than any we have today. As it happens, several such silicon beasts will be coming online in the next several years; the first, El Capitan, will be ready by 2023. Meantime, computing power approaching Summit’s is already available from all the major cloud providers, at low rental cost. We will see far more startups and entrepreneurs tapping into that silicon power to chase the holy grails of therapeutics, especially in the wake of the Covid-19 crisis.

According to tech-tracker CB Insights, last year over $40 billion of private investment went into health-tech startups. That matches the entire annual NIH research budget. These are powerfully synergistic investments. We’ll get through this pandemic, albeit not without cost. But when the next one comes, silicon machines may finally give humanity the tools it needs to win its perennial battle with viruses.

Photo: A technician prepares a drone for flight in Kigali, Rwanda, where the unmanned aerial vehicles are used for delivering medications; trial flights for such systems have been conducted in the United States. (JASON FLORIO/REDUX)


City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
