Apple and Google recently announced a joint effort to create a software app to notify people when they have come into contact with someone with the coronavirus. The use of smartphones for contact tracing appears as a deus ex machina, the obvious way to let society reopen from its current regime of limited movement and restart economic activity. As presented so far, the plan seems unlikely to be very effective, but it points toward the beginning of a different, and much larger, discussion.

Apple and Google have publicly committed themselves to helping contain the pandemic without sacrificing personal privacy. The new app will be made available in May. Down the line, the system might even work without needing a specific app, making it easier and more convenient for users to join. The basic idea is to let devices exchange anonymous keys via a Bluetooth signal, logging users who come in close proximity for a certain amount of time. If someone then tests positive for the coronavirus and enters that information into the app, contacts automatically receive a message informing them that they need to get tested or isolate themselves. Note how the human factor has been removed from contact tracing: people have been transformed into cellphone signals.
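The exchange described above can be sketched in a few lines of code. This is a deliberately simplified illustration of the decentralized idea, not the actual Apple/Google protocol: the class, method names, and identifier format are all assumptions made for clarity, and real systems rotate identifiers and verify diagnoses before publishing anything.

```python
import secrets

# Illustrative sketch (not the real Apple/Google API): each phone broadcasts
# random, anonymous identifiers and keeps a local log of identifiers it hears.
class Device:
    def __init__(self):
        self.own_ids = []       # rolling identifiers this device has broadcast
        self.heard_ids = set()  # identifiers received from nearby devices

    def broadcast_id(self):
        # A fresh random identifier, unlinkable to the user's name or location.
        rolling_id = secrets.token_hex(16)
        self.own_ids.append(rolling_id)
        return rolling_id

    def hear(self, rolling_id):
        # Stored locally; no location and no central registry of contacts.
        self.heard_ids.add(rolling_id)

def notify_contacts(positive_device, all_devices):
    # On a verified positive test, the patient's identifiers are published;
    # every other device checks them against its own local log.
    return [d for d in all_devices
            if d is not positive_device
            and d.heard_ids & set(positive_device.own_ids)]

# Two phones near each other exchange identifiers.
a, b = Device(), Device()
b.hear(a.broadcast_id())
a.hear(b.broadcast_id())

# 'a' tests positive: only 'b', who logged a's identifier, is notified.
print(len(notify_contacts(a, [a, b])))  # 1
```

The point of the design is visible even in this toy version: the match happens on the contact's own phone, so no authority ever holds a list of who met whom.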

If, to protect privacy, the system allows individuals to choose whether to use the technology, one difficulty immediately arises. Without a critical mass of users, the impact on slowing viral spread could be too limited. The solution would be to offer some kind of incentive for those choosing to use the app. It might be a monetary incentive, or it could come in the form of certain privileges, such as being able to enter specified public spaces like large shopping malls or the transportation system.

Apple and Google say that the app would not collect location or other identifiable information. That’s an obvious specification: geolocation is unnecessary for contact tracing and raises the most delicate privacy concerns. It is entirely possible to identify users who have come in contact with one another without referencing their exact locations. Users should be conceived as points in empty space. The system is interested in whether they have come into contact, not where they have done so. GPS-based tracking systems proposed by other developers would have the significant disadvantage of collecting location data and might require cryptographic protection to avoid exposing highly personal information, from political dissent to extramarital affairs.

Another question arises: should relevant pairings be automatically shared solely with other users, or should they be communicated to health authorities, who could then proceed in their own way and contact those with a significant risk of having been exposed? Keeping authorities out of the loop runs into an obvious snag. People could potentially use the system to create chaos by communicating a positive test that never took place. Imagine a malicious anonymous user going to a mall, where hundreds of contacts would be logged, before pressing a button and sending a notification warning all of them that they might be infected. A few days after their announcement, Google and Apple clarified that the app would have to include a mechanism to verify that someone tested positive, such as a code from a health-care provider. The system thus seemed to survive its first major obstacle.

Other potential problems remain. Bluetooth tracking is notoriously unreliable. We know that it often fails to distinguish between phones that are within six feet of one another and those that might be farther away—yet precision here is critically important. And the app is likely to be designed in such a way that it would require a number of successive steps: people would have to sign in; they would have to keep the Bluetooth signal on; they would later have to confirm that they had tested positive and agree to share that information. If we try a back-of-the-envelope calculation, multiplying the odds that each of these steps is actually completed, we end up with too small a number of identified pairings. Is anyone going to be happy lifting the existing quarantines and restrictions and replacing them with such a haphazard experiment?
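The back-of-the-envelope calculation can be made concrete. The compliance rates below are purely hypothetical assumptions chosen for illustration; the point is that the probabilities multiply, and that both sides of a contact pair must comply for the pairing to be detected.

```python
# Hypothetical compliance rates (assumptions, not measured figures):
install = 0.60        # person installed and signed in to the app
bluetooth_on = 0.80   # person kept the Bluetooth signal on
reports_test = 0.70   # infected person confirmed and shared a positive test

# The infected person must install, keep Bluetooth on, AND report;
# the contact must install and keep Bluetooth on to receive the alert.
p_pair = (install * bluetooth_on * reports_test) * (install * bluetooth_on)
print(f"{p_pair:.0%}")  # 16%
```

Even with these fairly generous assumptions, roughly five out of six risky contacts would go undetected, which is the sense in which the odds of each "switch" being on shrink the system's coverage.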

In many cases, too, contagion might not take place through direct contact between people. One could leave traces of the virus on a surface, and someone might be infected by touching that surface later. No tracing app would be able to detect the possible transmission. Nor would the system that Apple and Google are developing be able to determine if contacts are behind a wall or inside a car, in the apartment next door, or wearing a mask. The system looks unreliable.

In a way, though, all that is beside the point. What I think Apple and Google are suggesting is a path forward. If we accept that new technology of this kind is politically and ethically valid, the system can always be improved later. More powerful ways can be found to achieve the same result. They have not been discussed so far because they would raise greater public reservations.

Contact-tracing apps are unlikely to be of much use during the current pandemic, but the questions they raise could not be more important. Should we take the first timid steps in developing a surveillance system that applies some of the lessons and expertise from counterterrorism and law enforcement to wider threats, such as pandemics? Much would depend on public perceptions, but these remain ambiguous. Activists and intellectuals firmly believe that the digital surveillance system put in place to prevent terrorist attacks is a profound threat to personal freedom; the general public is much more sanguine.

At present, it might be possible to develop a hybrid, less automated tracing system, in which a digital app would be supported and its results verified by health officials. I support this general approach. Alternatively, the digital system might be made less voluntary and therefore more reliable. In Hong Kong, the authorities now distribute digital-tracking bracelets to everyone getting off a plane at the airport. Between the benign app suggested by Google and Apple and the vaguely dystopian Hong Kong system, a range of potential alternatives exists for societies to try out. In the end, however, even these alternatives fail to consider the full question under examination here.

In South Korea, the pandemic surveillance system is much bolder and much more intrusive, allowing the government to access not only smartphone location but also credit-card histories, security-camera footage, travel records, and other data. No new data-collection system or app is needed, since all this is already available. It provides a much fuller and more reliable picture of where coronavirus patients have been and the people with whom they have had contact. It also requires an active role for human analysts, who must integrate all the data and make sense of it. A human data analyst would be able to incorporate information beyond just physical proximity. He or she could correct for biases and could consider other factors and circumstances going far beyond those introduced by an automated app.

When new patients are identified, the health authorities get to work building a detailed map of their whereabouts over the past two weeks. They rely on oral testimony at first, but memories are patchy and need to be confirmed or corrected by using smartphone location and then security-camera footage. Genuine sleuth work is often necessary. Imagine that camera footage shows one patient sitting at the same table in a cafe with a stranger. There is only one way to identify that person: in a country where cashless payments are almost universal, authorities can ask credit-card companies to pull the information and communicate to their customers that they need to be tested and isolated. More controversially—but only because the individualized method still leaves gaps—patient routes are posted online so that everyone in the city can see for himself if he faces any risk of contagion.

Israel presents an interesting case. Its security agency, the Shin Bet, can already access cell-phone geolocation data for counterterrorism and security purposes. These powers are now being used for coronavirus tracking. Remarkably, no health professionals were involved in designing the system.

The connection to counterterrorism is not coincidental. This is the area where liberal democracies have already developed powerful surveillance tools. South Korea even seems to have adapted privacy-preserving methods previously developed for counterterrorism in the United States. The information platform where different data sources are combined and analyzed must match access to need. If a patient was in contact with someone for only a few seconds, and if security-camera footage confirms that both individuals were wearing masks, that’s all the analyst needs to know. In this case, there is no need to obtain the identity of the contact. When the system finds evidence of risky contact, it automatically lowers anonymity, and the analyst is allowed to request further information. South Korea has incorporated some of this logic into its pandemic surveillance system. It is remarkably similar to what counterterrorism authorities in America call “selective revelation,” a way to reconcile safety and privacy.
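The selective-revelation logic described above can be sketched as a simple gate on the analyst's access. Everything here—the field names, the threshold, the scoring rule—is a made-up illustration of the principle, not any agency's actual system.

```python
# Illustrative sketch of "selective revelation": the analyst sees only
# anonymized records until the evidence crosses a risk threshold.
# All fields and the scoring rule below are hypothetical.
def contact_risk(duration_seconds, both_masked):
    # Toy rule: only prolonged, unmasked contact counts as risky.
    if duration_seconds < 15 or both_masked:
        return "low"
    return "high"

def review_contact(record):
    risk = contact_risk(record["duration_seconds"], record["both_masked"])
    if risk == "low":
        # Low risk: no need to learn who the contact was.
        return {"risk": risk, "identity": None}
    # High risk: anonymity is lowered, and identity may be requested.
    return {"risk": risk, "identity": record["identity"]}

brief = review_contact({"duration_seconds": 5, "both_masked": True,
                        "identity": "card-holder #1041"})
long_unmasked = review_contact({"duration_seconds": 600, "both_masked": False,
                                "identity": "card-holder #2210"})
print(brief["identity"], long_unmasked["identity"])  # None card-holder #2210
```

The design choice is that identity is revealed as a consequence of the evidence, not as a precondition for examining it—which is exactly the reconciliation of safety and privacy the term names.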

I have little doubt that the South Korean and Israeli models—and not the contact-tracing apps being discussed in Europe and America—offer a vision of the future that we’re heading toward. Automation is a way to allay our fears about privacy and control, but automation won’t be enough. Human beings must be kept in the loop as the mind guiding a vast system of data and analysis, the full impact and import of which we have only begun to understand.

When chess champion Garry Kasparov lost to Deep Blue, he complained that the computer had access to a database of every chess move ever played. “If I had access to that same database in real time, I could have beat Deep Blue. I want to make a whole new chess league where you can play as a human with access to that database.” A centaur: a combination of man and machine, an augmented human mind. It is a forbidding, unnerving vision.



City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
