There has been extensive coverage of Apple’s and Google’s joint initiative to develop a contact tracing tool and to collaborate on monitoring the pandemic. Such collaboration is rare, and the fact that these two companies between them control virtually the entire market for the smartphone operating systems we carry with us at all times makes it even more unusual.
What is contact tracing and how does it work? Essentially, it is a set of technologies that use the sensors in our smartphones and the internet infrastructure to identify people who may have come into contact with us, and then to collect additional information about their movements. Contact tracing is an effective way to interrupt transmission and reduce the spread of the coronavirus: it can alert our contacts to the possibility of infection, provide preventive advice or care, offer diagnosis and treatment to people already infected, and help investigate the epidemiology of a disease in a particular population.
Implementing these types of initiatives is possible without completely compromising the privacy of users, but given the nature of geolocation data, the proposal has generated concern. In practice, a large percentage of the population has already granted Apple or Google, and possibly many other companies, access to their geolocation data in order to use certain applications. But doing so for something as sensitive as health data requires a certain level of trust not only in these companies’ privacy safeguards, but also in the public institutions involved, something that for many is a real leap of faith. There are no easy answers here.
How do such systems work? In a first phase, the idea is to provide a common interface that public health agencies can integrate into their own applications. In a second, the idea is to develop a system-level contact tracing service that works on both iOS and Android devices, using the smartphone to transmit anonymous identifiers over short ranges via Bluetooth. Each device generates a daily tracing key and exchanges its last 14 days of keys on a rotating basis with other devices, which look for a match. The correlation can also take into account both a threshold of time spent in proximity and the distance between the two devices. If a match is found with a user who has notified the system of a positive test result, the contact is alerted so that they can take action: get tested and, if necessary, self-quarantine.
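The matching idea described above can be sketched in a few lines. This is a simplified illustration, not the actual Apple/Google protocol (which derives rolling identifiers cryptographically from the tracing key); the `daily_key` derivation and function names here are assumptions for the sake of the example.

```python
import hashlib
from datetime import date, timedelta

def daily_key(master_key: bytes, day: date) -> bytes:
    """Derive a 16-byte daily tracing key from a device's secret master key.
    (Hypothetical derivation; the real protocol uses HKDF.)"""
    return hashlib.sha256(master_key + day.isoformat().encode()).digest()[:16]

def last_14_days_of_keys(master_key: bytes, today: date) -> list:
    """The rolling window of keys a device would exchange with others."""
    return [daily_key(master_key, today - timedelta(days=d)) for d in range(14)]

def has_contact_match(heard_keys: set, positive_case_keys: list) -> bool:
    """Did this device hear any key later reported by a confirmed positive case?"""
    return any(k in heard_keys for k in positive_case_keys)
```

For instance, if device B recorded a few of device A's daily keys during an encounter, and A later reports a positive test, `has_contact_match` over A's published keys returns `True` on B, triggering the notification.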
All of this raises a number of questions. For example, if our devices generate a 16-byte identifier each day, and must transmit it together with those of the previous fourteen days to every device they encounter, what levels of data transmission are we talking about? Logically, some cut-off variable will have to be introduced to restrict transmissions, and the first candidate is the geolocation record. There are also potential problems such as people not registering a positive test — fearful of the stigma or of restrictions on their movement — or the reverse: people reporting a positive when they aren’t. These issues could be addressed by attaching some kind of personal data to the identifiers so that offenders could be located, but that raises civil rights issues.
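The data-volume question admits a quick back-of-envelope answer under the article's assumptions (16-byte keys, a 14-day window); the constants and function names below are illustrative, and BLE protocol overhead is ignored.

```python
KEY_BYTES = 16       # size of each daily identifier, per the scheme described
DAYS_RETAINED = 14   # rolling window of keys exchanged

def payload_bytes(keys_exchanged: int = DAYS_RETAINED) -> int:
    """Raw key material sent in a single exchange between two devices."""
    return keys_exchanged * KEY_BYTES

def daily_traffic_bytes(encounters_per_day: int) -> int:
    """Total key bytes a device would transmit in a day of encounters."""
    return encounters_per_day * payload_bytes()

payload_bytes()            # → 224 bytes per exchange
daily_traffic_bytes(500)   # → 112000 bytes (~110 KB) for 500 encounters
```

The raw numbers are tiny; the real constraint is radio and battery cost of constant Bluetooth advertising in dense environments, which is one reason to restrict when devices transmit at all.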
As Sara Harrison asked a few days ago in The Markup, “When is anonymous not really anonymous?” We know that anonymizing data is not enough to guarantee privacy, because there are numerous de-anonymization techniques — and abundant evidence of their use.
One way or another, we are about to enter a phase in which, with the pandemic as justification, it will be normal for data as personal as our geolocation, our state of health or our proximity to other people to be collected and processed. The risk, as Edward Snowden has warned, is that some governments will build systems that remain in use to surveil us long afterward. And not just governments: this kind of data can be used by companies to practice various forms of discrimination.
Alongside the risks, there are opportunities related to the future of health care: what would have happened, in a hypothetical scenario where privacy could be taken for granted, if our devices had been able to transmit our basic health parameters to a central authority? How much simpler would it have been to detect the start of the epidemic and respond properly before it spread? And what about detecting the symptoms of other kinds of health problems that, when caught late, not only cause more suffering to patients but also impose greater costs on the health system?