
Covid or Covert?

Since the government announced earlier this week that the COVID-19 Contact Tracing application is to be trialled on the Isle of Wight, there have been a lot of mixed messages and FUD (Fear, Uncertainty and Doubt) on both mainstream and social media.

Whilst we’re waiting for the app to be released so we can start analysis, we can review the white paper released by NCSC to understand how the application should work and what kind of privacy and security concerns may exist. Of course, this analysis is based solely on the documentation, which may differ from the implementation.

In this article, we’ll attempt to answer a few questions we see come up in news articles and conversations and help you to decide whether this is an application you’d be comfortable installing.

Centralised vs Decentralised  

One of the main criticisms of the app is its use of a centralised model, due to privacy concerns. This means that all data is stored on an NHS server, where it could potentially be tracked and used by the Government. In this centralised model, data is sent to the NHS-owned server, where it is processed. At-risk users are then notified if someone they have been in physical proximity to has reported symptoms. This differs from other contact tracing apps, which use a decentralised approach where a database of everyone who has reported symptoms of COVID-19 is downloaded to all devices.

The reason stated by NCSC for the centralised approach is that they do not currently have a way to protect against people maliciously reporting symptoms to the server without requiring users to authenticate to prove they are infected. We can’t see how a centralised model fixes this; however, one advantage of a centralised model is that users are only informed when they have been in contact with someone who has reported symptoms of COVID-19. A decentralised model may in fact make it easier for users to identify who has reported symptoms, as every device has access to all infected user IDs.
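To make the distinction concrete, here’s a rough Python sketch of where the matching happens in each model. This is purely our own illustration of the two architectures, not code from the app, and all the function and variable names are made up.

```python
# Hypothetical illustration of the two contact-matching models.
# Decentralised: IDs of users who report symptoms are published and
# every handset checks its own contact log against them.
# Centralised: the reporting user's contact log is uploaded and the
# server decides who to notify.

from typing import Iterable, List, Set


def decentralised_check(my_contact_log: Set[str],
                        published_infected_ids: Iterable[str]) -> bool:
    """Runs on every handset: each phone downloads all infected IDs."""
    return any(infected_id in my_contact_log
               for infected_id in published_infected_ids)


def centralised_notify(uploaded_contact_log: List[str]) -> List[str]:
    """Runs on the NHS server: only the server learns who was near the reporter."""
    # In reality the server would apply its risk model to each contact;
    # here we simply notify every ID in the uploaded log.
    return uploaded_contact_log
```

Neither snippet addresses the authentication problem NCSC mentions; it only shows which party gets to see what.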

Will the app even work? Bluetooth can’t be broadcast by background services.

We’ve seen this reported in a number of places. It is certainly not true for Android: Android can both scan for and broadcast Bluetooth packets in the background, so, for Android at least, there is no technical reason why the application won’t work.

iOS behaves differently. iOS apps are able to passively scan for Bluetooth Low Energy devices if they know the service UUID, but more information on the implementation will need to be made available to understand how this works on iOS.
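As an aside, scanning for advertisements filtered on a known service UUID is straightforward with off-the-shelf BLE tooling. The sketch below uses the Python bleak library on a laptop purely to illustrate that kind of filtering; the UUID is a placeholder we made up, not the app’s real service UUID, and this tells us nothing about iOS background behaviour.

```python
import asyncio
from bleak import BleakScanner

# Placeholder UUID for illustration only; not the real service UUID.
TRACING_SERVICE_UUID = "12345678-1234-5678-1234-56789abcdef0"


def on_advertisement(device, advertisement_data):
    # Called for every advertisement carrying the service UUID above.
    print(device.address, advertisement_data.service_data)


async def main():
    scanner = BleakScanner(detection_callback=on_advertisement,
                           service_uuids=[TRACING_SERVICE_UUID])
    await scanner.start()
    await asyncio.sleep(30)  # listen for 30 seconds
    await scanner.stop()


asyncio.run(main())
```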

What user data is collected?

The app asks users to enter their partial postcode on registration. As far as we can see, no other personal data is collected.

Are locations tracked?

No. The app uses Bluetooth Low Energy (BLE) to detect proximity to other devices. If someone reports infection, a notification is sent to everyone who has been identified as at risk. The notification does not state where the contact happened or who the infection came from.

What does the app send over Bluetooth?

The app creates an ID at registration. It also creates a private and public key pair. Every 24 hours, a new key (derived from the device’s private key and the server’s public key) is used to encrypt the date, ID and country code. The app broadcasts this encrypted message along with the device’s public key, integrity checks, and transmission power and time.
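The white paper doesn’t spell out the exact primitives, so the sketch below is our own guess at how that construction might look, assuming ECDH over P-256 for the key agreement, HKDF for key derivation and AES-GCM for the encryption and integrity check. The real app may well differ.

```python
import os
from datetime import date

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The server's long-term public key would normally ship with the app;
# we generate one locally just so the sketch runs end to end.
server_private = ec.generate_private_key(ec.SECP256R1())
server_public = server_private.public_key()

# The device's key pair, created at registration (per the description above).
device_private = ec.generate_private_key(ec.SECP256R1())
device_public = device_private.public_key()

# Derive the daily encryption key from the device private key and the
# server public key. ECDH + HKDF is our assumption; the paper only says
# a key is "made from" the two.
shared_secret = device_private.exchange(ec.ECDH(), server_public)
daily_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                 info=b"daily-broadcast-key").derive(shared_secret)

# Encrypt the date, user ID and country code. AES-GCM's authentication
# tag stands in for the "integrity checks" mentioned in the paper.
sonar_id = os.urandom(16)  # placeholder registration ID
plaintext = date.today().isoformat().encode() + sonar_id + b"GB"
nonce = os.urandom(12)
ciphertext = AESGCM(daily_key).encrypt(nonce, plaintext, None)

# What goes over the air: the device's public key (in plaintext) so the
# server can later re-derive the key, plus the encrypted message.
broadcast_value = (
    device_public.public_bytes(serialization.Encoding.X962,
                               serialization.PublicFormat.CompressedPoint)
    + nonce
    + ciphertext
)
```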

What data is sent to the server?

When a user reports symptoms, the app uploads a log of all payloads (i.e. all the encrypted IDs of other users that it detected) gathered in the last 28 days. This log is encrypted using a key generated during registration. Once received, the server decrypts the log with its copy of that key, reads each payload and extracts the device public key from it. Using the device public key and the server’s private key, it then decrypts the user’s ID and does some sense checking of the record. The records are analysed and users that are at high risk are sent a notification.
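Continuing the hypothetical sketch from the previous section (same assumed primitives, same made-up payload layout), the server-side recovery of a single logged broadcast value might look something like this:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def decrypt_broadcast_value(server_private, payload: bytes) -> bytes:
    """Server-side half of the earlier sketch: recover the plaintext
    (date, user ID, country code) from one logged broadcast value."""
    # Split the payload: 33-byte compressed P-256 point, 12-byte nonce,
    # then the ciphertext (offsets match the client sketch above).
    device_public = ec.EllipticCurvePublicKey.from_encoded_point(
        ec.SECP256R1(), payload[:33])
    nonce, ciphertext = payload[33:45], payload[45:]

    # Same ECDH + HKDF derivation as the device, from the other side:
    # server private key + device public key yields the same shared secret.
    shared_secret = server_private.exchange(ec.ECDH(), device_public)
    daily_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                     info=b"daily-broadcast-key").derive(shared_secret)

    # AES-GCM raises InvalidTag if the integrity check fails.
    return AESGCM(daily_key).decrypt(nonce, ciphertext, None)


# Continuing the client-side sketch:
# print(decrypt_broadcast_value(server_private, broadcast_value))
```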

Are there security/privacy issues?

Yes, just reading through the report highlights a few things, but nothing we would consider high risk. In terms of privacy, the app doesn’t track location, but it would be possible to see who has been in proximity to whom (obviously!). Users can be tracked by individuals for only 24 hours before the public key (which is transmitted in plaintext) is rotated.

The big concern is how this data is to be used in future. Yes, the data is pseudonymised and therefore no personally identifiable information is stored, but we wonder what could happen if this dataset were combined with other datasets. Currently, it seems unlikely the information collected would be used this way, as governments have easier and more accurate ways to track people where necessary.