‘Personalised analytics’ doesn’t come without risks. “Profiling, analytics and big data all allow the generation of inferences, which affect individuals even if they were not the source of the original data,” says Wilton. “Inference data is not currently well regulated – if at all – and yet it can often have at least as much privacy impact as the data from which it was derived.”
For the upcoming era of wearable smart devices in medical care, this could be crucial – the immensely personal data doctors collect as they track pacemakers and monitor the vital signs (and even GPS position) of hospitalised patients would be vulnerable.
Although wearables are categorised as ‘personal entertainment device monitors’, as they become more prevalent in healthcare their data could be used well beyond the purpose it was originally collected for. “These devices could be used in a corporate wellness program, and leveraging the personal data may allow the company to impose sanctions on those employees that are not active enough,” says Stroud.
Data is a jigsaw, and the more pieces floating around, the more likely it is that so-called anonymous data can be pieced together. What’s made that easier is the emergence of public datasets.
“Right now medical data has specific requirements for anonymised data, but with the proliferation of public datasets it could be difficult to prove that you can’t put that data back together,” says Stroud. So if you can use the customer data without the personal information, you should. “Also developing are micro-retention policies, which are not graduated by server or dataset, but field-level retention strategies,” he adds.
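The jigsaw problem Stroud describes is essentially a linkage attack: a dataset stripped of names can still be re-identified by joining it with a public dataset on shared quasi-identifiers such as postcode, date of birth and sex. A minimal sketch of the idea, using entirely invented records and field names:

```python
# Toy sketch of a linkage ("jigsaw") attack: re-identifying an
# "anonymised" dataset by joining it with a public one on shared
# quasi-identifiers. All records and field names are invented.

# Health records with direct identifiers stripped, but
# quasi-identifiers (postcode, date of birth, sex) retained.
health = [
    {"postcode": "SW1A 1AA", "dob": "1961-07-13", "sex": "F", "diagnosis": "hypertension"},
    {"postcode": "M1 2AB",   "dob": "1985-02-02", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g. an electoral roll) carrying names
# alongside the same quasi-identifiers.
public = [
    {"name": "A. Example", "postcode": "SW1A 1AA", "dob": "1961-07-13", "sex": "F"},
    {"name": "B. Sample",  "postcode": "LS1 4AP",  "dob": "1990-11-30", "sex": "M"},
]

QUASI = ("postcode", "dob", "sex")

def reidentify(health_rows, public_rows):
    """Link rows whose quasi-identifiers match in both datasets."""
    # Index the public data by its quasi-identifier tuple.
    index = {tuple(r[k] for k in QUASI): r["name"] for r in public_rows}
    # Any health row with a matching tuple is re-identified.
    return [
        {"name": index[key], **row}
        for row in health_rows
        if (key := tuple(row[k] for k in QUASI)) in index
    ]

matches = reidentify(health, public)
print(matches)  # the first health record is now linked back to a name
```

Here a single overlapping record is enough to attach a name to a diagnosis, which is why the more public datasets exist, the harder it becomes to prove the pieces cannot be put back together.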
The answer is simple: minimisation, metadata, context and consent all need to be reformed.