UK's health data guardian sets a firm line for app development using patient data

The UK’s health data watchdog, the National Data Guardian (NDG), has published correspondence between her office and the national privacy watchdog, the Information Commissioner’s Office (ICO), which informed the ICO’s 2017 finding that a data-sharing arrangement between an NHS Trust and Google-owned DeepMind broke the law. The exchange was published following a Freedom of Information request by TechCrunch. In fall 2015, the Royal Free NHS Trust and DeepMind signed a data-sharing agreement which saw the medical records of 1.6 million people quietly passed to the AI company without patients being asked for their consent. The scope of the data-sharing arrangement — ostensibly to develop a clinical task management app — was only brought to light by investigative journalism. That then triggered regulatory scrutiny — …

The UK's National Health Service is launching an AI lab

The UK government has announced it’s rerouting £250M (~$300M) in public funds for the country’s National Health Service (NHS) to set up an artificial intelligence lab that will work to expand the use of AI technologies within the service. The Lab, which will sit within a new NHS unit tasked with overseeing the digitisation of the health and care system (aka NHSX), will develop AI systems for day-to-day tasks and inspect algorithms already used by the NHS to increase the standards of …

Could the #NHS in the UK self-finance a large part of #HealthTechnology innovation if it had the courage to commercialize its data trove? Hint: YES. https://t.co/9PKTEKLSdc — Mark Tluszcz (@marktluszcz) August 8, 2019

Commenting on the launch of NHSX in a statement, …

Researchers spotlight the lie of ‘anonymous’ data

Researchers from two universities in Europe have published a method they say is able to correctly re-identify 99.98% of individuals in anonymized data sets with just 15 demographic attributes. Their model suggests complex data sets of personal information cannot be protected against re-identification by current methods of “anonymizing” data — such as releasing samples (subsets) of the information. Indeed, the suggestion is that no “anonymized” and released big data set can be considered safe from re-identification — not without strict access controls. “Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR [Europe’s General Data Protection Regulation] and seriously challenge the technical and legal adequacy of the de-identification …
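To make the uniqueness intuition concrete, here is a minimal Python sketch. It is not the researchers’ actual model (their approach is reported to be a generative, copula-based estimator of population uniqueness); it only shows how quickly a handful of demographic attributes pins individual records down. All attribute names, value ranges and the synthetic population below are invented for illustration.

    import random
    from collections import Counter

    random.seed(0)

    # Hypothetical demographic attributes; names and value ranges are invented.
    ATTRIBUTES = {
        "zip3":       lambda: random.randint(100, 999),   # coarse postcode prefix
        "birth_year": lambda: random.randint(1930, 2005),
        "gender":     lambda: random.choice(["M", "F"]),
        "children":   lambda: random.randint(0, 5),
        "car_owner":  lambda: random.choice([True, False]),
        "marital":    lambda: random.choice(["single", "married", "divorced", "widowed"]),
    }

    # A toy "anonymized" release: direct identifiers removed, demographics kept.
    population = [{name: draw() for name, draw in ATTRIBUTES.items()}
                  for _ in range(100_000)]

    # Fraction of records uniquely identified as more attributes are combined.
    names = list(ATTRIBUTES)
    for k in range(1, len(names) + 1):
        keys = names[:k]
        combos = Counter(tuple(rec[key] for key in keys) for rec in population)
        unique = sum(1 for n in combos.values() if n == 1)
        print(f"{k} attribute(s) {keys}: {unique / len(population):.1%} unique")

Even in this crude simulation the unique fraction climbs steeply with each added attribute. The paper’s contribution is a model that, given only a small sample, estimates how likely a record that is unique in the sample is to also be unique in the full population, which is why releasing subsets of the data offers little protection on its own.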