EU Data Protection Law May End The Unknowable Algorithm
- by 7wData
Slated to take effect as law across the EU in 2018, the General Data Protection Regulation could require companies to explain their algorithms to avoid unlawful discrimination.
Europe's data protection rules have established a "right to be forgotten," to the consternation of technology companies like Google that have built businesses on computational memory. The rules also outline a "right to explanation," by which people can seek clarification about algorithmic decisions that affect them.
In a paper published last month, Bryce Goodman, Clarendon Scholar at the Oxford Internet Institute, and Seth Flaxman, a post-doctoral researcher in Oxford's Department of Statistics, describe the challenges this right poses to businesses and the opportunities it presents to machine learning researchers to design algorithms that are open to evaluation and scrutiny.
The rationale for requiring companies to explain their algorithms is to avoid unlawful discrimination. In his 2015 book The Black Box Society, University of Maryland law professor Frank Pasquale describes the problem with opaque programming.
"Credit raters, search engines, major banks, and the TSA take in data about us and convert it into scores, rankings, risk calculations, and watch lists with vitally important consequences," Pasquale wrote. "But the proprietary algorithms by which they do so are immune from scrutiny."
Several academic studies have already explored the potential for algorithmic discrimination.
A 2015 study by researchers at Carnegie Mellon University, for example, found that Google showed ads for high-income jobs to men more frequently than to women.
That's not to say Google did so intentionally. But as other researchers have suggested, algorithmic discrimination can be an unintended consequence of reliance on inaccurate or biased data.
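To see how such unintended bias can arise, consider a minimal sketch with entirely hypothetical data: a model trained on historical outcomes reproduces a past disparity even though the protected attribute itself is never used as a feature, because a proxy variable (here, a made-up zip code) carries the bias forward.

```python
from collections import defaultdict

# Hypothetical hiring records: (zip_code, hired). The zip code acts as a
# proxy for a protected class; past decisions favored zip "A".
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Train": estimate P(hired | zip_code) from the biased history.
counts = defaultdict(lambda: [0, 0])  # zip_code -> [hired, total]
for zip_code, hired in history:
    counts[zip_code][0] += hired
    counts[zip_code][1] += 1

def predict(zip_code):
    """Predicted hiring probability, learned from the biased history."""
    hired, total = counts[zip_code]
    return hired / total

# The model simply inherits the historical disparity.
print(predict("A"))  # 0.8
print(predict("B"))  # 0.3
```

No one wrote a discriminatory rule here; the disparity was learned from the data. This is the kind of opaque, unintended effect the "right to explanation" is meant to surface.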
Google did not immediately respond to a request to discuss whether it changed its advertising algorithm in response to the research findings.
A 2014 paper from the Data & Society Research Institute echoes the finding that inappropriate algorithmic bias tends to be inadvertent. It states, "Although most companies do not intentionally engage in discriminatory hiring practices (particularly on the basis of protected classes), their reliance on automated systems, algorithms, and existing networks systematically benefits some at the expense of others, often without employers even recognizing the biases of such mechanisms."
Between Europe's General Data Protection Regulation (GDPR), which is scheduled to take effect in 2018, and existing regulations, companies would do well to pay more attention to the way they implement algorithms and machine learning.
But adhering to the rules won't necessarily be easy, according to Goodman and Flaxman.