Slated to take effect as law across the EU in 2018, the General Data Protection Regulation could require companies to explain their algorithms to avoid unlawful discrimination.
Europe’s data protection rules have established a “right to be forgotten,” to the consternation of technology companies like Google that have built businesses on computational memory. The rules also outline a “right to explanation,” by which people can seek clarification about algorithmic decisions that affect them.
In a paper published last month, Bryce Goodman, a Clarendon Scholar at the Oxford Internet Institute, and Seth Flaxman, a post-doctoral researcher in Oxford’s Department of Statistics, describe the challenges this right poses to businesses and the opportunities it presents to machine learning researchers to design algorithms that are open to evaluation and scrutiny.
The rationale for requiring companies to explain their algorithms is to avoid unlawful discrimination. In his 2015 book The Black Box Society, University of Maryland law professor Frank Pasquale describes the problem with opaque programming.
“Credit raters, search engines, major banks, and the TSA take in data about us and convert it into scores, rankings, risk calculations, and watch lists with vitally important consequences,” Pasquale wrote. “But the proprietary algorithms by which they do so are immune from scrutiny.”
Several academic studies have already explored the potential for algorithmic discrimination.
A 2015 study by researchers at Carnegie Mellon University, for example, found that Google showed ads for high-income jobs to men more frequently than to women.
That’s not to say Google did so intentionally. But as other researchers have suggested, algorithmic discrimination can be an unintended consequence of reliance on inaccurate or biased data.
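The mechanism behind such unintended discrimination can be seen in a small sketch. The example below uses entirely hypothetical data (it is not drawn from the studies cited here): a protected attribute is excluded from the inputs, but a correlated proxy feature remains, so a system that simply learns historical outcome rates reproduces the historical disparity.

```python
# Illustrative sketch with hypothetical data: a model trained on
# historically biased outcomes reproduces the bias even though the
# protected attribute itself is never used as a feature.
import random

random.seed(0)

# Synthetic historical records: (zip_code, hired).
# Zip code acts as a proxy for a protected class; in this invented
# history, group "A" was hired far more often than group "B".
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    zip_code = "90210" if group == "A" else "10001"
    hired = random.random() < (0.7 if group == "A" else 0.3)
    history.append((zip_code, hired))

def hire_rate(zip_code):
    """A naive 'model': predict hiring at the rate seen per zip code."""
    outcomes = [hired for z, hired in history if z == zip_code]
    return sum(outcomes) / len(outcomes)

# The learned rates differ sharply between the two proxy zip codes,
# so the disparity survives the removal of the protected attribute.
print(round(hire_rate("90210"), 2))  # close to 0.7
print(round(hire_rate("10001"), 2))  # close to 0.3
```

This is why simply deleting a protected attribute from a dataset does not guarantee non-discriminatory output, a point both Goodman and Flaxman and the Data & Society researchers emphasize.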
Google did not immediately respond to a request to discuss whether it changed its advertising algorithm in response to the research findings.
A 2014 paper from the Data & Society Research Institute echoes the finding that inappropriate algorithmic bias tends to be inadvertent. It states, “Although most companies do not intentionally engage in discriminatory hiring practices (particularly on the basis of protected classes), their reliance on automated systems, algorithms, and existing networks systematically benefits some at the expense of others, often without employers even recognizing the biases of such mechanisms.”
Between Europe’s General Data Protection Regulation (GDPR), which is scheduled to take effect in 2018, and existing regulations, companies would do well to pay more attention to the way they implement algorithms and machine learning.
But adhering to the rules won’t necessarily be easy, according to Goodman and Flaxman.