There’s good news from the scholarly community working on the assessment of fairness in algorithms. Computer scientists and statisticians are developing a host of new fairness measures aimed at giving companies, policymakers, and advocates new tools to assess fairness in different contexts.
The essential insight of the movement is that the field needs many different measures of fairness to capture the variety of normative concepts that are used in different business and legal contexts. Alexandra Chouldechova, Assistant Professor of Statistics and Public Policy at Heinz College, says “There is no single notion of fairness that will work for every decision context or for every goal.”
To find the right measure for the job at hand, she advises, “Start with the context in which you’re going to apply [your decision], and work backwards from there.”
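To make the idea of context-dependent measures concrete, here is a minimal sketch in Python. The data, group labels, and function names are all illustrative assumptions, not drawn from any real system; the point is only that two widely discussed measures (equal selection rates across groups versus equal false-positive rates) can disagree on the very same set of decisions, which is why the decision context must drive the choice of measure.

```python
# Illustrative sketch: two fairness measures applied to the same hypothetical
# decisions can give different verdicts. All data here is made up.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

def false_positive_gap(decisions, outcomes, groups):
    """Absolute difference in false-positive rates (decision=1 when the
    true outcome is 0) between two groups."""
    fpr = {}
    for g in set(groups):
        negatives = [(d, o) for d, o, grp in zip(decisions, outcomes, groups)
                     if grp == g and o == 0]
        fpr[g] = sum(d for d, _ in negatives) / len(negatives)
    a, b = sorted(fpr)
    return abs(fpr[a] - fpr[b])

# Hypothetical decisions for two groups, A and B.
groups    = ["A"] * 4 + ["B"] * 4
decisions = [1, 1, 0, 0,  1, 1, 0, 0]   # both groups selected at the same rate
outcomes  = [1, 1, 0, 0,  1, 0, 1, 0]   # but the errors fall unevenly

print(demographic_parity_gap(decisions, groups))        # 0.0 -- looks fair
print(false_positive_gap(decisions, outcomes, groups))  # 0.5 -- looks unfair
```

Here the selection rates are identical (a demographic-parity gap of zero), yet group B bears all of the false positives, so a measure based on error rates flags the same decisions as unfair. Neither measure is "the" right one; which gap matters depends on the context in which the decisions are applied.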
This issue came to a head in the controversy surrounding the COMPAS score. This s ...
November 04, 2016 by Diane
Recent technological developments have led to the rise of “big data” analytics, including machine learning and artificial intelligence. These new technologies will without question provide ample opportunity for growth for consumers, businesses, and the global economy as a whole. But as this technological evolution continues, it is not without risk.
Over the last few years, algorithmic fairness has become an issue of serious debate. Most recently, Cathy O’Neil released “Weapons of Math Destruction” and Frank Pasquale published “The Black Box Society,” both of which examine discrimination and the role algorithms can play in exacerbating it. SIIA responded to these works in a blog post arguing that tech leaders must act quickly to ensure algorithmic fairness.
Going further, on Friday, November 04, 2016, SIIA released an issue brief on the topic ...