November 16, 2018 by Mark
On November 13, I participated in the Federal Trade Commission’s workshop on Ethics and Common Principles in Algorithms, Artificial Intelligence, and Predictive Analytics along with James Foulds, an Assistant Professor at the University of Maryland, Baltimore County, Rumman Chowdhury, the Global Lead for Responsible AI at Accenture Applied Intelligence, Martin Wattenberg, a Senior Research Scientist at Google, Erika Brown Lee, Senior VP & Assistant General Counsel at MasterCard, and Naomi Lefkovitz, a Senior Privacy Policy Advisor at the National Institute of Standards and Technology. The following commentary is based on my remarks and the discussion at the panel.
In 2017, SIIA published its Ethical Principles for Artificial Intelligence and Data Analytics as a guide for companies as they develop and implement advanced data analytic systems. There are many other such ethical principles, including the famous Belmont principles of respect for persons, beneficence, and justice, which guide human subject experimentation and are the basis for IRB reviews of federally funded research; principles developed by FAT/ML, a group of computer scientists focused on ethical issues in machine learning; and the recently released revised code of professional conduct from the Association for Computing Machinery.
In addition, Access Now just released its report on Human Rights in the Age of Artificial Intelligence, joining a valuable report from the Berkman Klein Center, Artificial Intelligence & Human Rights: Opportunities & Risks, released in September.
SIIA’s principles relate to rights, justice, welfare, and virtue. They encourage companies to:
- Engage in data practices that respect internationally recognized principles of human rights
- Engage in data practices that encourage the practice of virtues that contribute to human flourishing
- Aim for an equitable distribution of the benefits of data practices and avoid data practices that disproportionately disadvantage vulnerable groups
- Aim to create the greatest possible benefit from the use of data and advanced modeling techniques
What is the status of these principles? There’s a kind of sliding scale. They could be principles to guide individual conduct, like ACM’s code of professional ethics. They could be guides for companies. They could be principles of a self-regulatory organization like the marketing guidelines of the Direct Marketing Association. They could be soft law, like the OECD Fair Information Practice Principles that became the basis for privacy laws around the world. Finally, they could be proposed as binding legal principles.
SIIA intends these principles to be guides to company conduct. They are more than just ethical principles for individuals to follow, but they are not ready for use in a self-regulatory organization, or soft or hard law.
The key reason is that all the important ethical questions arise at the level of the application of these principles in particular contexts. The answers that work for autonomous vehicles are not the same as the rules for autonomous weapons. And, in specific areas such as autonomous weapons, reasonable companies can apply roughly the same general ethical principles and come up with different answers.
A related question is when to apply the principles. Companies have a wide variety of policies and procedures involving data analysis and use, and not all of them have an ethical dimension. But they become ethical in character when they have, or create a substantial risk of having, a large impact on one of these values (rights, justice, welfare, or virtue), either positively or negatively. A data practice could, for instance, threaten serious violations of fundamental human rights such as the right to life or privacy; or it could have the potential to significantly improve the fulfillment of human rights such as freedom of speech or the right to live in a safe and secure community. Ethical assessment is important whenever such consequences appear likely to be considerable.
Rather than start from scratch and develop their own conception of fundamental human rights, companies would do better to base their data practices on respect for internationally recognized principles of human rights. This framework specifies what it means for organizations to respect the equal dignity and autonomy of individuals. These rights include the rights to life, privacy, religion, property, freedom of thought, and due process before the law.
As a matter of justice, individuals have a right to a fair share of the benefits and burdens of social life.
Companies should therefore aim for an equitable distribution of the benefits of data practices and avoid data practices that disproportionately disadvantage vulnerable groups. The benefits of advanced analytical services should be available to all and not restricted based on arbitrary and irrelevant characteristics such as race, ethnicity, gender, or religion. Organizations share responsibility for how the models they develop are used and by whom and how the benefits of their new analytical services are distributed.
Companies have a duty to promote the general welfare. They should therefore aim to create the greatest possible benefit from the use of data and advanced modeling techniques. They should increase human welfare through data-driven improvements in the provision of public services and low-cost, high-quality goods and services.
Companies should engage in data practices that encourage the practice of virtues that contribute to human flourishing. Data and advanced modeling techniques should be designed and implemented to enable people, individually and collectively, to further their efforts to become people capable of living genuinely good lives in their communities. Data practices should allow affected people to develop and maintain moral virtues such as honesty, courage, moderation, self-control, humility, empathy, civility, care, and patience.
These virtue words have an old-fashioned feel. But the current debates over addictive and manipulative social media design raise questions of virtue ethics: whether our devices take advantage of people’s personality weaknesses or encourage harmful habits of thought, feeling, and behavior.
It is possible to think of these four principles – rights, justice, welfare, and virtue – as alternatives and to encourage companies to pick one. But the better way is to do it all! Organizations need not choose one of these principles to the exclusion of the others.
They should use them jointly as general guides to the development of ethical data practices.
Of course, the principles by themselves don’t get companies very far. To be useful they need to be supplemented with specific principles appropriate to the context or domain of use.
As an example, consider the possibility that an algorithm might be discriminatory or biased. How should a company deal with that possibility?
The answer is to conduct a disparate impact analysis. These analyses are a key part of assessing compliance with statutory and constitutional prohibitions on discrimination. Companies should also use them to assess AI decision-making algorithms as designed and as they evolve and adjust themselves in use.
This proceeds in three stages. First, the company should assess the algorithm for evidence of a disproportionate adverse impact on a chosen group. If there is such evidence, the company needs to think carefully about the purpose the algorithm serves, ensuring that the purpose is important and legitimate and that the use of the algorithm in fact advances it. Finally, the company should ask whether there are alternatives that achieve the legitimate objective with less of a disparate impact. If there are, it should use one of them. If there are no such alternatives, the algorithm has passed the test.
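To make the first stage concrete, here is a minimal sketch in Python of the kind of screening a company might run: comparing favorable-outcome rates across groups and flagging ratios below the four-fifths threshold often cited in U.S. employment-discrimination guidance. The data, group labels, and threshold here are illustrative assumptions, not part of SIIA’s principles or any particular legal standard.

```python
# Minimal sketch of the first stage of a disparate impact assessment:
# compare favorable-outcome rates across groups and flag ratios below
# the "four-fifths" threshold. Data and labels are illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, favorable) pairs, favorable is True/False."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    A ratio below 0.8 is a common flag for possible disparate impact."""
    rates = selection_rates(decisions)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Illustrative use with made-up decisions.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
for group, ratio in adverse_impact_ratios(decisions, reference_group="A").items():
    if ratio < 0.8:
        print(f"Possible disparate impact against group {group}: ratio {ratio:.2f}")
```

The later stages, assessing the legitimacy of the purpose and searching for less discriminatory alternatives, require judgment that cannot be reduced to a calculation like this.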
A disparate impact assessment can reveal bias or discrimination against a particular group. But which groups should be assessed? Legally protected classes include race, gender, religion, and ethnicity. But companies should consider expanding the assessment to vulnerable groups that are also at risk but not explicitly protected by law.
A disparate impact assessment can reveal bias or discrimination in the use of an algorithm for a particular purpose. But which purposes should be assessed? The law currently covers eligibility decisions in employment, housing, insurance, and credit. But companies should consider expanding this to include other consequential decisions that affect a person’s life chances.
It is at the level of these concrete assessments of disparate impact in particular circumstances where the real ethical issues arise, not at the level of general principle.
For instance, commentators have noted that Netflix’s movie recommendations differ by race. Netflix is not using a racial characteristic to do that; it is attempting to accurately reflect people’s tastes, and it turns out that people’s movie tastes differ by race. Should the company fix that? There are considerations on both sides. If it moves away from accuracy toward group equality, people will be exposed to movies they might not otherwise have seen, thereby broadening and diversifying their experience. On the other hand, presenting black entertainment to black viewers is not perceived as offensive. As filmmaker Tobi Aremu said about Netflix, “…if something is black, I take no offence in being catered to. I am black, give me black entertainment.” And the cost of “fixing” the algorithm would be a mismatch between the movie recommendations and people’s movie preferences.
What is offensive is deception based on race. Netflix also personalizes movie poster images according to its best guess of what people want to see. This leads it to mislead black audiences with images that highlight minor black characters instead of the leading white cast.
A far more serious ethical issue arises in the case of recidivism scores, where aiming purely for accuracy produces racial differences. In particular, one study of one such score found that predictions of reoffending were wrong for black defendants roughly twice as often as for white defendants. This resulted from using an algorithm that aimed for accuracy in predictions when rates of recidivism differ by race.
There are two important reasons to try to fix this. First, racial bias is a major and recognized problem in the criminal justice system. Second, in the criminal justice context, protecting the innocent takes precedence over finding the guilty, but by using an algorithm with unequal group error rates, that principle is implemented much more effectively for whites than for blacks.
But if the recidivism score is adjusted to equalize group error rates, predictive parity suffers; that is, the accuracy of recidivism predictions differs for blacks and whites. This raises all the philosophical, legal, and ethical issues of equal protection and affirmative action. And there is a measurable cost in terms of a decline in public safety.
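To illustrate the tension, here is a small sketch, with entirely made-up data, of the two statistics at issue: each group’s false positive rate (how often people who do not reoffend are flagged as high risk) and each group’s positive predictive value (how often a high-risk flag turns out to be correct, the quantity behind predictive parity). When base rates of reoffending differ across groups, the two generally cannot be equalized at the same time.

```python
# Illustrative sketch, with made-up data, of two fairness statistics that
# generally cannot both be equalized when base rates differ across groups:
# the false positive rate and the positive predictive value (predictive parity).

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) booleans.
    Share of people who did not reoffend but were flagged high risk."""
    preds_for_non_reoffenders = [pred for pred, actual in records if not actual]
    return sum(preds_for_non_reoffenders) / len(preds_for_non_reoffenders)

def positive_predictive_value(records):
    """Share of people flagged high risk who actually reoffended."""
    outcomes_for_flagged = [actual for pred, actual in records if pred]
    return sum(outcomes_for_flagged) / len(outcomes_for_flagged)

# Hypothetical groups with different underlying rates of reoffending.
group_a = [(True, True), (True, False), (True, True), (False, False), (False, False)]
group_b = [(True, True), (True, False), (False, False), (False, False), (False, False)]

for name, records in (("Group A", group_a), ("Group B", group_b)):
    print(name,
          "false positive rate:", round(false_positive_rate(records), 2),
          "positive predictive value:", round(positive_predictive_value(records), 2))
```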
Abstract principles cannot resolve concrete issues such as these.
Another issue is whether the principles that should apply to artificial intelligence are new, or the same as the principles that applied to earlier methods of statistical analysis. In many ways, there are more continuities than discontinuities. The ethical issues, and the methods of resolving them, have arisen before in the context of old-fashioned techniques such as regression analysis. Regression-based credit scores, for instance, raise the same issues that must be faced in machine learning applications.
Issues of responsibility, however, are different when systems become more autonomous. Who is responsible must be settled, legally and ethically, if self-driving cars are going to be introduced widely. Autonomous weapons need to be designed so that, if something goes wrong and a killer robot runs amok, someone can be held accountable for what happened.
Finally, some ethical issues raised about AI are not important or genuine issues at all. Any issue that depends on an AI system achieving consciousness and fully independent agency is more science fiction than science. There are enough important and pressing ethical issues in AI that we need not spend valuable policy time on the speculative ones.

Mark MacCarthy, Senior Vice President, Public Policy at SIIA, directs SIIA’s public policy initiatives in the areas of intellectual property enforcement, information privacy, cybersecurity, cloud computing and the promotion of educational technology. Follow Mark on Twitter at @Mark_MacCarthy.