Google’s New AI Principles Are a Step in the Right Direction


Last week, Google released a blog post setting out seven ethical principles to guide its work in artificial intelligence. The principles are:

  • Be socially beneficial. This is essentially a social welfare test under which Google will move ahead with an AI project only when “the overall likely benefits substantially exceed the foreseeable risks and downsides.”  Moreover, in certain cases, Google will make their technologies “available on a non-commercial basis.”
  • Avoid creating or reinforcing unfair bias.  This commits Google to conduct disparate impact analyses and to “seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
  • Be built and tested for safety. Google will use “strong safety and security practices to avoid unintended results that create risks of harm.”  This includes, in appropriate cases, continuing to “monitor their operation after deployment.”
  • Be accountable to people. Google will provide “opportunities for feedback, relevant explanations, and appeal.”  It will subject its AI technologies “to appropriate human direction and control.”
  • Incorporate privacy design principles. Google will embrace good privacy practices, including to “give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.”
  • Uphold high standards of scientific excellence. Google will embrace the traditional scientific standards of “open inquiry, intellectual rigor, integrity, and collaboration.” It “will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.”
  • Be made available for uses that accord with these principles.  Google will seek to limit harmful applications of its technologies, in particular by assessing whether the primary purpose and likely use of an AI technology is related to or adaptable to a harmful use.

Much of the public discussion of the announcement has focused on Google’s application of these principles to particular cases, especially its decision not to pursue projects related to “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

But the framework is the important thing.  Google has proposed ethical guidelines that largely mirror the recommendations in SIIA’s Ethical Principles, released in November 2017.  They focus the company on the important task of developing and implementing its technology in a way that comports with widespread ethical norms.

Is this enough?  Of course not!  The proof of the pudding is in the eating, and much will depend on how Google implements these thoughtful principles.  It must also follow through on its pledge of public accountability in ways that preserve its own integrity and independence while reassuring the public and policymakers that it is a responsible steward of this powerful new technology.  But endorsing this ethical framework is a clear step forward.

Mark MacCarthy, Senior Vice President, Public Policy at SIIA, directs SIIA’s public policy initiatives in the areas of intellectual property enforcement, information privacy, cybersecurity, cloud computing and the promotion of educational technology. Follow Mark on Twitter at @Mark_MacCarthy.