This well-attended event, entitled “Artificial Intelligence: What Can Be Learned from Other Countries' Approaches?”, can be viewed on YouTube here. Professor Susan Aaronson provided a preview of her work on the topic, which will be discussed in a paper entitled “Data is a Development Issue.” Some takeaways: there are no broadly generalizable studies on the impact of AI on job creation (in fact, the available data can be used to argue that AI contributes to either job loss or job gain); cybersecurity will include an AI component; bias in AI is possible, just as it is in non-AI contexts, but it can be addressed; and the shortage of people capable of an inter-disciplinary approach to using AI is both a real problem and an opportunity. McKinsey estimates (one of several estimates of the economic impact of AI) that AI could deliver up to 16% higher global GDP by 2030. Understanding and taking advantage of this technology in a “human-centric” way will therefore be crucial to building popular acceptance if countries and companies are to take full advantage of possible AI applications.
Panelists
Japanese Embassy Economic Counselor Masayuki Matsui provided valuable information on the Japanese approach to AI development, especially in the international space. Japan is hosting the G20 Ministerial Meeting on Trade and the Digital Economy on June 8-9, 2019, which will include discussions on AI. The Government Accountability Office’s (GAO) Chief Scientist and Managing Director, Science, Technology Assessment, and Analytics, Timothy M. Persons spoke about the GAO’s work on artificial intelligence, as well as Administration policy in “Artificial Intelligence for the American People.” The GAO has focused on AI’s impact on cybersecurity, automated vehicles, criminal justice, and financial services. Canadian Embassy Senior Policy Advisor Brad Wood focused on the AI ecosystem in Canada, especially efforts to foster research excellence, promote AI across sectors, enhance public trust in the technology, and spearhead international collaboration. European Union Digital Policy Officer Jesse Spector spoke about the “four pillars” of the EU’s Coordinated Plan on Artificial Intelligence: investment, data, skills, and trust. With respect to trust, he noted the European Commission’s draft ethics guidelines on artificial intelligence, developed by a multi-stakeholder High Level Expert Group (SIIA will submit a comment on those guidelines on February 1, 2019). Dun & Bradstreet Senior Vice President and Chief Data Scientist Anthony J. Scriffignano observed that while there are many “tail winds” propelling AI adoption, there are also “head winds,” including a significant skills gap, something all the panelists agreed was a serious problem.
Some Takeaways
Cybersecurity and law enforcement in general will depend on smart applications of AI: Combating the “changing face of malfeasance,” as Anthony J. Scriffignano puts it, will itself require AI. The GAO considers that AI will be crucial in ensuring cybersecurity: automated systems can help by identifying vulnerabilities, patching them, detecting attacks, and defending against active attacks. More generally, AI technologies are important to the SIIA member companies that provide anti-money laundering, anti-terrorism, know-your-customer, and other services important to law enforcement.
There are no broadly generalizable (cross-sectoral) studies estimating the impact of AI on net job creation or destruction: There is a wide debate on the possible impact of AI on jobs, and there are many reports with estimates based on seemingly large data sets and solid methodologies. But the reality seems to be that the available data can support claims of net job loss just as easily as claims of net job gain. Many reports rest on computer scientists’ judgments about which tasks in which jobs could technically be performed by today’s machine learning programs. For example, salad making could be programmed, so that part of a short order cook’s job is at risk, and fewer cooks would therefore be needed. Yet the cost of using a robot to make that salad is often left out of the analysis, which means the analysis gives no realistic sense of whether an employer would actually want to use AI technologies to make the salad in the first place.
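The point about omitted costs can be made concrete with a back-of-the-envelope comparison. All figures below are hypothetical, chosen only to illustrate why technical feasibility alone does not imply adoption:

```python
# Hypothetical annual cost comparison for automating one task
# (e.g., salad preparation). All numbers are illustrative assumptions,
# not data from the event or any cited report.

# Human: the fraction of a cook's time spent on this task.
cook_annual_wage = 35_000
task_share_of_job = 0.25
human_cost = cook_annual_wage * task_share_of_job  # 8,750 per year

# Robot: amortized hardware cost plus annual maintenance.
robot_price = 60_000
useful_life_years = 5
annual_maintenance = 4_000
robot_cost = robot_price / useful_life_years + annual_maintenance  # 16,000 per year

# Even if the task is fully automatable, automation only makes
# economic sense when it is cheaper than the labor it replaces.
worth_automating = robot_cost < human_cost  # False with these numbers
```

Under these assumed numbers the "technically automatable" task would not be automated, which is exactly the gap between task-feasibility studies and real-world employment effects.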
Bias in AI is possible (just as it is in non-AI contexts), but it can be addressed: Although it is almost a truism to say it, bias is possible in the AI context, but the problem can be addressed. Given the perceived opacity of AI, the concern is also raised in the context of “explainable AI,” which was discussed at the event. In any case, it is important to conduct disparate impact analysis to check for bias and to understand its effect on decisions: would the bias have changed the decision? If so, further action is appropriate. Analysts also need to be careful that removing one bias factor does not introduce another. With respect to “explainability,” full explainability might not be possible, as one panelist said; however, it is necessary to communicate the key factors behind scores and to provide evidence of the validity of predictive models. For more information and discussion from SIIA’s perspective on this matter, see the September 15, 2017 SIIA Issue Brief entitled “Ethical Principles for Artificial Intelligence and Data Analytics” and the September 22, 2016 Issue Brief entitled “Algorithmic Fairness.”
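One common form of disparate impact analysis compares selection rates across groups. The sketch below is a minimal illustration; the group names, counts, and the 0.8 threshold (the “four-fifths rule” from U.S. employment-selection guidance) are illustrative assumptions, not figures from the panel:

```python
def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return rate_protected / rate_reference

# Hypothetical approval counts for an automated decision system.
applied = {"group_a": 100, "group_b": 100}
approved = {"group_a": 60, "group_b": 45}

rate_a = approved["group_a"] / applied["group_a"]  # 0.60
rate_b = approved["group_b"] / applied["group_b"]  # 0.45

ratio = disparate_impact_ratio(rate_b, rate_a)  # 0.75

# A common rule of thumb flags ratios below 0.8 for further review --
# a starting point for the question "would the bias have changed the decision?"
flagged = ratio < 0.8  # True with these numbers
```

A flagged ratio does not by itself prove unlawful bias; it signals that the model’s decisions warrant the kind of deeper validity analysis the panel described.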
There is a consensus on the need for skills development, especially in inter-disciplinary work: If there was one thing all the panelists agreed upon, other than that AI will have a profound impact, it is that there is a growing need for skills development. Interestingly, that does not mean that everybody needs to learn how to write code and become a computer programmer, although there is certainly a need for more coders and more computer programmers; there is a reason U.S. college students are demanding more courses in this field, as the NYT recently reported. What it does mean, particularly in this era of growing calls for “explainable AI” (itself a challenging concept), is that there will be an increasing need for individuals who know how to use AI technologies appropriately. There was a discussion, for instance, about how AI is going to become an increasingly important part of the criminal justice system. That means prosecutors and others will have to work with professionals who are conversant with the technology and who also understand the laws and ethical considerations underpinning criminal justice work. That is a different skill set from the work conducted by today’s IT professionals.
Conclusion
There is a reason AI dominated the conversation at the Davos World Economic Forum. Although AI has experienced periods of hype in the past, it seems that “this time it is different” in terms of usable, relatively near-term AI applications in fields as varied as drug discovery, criminal justice, cybersecurity, financial services, and fraud prevention. SIIA will continue to work with academic institutions such as George Washington University in exploring the policy implications of AI developments. We also hope to work with the U.S. Congress and international institutions to better understand what kind of inter-disciplinary training is needed to prepare the professionals of tomorrow. Given that the panelists from the United States, Japan, the EU, and Canada all recognized the importance of skills development, this need for inter-disciplinary expertise could perhaps be an area of greater discussion at the June 8-9, 2019 G20 meeting in Japan.

Carl Schonander is Senior Vice President for Global Public Policy.