The Software & Information Industry Association (SIIA) and the European Center for International Political Economy (ECIPE) hosted a stimulating June 19, 2018 panel discussion on “The Future of AI” in Brussels at ECIPE’s offices. European Commission Policy Officer Andrea Glorioso, Delft University Professor Jeroen van den Hoven, Elsevier Senior Vice President for Analytics for Research Products Elisabeth Ling, and Thomson Reuters Global Head of Risk Technology Management Solutions Alex Cesar provided perspectives on what it will take for the European Union to achieve the ambitious public and private investment objectives it has set for itself in its April 25, 2018 Communication on Artificial Intelligence. It was a privilege to moderate this event, and I thank ECIPE and the panelists for their participation.

Synopsis of Panelist Views
Andrea Glorioso noted that the Commission has specifically opted not to propose an AI regulation, although there had been some requests for a legislative approach. And he emphasized that the strategy does not reflect a purely financial investment perspective: it is also about digital innovation hubs, skilling and upskilling, and keeping up with the technology as it moves from the core technology sector into other sectors. Economy-wide adoption will require additional reflection on risks as well as opportunities. There will have to be explanatory narratives for AI solutions; “blackboxing” will not enhance trust in AI. While Glorioso expressed optimism regarding AI’s potential for developing useful new products and services, he cautioned that there are risks. As he said, “people are afraid,” especially about what will happen to their jobs. And it is up to policymakers to “manage that fear.” AI ethics principles are important, and people’s questions need to be answered.
Jeroen van den Hoven suggested that the March 9, 2018 European Commission statement on AI provides some answers on how AI should be developed from a specifically European perspective. He offered the idea that the way forward for Europe is to promote “Responsible Innovation.” Van den Hoven posited that the EU’s emphasis on privacy, as reflected, for instance, in the General Data Protection Regulation (GDPR), will result in new products and services. An approach to AI development informed by ethical considerations at the start of the development cycle, “AI development by design” as it were, could be a competitive advantage for Europe. He suggested that there is currently a lack of choice for European consumers. Given a choice, he said that he would, for example, probably opt for a “slightly less embellished version of Facebook.”

Elisabeth Ling explained that her career in developing digital products and services had led her to what might be called a domain-oriented approach to AI. In other words, what is the technology going to be used for? What problem might AI be suited to solve? Ling is, for example, optimistic regarding AI’s potential for reducing medical errors. So how to foster AI in Europe? Ling noted that there is, in fact, a “war for talent” in Europe. Having hired hundreds of engineers and scientists, she has found that many candidates for technology positions are motivated by the opportunity to solve health and diversity problems. This suggests that it makes business and AI development sense to hire as diverse a workforce as possible. Ling concluded by saying that there is a need for “scale.”
Alex Cesar noted that AI helps clients manage risks. She put the technology in perspective by noting that AI has been around since the 1950s; what makes today’s (and tomorrow’s) AI different are the processing power and the size of the data sets now available. This, at least in part, is what the challenge of “scale” is all about. However, for continued “trust and confidence” in the technology, it is important that developers bring as wide a range of backgrounds as possible, because this is how bias is avoided. Cesar added that trust and confidence also depend on an understanding of AI, which is why Thomson Reuters is working on “explainable AI.”

The Challenge of “Scale”
There is a perception that “scale” is simply a matter of the size of the data sets available to AI developers. Size is important, but scale is also about quality, and about providing the conditions that make high-quality data sets available. Glorioso noted that the EC recognized this in the April 25, 2018 “Third Data Package.” This imperative is also reflected in broader Commission efforts such as the Capital Markets Union. Van den Hoven emphasized the need for trust. Ling suggested that how the challenge of obtaining scale is addressed depends on the use case; the use of pseudonymized data is important in this regard. Scale is also needed in the use of the learned systems themselves, so that new data is captured from users and feedback loops are created, allowing the algorithms to be improved.
Explainable AI, Transparency, Accountability, and Intellectual Property
The conversation on what many commentators call “algorithmic transparency” was lively. Naturally, nobody defended “black boxes.” But beyond the need for explainable AI to enhance trust in the technology, there was agreement that, just as in many non-AI decision-making settings, for instance a doctor deciding which medicine to prescribe, there needs to be accountability. Van den Hoven noted that there is an accountability framework for many domains such as food, transportation, and medicine, but “we lack a theoretical vocabulary” for AI. Cesar diverged somewhat, saying that AI is not being developed in a vacuum: domain experts are the ones “putting the nuts and bolts” into the AI framework for their domains. A question from the audience nevertheless emerged: how can algorithmic transparency be reconciled with trade secret protection? I suggested that companies can offer the “narrative” behind important decisions, for instance with respect to credit decisions.

What about Liability?
This is of course one of the most basic questions in any legal system. Glorioso asked whether new liability rules are needed. It is worth noting that this question is being discussed all over the world. Will there be a need, for instance, for new liability rules for the autonomous vehicles sector? A member of the audience suggested that liability rules are already being worked out in the legal system and that “joint liability” will likely emerge as the solution in many cases.
Takeaways
The dynamic in many think tank events is to emphasize areas of convergence rather than divergence. But is that really possible with a panel composed of a European Commission official, an academic ethicist, and two industry representatives? Perhaps surprisingly, at least in some areas, yes. There was strong agreement on the need for diversity in the AI development workforce itself in order to avoid bias. The need for explainability was universally acknowledged. Accountability frameworks are needed. Nobody suggested that the EU’s privacy and other regulations are an insurmountable barrier to assembling the large data sets needed for AI development. (Note: the June 19, 2018 “political agreement” between the European Parliament, Council, and Commission on the free flow of data within the EU should facilitate AI adoption in the EU.) Nobody called for regulation, despite the many controversies surrounding the technology sector. So there was quite a bit of convergence, which suggests that it will be possible for the EU to put out the robust AI principles the Communication calls for by the end of 2018.

We did not get into as much detail as I expected on how and whether the Commission’s ambitious public and private sector investment goals will be met. With respect to Commission funding, for instance, will it take a venture capital-style approach to AI projects or a more conventional bank lending posture? So, many questions remain to be answered, but there was at least some convergence of views at this Commission, academic, and industry panel.

Carl Schonander is Senior Vice President for Global Public Policy.