AI Spotlight: Competitions that Use AI to Exploit Vulnerabilities Can Strengthen Cybersecurity


So far in 2017, cyberattacks like WannaCry and Petya have left the world wondering how future attacks can be prevented before people lose millions of dollars to ransomware, or worse. While no digital system is ever completely secure, the best defense is finding and patching vulnerabilities before they can be exploited, and that remains a challenge. One tool that can help detect vulnerabilities is artificial intelligence (AI): AI systems can hunt for bugs faster, more thoroughly, and at greater scale than human analysts can. In this vein, this week's AI spotlight looks at the use of AI in data science competitions, such as those held by Kaggle, now part of Google's cloud platform, and those held by the U.S. Department of Defense.

Hackathons typically pit human against human or human against system, and in the process reveal where particular systems are vulnerable and need to be fixed, as in the Department of Defense's recent "Hack the Pentagon" bounty program. Kaggle's competitions, however, pit AI against AI: data scientists create and train the algorithms that do the "fighting." The contests involve trying to confuse an AI system, forcing a system to classify something incorrectly, and building a defense that resists such attacks.

Competitions like these help data scientists and developers find flaws in their algorithms so they can fix and strengthen them. This is necessary because faulty algorithms can sometimes be misled into behaving in ways they were never intended to. Additionally, these algorithms can detect vulnerabilities in systems at a deeper level than humans can, which can help companies and governments fix their systems before an attack.
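To make the "misleading" concrete: in an adversarial-example contest, a tiny, carefully chosen change to an input can flip a model's prediction. Below is a minimal sketch of the idea against a toy logistic-regression classifier; the weights, input, and step size are all hypothetical, chosen for illustration, and the gradient-sign step mirrors the well-known fast gradient sign method (FGSM) rather than any specific contest entry.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classifier with fixed (hypothetical) weights:
# predicts class 1 when sigmoid(w.x + b) > 0.5
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm_perturb(x, true_label, epsilon):
    """Nudge x in the direction that most increases the logistic loss."""
    p = sigmoid(w @ x + b)
    # For logistic loss, the gradient w.r.t. the input is (p - y) * w
    grad = (p - true_label) * w
    return x + epsilon * np.sign(grad)

x = np.array([1.0, 0.2, 0.3])
original = predict(x)                          # classified as 1
x_adv = fgsm_perturb(x, true_label=1, epsilon=0.5)
attacked = predict(x_adv)                      # small perturbation flips it to 0
```

Each input feature moves by at most epsilon, yet the prediction flips; contest defenses aim to build models whose decisions are stable under exactly this kind of bounded perturbation.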

Kaggle isn’t the first platform to conduct such activities – the Department of Defense has also held competitions that pit AI against AI. Last year, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Cyber Grand Challenge at the DEF CON hacker conference, pitting autonomous AI systems against each other in a game of capture the flag. The Department of Defense used the competition to find new ways to strengthen its systems in the event of a cyberattack.

Kaggle's and DARPA's competitions have also provided a source of talented data scientists at a time when they are desperately needed. As U.S. companies and the U.S. government become more technologically sophisticated, so too do the adversaries who intend to do them harm. More data scientists are therefore needed in both the public and private sectors to train algorithms and understand machine learning well enough to protect vulnerable systems from malicious actors. As SIIA wrote in its AI issue brief and other AI spotlights, AI presents an opportunity for job growth, especially in areas of social good. Protecting consumer and government systems against cyberattacks is one such area, and humans are critically necessary to develop the machine-learning tools needed to defend against these attacks.

Diane Pinto is the Public Policy Coordinator at SIIA. Follow the Policy team on Twitter @SIIAPolicy.