Will Artificial Intelligence Solve Cybersecurity—And Put Experts Out Of Work?
It seems like everyone from Elon Musk to Stephen Hawking has warned about letting artificial intelligence run wild, at the risk that it eventually turns on its creators and wipes out humankind. That concern may be a long way off for humanity, but AI can look like a more immediate threat to security professionals.
Artificial intelligence and machine learning are making their way into more security products, helping organizations and individuals automate some of the tasks required to keep their services and information safe. While the technology is encroaching on jobs once done by humans, Rahul Kashyap believes it's in the best interest of security experts to embrace AI.
Kashyap, the senior vice president and chief product officer at Cylance—a cybersecurity firm known for its use of AI—doesn’t view AI and machine learning as a replacement for human workers but rather as a supplemental service that can enable those workers to do their job more efficiently.
Kashyap said Hollywood has given many people a glorified version of AI, presenting it as a cautionary tale in which automation and technology eventually displace human workers and take their jobs. By his estimation, that isn't the case.
“Think about Microsoft Excel, Microsoft Word. When it came out, it made typewriters obsolete, but people learned new skills,” Kashyap said at Structure Security 2017. “And now we have more opportunities because of that.” Like the word processing software that displaced typewriters, AI will help squash inefficiencies while creating new opportunities for people.
Kashyap has watched the cybersecurity landscape shift quickly during his years in the field. When he served as head of threat research at McAfee, he said, the primary defense was simply to check the signature of a piece of code to see whether it was legitimate. At that time there were tens or hundreds of thousands of threats, so it was possible, for the most part, to keep tabs on the most malicious attacks.
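The signature check Kashyap describes can be sketched as a simple hash lookup: compute a cryptographic digest of a payload and compare it against a database of known-bad signatures. This is an illustrative sketch, not Cylance's or McAfee's actual implementation; the function name and the in-memory set standing in for a signature database are assumptions.

```python
import hashlib

def is_known_malicious(payload: bytes, signature_db: set[str]) -> bool:
    """Return True if the payload's SHA-256 digest matches a known-bad signature.

    signature_db is a stand-in for a real signature database: a set of
    hex-encoded SHA-256 digests of previously identified malware samples.
    """
    digest = hashlib.sha256(payload).hexdigest()
    return digest in signature_db
```

The limitation Kashyap points to follows directly from this design: the lookup only catches samples already in the database, so an attacker who changes a single byte of the payload produces a new digest and slips past the check.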
The threats have escalated significantly in recent years. Kashyap said there are now “billions of pieces of malware” in the wild and “well thought-out cyber campaigns” being carried out regularly, with targeted threats against individuals and organizations that demand a more efficient way to check the validity of code and defend against attacks.
With a widening gap between the number of security professionals needed and the number available (a shortage of more than 1.5 million is expected by 2020), Kashyap determined the issue no longer required just a human-scale solution; it needed a computing one.
Luckily, as the threats have evolved, so has the technology that can be used to defend against them. Kashyap cited the development of several technologies as the biggest enabling force for AI and machine learning, including the Amazon Elastic Compute Cloud—a web service that provides scalable cloud computing space—and the advancement of graphics processing units (GPUs).
“It’s not a coincidence that we suddenly have more AI in technology,” he said, noting that these advancements weren’t available as recently as just seven or eight years ago, and the ability to scale the newfound processing power is still coming to fruition.
Kashyap also noted that the type of AI people are most concerned about, artificial superintelligence, is likely decades away and may never arrive in our lifetimes. General intelligence, a form of AI as competent as humans, will likely emerge during our lives, but nano intelligence is available now and can be used to improve cybersecurity.
Nano intelligence, as Kashyap defined it, allows people to utilize computing power to solve specific, narrow problems. It can be used for playing chess—or, in a cybersecurity context, automating some of the repetitive tasks that security workers currently perform.
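The kind of repetitive task Kashyap has in mind can be as mundane as pre-sorting alerts so analysts only review the suspicious subset. The sketch below is a hypothetical illustration, not a product feature he described; the keyword list and function name are assumptions, and a real system would use far richer rules or a trained model rather than substring matching.

```python
# Illustrative narrow automation: a keyword triage filter that splits log
# lines into a flagged pile for human review and a routine pile to skip.
SUSPICIOUS_TOKENS = ("failed password", "privilege escalation", "new admin user")

def triage(log_lines: list[str]) -> tuple[list[str], list[str]]:
    """Split log lines into (flagged, routine) based on suspicious keywords."""
    flagged, routine = [], []
    for line in log_lines:
        if any(token in line.lower() for token in SUSPICIOUS_TOKENS):
            flagged.append(line)
        else:
            routine.append(line)
    return flagged, routine
```

Even this trivial filter captures the division of labor Kashyap argues for: the machine handles the high-volume, repetitive screening, while humans spend their time on the smaller flagged set that needs judgment.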
The long-term promise of AI and machine learning may be greater than that, but vendors that overpromise what AI can do in current products risk putting their customers in harm's way by encouraging over-reliance on the technology. He said AI needs to bring simplicity to companies to help combat threats, but it can't be a prediction engine that stops future attacks.
Kashyap advised organizations to do their due diligence on companies offering AI products. He suggested checking a company's LinkedIn page to see whether it has engineers and data scientists on staff; if not, be skeptical of its ability to deliver an AI-powered product.
Editor’s Note: Newsweek Media Group and International Business Times partnered with Structure to host Structure Security 2017.
© Copyright IBTimes 2024. All rights reserved.