Above The Hype: Harnessing The Power Of AI For Your Cybersecurity Program
Posted on: Jun 25, 2019

Security professionals who have seen artificial intelligence (AI) and machine learning (ML) deployed in cybersecurity for the first time may recall Arthur C. Clarke's famous adage: "Any sufficiently advanced technology is indistinguishable from magic." AI really is as powerful as CISOs once imagined it, back when managing SOC teams meant wading through hundreds of false positives and nights of never-ending worry.

But attack methods continue to evolve in tandem with enterprises. CISOs understand that their enterprises have moved beyond the firewall, and adversaries are looking for new ways to exploit them. One prominent attack surface lies outside the enterprise on digital channels, and CISOs need a new approach to address the emerging risks associated with those channels.

Today's challenge is keeping up with (and protecting) the myriad expanding ways the business interacts with customers, partners, suppliers and employees through these channels, in a manner acceptable to regulators. Smart organizations embrace new technologies and new cybersecurity defenses in equal measure. Yet the "black box" nature of AI keeps surfacing as an area of concern, holding organizations back from adopting advanced risk-analysis technologies in a reportable way.

The 'Black Box' Problem With AI/ML Decision-Making

A recent Gartner survey found that CIOs are rapidly adopting AI technologies within their organizations, but applying AI/ML effectively in regulated or audited industries remains a pain point. AI and ML are often considered non-starters in these industries because a chain of evidence is critical, and one is hard to extract from these systems. In almost any industry, legal teams want complete records of risk remediation for evidentiary record keeping. Most SaaS tools offer a one-size-fits-all ML risk model as a black box, which limits adoption: human analysts are shut out of the decisions these ML systems make, and it is nearly impossible to verify what automated actions were taken, how, and why.

Without that clarity, it's similarly impossible to report the required details about policies, data, decisions, evidence and outcomes to outside auditors or regulators. To borrow a phrase from high school, security teams are unable to show their work. The question, then, is how to make machine-learning engines configurable in a way that makes the rules and the evidence obvious.
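To make the idea concrete, here is a minimal sketch of what "showing your work" can look like with an interpretable model. It uses scikit-learn's decision tree, whose fitted rules can be exported as plain text and whose per-decision path can be logged as evidence. The feature names and training examples are hypothetical placeholders, not any vendor's implementation:

```python
# A minimal sketch of an auditable risk classifier (scikit-learn).
# All feature names and training data below are hypothetical; the point
# is that the learned rules are readable and each verdict leaves a trail.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["domain_age_days", "num_redirects", "has_login_form"]  # hypothetical

# Hypothetical labeled examples drawn from your own analysts' past verdicts.
X = np.array([
    [3,    5, 1],   # young domain, many redirects, login form -> risky
    [2000, 0, 0],   # established domain, no redirects          -> benign
    [10,   4, 1],
    [1500, 1, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = risky, 0 = benign

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned policy as plain text, readable by analysts and auditors.
print(export_text(model, feature_names=FEATURES))

# Evidence for one decision: the exact rule path and the verdict.
sample = np.array([[7, 6, 1]])
nodes_visited = model.decision_path(sample).indices
print("verdict:", int(model.predict(sample)[0]), "| nodes visited:", list(nodes_visited))
```

A single tree is obviously simplistic for production risk scoring, but the pattern generalizes: whatever model is used, the rules in force and the evidence behind each decision should be exportable in a form an auditor can read.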

Acceptable AI For Inspection

According to a recent survey (login required), 40% of successful attacks use unknown tactics, techniques and procedures. AI/ML-enabled systems identify new risks by analyzing the innate properties, or features, of content. But one-size-fits-all models trained on other organizations' data surface risks that are not necessarily relevant to yours. The answer for broader adoption is therefore twofold: expose the evidence-mapping process, rules and outcomes, and train your AI/ML system on your own company's invaluable data. Your organization then has both an auditable process and a model built specifically for its own risk.
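The other half of the prescription, the evidence-mapping process, can be as simple as emitting a structured record for every automated decision. The sketch below is illustrative only; every field name is an assumption, not a reference to any particular tool:

```python
# A minimal sketch of an evidentiary audit record for one automated decision.
# Field names are hypothetical; the structure is what matters: the governing
# policy, the input evidence, the model version, the verdict, and the action.
import json
from datetime import datetime, timezone

def audit_record(policy_id, evidence, verdict, action, model_version):
    """Build a reportable record for a single ML-driven decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "policy_id": policy_id,          # which company policy governed this
        "evidence": evidence,            # the features the model actually saw
        "verdict": verdict,              # what the model decided
        "action": action,                # what was done automatically
        "model_version": model_version,  # which trained model made the call
    }

record = audit_record(
    policy_id="PHISH-001",
    evidence={"domain_age_days": 7, "num_redirects": 6, "has_login_form": True},
    verdict="risky",
    action="takedown_requested",
    model_version="in-house-2019.06",
)
print(json.dumps(record, indent=2))
```

Records like this, written at decision time rather than reconstructed after the fact, are what let a security team hand auditors the policies, data, decisions, evidence and outcomes the article calls for.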

When trained on sufficient high-quality data from your own company's analyses, with policies mapped in a configurable way, AI/ML techniques can outperform and go beyond traditional risk prevention approaches.