Charles Leaver – Machine Learning Advances Are Good But There Will Be Consequences

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


History offers many examples of severe unintended consequences following the introduction of new technology. It often surprises people that new technologies can serve malicious purposes as well as the positive ones for which they were brought to market, but it happens regularly.

Consider train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers abusing email. More recently, the use of SSL to hide malware from security controls has become common precisely because the legitimate use of SSL has made the technique practical.

Since new technology is routinely appropriated by bad actors, we have no reason to believe the same will not be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are several ways attackers could turn machine learning to their advantage. At a minimum, malware authors will test new malware against the new class of advanced threat defense solutions, modifying their code until it is less likely to be flagged as malicious. The effectiveness of defensive security controls always has a half-life because of adversarial learning.

An understanding of machine learning defenses will also help attackers degrade those defenses proactively. For example, an attacker could flood a network with fake traffic in order to “poison” the machine learning model being built from that traffic. The attacker’s goal would be to trick the defender’s tool into misclassifying traffic, or to generate so many false positives that defenders dial back the fidelity of the alerts.
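To make the poisoning idea concrete, here is a toy sketch (not any real product’s model): a nearest-centroid traffic classifier over a single hypothetical feature, requests per second. All values and names are illustrative assumptions. By injecting high-rate traffic that the defender’s pipeline ingests as “benign”, the attacker drags the benign centroid upward until genuinely suspicious traffic is misclassified:

```python
# Toy illustration of training-data poisoning against a nearest-centroid
# traffic classifier. Single feature: requests per second (hypothetical).

def centroid(values):
    """Mean of a list of feature values."""
    return sum(values) / len(values)

def classify(rate, benign_centroid, malicious_centroid):
    """Label a sample by whichever class centroid is closer."""
    if abs(rate - benign_centroid) <= abs(rate - malicious_centroid):
        return "benign"
    return "malicious"

# Clean training data: benign traffic is low-rate, malicious is high-rate.
benign = [8, 10, 12]
malicious = [95, 100, 105]

clean_model = (centroid(benign), centroid(malicious))  # centroids: 10.0, 100.0
print(classify(70, *clean_model))   # -> malicious (70 is closer to 100)

# Poisoning: the attacker floods the network with high-rate traffic that the
# defender's pipeline labels "benign", dragging that centroid upward.
poisoned_benign = benign + [90, 95, 100]
poisoned_model = (centroid(poisoned_benign), centroid(malicious))  # 52.5, 100.0
print(classify(70, *poisoned_model))  # -> benign (70 is now closer to 52.5)
```

A real detection model is far more complex, but the failure mode is the same: a model trained on attacker-influenced data inherits the attacker’s bias.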

Machine learning will likely also be used as an offensive tool. For instance, some researchers predict that attackers will use machine learning techniques to refine their social engineering attacks (e.g., spear phishing). Automating the effort required to personalize a social engineering attack is especially troubling given how effective spear phishing already is. The ability to mass-customize these attacks is a powerful economic incentive for attackers to adopt the techniques.

Expect the kinds of breaches that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard element of defense-in-depth strategies, it is not a magic bullet: attackers are actively working on evasion techniques around machine learning based detection solutions while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further intensifying the need for automated incident response capabilities.
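What “incident response at machine speed” can look like in its simplest form is a triage rule that maps an alert directly to an automated action, rather than queueing it for a human. The sketch below is purely hypothetical; the field names, thresholds, and action names are assumptions, not any vendor’s API:

```python
# Hypothetical sketch of a machine-speed triage rule: choose an automated
# response action from a detection model's confidence score, instead of
# waiting for an analyst. All thresholds and action names are illustrative.

def triage(alert):
    """Map an alert (dict with a 0.0-1.0 'confidence' score) to an action."""
    score = alert["confidence"]
    if score >= 0.9:
        return "isolate_host"        # quarantine the endpoint immediately
    if score >= 0.6:
        return "block_and_escalate"  # block the indicator, page an analyst
    return "log_only"                # record the event for later hunting

print(triage({"confidence": 0.95}))  # -> isolate_host
print(triage({"confidence": 0.70}))  # -> block_and_escalate
print(triage({"confidence": 0.20}))  # -> log_only
```

In practice such rules sit inside an orchestration pipeline, and the thresholds themselves become a target: an attacker who learns them can aim to stay just below the automated-response line, which is exactly the adversarial dynamic described above.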
