Charles Leaver – Using Edit Distance Is Vital Part 2

Written by Jesse Sampson and presented by Charles Leaver, CEO of Ziften


In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits it takes to turn one text string into another). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain features to pinpoint suspicious activity.
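To make that definition concrete: edit distance here means the Levenshtein distance, which a short dynamic-programming routine can compute. This is a generic sketch, not Ziften’s production code:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of single-character
    inserts, deletes, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete from a
                           cur[j - 1] + 1,               # insert into a
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # 3
```

Libraries like `python-Levenshtein` offer faster implementations, but the logic is the same.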

Case Study Background

What are bad actors trying to do with malicious domains? Often they are simply using a spelling similar to a common domain name to trick careless users into viewing ads or picking up adware. Legitimate websites are gradually catching on to this technique, sometimes called typo-squatting.

Other malicious domain names are the product of domain generation algorithms (DGAs), which can be used for all kinds of nefarious purposes, such as evading countermeasures that block known compromised sites, or overwhelming domain servers in a distributed denial of service attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.
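To illustrate, here is a toy sketch of the word-injection flavor of DGA; the word list, seed handling, and output format are all invented for demonstration and bear no relation to any real malware family:

```python
import random

def toy_dga(seed: int, count: int) -> list:
    """Toy domain generation algorithm: mixes dictionary words with
    random characters, as the more advanced DGAs described above do.
    Malware and its command-and-control server share the seed, so
    both can derive the same rendezvous domains."""
    words = ["secure", "update", "cloud", "mail", "login"]  # hypothetical word list
    rng = random.Random(seed)
    domains = []
    for _ in range(count):
        word = rng.choice(words)
        suffix = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(5))
        domains.append(f"{word}{suffix}.com")
    return domains

print(toy_dga(seed=7, count=3))
```

The injected dictionary words make these strings look less random than classic DGA output, which is exactly what makes them harder to flag.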

Edit distance can help with both use cases; here we will find out how. First, we will exclude common domains, since these are normally safe, and a list of common domains provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domains and avoid subdomains (e.g., not

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its possible neighbors in the same top-level domain (the last part of a domain name – .org, and so on – which now can be practically anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one edit away from their nearest neighbor, we can easily identify typo-ed domains. By finding domains far from their neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domains in the edit distance space.
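That hunt can be sketched in a few lines of Python. The baseline list here is made up; in practice it would come from a popularity source like the Quantcast rankings:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def nearest_neighbor(candidate, baseline):
    """Return the closest baseline domain, the raw edit distance,
    and the length-normalized distance from Part 1."""
    best = min(baseline, key=lambda b: edit_distance(candidate, b))
    dist = edit_distance(candidate, best)
    return best, dist, dist / max(len(candidate), len(best))

baseline = ["google", "facebook", "wikipedia", "amazon"]  # hypothetical top sites
print(nearest_neighbor("gooogle", baseline))   # distance 1: likely typo-squat
print(nearest_neighbor("xkqzvjw", baseline))   # high normalized distance: anomalous
```

A small normalized distance to a popular neighbor suggests typo-squatting; a large one suggests the string is unlike anything legitimate.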

What Were the Results?

Let’s look at how these results appear in the real world. Be careful when browsing to these domains, since they could contain malicious content!

Here are a few possible typos. Typo-squatters target popular domains since there is a greater chance someone will visit. Several of these are flagged as suspect by our threat feed partners, but there are some false positives as well, with cute names like “wikipedal”.


Here are some odd-looking domains far from their neighbors.


So now we have created two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of nearest neighbor, distance from nearest neighbor, and edit distance 1 from neighbor, indicating a risk of typo tricks. Other features that might pair well with these are lexical features like word and n-gram distributions, entropy, and string length – and network features like the total count of failed DNS requests.
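Those features could be assembled per domain along these lines; the helper names and the character-level entropy formula are generic choices, not Ziften’s actual pipeline:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy; random DGA strings score high,
    ordinary dictionary-like names score low."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def domain_features(name: str, neighbor_dist: int, neighbor_rank: int) -> dict:
    """Combine the edit distance metrics with simple lexical features,
    ready to feed into a machine learning model."""
    return {
        "neighbor_rank": neighbor_rank,          # popularity rank of nearest neighbor
        "neighbor_distance": neighbor_dist,      # raw edit distance to it
        "is_possible_typo": neighbor_dist == 1,  # one edit away: typo-squat risk
        "length": len(name),
        "entropy": shannon_entropy(name),
    }

print(domain_features("wikipedal", neighbor_dist=2, neighbor_rank=5))
```

Network features such as failed-DNS-request counts would be joined in from a separate data source.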

Simplified Code You Can Play With

Here is a simplified version of the code to play with! It was built on HP Vertica, but this SQL should work with most advanced databases. Note that the Vertica editDistance function may vary in other implementations (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).
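For readers without a database handy, the same nearest-neighbor join can be sketched in Python; the list layout and function names are illustrative assumptions, not a transcription of the original query:

```python
from collections import defaultdict

def edit_distance(a, b):
    """Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def hunt(candidates, baseline):
    """For each observed domain, find the nearest baseline neighbor
    within the same TLD, mirroring the SQL join described above."""
    by_tld = defaultdict(list)
    for d in baseline:
        name, _, tld = d.rpartition(".")
        by_tld[tld].append(name)
    results = []
    for d in candidates:
        name, _, tld = d.rpartition(".")
        pool = by_tld.get(tld)
        if not pool:
            continue  # no baseline domains share this TLD
        dist, neighbor = min((edit_distance(name, b), b) for b in pool)
        results.append((d, f"{neighbor}.{tld}", dist))
    return results

print(hunt(["gooogle.com"], ["google.com", "amazon.com"]))
```

In SQL this would typically be a self-join of the candidate table against the baseline table, restricted to matching TLDs and ranked by the edit distance function.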


Charles Leaver – Without Proper Management Your Infrastructure Will Not Be Completely Secure And Vice Versa

Written by Charles Leaver, Ziften CEO


If your business computing environment is not properly managed, there is no way it can be truly secure. And you can’t effectively manage those complex enterprise systems unless there is confidence that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to begin. Should you begin with security? Or should you begin with the management of your systems? That is the wrong question. Think of it instead like a Reese’s Peanut Butter Cup: it’s not chocolate first, and it’s not peanut butter first. Both are blended together – and treated as a single tasty treat.

Many companies, I would argue too many, are structured with an IT management department reporting to a CIO, and a security management group reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have different priorities, read different reports, and use different management platforms. On a daily basis, what constitutes a task, a problem, or an alert for one team flies completely under the other team’s radar.

That’s bad, since both the IT and security teams have to make assumptions. The IT team assumes that everything is secure, unless someone tells them otherwise. For example, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Similarly, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and applications are fully up to date, that patches have been applied, and so on.

Because the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be correct.

And again, you can’t have a secure environment unless that environment is properly managed – and you can’t manage that environment unless it’s secure. Or to put it another way: an insecure environment makes anything you do in the IT organization suspect and irrelevant, and means you can’t know whether the information you are seeing is correct or manipulated. It might all be fake news.

Bridging the IT / Security Gap

How do you bridge that gap? It sounds easy, but it can be difficult: ensure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same person or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let’s say it’s the CFO.

If the business does not have a secure environment, and there’s a breach, the value of the brand and the business can be reduced to zero. Similarly, if the users, devices, infrastructure, applications, and data aren’t managed well, the business can’t work effectively, and the value drops. As we have discussed, if it’s not well managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of company assets, which means making sure IT and security talk to each other, understand each other’s goals, and, where possible, see the same reports and data – filtered and displayed to be meaningful to their particular areas of responsibility.

That’s the thinking we adopted in the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need too – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our organization’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software applications, and users. We can’t operate at peak performance, and with full fiduciary duty, otherwise.

Charles Leaver – Offline Devices Must Not Escape Constant Endpoint Visibility

Written by Roark Pollock and presented by Charles Leaver, Ziften CEO


A study recently completed by Gallup found that 43% of employed US adults worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for nearly a decade, continues to see more workers working outside traditional offices, and an increasing number of them doing so for more days of the week. And, of course, the number of connected devices the typical employee uses has jumped as well, which helps drive the convenience of, and desire for, working away from the office.

This mobility undoubtedly makes for happier, and one hopes more productive, employees, but the issues these trends present for both systems and security operations teams must not be overlooked. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even harder for IT and security teams to restrict what was previously considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today security and systems management teams need to be able to thoroughly monitor devices, network activity, users, and applications; detect anomalies and inappropriate actions; and enforce appropriate action or fixes, no matter whether an endpoint is locally connected, remotely connected, or disconnected.

Additionally, the fact that many employees now routinely access cloud-based assets and applications, and keep backup network-attached storage (NAS) or USB drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remotely performed activity that no longer necessarily terminates in the corporate network. Offline activity presents the starkest example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of little use when a device is operating offline. The installation of an appropriate endpoint agent is critical to ensure the capture of all important system and security data.

As an example of the kinds of offline activities that can be detected, a customer was recently able to track, flag, and report unusual behavior on a corporate laptop. A high-level executive moved large amounts of endpoint data to an unapproved USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuous monitoring of the device, applications, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.
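The mechanism that makes this possible can be sketched abstractly: the agent keeps recording locally while disconnected, then flushes the backlog on reconnect. The following is a deliberately simplified model, not Ziften’s actual implementation:

```python
import time
from collections import deque

class EndpointAgent:
    """Toy model of an endpoint agent that records events while
    offline and ships them to the monitoring backend on reconnect."""

    def __init__(self, send):
        self.send = send      # callable that ships one event upstream
        self.online = True
        self.buffer = deque()

    def record(self, event_type, detail):
        event = {"ts": time.time(), "type": event_type, "detail": detail}
        if self.online:
            self.send(event)
        else:
            self.buffer.append(event)  # keep collecting while offline

    def reconnect(self):
        self.online = True
        while self.buffer:             # flush in original order
            self.send(self.buffer.popleft())

received = []
agent = EndpointAgent(received.append)
agent.online = False   # device goes off the network
agent.record("usb_copy", "large transfer to unapproved USB drive")
agent.reconnect()      # backlog reaches the backend only now
```

A real agent would also persist the buffer to disk and cap its size, but the visibility principle is the same: offline activity is captured, not lost.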

Does your organization maintain continuous monitoring and visibility when employee endpoints are on an island? If so, how do you do it?

Charles Leaver – Machine Learning Advances Are Good But There Will Be Consequences

Written by Roark Pollock and presented by Ziften CEO Charles Leaver


If you study history, you will find many examples of severe unintended consequences when new technology is introduced. It often surprises people that new technologies can be put to wicked purposes as well as the positive ones for which they are launched, but it happens on a very regular basis.

Consider train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, using SSL to hide malware from security controls has become more common simply because the legitimate use of SSL has made the technique more practical.

Since new technology is so often appropriated by bad actors, we have no reason to think this won’t be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are probably a few ways attackers could use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic with the intention of “poisoning” the machine learning model being built from that traffic. The goal of the attacker would be to trick the defender’s machine learning tool into misclassifying traffic, or to create such a high level of false positives that the defenders would dial back the fidelity of the alerts.
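To make the poisoning idea concrete, here is a toy illustration using a nearest-centroid traffic classifier; the features and every number are synthetic, and a real model would be far more complex:

```python
def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, benign_c, malicious_c):
    """Label a sample by its nearest class centroid."""
    d_b = (x[0] - benign_c[0]) ** 2 + (x[1] - benign_c[1]) ** 2
    d_m = (x[0] - malicious_c[0]) ** 2 + (x[1] - malicious_c[1]) ** 2
    return "benign" if d_b <= d_m else "malicious"

# Synthetic training traffic: (bytes per packet, connection rate)
benign = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)]
malicious = [(5.0, 5.0), (4.8, 5.2)]

attack = (4.0, 4.0)
print(classify(attack, centroid(benign), centroid(malicious)))   # prints malicious

# Attacker floods the network with fake "benign" traffic that looks
# like the upcoming attack, dragging the benign centroid toward it.
poisoned = benign + [(4.0, 4.0)] * 10
print(classify(attack, centroid(poisoned), centroid(malicious)))  # prints benign
```

The same attack now slips past the retrained model: the poisoned baseline has learned that attack-like traffic is normal.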

Machine learning will likely also be used as an offensive tool by attackers. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). The automation of the effort required to personalize a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a powerful economic incentive for attackers to adopt these methods.

Expect the kinds of breaches that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense in depth strategies, it is not a silver bullet. It must be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly conduct incident response at machine speed, further exacerbating the need for automated incident response capabilities.