Charles Leaver – Using Edit Distance Is Vital Part 2

Published by:

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits needed to turn one text string into another). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain features to pinpoint suspicious activity.

Case Study Background

What are bad actors trying to do with malicious domains? It might be simply using a spelling similar to a common domain name to fool careless users into viewing advertisements or picking up adware. Legitimate websites are slowly catching on to this technique, sometimes called typo-squatting.

Other malicious domains are the product of domain generation algorithms, which can be used to do all sorts of nefarious things like evade countermeasures that block known compromised sites, or overwhelm domain servers in a distributed denial of service attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here we will find out how. First, we exclude common domains, since these are normally benign. A list of common domains also provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domains and avoid subdomains (e.g. ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its possible neighbors in the same top-level domain (the last part of a domain name: classically .com, .org, and so on, though now it can be practically anything). The basic job is to find the closest neighbor in terms of edit distance. By finding domains that are one edit away from their closest neighbor, we can easily identify typo’d domains. By finding domains far from their neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domains in the edit distance space.

What were the Outcomes?

Let’s take a look at how these results appear in reality. Be careful when browsing to these domains, since they could contain malicious content!

Here are a few possible typos. Typo-squatters target popular domains since there are more chances someone will visit. Several of these are flagged as suspect by our threat feed partners, but there are some false positives as well, with cute names like “wikipedal”.

ed2-1

Here are some odd-looking domains far from their neighbors.

ed2-2

So now we have created two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of nearest neighbor, distance from nearest neighbor, and edit distance 1 from neighbor, indicating a risk of typo-squatting. Other features that might play well with these are other lexical features like word and n-gram distributions, entropy, and string length, plus network features like the total count of failed DNS requests.

Simplified Code that you can Play Around with

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work with most modern databases. Note that the Vertica editDistance function may vary in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

ed2-3
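If you’d rather experiment without a database, the same logic can be sketched in a few lines of Python. This is a toy stand-in for the SQL above: the Levenshtein-style function is standard, but the COMMON whitelist is an invented placeholder for a real baseline list like Quantcast’s.

```python
# Toy nearest-neighbor domain hunt; COMMON stands in for a real baseline list.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

COMMON = ["google", "wikipedia", "amazon", "ziften"]  # placeholder whitelist

def nearest_neighbor(candidate: str):
    """Return (closest common domain, raw distance, normalized distance)."""
    best = min(COMMON, key=lambda d: edit_distance(candidate, d))
    dist = edit_distance(candidate, best)
    return best, dist, dist / max(len(candidate), len(best))

# A small distance to a popular name suggests typo-squatting;
# a large normalized distance suggests an anomalous (possibly generated) name.
print(nearest_neighbor("wikipedal"))   # close to "wikipedia"
print(nearest_neighbor("xkqzv7pq"))    # far from every common domain
```

Sorting candidates by these two metrics reproduces the two hunts described above: a distance of 1 surfaces typos, while a large normalized distance surfaces outliers.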

Charles Leaver – Without Proper Management Your Infrastructure Will Not Be Completely Secure And Vice Versa

Written by Charles Leaver Ziften CEO

 

If your business computing environment is not properly managed, there is no way that it can be truly secure. And you can’t effectively manage those complex enterprise systems unless you’re confident that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to begin. Should you begin with security? Or should you begin with the management of your system? That is the wrong approach. Think of it instead like Reese’s Peanut Butter Cups: It’s not chocolate first. It’s not peanut butter first. Instead, both are blended together and treated as a single tasty treat.

Lots of companies, I would argue too many, are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely required, have separate budgets, certainly have different priorities, read different reports, and use different management platforms. On a daily basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team’s radar.

That’s bad, since both the IT and security teams must make assumptions. The IT team believes that everything is secure, unless somebody notifies them otherwise. For example, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Similarly, the security team assumes that the servers, desktops, and mobiles are working properly, operating systems and applications are fully up to date, patches have been applied, and so on.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be correct.

And once again, you can’t have a secure environment unless that environment is properly managed, and you can’t manage that environment unless it’s secure. Or to put it another way: An unsecured environment makes anything you do in the IT team suspect and irrelevant, and means that you can’t know whether the information you are seeing is correct or manipulated. It might all be fake news.

Bridging the IT / Security Gap

How to bridge that gap? It sounds easy but it can be difficult: Ensure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same individual or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let’s say it’s the CFO.

If the business does not have a secure environment, and there’s a breach, the value of the brand and the business can be reduced to zero. Similarly, if the users, devices, infrastructure, applications, and data aren’t managed well, the business can’t work effectively, and the value drops. As we have discussed, if it’s not well managed, it can’t be secure, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of company assets, which means making sure IT and security talk to each other, understand each other’s goals, and if possible, can see the same reports and data, filtered and displayed to be meaningful to their particular areas of responsibility.

That’s the thinking we adopted with the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need too, without coverage gaps that could undermine assumptions about the state of business security and IT management.

We need to ensure that our organization’s IT infrastructure is built on a secure foundation, and that our security is implemented on a well-managed base of hardware, infrastructure, software applications, and users. We can’t run at peak performance, and with full fiduciary duty, otherwise.

Charles Leaver – Offline Devices Must Not Escape Constant Endpoint Visibility

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO

 

A study recently completed by Gallup found that 43% of employed US citizens worked remotely for at least some of their working time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more workers working outside of traditional offices, and an increasing number of them doing so for more days of the week. And, of course, the number of connected devices that the typical employee uses has jumped as well, which helps drive the convenience and appeal of working away from the office.

This mobility undoubtedly makes for happier employees, and one hopes more productive workers, but the issues that these trends present for both systems and security operations teams must not be overlooked. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it even more difficult for IT and security teams to restrict what was previously considered higher-risk user behavior, for example working from a coffee shop. But that ship has sailed, and today security and systems management teams must be able to thoroughly monitor device, network, user, and application activity, detect anomalies and inappropriate actions, and enforce appropriate action or fixes no matter whether an endpoint is locally connected, remotely connected, or disconnected.

Additionally, the fact that many employees now routinely access cloud-based assets and applications, and have backup network attached storage (NAS) or USB drives at their homes, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remotely performed activity that no longer necessarily touches the corporate network. Offline activity presents the most severe example of the need for continuous endpoint monitoring. Plainly, network controls and network monitoring are of little use when a device is operating offline. Installing an appropriate endpoint agent is critical to ensure the capture of all important system and security data.

As an example of the kinds of offline activities that can be identified, a client was recently able to track, flag, and report unusual behavior on a business laptop. A high-level executive moved large amounts of endpoint data to an unapproved USB drive while the device was offline. Since the endpoint agent collected this behavioral data throughout the offline period, the client was able to see this unusual action and follow up appropriately. The continuous monitoring of device, application, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization maintain continuous monitoring and visibility when employee endpoints are on an island? If so, how do you do so?

Charles Leaver – Machine Learning Advances Are Good But There Will Be Consequences

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

If you study history you will observe many examples of severe unintended consequences when new technology is introduced. It often surprises people that new technologies can serve wicked purposes as well as the positive intentions for which they are brought to market, but it happens on a very regular basis.

For example, train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, using SSL to conceal malware from security controls has become more common simply because the legitimate use of SSL has made the technique more practical.

Since new technology is regularly appropriated by bad actors, we have no reason to think this won’t be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are likely a couple of ways in which attackers could use machine learning to their benefit. At a minimum, malware authors will test their new malware against the new class of advanced threat defense solutions in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic with the intention of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to trick the defender’s machine learning tool into misclassifying traffic, or to create such a high level of false positives that the defenders would dial back the fidelity of the signals.
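To make the poisoning idea concrete, here is a deliberately contrived sketch (invented numbers, not a real traffic model): the defender’s “model” is just a mean-based threshold learned from observed traffic, and the attacker floods the training window with junk to drag that threshold upward.

```python
# Toy illustration of training-data poisoning against a trivial anomaly detector.
# All numbers are invented; real traffic models are far more complex.

def train_threshold(benign_sizes):
    """'Model' = mean packet size of observed traffic; flag anything >50% above it."""
    mean = sum(benign_sizes) / len(benign_sizes)
    return mean * 1.5

clean_training = [100, 110, 90, 105, 95]   # normal traffic during training
attack_packet = 220                        # clearly anomalous vs the clean baseline

threshold = train_threshold(clean_training)
print(attack_packet > threshold)           # True: the attack is flagged

# The attacker floods the network with large but benign-looking packets during
# training, poisoning the baseline the defender learns from.
poisoned_training = clean_training + [300] * 10
threshold = train_threshold(poisoned_training)
print(attack_packet > threshold)           # False: the attack now slips through
```

The same dynamic applies, far less crudely, to real anomaly-detection models: whoever can influence the training data can influence the decision boundary.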

Machine learning will likely also be used as an offensive tool by attackers. For instance, some researchers predict that attackers will use machine learning techniques to refine their social engineering attacks (e.g., spear phishing). The automation of the effort required to personalize a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a powerful economic incentive for attackers to adopt the techniques.

Expect the kind of breaches that deliver ransomware payloads to increase dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection solutions, while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further sharpening the need for automated incident response capabilities.

Charles Leaver – Threat Indicators Can Be Observed From Command Usage

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO

 

Repeating a theme when it comes to computer security is never a bad thing. As advanced as some cyber attacks can be, you really have to watch for and understand the use of common, easily available tools in your environment. These tools are usually used by your IT personnel, would more than likely be whitelisted for use, and can be missed by security teams mining through all the applications that ‘could’ be executed on an endpoint.

Once someone has penetrated your network (which can be done in a variety of ways, and is another blog post for another day), signs of these programs/tools running in your environment should be examined to verify appropriate usage.

A few tools/commands and their functions:

Netstat – Information on the current connections on the network. This could be utilized to determine other systems within the network.

Powershell – Built-in Windows command-line shell that can carry out a variety of activities, such as getting critical information about the system, killing processes, adding or deleting files, and so on.

WMI – Another powerful built-in Windows function. Can move files around and gather key system information.

Route Print – Command to see the local routing table.

Net – Adding users/domains/accounts/groups.

RDP (Remote Desktop Protocol) – Program to access systems remotely.

AT – Scheduled tasks.

Looking for activity from these tools can be time consuming and sometimes overwhelming, but it is necessary to get a handle on who might be moving around your network. And not just what is happening in real time, but in the past as well, to see the path someone may have taken through the network. It’s usually not ‘patient zero’ that is the target; once attackers get a foothold, they may use these tools and commands to begin their reconnaissance and eventually migrate to a high-value asset. It’s that lateral movement that you want to find.

You must have the ability to collect the details discussed above, and the means to sort through them to discover, alert on, and investigate this data. You can use Windows Events to monitor various changes on a device and then filter that down.
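As a sketch of that filtering step, consider reducing a stream of process-start events to watched tools run by unexpected accounts. The event fields, user names, and the service-account name here are hypothetical examples, not the actual Windows Event or Ziften schema:

```python
# Toy filter over process-start events for commonly abused admin tools.
# Event records, user names, and the IT service account are hypothetical.

WATCHED = {"netstat.exe", "powershell.exe", "wmic.exe", "route.exe",
           "net.exe", "mstsc.exe", "at.exe"}
IT_ACCOUNTS = {"svc_deploy"}   # accounts expected to run these tools

events = [
    {"user": "svc_deploy", "process": "powershell.exe", "status": "Terminated"},
    {"user": "jsmith",     "process": "net.exe",        "status": "Terminated"},
    {"user": "jsmith",     "process": "outlook.exe",    "status": "Running"},
]

def suspicious(event_stream):
    """Watched tools run by non-IT accounts deserve a closer look."""
    return [e for e in event_stream
            if e["process"] in WATCHED and e["user"] not in IT_ACCOUNTS]

for e in suspicious(events):
    print(f'{e["user"]} ran {e["process"]} (status: {e["status"]})')
```

In practice the same filter would run over continuously collected endpoint data, so both live and historical activity can be examined.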

Looking at the screenshots below from our Ziften console, you can quickly see the difference between what our IT team used to push out changes on the network, versus somebody running a very similar command themselves. This could be just like what you would find if somebody did that from a remote location, say via an RDP session.

commands-to-watch01

commands-to-watch02

commands-to-watch03

commands-to-watch04

A fascinating side note on these screenshots is that in all of the cases, the Process Status is ‘Terminated’. You wouldn’t see this particular information during a live investigation, or if you were not continuously collecting the data. But since we are collecting all the information continuously, you have this historical data to look at. If instead you were seeing the Status as ‘Running’, that could indicate that someone is on that system right now.

This only scratches the surface of what you should be collecting and how to evaluate what is right for your network, which naturally will differ from that of others. But it’s a start. Malicious actors with intent to do you damage will usually look for the path of least resistance. Why create new and interesting tools, when much of what they need is already there and ready to go?

Charles Leaver – Incident Response And Forensic Analysis Are Related But Different

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

There may be a joke somewhere about the forensic analyst who was late to the incident response party. There is the seed of a joke in the idea at least, but of course you have to understand the differences between incident response and forensic analysis to appreciate the potential for humor.

Incident response and forensic analysis are related disciplines that can use similar tools and related data sets, but they also have some crucial differences. There are four particularly important differences between forensic analysis and incident response:

– Objectives.
– Data requirements.
– Team skills.
– Benefits.

The difference in the goals of forensic analysis and incident response is perhaps the most essential. Incident response is focused on determining a fast (i.e., near real-time) response to an immediate threat or issue. For example, a house is on fire, and the firefighters who arrive to put that fire out are doing incident response. Forensic analysis is typically performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For example, a fire investigator might analyze the remains of that house fire to determine the total damage to the property, the cause of the fire, and whether the origin was such that other houses are also at risk. To put it simply, incident response is focused on containment of a threat or issue, while forensic analysis is focused on a full understanding and comprehensive remediation of a breach.

A second major difference between the disciplines is the data resources needed to accomplish the objectives. Incident response teams typically only require short-term data sources, frequently no more than a month or so, while forensic analysis teams usually need much longer-lived logs and files. Bear in mind that the average dwell time of a successful attack is somewhere between 150 and 300 days.

While there is commonality in the personnel skills of incident response and forensic analysis teams, and in fact incident response is often considered a subset of the broader forensic discipline, there are important differences in job requirements. Both types of work require strong log analysis and malware analysis skills. Incident response requires the ability to quickly isolate an infected device and to establish means to remediate or quarantine it. Communications tend to be with other security and operations personnel. Forensic analysis typically requires communications with a much broader set of departments, including HR, compliance, operations, and legal.

Not surprisingly, the perceived benefits of these activities also differ.

The ability to eliminate a threat on one machine in near real time is a major determinant in keeping breaches isolated and limited in impact. Incident response, along with proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response’s less glamorous relative. Nevertheless, the benefits of this work are undeniable. A thorough forensic investigation permits the remediation of all threats through careful analysis of an entire attack chain of events. And that is nothing to laugh about.

Do your endpoint security processes enable both immediate incident response and long-term historical forensic analysis?

Charles Leaver – Using Edit Distance Is Vital Part 1

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

Why are the same tricks used by attackers over and over? The simple answer is that they still work today. For instance, Cisco’s 2017 Cybersecurity Report tells us that after years of decline, spam email with malicious attachments is again on the rise. In that conventional attack vector, malware authors typically mask their activity by using a filename similar to a common system process.

There is not necessarily a connection between a file’s name and its contents: anybody who has tried to hide sensitive information by giving it a boring name like “taxes”, or changed the extension on a file attachment to circumvent email rules, is aware of this idea. Malware creators know this as well, and will often name their malware to look like common system processes. For instance, “iexplore.exe” is Internet Explorer, but “iexplorer.exe” with an extra “r” could be anything. It’s easy even for experts to overlook this small difference.

The opposite issue, known .exe files running in unusual locations, is simple to solve using string functions and SQL set operations.

edit-distance-1
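The SQL shown above (as an image) amounts to simple set logic; a rough Python sketch of the same idea, with invented paths and an invented expected-location table, might be:

```python
# Toy version of the "known .exe in an unusual location" check:
# for each known executable name, list observed paths outside its expected home.
# The EXPECTED table and observed paths are invented examples.

EXPECTED = {
    "svchost.exe":  {r"c:\windows\system32\svchost.exe"},
    "iexplore.exe": {r"c:\program files\internet explorer\iexplore.exe"},
}

observed = [
    r"c:\windows\system32\svchost.exe",
    r"c:\users\jsmith\appdata\local\temp\svchost.exe",   # suspicious!
    r"c:\program files\internet explorer\iexplore.exe",
]

def unusual_locations(observed_paths):
    """Return observed paths whose filename is known but whose location is not."""
    hits = []
    for path in observed_paths:
        name = path.rsplit("\\", 1)[-1]
        if name in EXPECTED and path not in EXPECTED[name]:
            hits.append(path)
    return hits

print(unusual_locations(observed))  # the temp-dir svchost.exe stands out
```

The set-difference step maps directly onto the SQL approach: known names, minus their whitelisted locations.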

What about the other scenario, finding close matches to the executable name? Most people begin their hunt for close string matches by sorting data and visually scanning for discrepancies. This usually works well for a small data set, maybe even a single system. Finding these patterns at scale, however, requires an algorithmic approach. One established technique for “fuzzy matching” is edit distance.
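Edit distance here means Levenshtein distance, which can be computed with a classic dynamic program. A minimal Python sketch, checked against the filenames discussed in this post:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

# One extra "r" puts a fake just one edit away from the real thing:
print(levenshtein("iexplore.exe", "iexplorer.exe"))  # 1
print(levenshtein("svchost.exe", "cvshost.exe"))     # 2
```

Dividing the distance by the longer string’s length gives the normalized variant used later in this post.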

What’s the best approach to computing edit distance? For Ziften, our technology stack includes HP Vertica, which makes this task easy. The web has plenty of data scientists and data engineers singing Vertica’s praises, so it will suffice to mention that Vertica makes it simple to create custom functions that take full advantage of its power, from C++ power tools to statistical modeling scalpels in R and Java.

This Git repo is maintained by Vertica enthusiasts working in industry. It’s not a supported offering, but the Vertica team is certainly aware of it, and moreover is thinking every day about ways to make Vertica better for data scientists: a great space to watch. Most importantly, it includes a function to compute edit distance! There are also other natural language processing tools here, like word stemmers and tokenizers.

Using edit distance on the top executable paths, we can quickly find the nearest match to each of our top hits. This is an interesting data set: we can sort by distance to find the closest matches over the whole data set, or we can sort by frequency of the top path to see the closest match to our most frequently used processes. This data can also surface on contextual “report card” pages, to show, e.g., the top five nearest strings for a given path. Below is a toy example to give a sense of usage, based on real data ZiftenLabs observed in a customer environment.

edit-distance-2

Setting an upper threshold of 0.2 seems to produce good results in our experience, but the point is that these thresholds can be tuned to fit specific use cases. Did we find any malware? We see that “teamviewer_.exe” (should be just “teamviewer.exe”), “iexplorer.exe” (should be “iexplore.exe”), and “cvshost.exe” (should be “svchost.exe”, unless perhaps you work for CVS pharmacy…) all look unusual. Since we’re already in our database, it’s also trivial to retrieve the associated MD5 hashes, Ziften suspicion scores, and other attributes for a deeper dive.

edit-distance-3

In this particular real-world environment, it turned out that teamviewer_.exe and iexplorer.exe were portable applications, not known malware. We helped the customer with further investigation of the user and system where we observed the portable applications, since use of portable apps from a USB drive may be evidence of suspicious activity. The more troubling find was cvshost.exe. Ziften’s intelligence feeds indicate that this is a suspicious file. Searching for the MD5 hash of this file on VirusTotal confirms the Ziften data, indicating that this is a potentially serious Trojan that could be a component of a botnet or doing something even more harmful. Once the malware was discovered, however, it was easy to fix the problem, and keep it fixed, using Ziften’s ability to kill and persistently block processes by MD5 hash.

Even as we develop sophisticated predictive analytics to spot malicious patterns, it is essential that we continue to improve our capabilities to hunt for known patterns and old techniques. Just because new threats emerge does not mean the old ones disappear!

If you liked this post, watch this space for the second part of this series, where we will apply this method to hostnames to detect malware droppers and other malicious sites.

Charles Leaver – Defining Endpoints And Protecting Them Will Be More Challenging As Connected Devices Increase

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

Just a short time ago, everybody knew what you meant if you brought up an endpoint. If someone wanted to sell you an endpoint security product, you knew what devices that software was going to protect. But when I hear somebody casually talk about endpoints today, The Princess Bride’s Inigo Montoya comes to mind: “You keep using that word. I do not think it means what you think it means.” Today an endpoint could be nearly any kind of device.

In all honesty, endpoints are so diverse these days that people have taken to calling them “things.” According to Gartner, at the end of 2016 there were over six billion “things” connected to the internet. The consulting firm predicts that this number will shoot up to twenty-one billion by the year 2020. The business uses of these things will be both generic (e.g. connected light bulbs and A/C systems) and industry specific (e.g. oil rig safety monitoring). For IT and security teams charged with connecting and securing endpoints, this is only half of the new challenge, however. The adoption of virtualization technology has redefined what an endpoint is, even in environments in which these teams have traditionally operated.

The last decade has seen a huge change in the way end users access information. Physical devices continue to become more mobile, with many information workers now doing most of their computing and communication on laptops and smartphones. More importantly, everyone is becoming an information worker. Today, better instrumentation and monitoring have enabled levels of data collection and analysis that can make the insertion of information technology into almost any job profitable.

At the same time, more conventional IT assets, especially servers, are being virtualized to remove some of the traditional limitations of tying those assets to physical devices.

These two trends together will affect security teams in crucial ways. The totality of “endpoints” will include billions of long-lived and insecure IoT endpoints, as well as billions of virtual endpoint instances that will be scaled up and down as needed and migrated to different physical locations on demand.

Enterprises will have very different worries about these two general types of endpoints. Over their lifetimes, IoT devices will have to be secured against a host of risks, some of which have yet to be dreamed up. Tracking and safeguarding these devices will require advanced detection capabilities. On the positive side, it will be possible to maintain well-defined log data to enable forensic investigation.

Virtual endpoints, on the other hand, present their own crucial concerns. The ability to change their physical location makes it much more difficult to ensure the right security policies are always attached to the endpoint. The practice of reimaging virtual endpoints can make forensic investigation difficult, as essential data is usually lost when a new image is applied.

So no matter what word or words are used to describe your endpoints (endpoint, system, client device, user device, mobile device, server, virtual machine, container, cloud workload, IoT device, and so on), it is important to understand what someone means when they use the term endpoint.

Charles Leaver – Compromise Is Inevitable, Detection Is Vital

Written By Dr Al Hartmann And Presented By Charles Leaver CEO Ziften

If Prevention Has Failed Then Detection Is Crucial

The final scene of the well-known Vietnam War film Platoon depicts a North Vietnamese Army regiment in a surprise nighttime attack breaching the concertina wire perimeter of an American Army battalion, overrunning it, and butchering the startled defenders. The desperate company commander, recognizing their dire defensive predicament, orders his air support to strike his own position: “For the record, it’s my call – Dump everything you’ve got left on my position!” Moments later the battlefield is immolated in a napalm hellscape.

Although this is physical conflict, it illustrates two aspects of cybersecurity: (1) you have to handle inevitable perimeter breaches, and (2) it can be bloody hell if you do not detect early and respond forcefully. MITRE Corporation has been leading the call to rebalance cybersecurity priorities to place due emphasis on detecting breaches in the network interior rather than simply focusing on penetration prevention at the network perimeter. Rather than defense in depth, the latter produces a flawed “tootsie pop” defense: hard, crunchy shell, soft chewy center. “We could see that it wouldn’t be a question of if your network will be breached but when it will be breached,” explains Gary Gagnon, MITRE’s senior vice president, director of cybersecurity, and chief security officer, writing in a MITRE blog. “Today, companies are asking ‘How long have the hackers been within? How far have they got?’”

Some call this the “assumed breach” approach to cybersecurity, or as posted to Twitter by F-Secure’s Chief Research Officer:

Q: How many of the Fortune 500 are compromised? A: 500.

This is based on the premise that any sufficiently complex cyber environment contains an existing compromise, and that Fortune 500 businesses are of magnificently complex scale.

Shift the Burden of Perfect Execution from the Defenders to the Attackers

The standard cybersecurity viewpoint, derived from the legacy perimeter defense model, has been that the attacker only has to be right once, while the defender must be right every time. A sufficiently resourced and persistent attacker will eventually achieve penetration. And time to successful penetration decreases with the increasing size and complexity of the target enterprise.

A perimeter- or prevention-reliant cyber defense model essentially demands perfect execution by the defender, while delivering success to any sufficiently sustained attack – a recipe for certain cyber disaster. For example, a leading cybersecurity red team reports successful enterprise penetration in under three hours in more than 90% of its client engagements – and these white hats are limited to ethical methods. Your enterprise’s black hat attackers are not so constrained.

To be viable, the cyber defense strategy must turn the tables on the attackers, shifting to them the unachievable burden of perfect execution. That is the rationale for a strong detection capability that continuously monitors endpoint and network behavior for any unusual indications or observed attacker footprints inside the perimeter. The more sensitive the detection capability, the more care and stealth the attackers must exercise in perpetrating their kill chain sequence, and the more time and labor and talent they must invest. The defenders need only observe a single attacker misstep to uncover their foot tracks and unravel the attack kill chain. Now the defenders become the hunter, the attackers the hunted.

The MITRE ATT&CK Model

MITRE offers a comprehensive taxonomy of attacker footprints, covering the post-compromise segment of the kill chain, known by the acronym ATT&CK, for Adversarial Tactics, Techniques, and Common Knowledge. ATT&CK project team leader Blake Strom states, “We decided to focus on the post-attack period [the part of the kill chain outlined in orange below], not only because of the strong probability of a breach and the dearth of actionable information, but also because of the many opportunities and intervention points available for effective defensive action that do not necessarily rely on prior knowledge of adversary tools.”

 

[MITRE figure: attack kill chain with ATT&CK post-compromise phases highlighted]

 

As shown in the MITRE figure above, the ATT&CK model offers additional granularity on the post-compromise phases of the attack kill chain, breaking these out into 10 tactic categories. Each tactic category is further detailed into a list of techniques an adversary may employ in carrying out that tactic. The January 2017 update of the ATT&CK matrix lists 127 techniques across its 10 tactic categories. For example, Registry Run Keys / Start Folder is a technique in the Persistence category, Brute Force is a technique in the Credential Access category, and Command Line Interface is a technique in the Execution category.
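The tactic-to-technique structure described above can be sketched as a simple lookup table. The snippet below is an illustrative subset of the matrix only (not the full 127-technique listing), and the helper function name is ours, not part of any MITRE tooling:

```python
# Illustrative (partial) sketch of the ATT&CK matrix structure:
# each tactic category maps to a list of techniques.
ATTACK_MATRIX = {
    "Persistence": ["Registry Run Keys / Start Folder", "Scheduled Task"],
    "Credential Access": ["Brute Force", "Credential Dumping"],
    "Execution": ["Command Line Interface", "PowerShell"],
}

def tactic_for(technique):
    """Return the tactic category a given technique belongs to, or None."""
    for tactic, techniques in ATTACK_MATRIX.items():
        if technique in techniques:
            return tactic
    return None

print(tactic_for("Brute Force"))  # Credential Access
```

In practice a detection pipeline would use such a mapping in reverse: an observed endpoint behavior is tagged with its technique, which immediately places it in a tactic category and a position in the kill chain.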

Leveraging Endpoint Detection and Response (EDR) in the ATT&CK Model

Endpoint Detection and Response (EDR) products, such as Ziften provides, offer crucial visibility into adversary use of techniques listed in the ATT&CK model. For example, Registry Run Keys / Start Folder technique use is reported, as is Command Line Interface use, because both involve readily observable endpoint behaviors. Brute Force use in the Credential Access category should be blocked by design in every authentication architecture and be visible from the resulting account lockout. But even here the EDR product can report events such as failed login attempts, where an attacker may hazard a few guesses while staying under the account lockout attempt limit.
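The “low and slow” failed-login pattern above can be sketched as a simple counting rule: alert when an account accumulates sustained authentication failures that stay just under the lockout limit. Everything here is an assumption for illustration – the thresholds, the event tuple format, and the function name are hypothetical, not any particular EDR product’s API:

```python
from collections import defaultdict

LOCKOUT_THRESHOLD = 5   # hypothetical policy: account locks on the 5th failure
ALERT_THRESHOLD = 3     # alert on sustained failures below the lockout limit
WINDOW_SECONDS = 600    # sliding window for counting recent failures

def low_and_slow_alerts(events):
    """Flag accounts with repeated failed logins that stay under lockout.

    `events` is an iterable of (timestamp, account, success) tuples, as an
    endpoint agent might collect from local authentication logs.
    """
    failures = defaultdict(list)
    alerts = set()
    for ts, account, success in events:
        if success:
            continue
        # Keep only failures inside the sliding window, then add this one.
        recent = [t for t in failures[account] if ts - t <= WINDOW_SECONDS]
        recent.append(ts)
        failures[account] = recent
        if ALERT_THRESHOLD <= len(recent) < LOCKOUT_THRESHOLD:
            alerts.add(account)
    return alerts

events = [(0, "svc_backup", False), (120, "svc_backup", False),
          (240, "svc_backup", False), (300, "alice", True)]
print(low_and_slow_alerts(events))  # {'svc_backup'}
```

The point of the sketch is the gap between the two thresholds: an attacker pacing guesses to dodge the lockout mechanism still leaves a countable trail for the defender.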

For attentive defenders, any technique use may be the attack giveaway that unravels the whole kill chain. EDR solutions compete on their technique observation, reporting, and alerting capabilities, as well as on their analytics capacity to perform more of the attack pattern detection and kill chain reconstruction, in support of the security analysts staffing the enterprise SOC. Here at Ziften we will outline more EDR product capabilities supporting the ATT&CK post-compromise detection model in future posts in this series.

 

Charles Leaver – This Year’s RSA Message Is That Customized Security Solutions Are in Demand

Published by:

Written By Michael Vaughan And Presented By Charles Leaver Ziften CEO

 

Security, network, and operational teams require more customized products in 2017

Many of us have attended security conventions for years, but none brings the same high level of excitement as RSA – where the world talks security. Of all the conventions I have attended and worked, nothing comes close to the enthusiasm for new technology people exhibited this past week in downtown San Francisco.

After taking a few days to digest the many discussions about the requirements and constraints of current security tech, I have been able to synthesize a single theme among attendees: people want customized solutions that fit their environment and will work across multiple internal teams.

By “people,” I mean everyone in attendance regardless of technology sector. Operations professionals, security professionals, network veterans, and even user behavior analysts visited the Ziften booth and shared their stories with us.

Everybody seemed more ready than ever to discuss their needs and wants for their environment. These attendees had their own set of goals they wished to achieve within their department, and they were hungry for answers. Since the Ziften Zenith solution provides such broad visibility into enterprise devices, it is no surprise that our booth stayed crowded with people eager to learn more about a new, refreshingly simple endpoint security technology.

Attendees came with complaints about myriad enterprise-centric security issues and sought deeper insight into what is really happening on their network and on devices traveling in and out of the office.

End users of old-school security products are on the hunt for newer, more pivotal software.

If I could pick just one of the frequent questions I received at RSA to share, it would be this one:

“What exactly is endpoint discovery?”

1) Endpoint discovery: Ziften exposes a historical view of unmanaged devices that have connected to other enterprise endpoints at some point in time. Ziften allows users to discover known and unknown entities that are active or have interacted with known endpoints.

a. Unmanaged Asset Discovery: Ziften uses our extension platform to reveal these unknown entities operating on the network.

b. Extensions: These are custom-fit solutions tailored to the user’s specific wants and needs. The Ziften Zenith agent can execute an assigned extension once, on a schedule, or persistently.

Usually, after the above explanation came the real reason they were visiting:

People are searching for a wide variety of solutions for numerous departments, up to and including executives. This is where working at Ziften makes answering this question a real treat.

Only a portion of RSA attendees are security specialists. I spoke with many network, operations, and endpoint management professionals, as well as vice presidents, general managers, and channel partners.

They clearly all use and understand the need for quality security software, but apparently find the translation to business value missing among security vendors.

NetworkWorld’s Charles Araujo phrased the issue quite well in an article last week:

Organizations must also rationalize security data in a business context and manage it holistically as part of the overall IT and business operating model. A group of vendors is also attempting to tackle this challenge…

Ziften was among only three companies highlighted.

After listening to the wants and needs of people from different business-critical backgrounds and explaining the capabilities of Ziften’s extension platform, I typically described how Ziften would tailor an extension to address their need, or I gave a short demonstration of an extension that would help them overcome a challenge.

2) Extension Platform: Customized, actionable solutions.

a. SKO Silos: Extensions based on fit and need (operations, network, endpoint, etc.).

b. Customized Requests: Need something you do not see? We can fix that for you.

3) Enhanced Forensics:

a. Security: Threat management, threat assessment, vulnerabilities, suspicious metadata.

b. Operations: Compliance, License Rationalization, Unmanaged Assets.

c. Network: Ingress/Egress IP movement, Domains, Volume metadata.

4) Visibility within the network – not just what enters and leaves.

a. ZFlow: Finally see the network traffic inside your enterprise.

Needless to say, everyone I spoke to at our booth quickly understood the key benefit of having a product such as Ziften Zenith running in and across their enterprise.

Forbes writer Jason Bloomberg said it very well when he recently described the future of enterprise security software and how all signs point toward Ziften leading the way:

Perhaps the broadest disruption: vendors are improving their ability to understand how bad actors behave, and can thus take steps to prevent, detect or mitigate their malicious activities. In particular, today’s vendors understand the ‘Cyber Kill Chain’ – the steps a skilled, patient hacker (known in the biz as an advanced persistent threat, or APT) will take to achieve his or her nefarious objectives.

The product of U.S. defense contractor Lockheed Martin, the Cyber Kill Chain consists of seven links: reconnaissance, weaponization, delivery, exploitation, installation, establishing command and control, and actions on objectives.
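The seven links can be written down as an ordered list, which also makes the “target one or more of these links” idea concrete: the earlier a defender detects and intervenes, the more of the chain the attacker never completes. The helper below is an illustrative sketch of ours, not part of the Lockheed Martin framework itself:

```python
# The seven Cyber Kill Chain links, in attack order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command and control", "actions on objectives",
]

def links_remaining(detected_stage):
    """Return the kill chain links an attacker has yet to complete
    if defenders detect and intervene at `detected_stage`."""
    idx = KILL_CHAIN.index(detected_stage)
    return KILL_CHAIN[idx + 1:]

print(links_remaining("delivery"))
# ['exploitation', 'installation', 'command and control', 'actions on objectives']
```

Detection at “delivery” denies the attacker four subsequent links; detection only at “actions on objectives” leaves nothing to prevent, which is the argument for post-compromise visibility throughout the chain.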

Today’s more innovative vendors target one or more of these links, with the goal of preventing, detecting or mitigating the attack. Five vendors at RSA stood out in this category.

Ziften provides an agent-based approach to tracking the behavior of users, devices, applications, and network components, both in real time and across historical data.

In real time, analysts use Ziften for threat identification and prevention, while they use the historical data to uncover steps in the kill chain for mitigation and forensic purposes.