Monthly Archives: March 2015

Charles Leaver – A Reliable Endpoint Monitoring System Needs More Than Narrow Indicators Of Compromise


Presented By Charles Leaver And Written By Dr Al Hartmann Of Ziften Inc.

 

The Breadth Of The Indicator – Broad Versus Narrow

A thorough report of a cyber attack will typically include indicators of compromise (IoCs). Often these are narrow in scope, referencing a specific attack group as observed in a particular attack on one organization over a limited period of time. Typically these narrow indicators are specific artifacts of an observed attack that on their own constitute strong evidence of compromise. For that attack they offer high specificity, but often at the cost of low sensitivity to similar attacks that use different artifacts.

Essentially, narrow indicators offer very limited scope, which is why they exist by the billions in continually expanding databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, filepaths, intrusion detection rules, and so on. Ziften’s continuous endpoint monitoring solution aggregates a number of these third-party databases and threat feeds into the Ziften Knowledge Cloud, to take advantage of known-artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is essential because these artifacts are short-lived: attackers constantly conceal the details of their cyber attacks to frustrate this narrow IoC detection approach. This is why a continuous monitoring solution must archive monitoring results for an extended period (relative to industry-reported attacker dwell times), to provide a sufficient lookback horizon.
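
To make the retrospection concrete, here is a minimal sketch of matching a newly received narrow-IoC feed against archived endpoint monitoring records. The record fields, feed layout, and values are illustrative assumptions, not Ziften’s actual schema or data.

```python
# Minimal sketch: retrospectively matching a newly received narrow-IoC feed
# against an archive of endpoint monitoring records. Record and feed formats
# here are hypothetical, not Ziften's actual schema.

archived_events = [
    {"host": "wkstn-042", "ts": "2015-01-12T09:14:00", "sha256": "9f2c...", "remote_ip": "203.0.113.7"},
    {"host": "wkstn-118", "ts": "2015-02-03T22:41:00", "sha256": "77ab...", "remote_ip": "198.51.100.23"},
]

ioc_feed = {
    "sha256": {"9f2c..."},          # known-bad binary hashes
    "remote_ip": {"203.0.113.7"},   # known C2 addresses
}

def retrospective_hits(events, feed):
    """Return archived events that match any indicator in the feed."""
    hits = []
    for ev in events:
        for field, bad_values in feed.items():
            if ev.get(field) in bad_values:
                hits.append((ev["host"], ev["ts"], field, ev[field]))
    return hits

for host, ts, field, value in retrospective_hits(archived_events, ioc_feed):
    print(f"{host} @ {ts}: archived {field} {value} matches current IoC feed")
```

The value of the lookback horizon is visible here: the feed arrives today, but the matches are against events recorded weeks or months earlier.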

Narrow IoCs have significant detection value, but they are largely ineffective at detecting new cyber attacks by skilled adversaries. New attack code can be pre-tested in lab environments against common enterprise security products to verify that no detectable artifacts are reused. Security products that operate purely as black/white classifiers, i.e. rendering an explicit verdict of malicious or benign, suffer from this weakness: the approach is easily evaded. The defended organization may be thoroughly penetrated for months or years before any detectable artifacts are identified (after extensive investigation) for that particular attack.

In contrast to the ease with which cyber attack artifacts can be obscured by ordinary hacker toolkits, the particular tactics and techniques – the modus operandi – used by attackers have persisted over several decades. Typical techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, modification of sensitive directories and registry areas, new scheduled tasks, memory and drive corruption, credential compromise, malicious scripting, and many others are broadly common. Proper system logging and monitoring can detect much of this attack activity when appropriately paired with security analytics that prioritize the highest-risk observations. This eliminates the attacker’s opportunity to pre-test the evasiveness of malicious code, because the quantification of risk is not black and white, but nuanced shades of gray. In particular, all endpoint risk is variable and relative, across any network/user environment and time period, and that environment (and its temporal characteristics) cannot be duplicated in any lab. The standard attacker concealment approach is foiled.
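
As an illustration of this shades-of-gray scoring, the sketch below combines generic technique observations into a graded risk score rather than a binary verdict. The indicator names and weights are invented for illustration and are not Ziften’s actual analytics.

```python
# Minimal sketch of graded (non-black/white) endpoint risk scoring over
# generic technique indicators. Names and weights are illustrative only.

TECHNIQUE_WEIGHTS = {
    "new_service_installed":     0.25,
    "module_injection":          0.35,
    "sensitive_path_write":      0.30,
    "new_scheduled_task":        0.20,
    "credential_access_attempt": 0.40,
    "script_interpreter_spawn":  0.15,
}

def endpoint_risk(observed_indicators):
    """Combine observed generic indicators into a 0..1 risk score.

    Uses 1 - product(1 - w) so independent weak signals compound,
    instead of a single signal forcing a binary verdict.
    """
    score = 1.0
    for name in observed_indicators:
        score *= 1.0 - TECHNIQUE_WEIGHTS.get(name, 0.0)
    return 1.0 - score

print(endpoint_risk({"new_service_installed", "sensitive_path_write"}))  # ~0.475
```

Because the score is continuous and relative to the environment, an attacker cannot pre-test a payload against it the way they can against a fixed malicious/benign classifier.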

In future posts we will examine Ziften endpoint risk analysis in more detail, along with the vital relationship between endpoint security and endpoint management. “You can’t protect what you don’t manage, you can’t manage what you don’t measure, you can’t measure what you don’t track.” Organizations get breached because they have less oversight and control of their endpoint environment than the cyber attackers have. Look out for future posts…

 

The Ziften Continuous Endpoint Monitoring Advantage: Carbanak Case Study Part 3 – Charles Leaver


Presented By Charles Leaver And Written By Dr Al Hartmann

Part 3 in a 3 part series

 

Below are excerpted indicators of compromise (IoCs) from the technical reports on the Anunak/Carbanak APT attacks, with discussion of how each would be detected by the Ziften continuous endpoint monitoring solution. The Ziften system concentrates on generic indicators of compromise that have remained consistent across decades of hacker attacks and cyber security experience. These generic IoCs can be recognized on any operating system, including Linux, OS X and Windows. Specific indicators of compromise also exist, pointing to C2 infrastructure or particular attack code instances, but these are short-lived and not normally reused in fresh attacks. There are billions of such artifacts in the security world, with thousands added every day. Generic IoCs are embedded in the Ziften security analytics for the supported operating systems, while specific IoCs reach the Ziften Knowledge Cloud through subscriptions to a number of industry threat feeds and watchlists that aggregate them. Both have value and help triangulate attack activity.

1. Exposed vulnerabilities

Excerpt: All observed cases used spear phishing emails with Microsoft Word 97–2003 (.doc) files attached or CPL files. The doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761).

Comment: While not strictly an IoC, a critical exposed vulnerability is a major hacker exploitation vector and a large warning sign that raises the threat score (and the SIEM priority) for the endpoint, particularly if other indicators are also present. Such vulnerabilities are indicators of lax patch management and vulnerability lifecycle management, which weakens the overall cyber defense posture.
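
As a simple illustration, the sketch below bumps an endpoint’s risk score for each unpatched, known-exploited CVE. The CVE list comes from the excerpt above; the weights and function names are assumptions.

```python
# Illustrative sketch: raising an endpoint's risk score when critical,
# actively exploited CVEs are unpatched. CVEs from the Carbanak reports;
# the 0.15 weight is an assumption for illustration.

EXPLOITED_CVES = {"CVE-2012-0158", "CVE-2013-3906", "CVE-2014-1761"}

def vulnerability_risk_bump(unpatched_cves, base_score):
    """Add weight for each unpatched, known-exploited CVE on the endpoint."""
    exposed = EXPLOITED_CVES & set(unpatched_cves)
    return min(1.0, base_score + 0.15 * len(exposed)), sorted(exposed)

score, exposed = vulnerability_risk_bump(["CVE-2014-1761", "CVE-2010-0001"], 0.2)
print(score, exposed)  # ~0.35 ['CVE-2014-1761']
```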

2. Suspect Locations

Excerpt: Command and Control (C2) servers located in China have been identified in this campaign.

Comment: Geolocation of endpoint network touches, with scoring by location, contributes to the risk score that drives up the SIEM priority. There can be valid reasons for contact with Chinese servers, and some organizations may have installations located in China, but this should be validated with spatial and temporal anomaly checking. IP address and domain information should be attached to the resulting SIEM alarm so that SOC triage can be conducted rapidly.
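
A minimal sketch of this kind of geolocation scoring follows. The geolocate() stub and country weights are placeholders; a real deployment would use a GeoIP database and site-specific baselines.

```python
# Sketch of geolocation-weighted scoring of endpoint network touches.
# Country weights and the geolocate() stub are assumptions.

SUSPECT_COUNTRY_WEIGHT = {"CN": 0.3, "UA": 0.3, "FR": 0.1}

def geolocate(ip):
    # Hypothetical stub; substitute a real GeoIP lookup.
    return {"203.0.113.7": "CN", "198.51.100.23": "UA"}.get(ip, "US")

def score_network_touches(ips, baseline_countries):
    """Raise risk for contacts to suspect geos not in this host's baseline."""
    score, flagged = 0.0, []
    for ip in ips:
        country = geolocate(ip)
        if country not in baseline_countries:
            score += SUSPECT_COUNTRY_WEIGHT.get(country, 0.05)
            flagged.append((ip, country))  # attach to the SIEM alarm for triage
    return min(score, 1.0), flagged

print(score_network_touches(["203.0.113.7"], baseline_countries={"US"}))
```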

3. New Binaries

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim’s system.

Comment: Any brand-new binary is suspicious, but not all of them should raise alarms. Image metadata should be evaluated for a plausible pattern, for example a new app, or a new version of an existing app from an existing vendor on a likely file path for that vendor, and so on. Hackers will try to spoof whitelisted apps, so signing data can be compared, along with file size and filepath, to filter out the obvious cases.
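
The sketch below illustrates this style of metadata vetting for a newly observed binary. The field names, vendor paths, and the prevalence threshold are illustrative assumptions.

```python
# Sketch of vetting a newly seen binary by its metadata rather than a
# binary good/bad verdict. Field names and vendor paths are illustrative.

KNOWN_VENDOR_PATHS = {
    "Mozilla": r"C:\Program Files\Mozilla Firefox",
    "Microsoft": r"C:\Windows\System32",
}

def vet_new_binary(image):
    """Return suspicion reasons for a new executable image."""
    reasons = []
    vendor = image.get("signer_org")
    if not image.get("signed"):
        reasons.append("unsigned image")
    elif vendor in KNOWN_VENDOR_PATHS and not image["path"].startswith(KNOWN_VENDOR_PATHS[vendor]):
        reasons.append(f"claims {vendor} but runs outside the usual vendor path")
    if image.get("prevalence", 0) <= 1:
        reasons.append("low prevalence across the endpoint population")
    return reasons

print(vet_new_binary({
    "path": r"C:\Users\bob\AppData\svchost.exe",
    "signed": True, "signer_org": "Microsoft", "prevalence": 1,
}))
```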

4. Uncommon Or Sensitive Filepaths

Excerpt: Carbanak copies itself into “%system32%\com” with the name “svchost.exe” and the file attributes: system, hidden and read-only.

Comment: Any write into the System32 filepath is suspicious, as it is a sensitive system folder, so it is immediately subject to anomaly checking. A classic anomaly would be svchost.exe, a critical system process image, appearing in the unusual location of the com subdirectory.
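
Here is a minimal sketch of that anomaly check: a well-known system image name observed outside its expected directory. The expected-location table is an illustrative subset.

```python
# Sketch: flag known system image names appearing outside their expected
# directories, e.g. svchost.exe in %system32%\com.

import ntpath

EXPECTED_LOCATION = {
    "svchost.exe": r"c:\windows\system32",
    "lsass.exe":   r"c:\windows\system32",
}

def misplaced_system_image(full_path):
    """True if a well-known system binary runs from an unexpected folder."""
    folder, name = ntpath.split(full_path.lower())
    expected = EXPECTED_LOCATION.get(name)
    return expected is not None and folder != expected

print(misplaced_system_image(r"C:\Windows\System32\com\svchost.exe"))  # True
print(misplaced_system_image(r"C:\Windows\System32\svchost.exe"))      # False
```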

5. New Autostarts Or Services

Excerpt: To ensure that Carbanak has autorun privileges, the malware creates a new service.

Comment: New autostarts and services are common with malware, and the analytics always examine them. Anything of low prevalence is suspicious, and if checking the image hash against industry watchlists shows it is unknown to the majority of antivirus engines, that raises suspicion further.
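
A sketch of this assessment follows, combining population prevalence with a hash reputation lookup. The hash_reputation() stub stands in for a query against aggregated industry watchlists.

```python
# Sketch: examine a newly registered service by population prevalence and
# hash reputation. The reputation lookup is a stub; in practice it would
# query aggregated industry threat feeds and watchlists.

def hash_reputation(sha256):
    # Hypothetical stub: "known_bad", "known_good", or "unknown"
    # (i.e. unknown to the majority of antivirus engines).
    return "unknown"

def assess_new_service(service_hash, hosts_with_service, total_hosts):
    prevalence = hosts_with_service / total_hosts
    reputation = hash_reputation(service_hash)
    suspicious = prevalence < 0.01 or reputation != "known_good"
    return {"prevalence": prevalence, "reputation": reputation,
            "suspicious": suspicious}

print(assess_new_service("77ab...", hosts_with_service=1, total_hosts=5000))
```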

6. Low Prevalence File In High Prevalence Folder

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it stores commands to be executed.

Comment: This is a classic example of “one of these things is not like the other” that is simple for the security analytics to check in a continuous monitoring environment. And this IoC is completely generic: it has nothing to do with which filename or which directory is involved. Even though the technical security report lists it as a specific IoC, it trivially genericizes beyond Carbanak to future attacks.
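
The prevalence logic is simple enough to sketch directly. The folder contents, counts, and thresholds below are invented for illustration; the random .bin filename is hypothetical.

```python
# Sketch of the generic "one of these things is not like the other" check:
# a low-prevalence file inside a folder whose other contents are high
# prevalence across the fleet. All counts are illustrative.

from collections import Counter

# fleet_files[folder][filename] = number of endpoints carrying that file
fleet_files = {
    r"%COMMON_APPDATA%\Mozilla": Counter({
        "extensions.ini": 4980,   # seen nearly everywhere
        "kx81qz.bin": 1,          # hypothetical random name, one endpoint
    }),
}

def odd_ones_out(folder_counts, fleet_size, rare=0.001, common=0.5):
    """Files rare in the fleet, sitting in a folder that is otherwise common."""
    folder_prevalence = max(folder_counts.values()) / fleet_size
    if folder_prevalence < common:
        return []
    return [f for f, n in folder_counts.items() if n / fleet_size < rare]

for folder, counts in fleet_files.items():
    print(folder, odd_ones_out(counts, fleet_size=5000))  # -> ['kx81qz.bin']
```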

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the latest Carbanak samples are digitally signed.

Comment: Any suspect signer is treated as suspicious. In one case a signer provided an anonymous gmail email address, which does not inspire confidence, so the risk score for that image rises. In other cases no email address is supplied at all. Signers can easily be tallied and a Pareto analysis performed to separate more trusted from less trusted signers. A less trusted signer appearing in a more sensitive directory is particularly suspicious.
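
Below is a minimal sketch of such a Pareto-style signer analysis. The signer names, image hashes, and the free-mail heuristic are illustrative assumptions.

```python
# Sketch of a signer Pareto analysis: rank code-signing identities by how
# many distinct images they sign, so rarely seen signers (or signers with
# free-mail contact addresses) stand out. Data is illustrative.

from collections import Counter

signed_images = [
    ("Microsoft Windows", "sha_a"), ("Microsoft Windows", "sha_b"),
    ("Microsoft Windows", "sha_c"), ("Adobe Systems", "sha_d"),
    ("cool.apps.2015@gmail.com", "sha_e"),   # anonymous free-mail signer
]

signer_counts = Counter(signer for signer, _ in signed_images)

def signer_suspicion(signer):
    reasons = []
    if signer_counts[signer] <= 1:
        reasons.append("rare signer in this environment")
    if signer.endswith("@gmail.com"):
        reasons.append("free-mail signer contact")
    return reasons

for signer in signer_counts:
    print(signer, signer_counts[signer], signer_suspicion(signer))
```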

8. Remote Administration Tools

Excerpt: There appears to be a preference for the Ammyy Admin remote administration tool for remote control. It is thought that the attackers used this tool because it is frequently whitelisted in the victims’ environments as a result of being used regularly by administrators.

Comment: Remote admin tools (RATs) always raise suspicion, even when whitelisted by the organization. Anomaly checking should determine whether each new remote admin tool use is temporally and spatially consistent with past practice. RATs are subject to abuse, and attackers prefer to use an organization’s own RATs precisely to avoid detection, so they should not be given a pass every time simply because they are whitelisted.
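
A sketch of this temporal and spatial consistency check follows. The admin baseline and session record format are assumptions for illustration.

```python
# Sketch: even whitelisted remote admin tools get anomaly-checked for
# temporal and spatial consistency. Baselines and log format are assumed.

ADMIN_BASELINE = {"hosts": {"admin-ws-01", "admin-ws-02"},
                  "hours": range(8, 18)}  # normal admin working hours

def rat_session_anomalies(session):
    """Flag Ammyy-style RAT sessions outside the admin baseline."""
    reasons = []
    if session["source_host"] not in ADMIN_BASELINE["hosts"]:
        reasons.append("RAT launched from a non-admin host")
    if session["hour"] not in ADMIN_BASELINE["hours"]:
        reasons.append("RAT session outside normal admin hours")
    return reasons

print(rat_session_anomalies(
    {"tool": "Ammyy Admin", "source_host": "wkstn-042", "hour": 3}))
```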

9. Remote Login Patterns

Excerpt: Logs for these tools indicate that they were accessed from two different IPs, probably used by the attackers, and located in Ukraine and France.

Comment: Remote logins are always suspect, because all attackers are presumed to be remote. They also figure heavily in insider attacks, since the insider does not want the activity attributed to their own system. Remote addresses and time-pattern anomalies should be examined; this should expose low-prevalence usage (relative to peer systems) plus any suspect locations.
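
The sketch below scores remote logins by source rarity relative to peers and by suspect geography. The login counts, IPs, and geo mapping are illustrative.

```python
# Sketch: score remote logins by source rarity and geography relative to
# peer systems. The login records and geo mapping are illustrative.

from collections import Counter

peer_login_sources = Counter({"10.1.2.3": 940, "10.1.2.4": 880,
                              "91.203.0.5": 1, "5.135.7.9": 1})
GEO = {"91.203.0.5": "UA", "5.135.7.9": "FR"}
SUSPECT_GEOS = {"UA", "FR"}

def suspicious_logins(sources, min_peer_count=5):
    flagged = []
    for ip in sources:
        rare = peer_login_sources.get(ip, 0) < min_peer_count
        bad_geo = GEO.get(ip) in SUSPECT_GEOS
        if rare or bad_geo:
            flagged.append((ip, GEO.get(ip, "?"), "rare" if rare else "common"))
    return flagged

print(suspicious_logins(["10.1.2.3", "91.203.0.5", "5.135.7.9"]))
```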

10. Atypical IT Tools

Excerpt: We have also discovered traces of different tools used by the attackers inside the victim’s network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Comment: As sensitive apps, IT tools should always be checked for anomalies, because many attackers subvert them for malicious purposes. Metasploit might legitimately be used by a penetration tester or vulnerability researcher, but such instances should be rare. This is a prime example where an unusual-observation report vetted by security staff would lead to corrective action. It also highlights why blanket whitelisting does not help in identifying suspicious activity.
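
As a final illustration, the sketch below flags executions of dual-use IT tools for vetting even when whitelisted. The tool list, the authorized account, and matching by process name alone are simplifying assumptions.

```python
# Sketch: flag executions of dual-use IT tools for vetting, regardless of
# whitelisting. Matching by process name alone is a simplification; real
# detection would also use hashes, arguments, and module loads.

DUAL_USE_TOOLS = {"psexec.exe", "mimikatz.exe", "msfconsole", "msfvenom"}
AUTHORIZED_USERS = {"pentest-svc"}  # e.g. a sanctioned red-team account

def vet_tool_execution(process_name, user):
    if process_name.lower() not in DUAL_USE_TOOLS:
        return None
    status = "authorized" if user in AUTHORIZED_USERS else "needs review"
    return {"tool": process_name, "user": user, "status": status}

print(vet_tool_execution("PsExec.exe", "jsmith"))
# {'tool': 'PsExec.exe', 'user': 'jsmith', 'status': 'needs review'}
```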

 

Charles Leaver – Carbanak Case Study Part Two Explains Why Continuous Endpoint Monitoring Is So Effective


Presented By Charles Leaver And Written By Dr Al Hartmann

Part 2 in a 3 part series

 

Continuous Endpoint Monitoring Is Extremely Effective

 

Convicting and blocking malicious code before it can compromise an endpoint is great. But this approach is largely ineffective against cyber attacks that have been pre-tested to evade that style of defense. The real problem is that these covert attacks are conducted by skilled human hackers, while conventional endpoint defense is an automated process relying mainly on standard antivirus technology. Human intelligence is more creative and adaptable than machine intelligence and will consistently outmaneuver automated machine defenses. This recalls the lesson of the Turing test: automated defenses are straining to match the intellect of a skilled human adversary. At present, artificial intelligence and machine learning are not advanced enough to fully automate cyber defense, so the human hacker wins while the attacked are left counting their losses. We do not live in a science fiction world where machines can out-think humans, so do not assume a security software suite will automatically take care of all your issues and prevent every attack and data loss.

The only real way to stop a determined human attacker is with a determined human defender. To engage your IT Security Operations Center (SOC) personnel this way, they must have full visibility of network and endpoint operations. That visibility will not come from standard endpoint antivirus solutions; they are designed to stay silent except when they catch and quarantine malware. This traditional approach renders the endpoints opaque to security personnel, and hackers use this endpoint opacity to hide their attacks. The opacity extends backwards and forwards in time: your security staff do not know what was running across your endpoint population in the past, or at this moment, or what to expect in the future. If diligent security personnel uncover clues that require a forensic look-back to trace attacker activity, your antivirus suite cannot help; it took no action at the time, so no events were recorded.

In contrast, continuous endpoint monitoring is always working: providing real-time visibility into endpoint operations, offering forensic look-backs to act on newly emerging evidence of attacks and find earlier indications, and establishing a baseline of normal operating patterns so it knows what to expect and can flag anomalies in the future. Beyond raw visibility, continuous endpoint monitoring provides informed visibility, applying behavioral analytics to spot operations that appear abnormal. The analytics continually analyze and aggregate anomalies and report them to SOC staff through the organization’s security information and event management (SIEM) system, flagging the most worrying suspicious anomalies for security personnel attention and action. Continuous endpoint monitoring amplifies and scales human intelligence; it does not replace it. It is a bit like the old Sesame Street game, “One of these things is not like the other.”

A child can play this game. It is easy because most items (termed high prevalence) look like each other, while one or a few (termed low prevalence) are different and stand out. The distinctive actions taken by cyber criminals have been quite consistent across decades of hacking. The Carbanak technical reports that listed the indicators of compromise are good examples of this and are discussed in part 3 of this series. When continuous endpoint monitoring security analytics surface these patterns, it is easy to recognize something suspicious or unusual. Cyber security personnel can perform fast triage on these unusual patterns, quickly reaching a yes/no/maybe verdict that separates unusual-but-known-good activities from malicious activities, or from activities that require additional monitoring and deeper forensic examination to resolve.

There is no way for a hacker to pre-test attacks against this defense. Continuous endpoint monitoring pairs a non-deterministic risk analytics component (which flags suspect activity) with a non-deterministic human element (which performs alert triage). Depending on current activities, the endpoint population mix, and the experience of the cyber security staff, developing attack activity may or may not be uncovered. This is the nature of cyber warfare, and there are no guarantees. But if your cyber defenders are equipped with continuous endpoint monitoring analytics and visibility, they will have an unfair advantage.

 

Charles Leaver – First Part Of Carbanak Case Study And The Benefits Of Continuous Endpoint Monitoring


Presented By Charles Leaver And Written By Dr Al Hartmann

 

Part 1 in a 3 part series

 

 

Carbanak APT Background Details

A billion-dollar bank raid, targeting more than a hundred banks across the world and carried out by a group of unknown cyber criminals, has been in the news. The attacks on the banks began in early 2014 and have been expanding around the world. Most of the victims suffered damaging breaches lasting a number of months and spanning numerous endpoints before experiencing monetary loss. Most of the victims had implemented security measures, including network and endpoint security software, but these provided little warning of or defense against the attacks.

A number of security companies have produced technical reports on the incidents, codenamed either Carbanak or Anunak, and these reports list the indicators of compromise that were observed. The companies include:

Fox-IT from Holland
Group-IB from Russia
Kaspersky Lab from Russia

This post will serve as a case study for the cyber attacks and address:

1. Why were standard endpoint security and network security unable to detect and resist the attacks?
2. How would continuous endpoint monitoring (as provided by the Ziften solution) have given early warning of the endpoint attacks and triggered a response to prevent data loss?

Standard Endpoint Security And Network Security Are Ineffective

Built on a legacy security model that relies too heavily on blocking and prevention, traditional endpoint and network security does not offer a balanced strategy of blocking, prevention, detection and response. It is not difficult for a cyber criminal to pre-test attacks against the limited number of standard endpoint and network security products to make sure an attack will not be detected. Many of the hackers researched the security products in place at the victim organizations and became proficient at breaking through undetected. The criminals knew that most of these security products react only at the moment of an attempted compromise and otherwise do nothing. What this means is that normal endpoint operation remains largely opaque to IT security staff, so malicious activity stays masked (having already been tested by the hackers to avoid detection). After an initial breach, the attack can extend to reach users with higher privileges and more sensitive endpoints. This is easily achieved through credential theft, where no malware is required, and conventional IT tools (whitelisted by the victim organization) can be driven by attacker-created scripts. Detectable malware never lands on the endpoints, so no alarms are raised. Conventional endpoint security software is simply too reliant on looking for malware.

Traditional network security can be manipulated in a similar way. Hackers test their network activities beforehand to avoid detection by widely distributed IDS/IPS rules, and they carefully observe normal endpoint operation (on endpoints already compromised) so they can hide their network activity within normal transaction periods and normal traffic patterns. A fresh command and control infrastructure is created that is not yet registered on network address blacklists, at either the IP or domain level. There is very little to give the hackers away here. However, more astute network behavioral assessment, especially when correlated with endpoint context (which will be discussed later in this series), can be much more effective.

But it is not time to give up hope. Would continuous endpoint monitoring (as provided by Ziften) have given early warning of the endpoint hacking, starting the process of stopping the attacks and preventing data loss? Find out in part 2.