Charles Leaver – UK Parliament Plays The Blame Game Instead Of Fixing Insecurities

Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver

 

In cyberspace the sheep get shorn, chumps get chewed, dupes get deceived, and pawns get pwned. We’ve seen another great example of this in the recent attack on the UK Parliament email system.

Rather than admit to an e-mail system that was not secure by design, the official statement read:

Parliament has strong measures in place to safeguard all of our accounts and systems.

Tell us another one. The one protective measure we did see at work was blame deflection: pin it on the Russians (that always works) while blaming the victims for their policy infractions. While details of the attack are scarce, combing different sources does help to assemble at least the broad outlines. If these reports are reasonably accurate, the UK Parliament e-mail system’s failings are scandalous.

What went wrong in this case?

Rely on single-factor authentication

“Password security” is an oxymoron: anything secured by a password alone is insecure, full stop, regardless of the strength of the password. And please, no two-factor authentication here; it might hinder the attackers.

Do not enforce any limit on failed login attempts

Combined with single-factor authentication, this permits easy brute-force attacks, no skill required. Then, when attacked, blame elite state-sponsored hackers; no one can verify it either way.
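
As a concrete illustration, here is a minimal Python sketch (not drawn from the Parliament incident, just an assumption about how such a control might look) of failed-login throttling with a lockout window:

    import time

    # Minimal sketch of failed-login throttling: lock an account after
    # too many failures inside a sliding window. Thresholds are placeholders.
    MAX_FAILURES = 5          # allowed failures before lockout
    WINDOW_SECONDS = 300      # sliding window for counting failures
    LOCKOUT_SECONDS = 900     # how long the account stays locked

    failures = {}             # account -> list of failure timestamps
    locked_until = {}         # account -> unlock time

    def record_failure(account):
        now = time.time()
        recent = [t for t in failures.get(account, []) if now - t < WINDOW_SECONDS]
        recent.append(now)
        failures[account] = recent
        if len(recent) >= MAX_FAILURES:
            locked_until[account] = now + LOCKOUT_SECONDS

    def is_locked(account):
        return time.time() < locked_until.get(account, 0)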

Do not perform brute-force attack detection

Let attackers run (otherwise trivially detectable) brute-force campaigns for extended periods (reportedly 12 hours against the UK Parliament system), to maximize the scope of account compromise.
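
To show how trivially detectable such activity is, here is a hedged Python sketch that flags brute-force sources from authentication events; the event format is an assumption (timestamp, source IP, success flag), and the thresholds are placeholders:

    from collections import Counter

    def find_brute_force_sources(events, window=3600, threshold=100):
        """Return source IPs with too many failures inside any sliding window."""
        failures = sorted((ts, ip) for ts, ip, ok in events if not ok)
        suspects = set()
        counts = Counter()
        start = 0
        for ts, ip in failures:
            counts[ip] += 1
            # drop failures that fell out of the window
            while ts - failures[start][0] > window:
                counts[failures[start][1]] -= 1
                start += 1
            if counts[ip] >= threshold:
                suspects.add(ip)
        return suspects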

Do not enforce policy; treat it as mere recommendation

Combined with single-factor authentication, no limit on failed logins, and no brute-force attack detection, impose no password strength validation either. Serve attackers the lowest-hanging fruit.
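
For what enforcement might look like, here is a minimal Python sketch of server-side password strength validation; the length floor, character-class rule, and deny-list are illustrative assumptions, not official guidance:

    import re

    COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

    def password_is_acceptable(pw):
        if len(pw) < 12:
            return False
        if pw.lower() in COMMON_PASSWORDS:
            return False
        # require at least three of four character classes
        classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
        return sum(bool(re.search(c, pw)) for c in classes) >= 3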

Rely on unauthenticated, unencrypted e-mail for sensitive communications

If adversaries do succeed in compromising e-mail accounts or sniffing your network traffic, give them every chance to harvest high-value message content entirely in the clear. This also conditions constituents to trust easily spoofed e-mail from Parliament, creating an ideal constituent phishing environment.

Lessons learned

In addition to adding “Common Sense for Dummies” to their summer reading lists, the UK Parliament’s e-mail system administrators may wish to take further action: strengthen weak authentication practices, actually enforce policies, improve network and endpoint visibility with continuous monitoring and anomaly detection, and completely reassess secure messaging. Penetration testing would have discovered these fundamental weaknesses while staying out of the news headlines.

Even a couple of clever high schoolers with a free weekend could have replicated this attack. And lastly, stop blaming the Russians for your own security failings. Assume that any weakness in your security architecture and policy framework will be probed and exploited by some party somewhere across the global internet. All the more incentive to find and fix those weak points before the adversaries do, so turn those pen testers loose. And then, if your defenders still cannot see the attacks in progress, upgrade your monitoring and analytics.

Charles Leaver – IT And Security Working More Closely Together With SysSecOps

Written By Charles Leaver Ziften CEO

 

Scott Raynovich nailed it. Having worked with numerous organizations, he understood that one of the biggest obstacles is that security and operations are two different departments, with significantly different goals, different tools, and different management structures.

Scott and his analyst firm, Futuriom, just completed a study, “Endpoint Security and SysSecOps: The Growing Pattern to Develop a More Secure Business,” in which one of the key findings was that clashing IT and security objectives prevent professionals on both teams from achieving their goals.

That’s precisely what we believe at Ziften, and the term Scott coined for the convergence of IT and security in this domain, SysSecOps, describes perfectly what we have been discussing. Security teams and IT teams should get on the same page. That means sharing the same objectives and, in some cases, sharing the same tools.

Think about the tools that IT people use. Those tools are designed to make sure the infrastructure and end devices are working properly, and to help repair them when something fails. On the endpoint side, those tools verify that devices allowed onto the network are configured correctly, run software that is authorized and properly updated/patched, and haven’t logged any faults.

Now think of the tools that security people use. They work to enforce security policies on devices, infrastructure, and security appliances (like firewalls). This may include actively monitoring incidents, scanning for anomalous behavior, examining files to ensure they don’t contain malware, incorporating the latest threat intelligence, matching against recently discovered zero-days, and analyzing log files.

Finding fires, fighting fires

Those are two different worlds. The security teams are fire spotters: they can see that something bad is happening, work rapidly to isolate the issue, and identify whether harm occurred (like data exfiltration). The IT teams are the on-the-ground firefighters: they leap into action when an incident strikes to ensure that the systems are made safe and brought back into operation.

Sounds great, doesn’t it? Unfortunately, all too often, they don’t talk to each other; it’s like having the fire spotters and firefighters using different radios, different jargon, and different city maps. Worse, the teams can’t share the same data directly.

Our approach to SysSecOps is to provide both the IT and security teams with the same resources, which means the same reports, presented in ways appropriate to each professional. It’s not dumbing down; it’s working smarter.

It’s ludicrous to operate any other way. Take the WannaCry outbreak, for instance. Microsoft released a patch back in March 2017 that addressed the underlying SMB flaw. IT operations teams didn’t install the patch, because they didn’t think it was a big deal and didn’t talk to security. Security teams didn’t know whether the patch was installed, because they don’t talk to operations. SysSecOps would have had everyone on the same page, and could well have prevented this problem.
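
As a sketch of how either team could have verified patch status, here is a hedged Python example that checks a Windows host for installed hotfixes; the wmic command is a standard Windows tool, but the KB identifiers below are placeholders that must be looked up per Windows version for MS17-010:

    import subprocess

    RELEVANT_KBS = {"KB4012212", "KB4012215"}  # placeholder examples only

    def installed_hotfixes():
        # Query Windows Quick Fix Engineering entries for hotfix IDs.
        out = subprocess.run(
            ["wmic", "qfe", "get", "HotFixID"],
            capture_output=True, text=True, check=True,
        ).stdout
        return {line.strip() for line in out.splitlines()
                if line.strip().startswith("KB")}

    def is_patched():
        return bool(RELEVANT_KBS & installed_hotfixes())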

Missing data means waste and risk

The dysfunctional gap between IT operations and security exposes companies to risk. Avoidable risk. Unnecessary risk. It’s simply unacceptable!

If your organization’s IT and security teams aren’t on the same page, you are incurring risks and costs that you shouldn’t have to. It’s waste. Organizational waste. It’s wasteful because you have so many tools offering partial data with gaps, and each of your teams sees only part of the picture.

As Scott concluded in his report, “Coordinated SysSecOps visibility has already proven its worth in helping organizations examine, analyze, and avoid significant risks to IT systems and endpoints. If these objectives are pursued, the security and management risks to an IT system can be considerably lessened.”

If your teams are working together in a SysSecOps way, and they can see the same data at the same time, you not only get better security and more efficient operations, but also lower risk and lower costs. Our Zenith software can help you achieve that, not just working with your existing IT and security tools, but also filling in the gaps to make sure everybody has the right data at the right time.

Charles Leaver – WannaCry Detection And Response With Ziften And Splunk

Written by Joel Ebrahami and presented by Charles Leaver

 

WannaCry has attracted a great deal of media attention. It may not have the massive infection rates of some earlier worms, but in today’s security world the number of systems it was able to infect in a single day was still rather incredible. The objective of this blog post is NOT to provide a detailed analysis of the threat, but rather to look at how the threat behaves on a technical level with Ziften’s Zenith platform and the integration we have with our technology partner Splunk.

Visibility of WannaCry in Ziften Zenith

My first action was to reach out to the Ziften Labs threat research team to see what information they could provide about WannaCry. Josh Harriman, VP of Cyber Security Intelligence, directs our research team; he told me that they already had samples of WannaCry running in our ‘Red Lab’ to examine the behavior of the threat and carry out further analysis. Josh sent over the details of what he found when examining the WannaCry samples in the Ziften Zenith console, which I present here.

The Red Lab has systems covering the most popular operating systems with various services and configurations. Some systems in the lab were purposely left vulnerable to the WannaCry exploit. The global threat intelligence feeds used in the Zenith platform are updated in real time, and had no trouble spotting the malware in our lab environment (see Figure 1).

Two lab systems were identified running the malicious WannaCry sample. While it is good to see our global threat intelligence feeds updated so quickly and identifying the ransomware samples, we also found other behaviors that would have identified the ransomware threat even if there had not been a threat signature.

Zenith agents collect a huge amount of data on what’s happening on each host. From this visibility data, we build non-signature-based detection techniques that look for commonly malicious or anomalous behaviors. Figure 2 below shows the behavioral detection of the WannaCry ransomware.

Investigating the Scope of WannaCry Infections

Once the threat has been identified, whether through signature or behavioral methods, it is very simple to see which other systems have also been infected or are showing similar behaviors.

WannaCry Detections with Ziften and Splunk

After examining these details, I decided to run the WannaCry sample in my own environment on a vulnerable system. I had one vulnerable system running the Zenith agent, and my Zenith server was already configured to integrate with Splunk, which allowed me to look at the same data inside Splunk. First, let me clarify the integration that exists with Splunk.

We have two Splunk apps for Zenith. The first is our technology add-on (TA): its role is to consume and index ALL the raw data from the Zenith server that the Ziften agents create. As this data arrives it is mapped into Splunk’s Common Information Model (CIM) so that it can be normalized, easily searched, and used by other apps such as the Splunk App for Enterprise Security (Splunk ES). The Ziften TA also includes Adaptive Response capabilities for taking actions triggered from Splunk ES. The second app is a dashboard for displaying our data with all the charts and graphs available in Splunk, making the data easier to digest.

Since I already had the details of how the WannaCry exploit behaved in our research lab, I had the advantage of knowing what to look for in Splunk using the Zenith data. In this case I was able to see a signature alert via the VirusTotal integration with our Splunk app (see Figure 4).

Threat Hunting for WannaCry Ransomware in Ziften and Splunk

But I wanted to put on my “incident responder hat” and investigate this in Splunk using the Zenith agent data. My first thought was to search the systems in my lab for ones running SMB, since that was the initial vector for the WannaCry attack. The Zenith data is encapsulated in various message types, and I knew I would most likely find SMB data in the running-process message type; however, I used Splunk’s * wildcard with the Zenith sourcetype so I could search across all Zenith data. The resulting search looked like ‘sourcetype=ziften:zenith:* smb’. As expected, I received one result back for the system that was running SMB (see Figure 5).

My next step was to apply the same behavioral search we have in Zenith that looks for typical CryptoWare and see if I could get results back. Once again this was extremely easy to do from the Splunk search panel. I used the same wildcard sourcetype as before so I could search across all Zenith data, and this time I added the ‘delete shadows’ string to see if this behavior was ever issued at the command line. My search looked like ‘sourcetype=ziften:zenith:* delete shadows’. This search returned results, displayed in Figure 6, that showed me in detail the process that was created and the full command line that was executed.

Having all this detail inside Splunk made it very easy to determine which systems were vulnerable and which systems had already been compromised.

WannaCry Remediation Using Splunk and Ziften

One of the next steps in any breach is to remediate the compromise as quickly as possible to prevent further damage, and to act to prevent other systems from being compromised. Ziften is one of Splunk’s initial Adaptive Response members, and there are a variety of actions (see Figure 7) that can be taken through Splunk’s Adaptive Response to mitigate these risks via extensions on Zenith.

In the case of WannaCry we could have used practically any of the Adaptive Response actions currently available from Zenith. To minimize the impact and prevent WannaCry in the first place, one available action is to shut down SMB on any system running the Zenith agent where the running version of SMB is known to be vulnerable. With a single action, Splunk can pass to Zenith the agent IDs or IP addresses of all the vulnerable systems where we want to stop the SMB service, thereby preventing the threat from ever occurring and giving the IT operations team time to get those systems patched before starting the SMB service again.
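
Zenith’s actual action implementation isn’t shown here, but as an assumption-laden sketch, a “disable vulnerable SMB” response action on a Windows endpoint might boil down to something like this (Set-SmbServerConfiguration is a standard Windows PowerShell cmdlet):

    import subprocess

    def disable_smb1():
        # Turn off the SMBv1 protocol that WannaCry's exploit targets.
        subprocess.run(
            ["powershell", "-Command",
             "Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force"],
            check=True,
        )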

Preventing Ransomware from Spreading or Exfiltrating Data

Now, in the case that we have already been compromised, it is vital to prevent further exploitation and stop the possible exfiltration of sensitive information or company intellectual property. There are really three actions we could take. The first two are similar: we could kill the malicious process by either its PID (process ID) or its hash. This works, but since malware will often just respawn under a new process, or be polymorphic and have a different hash, we can use an action that is guaranteed to prevent any inbound or outbound traffic from the infected systems: network quarantine. This is another example of an Adaptive Response action available through Ziften’s integration with Splunk ES.
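
For the kill-by-hash idea, here is an illustrative Python sketch (using the third-party psutil library; it is not Ziften or Splunk code) that terminates any process whose executable matches a known-bad SHA-256:

    import hashlib
    import psutil  # third-party: pip install psutil

    def kill_by_hash(bad_sha256):
        for proc in psutil.process_iter(["pid", "exe"]):
            exe = proc.info.get("exe")
            if not exe:
                continue
            try:
                with open(exe, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest == bad_sha256:
                    proc.kill()
            except (OSError, psutil.Error):
                continue  # process exited or file unreadable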

WannaCry is already winding down, but hopefully this technical blog post shows the value of the Ziften and Splunk integration in handling ransomware threats against the endpoint.

Charles Leaver – A Breach Out Of Nowhere: Get Paranoid About Your Company Security

Written By Charles Leaver Ziften CEO

 

Whatever you do, don’t underestimate hackers. Even the most paranoid “normal” person wouldn’t expect a data breach to start with credentials stolen from a company’s heating, ventilation and air conditioning (HVAC) contractor. Yet that’s exactly what happened at Target in November 2013. Hackers got into Target’s network using credentials issued to the contractor, most likely so it could monitor the HVAC system. (For a great analysis, see Krebs on Security.) From there, the hackers were able to leverage the breach to spread malware into point-of-sale (POS) systems, then offload payment card details.

A number of ludicrous errors were made here. Why was the HVAC contractor given access to the corporate network? Why wasn’t the HVAC system on a separate, completely isolated network? Why wasn’t the POS system on a separate network? And so on.

The point is that in a truly complex network, there are countless potential vulnerabilities that could be exploited through carelessness, unpatched software, default passwords, social engineering, spear phishing, or insider actions. You get the idea.

Whose job is it to find and fix those vulnerabilities? The security team. The CISO’s office. Security professionals aren’t “normal” people. They are paid to be paranoid. Make no mistake: whatever the particular technical vulnerability that was exploited, this was a CISO failure to anticipate the worst and prepare accordingly.

I can’t speak to the Target HVAC breach specifically, but there is one overwhelming reason breaches like this occur: a lack of financial priority for cybersecurity. I’m not sure how often businesses fail to fund security merely because they’re cheap and would rather do a share buy-back. Or maybe the CISO is too timid to ask for what’s needed, or has been told he gets a 5% increase no matter the requirement. Perhaps the CEO worries that disclosing big budgets for security will scare shareholders. Maybe the CEO is simply naïve enough to believe the business won’t be targeted by hackers. The problem: every enterprise is targeted by hackers.

There is serious competition over budgets. The IT department wants to fund upgrades and improvements, and attack the backlog of demand for new and enhanced applications. On its side, IT has operational leaders who see IT projects as directly helping the bottom line. They are optimists, and they get lots of CEO attention.

By contrast, the security department frequently has to fight for crumbs. It is viewed as a cost center. Security reduces business risk in a way that matters to the CFO, the CRO (chief risk officer, if there is one), the general counsel, and other pessimists who care about compliance and reputations. These green-eyeshade people think about worst-case scenarios. That doesn’t win friends, and budget dollars are allocated grudgingly at too many companies (until the company gets burned).

Call it naivety, call it entrenched hostility, but it’s a real problem. You can’t have IT given fantastic tools to move the enterprise forward while security is starved and making do with second-best.

Worse, you don’t want to end up in situations where the rightfully paranoid security teams are working with tools that don’t mesh well with their IT counterparts’ tools.

If IT and security tools don’t fit together well, IT may not be able to respond quickly to risky situations that the security teams are monitoring or worried about: things like threat intelligence reports, discoveries of unpatched vulnerabilities, nasty zero-day exploits, or user behavior that indicates risky or suspicious activity.

One recommendation: find tools for both departments that are designed with both IT and security in mind right from the beginning, rather than IT tools patched to offer some minimal security capability. One budget item (take it out of IT, they have more money), but two workflows: one designed for the IT professional, one for the CISO team. Everybody wins. And the next time somebody wants to give the HVAC contractor access to the network, maybe security will notice what IT is doing and head that disaster off at the pass.

Charles Leaver – WannaCry Ransomware Help From Ziften

Written By Michael Vaughn And Presented By Charles Leaver Ziften CEO

 

Answers To Your Questions About WannaCry Ransomware

The WannaCry ransomware attack has infected more than 300,000 computers in 150 countries so far by exploiting vulnerabilities in Microsoft’s Windows operating system.
In this short video, Chief Data Scientist Dr. Al Hartmann and I discuss the nature of the attack, as well as how Ziften can help organizations protect themselves from the exploit known as “EternalBlue.”

As discussed in the video, the issue with the Server Message Block (SMB) file-sharing service is that it ships with most Windows operating systems and is found in the majority of environments. However, we make it simple to determine which systems in your environment have or have not been patched yet. Importantly, Ziften Zenith can also remotely disable the SMB file-sharing service entirely, giving organizations valuable time to ensure that those computers are properly patched.

If you want to know more about Ziften Zenith, our 20-minute demo includes a consultation with our experts on how we can help your company prevent the worst digital catastrophe to strike the internet in years.

Charles Leaver – Your 10 Steps For Endpoint Security Solution Assessment

Written By Roark Pollock And Presented By Chuck Leaver CEO Ziften

 

The Endpoint Security Buyer’s Guide

The most common entry point for an advanced persistent attack or a breach is the endpoint. Endpoints are certainly the entry point for most ransomware and social engineering attacks. Using endpoint security products has long been considered a best practice for protecting endpoints. Unfortunately, those tools aren’t keeping up with today’s threat environment. Advanced threats, and truth be told, even less advanced threats, are typically more than sufficient to fool the average employee into clicking something they shouldn’t. So organizations are looking at and evaluating a huge selection of next-generation endpoint security (NGES) solutions.

With that in mind, here are 10 tips to consider if you’re looking at NGES solutions.

Tip 1: Start with the end in mind

Don’t let the tail wag the dog. A risk reduction strategy should always start by assessing problems and then looking for possible solutions to those problems. But all too often we get enamored with a “shiny” new technology (i.e., the latest silver bullet) and we end up trying to shoehorn that technology into our environment without fully examining whether it solves an understood and well-defined problem. So what problems are you trying to fix?

– Is your current endpoint protection tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements dictating continuous endpoint monitoring?
– Are you trying to reduce the time and cost of incident response?

Define the problems to address, and then you’ll have a yardstick for success.

Tip 2: Understand your audience. Who will be using the tool?

Understanding the problem to be solved is an essential first step in understanding who owns the problem and who would (operationally) own the solution. Every functional group has its strengths, weaknesses, preferences and biases. Define who will need to use the solution, and who else could benefit from its use. It could be:

– The security team,
– The IT team,
– The governance, risk and compliance (GRC) team,
– The help desk or end-user support team,
– Or even the server team or a cloud operations team.

Tip 3: Know what you mean by endpoint

Another frequently neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than before.

Sure, we want to protect desktops and laptops, but what about mobile devices (e.g. smartphones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, of course, come in numerous flavors, so platform support needs to be addressed too (e.g. Windows only, Mac OS X, Linux, etc.). Also, consider support for endpoints even when they are remote or offline. What are your requirements and what are “nice to haves”?

Tip 4: Start with a foundation of all-the-time visibility

Continuous visibility is a foundational capability for addressing a host of security and operational management problems on the endpoint. The old saying is true: you cannot manage what you cannot see or measure. Further, you cannot secure what you cannot properly manage. So it must start with continuous or all-the-time visibility.

Visibility is foundational to Management and Security

And think about what visibility means. Enterprises need one source of truth that at a minimum monitors, stores, and analyzes the following (a minimal data model is sketched after the list):

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – characteristics of installed apps and usage patterns
– Binary data – characteristics of installed binaries
– Process data – tracking information and statistics
– Network connection data – statistics and internal behavior of network activity on the host
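
To make that concrete, here is a hedged Python sketch of one way to model such a normalized telemetry record; the field names are illustrative assumptions, not a vendor schema:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class EndpointRecord:
        host_id: str
        timestamp: float
        system: Dict[str, str] = field(default_factory=dict)   # events, logs, hardware state
        user: Dict[str, str] = field(default_factory=dict)     # activity and behavior
        applications: List[str] = field(default_factory=list)  # installed apps and usage
        binaries: List[str] = field(default_factory=list)      # hashes of installed binaries
        processes: List[dict] = field(default_factory=list)    # per-process stats
        network: List[dict] = field(default_factory=list)      # per-connection stats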

Tip 5: Know where you will store your visibility data

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or in some combination of both. There are benefits to each. The right approach varies, but is usually dictated by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.

Know if your organization requires on-premises data retention

Know whether your company allows cloud-based data retention and analysis or whether you are constrained to on-premises solutions only. Among Ziften’s clients, 20-30% keep data on premises purely for regulatory reasons. However, if legally an option, the cloud can offer cost advantages (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires understanding the assets on the network. We have found that as much as 30% of the endpoints we initially discover on customers’ networks are unmanaged or unknown devices. This obviously creates a big blind spot. Minimizing this blind spot is a vital best practice. In fact, SANS Critical Security Controls 1 and 2 are to perform an inventory of authorized and unauthorized devices and software connected to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and utilization, and perform ongoing continuous discovery.
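
As a toy illustration of the discovery idea, this Python sketch diffs devices seen in the local ARP table against a managed-asset inventory; real products use many more sources (DHCP, switch tables, agents), and the inventory here is a placeholder:

    import re
    import subprocess

    MANAGED_MACS = {"00:11:22:33:44:55"}  # placeholder inventory of known MACs

    def normalize(mac):
        # Windows arp output uses dashes; normalize to colon-separated lowercase.
        return mac.lower().replace("-", ":")

    def arp_macs():
        out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
        return {normalize(m) for m in
                re.findall(r"(?i)([0-9a-f]{2}(?:[:-][0-9a-f]{2}){5})", out)}

    unknown = arp_macs() - {normalize(m) for m in MANAGED_MACS}
    print("Unmanaged devices:", unknown)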

Tip 7: Know where you are vulnerable

After determining what devices you have to monitor, you need to make sure they are running up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it’s even better if they can help enforce that posture.

Also look for solutions that provide continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a huge number of security issues and removes a great deal of back-end pressure on the IT and security operations teams.

Tip 8: Cultivate continuous detection and response

A crucial end goal for many NGES solutions is supporting continuous device state monitoring, to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that provide all-the-time or continuous threat detection, leveraging a network of global threat intelligence and multiple detection techniques (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize identified threats and/or issues and provide workflow with contextual system, application, user, and network data. This can help automate the proper response or next actions. Lastly, understand all the response actions that each solution supports, and look for a solution that offers remote access that is as close as possible to “sitting at the endpoint keyboard”.

Tip 9: Consider forensic data gathering

In addition to incident response, companies must be prepared to address the need for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring and analysis of all audit logs. Forensic analysis can take numerous forms, but a foundation of historical endpoint monitoring data will be essential to any investigation. So look for solutions that maintain historical data that allows:

– Tracing lateral threat movement through the network over time,
– Pinpointing data exfiltration attempts,
– Determining the origin of breaches, and
– Identifying proper remediation actions.

Tip 10: Tear down the walls

IBM’s security team, which supports a remarkable ecosystem of security partners, estimates that the average business has 135 security tools in place and is working with 40 security vendors. IBM customers definitely skew toward large enterprises, but it’s a common refrain (complaint) from organizations of all sizes that security solutions don’t integrate well enough.

And the complaint is not just that security solutions don’t play well with other security products, but also that they don’t always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations need to consider these (and other) integration points, along with the vendor’s willingness to share raw data, not just metadata, through an API.

Bonus Tip 11: Prepare for customizations

Here’s a bonus tip. Assume that you’ll want to customize that shiny new NGES solution shortly after you get it. No solution will meet all of your needs right out of the box, in default configurations. Find out how the solution supports:

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this, then that) functionality.

You know you’ll want new paint or new wheels on that NGES solution soon, so make certain it will support your future customization projects easily enough.

Look for support for simple customizations in your NGES solution

Follow the bulk of these tips and you’ll certainly avoid many of the common mistakes that plague others in their evaluations of NGES solutions.

Charles Leaver – Protect Your Business From End To End With Ziften Because We Are The Best

Written By Ziften CEO Charles Leaver

 

Do you want to manage and protect your endpoints, your network, the cloud and your data center? Then Ziften has the right solution for you. We gather data, and let you correlate and use that data to make decisions, and to stay in control of your business.

The information we obtain from everyone on the network can make a real-world difference. Consider the suggestion that the 2016 U.S. elections were influenced by hackers from another nation. If that’s true, cyber criminals can do practically anything, and the idea that we’ll accept that as the status quo is just ridiculous.

At Ziften, we believe the best way to fight those threats is with greater visibility than you have ever had. That visibility spans the whole business, and links all the significant players together. On the back end, that’s physical and virtual servers in the data center and the cloud. That’s infrastructure and applications and containers. On the front end, it’s laptops and desktops, regardless of how and where they are connected.

End to end: that’s the thinking behind everything at Ziften. From endpoint to cloud, all the way from a browser to a DNS server. We connect all of that together, with all the other parts, to give your company a total solution.

We also record and keep real-time data for up to 12 months, to let you know what’s happening on the network today, and to offer historical trend analysis and warnings if something changes.

That lets you identify IT faults and security issues right away, and also ferret out the source by looking back in time to see where a fault or breach may have first occurred. Active forensics are an absolute must in security: after all, where a fault or breach triggered an alarm may not be where the problem started, or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture, and to identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Found. Off-network penetration? Found. Outdated firmware? Unpatched applications? All found. We’ll not just help you find the problem, we’ll help you fix it, and make certain it stays fixed.

End-to-end security and IT management. Real-time and historical active forensics. In the cloud, offline and on-site. Incident detection, containment and response. We’ve got it all covered. That’s what makes Ziften better.

Charles Leaver – Workload Deployments In The Cloud Are Easily Tracked With Enhanced NetFlow

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver

 

According to Gartner, the public cloud services market exceeded $208 billion in 2016, representing roughly 17% year-over-year growth. Pretty good when you consider the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice among cloud customers of contracting services from several public cloud providers.

According to Gartner, “most organizations are already using a combination of cloud services from various cloud companies”. While the business rationale for using multiple providers is sound (e.g., avoiding vendor lock-in), the practice does create extra complexity in tracking activity across an organization’s increasingly distributed IT landscape.

While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies have to understand and address the visibility problems that come with moving to the cloud, regardless of the cloud provider or providers they work with.

Unfortunately, the ability to monitor application and user activity, and networking communications, from each VM or endpoint in the cloud is limited.

Regardless of where computing resources live, organizations must answer the question, “Which users, devices, and applications are communicating with each other?” Organizations need visibility across the infrastructure so that they can:

  • Quickly identify and prioritize issues
  • Speed root cause identification and analysis
  • Lower the mean time to resolution of problems for end users
  • Rapidly identify and eliminate security threats, minimizing overall dwell times

Conversely, poor visibility, or poor access to visibility data, can lower the effectiveness of existing security and management tools.

Businesses that are familiar with the ease, maturity, and relative low cost of monitoring physical data centers are likely to be disappointed with their public cloud alternatives.

What has been missing is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had 20 years or so to become a de facto standard for network visibility. A typical implementation involves monitoring traffic and aggregating flows at network chokepoints, collecting and storing flow data from numerous collection points, and analyzing this flow data.

Flows consist of a basic set of source and destination IP addresses plus port and protocol information, usually collected from a switch or router. NetFlow data is relatively cheap and simple to collect, provides almost ubiquitous network visibility, and allows actionable analysis for both network monitoring and performance management applications.

Most IT staff, particularly networking and some security teams, are extremely comfortable with the technology.

But NetFlow was designed to solve what has become a rather limited problem, in the sense that it only collects network data, and does so at a limited number of potential locations.

To make better use of NetFlow, two crucial changes are needed.

NetFlow to the edge: First, we have to broaden the practical deployment scenarios for NetFlow. Instead of only gathering NetFlow at network chokepoints, let’s extend flow collection to the edge of the network (clients, cloud, and servers). This would greatly expand the overall view that any NetFlow analytics provide.

This would allow companies to augment and leverage existing NetFlow analytics tools to remove the growing blind spot around visibility into public cloud activity.

Rich, contextual NetFlow: Second, we have to use NetFlow for more than simple network visibility.

Instead, let’s use an extended version of NetFlow that includes information on the device, application, user, and binary responsible for each monitored network connection. That would allow us to quickly correlate every network connection back to its source.

In fact, these two changes to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data collection can be consumed and analyzed with existing NetFlow analysis tools. In addition to standard NetFlow / Internet Protocol Flow Information eXport (IPFIX) networking visibility, ZFlow provides extended visibility by including information on the application, device, user and binary for each network connection.
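
As a hedged sketch (the field names are illustrative assumptions, not ZFlow’s actual export schema), an extended flow record along these lines might look like:

    from dataclasses import dataclass

    @dataclass
    class ExtendedFlow:
        # classic NetFlow/IPFIX-style tuple
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: int
        bytes_sent: int
        # endpoint context added at the network edge
        device_id: str       # monitored host that produced the flow
        user: str            # logged-in user that owned the connection
        application: str     # application responsible for the connection
        binary_sha256: str   # hash of the responsible binary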

Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.

Charles Leaver – Using Edit Distance Is Vital, Part 2

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits it takes to turn one text string into another). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain features to pinpoint suspicious activity.

Case Study Background

What are bad actors trying to do with malicious domains? They might simply use a near spelling of a common domain name to fool careless users into viewing ads or picking up adware. Legitimate websites are slowly catching on to this technique, sometimes called typo-squatting.

Other malicious domain names are the product of domain generation algorithms, which can be used to do all sorts of nefarious things like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here we will find out how. First, we’ll exclude common domains, since these are normally safe. And a list of popular domains provides a baseline for discovering anomalies. One excellent source is Quantcast. For this discussion, we will stick to domains and avoid subdomains (e.g. ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its possible neighbors in the same top-level domain (the last part of a domain name: classically .com, .org, and so on, though now it can be practically anything). The basic task is to find the closest neighbor in terms of edit distance. By finding domains that are one step away from their closest neighbor, we can easily identify typo-ed domains. By finding domain names far from their neighbor (the normalized edit distance we presented in Part 1 is useful here), we can also find anomalous domain names in the edit distance space.

What were the Results?

Let’s look at how these results appear in the real world. Be careful when browsing to these domains, since they could contain malicious content!

Here are a few possible typos. Typo-squatters target popular domains, since there are more chances someone will visit. Several of these are suspect according to our threat feed partners, but there are some false positives as well, with cute names like “wikipedal”.

[Figure: candidate typo-squatted domains and their nearest neighbors]

Here are some odd-looking domains far from their neighbors.

[Figure: anomalous domains far from their nearest neighbors]

So now we have created two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the neighbor, and edit distance 1 from the neighbor (indicating a risk of typo tricks). Other features that might play well with these are other lexical features like word and n-gram distributions, entropy, and string length, as well as network features like the total count of failed DNS requests.

Simplified Code that you can Play Around with

Here is a simplified version of the code to play around with! It was developed on HP Vertica, but this SQL should work with most advanced databases. Note that the Vertica editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

[Figure: SQL code sample]
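
Since the SQL itself appears only as a screenshot, here is a hedged Python sketch of the same nearest-neighbor idea (a standard Levenshtein implementation plus a max-length normalization, which is one common choice):

    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def nearest_neighbor(candidate, baseline):
        """Return (closest baseline domain, raw distance, normalized distance)."""
        best = min(baseline, key=lambda d: edit_distance(candidate, d))
        dist = edit_distance(candidate, best)
        return best, dist, dist / max(len(candidate), len(best))

    # A distance of 1 from a popular domain suggests typo-squatting.
    print(nearest_neighbor("goggle.com", ["google.com", "ziften.com"]))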

Charles Leaver – Without Proper Management Your Infrastructure Will Not Be Completely Secure And Vice Versa

Written by Charles Leaver Ziften CEO

 

If your business computing environment is not properly managed, there is no way it can be fully secured. And you can’t effectively manage those complex enterprise systems unless there’s real confidence that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to begin. Should you start with security? Or should you start with systems management? That is the wrong way to frame it. Think of this instead like Reese’s Peanut Butter Cups: it’s not chocolate first. It’s not peanut butter first. Instead, both are blended together and treated as a single tasty treat.

Many companies, I would argue too many companies, are structured with an IT management department reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely required, have separate budgets, certainly have different priorities, read different reports, and use different management platforms. On a daily basis, what constitutes a task, a problem or an alert for one team flies completely under the other team’s radar.

That’s bad, because both the IT and security teams have to make assumptions. The IT team believes everything is secure, unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Similarly, the security team assumes that the servers, desktops, and mobile devices are working properly, operating systems and applications are fully up to date, patches have been applied, and so on.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be correct.

And again, you can’t have a secure environment unless that environment is effectively managed, and you can’t manage that environment unless it’s secure. Or to put it another way: an insecure environment makes anything you do in the IT team suspect and irrelevant, and means that you can’t know whether the information you are seeing is correct or manipulated. It might all be fake news.

Bridging the IT / Security Gap

How to bridge that gap? It sounds easy but it can be hard: ensure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same person or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let’s say it’s the CFO.

If the business doesn’t have a secure environment, and there’s a breach, the value of the brand and the business can be reduced to zero. Similarly, if the users, devices, infrastructure, applications, and data aren’t managed well, the business can’t work effectively, and the value drops. As we have discussed, if it’s not well managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of company assets, which means making sure IT and security talk to each other, understand each other’s goals, and, if possible, can see the same reports and data, filtered and displayed to be meaningful to their particular areas of responsibility.

That’s the thinking we adopted in the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it, Zenith is an umbrella that gives IT teams exactly what they need to do their jobs, and gives security teams what they need too, without coverage gaps that could undermine assumptions about the state of business security and IT management.

We need to ensure that our organization’s IT infrastructure is built on a secure foundation, and that our security is implemented on a well-managed base of hardware, infrastructure, software and users. We can’t run at peak performance, and with full fiduciary responsibility, otherwise.