When we meet with companies looking into DDoS protection for the first time, we commonly find they have no idea they’ve already been attacked. Certainly, a healthy number of companies come to Radware for help after being hit by large attacks: they either had no solution in place or found that the solution they had was ill-equipped. Also not uncommon are organizations facing ransom-based extortion threats. Naturally, these evaluations occur under some urgency and duress, making them challenging for both sides.
We also frequently meet with companies looking to proactively put DDoS protections in place – often they are responding to the continued headlines around attacks or have caught wind of a peer in their industry becoming a target. Regardless of the background, a proactive stance is a great opportunity to complete a thorough evaluation and proof-of-concept for the customer, working from their own unique requirements.
A typical step in presenting our solution is a demonstration of certain capabilities, usually involving pulling a copy of a portion of their traffic off a span port and feeding it into our technology. With rare exceptions, what we find is attack activity already underway. So much for proactive protection planning!
OK, to be fair, these companies are by and large ahead of many of their peers in seeking a holistic solution. The fact that they are unaware of ongoing activity is a testament to the importance of purpose-built technologies that specialize in detecting the rapidly growing array of DDoS attack vectors and tools.
Considerations Worth Considering
There is a wide array of capabilities and requirements that organizations seeking effective advanced cyber-attack (including DDoS) protection need to consider. And although the weighting of different requirements will vary from company to company, some common factors should be foremost in any solution evaluation. Among them: the efficiency and accuracy of attack detection AND minimizing the time to effective mitigation when redirecting attacks to cloud-based resources.
Many security teams struggle to maintain dedicated resources that can keep up with the variety of threats and provide effective manual detection. Sure, it’s easy to tell when you’re getting slammed by a 100+ Gbps attack, but many attacks that never exceed link-capacity thresholds can still exhaust more specific resources within your network or application infrastructure, and these are far less obvious. Many cloud-only solutions leave the customer with the burden of detection, forcing a trade-off between low mitigation thresholds and low false-positive rates.
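To make that trade-off concrete, here is a minimal Python sketch with entirely hypothetical traffic numbers: a static threshold set high enough to ignore a legitimate flash-crowd burst misses a sustained lower-rate attack, while a threshold low enough to catch the attack also fires on the legitimate burst.

```python
import random

random.seed(7)

# Simulated inbound rate samples in Mbps (hypothetical numbers):
# normal traffic around 200 Mbps, one legitimate flash-crowd burst,
# and one sustained attack that never saturates the link.
samples = [random.gauss(200, 30) for _ in range(60)]
samples[20] = 520            # legitimate traffic burst
samples[40:45] = [450] * 5   # sustained sub-saturation attack

def static_threshold_alerts(samples, threshold_mbps):
    """Flag every sample whose rate exceeds a fixed threshold."""
    return [i for i, rate in enumerate(samples) if rate > threshold_mbps]

# A high threshold fires only on the legitimate burst and misses the
# attack; a low one catches the attack but keeps the false positive.
print(static_threshold_alerts(samples, 500))  # -> [20] (attack missed)
print(static_threshold_alerts(samples, 400))  # -> [20, 40, 41, 42, 43, 44]
```

However the threshold is tuned, a rate-only detector cannot separate the flash crowd from the attack; that distinction requires some notion of what normal traffic looks like.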
Many solutions are very specific about the attack vectors their technology can block, but less specific about how well they identify and isolate those attacks from legitimate traffic. Over-mitigation becomes a common problem for solutions that lack a full view into normal traffic patterns and therefore cannot detect the anomalies that warrant further inspection for potential malicious intent.
Additionally, some cloud-based providers will tell you they can implement active attack monitoring through a sample of traffic flows, such as NetFlow data. Typically, these solutions simply detect traffic that exceeds established rates and thresholds rather than looking deep into the traffic for behavioral patterns that may signal an attack.
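For contrast, here is a sketch of the behavioral alternative on the same toy series as above. This is an illustrative baseline-and-deviation detector of my own construction, not any vendor’s algorithm: it learns a traffic baseline, then alerts only on sustained deviations from it, so a one-off legitimate burst does not trigger mitigation.

```python
import random
import statistics

random.seed(7)
samples = [random.gauss(200, 30) for _ in range(60)]
samples[20] = 520            # legitimate burst
samples[40:45] = [450] * 5   # sustained attack

def baseline_anomalies(samples, warmup=10, alpha=0.1, k=3.0, min_run=3):
    """Alert on sustained deviations from a learned traffic baseline.

    The baseline mean/variance are updated with an exponentially
    weighted moving average; a sample is anomalous when it sits more
    than k standard deviations from the baseline, and we only alert
    once min_run consecutive samples are anomalous.
    """
    mean = statistics.fmean(samples[:warmup])
    var = statistics.pvariance(samples[:warmup])
    run, alerts = 0, []
    for i, x in enumerate(samples[warmup:], start=warmup):
        std = max(var ** 0.5, 1.0)
        if abs(x - mean) > k * std:
            run += 1
            if run >= min_run:
                alerts.append(i)
            continue  # keep anomalous samples out of the baseline
        run = 0
        diff = x - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return alerts

# The one-off burst at index 20 is ignored; the sustained attack is
# alerted on once it persists (indices 42-44 with this toy series).
print(baseline_anomalies(samples))
```

The point is not this particular algorithm, but that a behavioral detector needs a continuous, full-fidelity view of the traffic to build its baseline, which sampled flow records alone rarely provide.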
The Battle Against the Clock
This lack of accurate on-premise cyber-attack detection has implications not only for non-volumetric attacks, but also for the large attacks we read about in the news. Let’s assume for the sake of argument that a company using a cloud-only DDoS solution can detect large attacks. It still needs to initiate a swing of traffic to its provider’s scrubbing centers, and then wait for those cloud-based resources to assess the attack and settle on optimal mitigation tactics.
Organizations need to focus on minimizing this ‘time to mitigation’: the period it takes mitigation resources to fully understand the nature of an attack and apply the appropriate defenses. Without visibility into the attack prior to receiving the traffic, many cloud-based services can take upwards of 30 minutes to start active mitigation, and even longer to apply the right tools for effective protection. Conversely, hybrid solutions that combine cloud-based resources with on-premise components have the advantage of attack visibility in advance of this traffic redirection.
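As a rough back-of-the-envelope illustration, the difference is simple addition: the cloud-only path pays for detection, the traffic swing, and attack characterization in sequence, while the hybrid path characterizes the attack on-premise before the swing. Every duration below is a hypothetical assumption for the sketch, not a benchmark or any provider’s SLA.

```python
# Hypothetical time-to-mitigation comparison. All durations are
# illustrative assumptions, not benchmarks or any provider's SLA.
CLOUD_ONLY_MINUTES = {
    "detect the attack (often the customer notices first)": 10,
    "swing traffic to the scrubbing center (BGP/DNS)": 10,
    "characterize the attack seen cold in the cloud": 15,
}
HYBRID_MINUTES = {
    "on-premise behavioral detection": 1,
    "swing traffic to the scrubbing center (BGP/DNS)": 10,
    "apply mitigation from the pre-shared attack footprint": 2,
}

for name, phases in [("cloud-only", CLOUD_ONLY_MINUTES),
                     ("hybrid", HYBRID_MINUTES)]:
    print(f"{name}: {sum(phases.values())} minutes to effective mitigation")
```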
Advanced hybrid solutions go a step further by supporting deep defense messaging between the premise and the cloud to share a full footprint of the attack, along with the mitigation tactics already proven against it.
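To picture what such premise-to-cloud messaging might carry, here is a purely illustrative sketch of an attack-footprint handoff; the field names and values are my assumptions for the example, not Radware’s actual wire format.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative attack-footprint handoff; every field name here is an
# assumption for the sketch, not any vendor's actual message format.
@dataclass
class AttackFootprint:
    attack_id: str
    vector: str                 # e.g. "udp-flood", "http-get-flood"
    target: str                 # protected service under attack
    source_prefixes: list       # offending networks seen on-premise
    signature: dict             # L3-L7 match criteria already proven
    proven_mitigation: str      # action that worked on-premise

footprint = AttackFootprint(
    attack_id="incident-0042",
    vector="udp-flood",
    target="203.0.113.10:443",
    source_prefixes=["198.51.100.0/24", "192.0.2.0/24"],
    signature={"proto": "udp", "dst_port": 443, "pkt_len_max": 64},
    proven_mitigation="drop-matching-signature",
)

# Sent ahead of (or alongside) the traffic swing so the scrubbing
# center can install a proven filter before the flood arrives.
print(json.dumps(asdict(footprint), indent=2))
```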
The Easy Way Out?
More often than I’m comfortable with, I hear organizations say they are looking hard at cloud-only protection because it’s “easier” and they just don’t have the resources to manage attacks. Hopefully the examples above expose a couple of the flaws in this philosophy… in the end, cloud-only will often put more, not less, burden on your teams to ensure attacks don’t cause outages. It’s unfortunate that this thinking persists in the marketplace, particularly when fully-managed hybrid solutions exist and can often be less expensive than cloud-only options. The good news is that when organizations take just a bit of time to fully explore their options, the right choice quickly becomes apparent.