Riding The Cyber-attack “Tsunami”
Internet security and cyber-attacks continue to make front-page news, with massive Distributed Denial of Service (DDoS) attacks taking down social media networks, Cloud Service Providers and even leading Internet security websites. The complexity, sophistication and frequency of cyber-attacks are evolving at an alarming rate, while the cost of launching an attack remains staggeringly low in comparison to the damage it can cause. The public nature of the Internet adds to the vulnerability of Cloud-based enterprise applications.
With the Internet of Things bringing millions of cameras, sensors and barely-protected devices online every month, the potential “attack surface” is growing exponentially. Some estimates put the total number of “things” connected to the Internet by the end of 2020 at between 26 billion and 30 billion.
At the same time, the ultimate flexibility and transformational benefits of “Cloud” services see Cloud usage grow inexorably. Cisco predicts annual global Cloud IP traffic will reach 8.6 Zettabytes by the end of 2019. So is that a lot? Well, if 1 Zettabyte is equal to 1 million petabytes, and 1 petabyte equals 1 million gigabytes, then 8.6 Zettabytes equals more than 8½ trillion gigabytes of traffic flowing in and out of Clouds by the end of 2019. And of this, Cisco predicts 56% (or just under 5 trillion gigabytes) will be traffic related to workloads and applications residing in public Clouds.
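The unit conversion above is easy to sanity-check in a few lines of Python (using the decimal definitions of petabyte and zettabyte, as in the article):

```python
# Sanity-checking the traffic arithmetic above (decimal units:
# 1 zettabyte = 1 million petabytes, 1 petabyte = 1 million gigabytes).
ZB_IN_GB = 10**6 * 10**6  # gigabytes per zettabyte

annual_cloud_traffic_gb = 8.6 * ZB_IN_GB       # Cisco's 8.6 ZB forecast
public_cloud_traffic_gb = annual_cloud_traffic_gb * 0.56  # public-Cloud share

print(f"Total:  {annual_cloud_traffic_gb:.2e} GB")   # ~8.6 trillion GB
print(f"Public: {public_cloud_traffic_gb:.2e} GB")   # just under 5 trillion GB
```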
In short, there’s an awful lot of Cloud-bound Internet traffic and Internet-facing applications for the maliciously-motivated cybercriminal to potentially target and attack, via an almost unfathomable number of sources.
So what does that mean for enterprises that are increasingly migrating business-critical applications and business processes to “the Cloud”?
More danger ahead
Cyber-attackers use a range of methods to disrupt and damage Internet-based services, with consequences ranging from stolen funds and customer data to lost intellectual property. Even the most sophisticated Web and Cloud services can simply grind to a halt if targeted.
The Rise of DDoS
Recent high-profile attacks (like that experienced by a leading DNS provider in late October 2016) have been based upon volumetric Distributed Denial of Service, or DDoS. The primary goal of such an attack is to saturate the connectivity of an Internet-facing service (a website or Cloud platform) until traffic can no longer get through, making it impossible for legitimate traffic to reach the target. Denial-of-service attacks have been occurring since at least 1988, with growing frequency, severity and consequences. The October 2016 attack on this DNS provider is believed to have been the largest volumetric DDoS attack in history, with an estimated load of 1.2 terabits per second hitting its servers.
This raises the question, “how could such a massive attack be instigated and executed without detection?”
This attack was the latest in a growing number of DDoS attacks carried out using a “botnet” (also known, somewhat playfully, as a “zombie army”): a collection of Internet-connected devices that have been compromised and set up to forward transmissions (including spam or viruses) to other devices on the Internet. But this one was different from most preceding DDoS attacks, because the botnet in this case is believed to have been made up of more than 100,000 Internet of Things devices like printers, IP cameras, home broadband routers and baby monitors, all infected with known malware (believed to be the “Mirai” malware).
And worryingly, security experts believe this attack could actually have been much worse with the particular malware observed to have spread to more than 500,000 devices (mostly set with weak “default” or manufacturers’ passwords, thereby making them easy to infect).
DDoS can be costly; a bombardment can last for hours and is easily replicated, leading to the complete shut-down of Web or Cloud services as bandwidth becomes saturated. In the case of this DNS provider, a number of huge US- and Europe-based websites were taken down, including Twitter, Pinterest, Reddit, GitHub, Etsy, Tumblr, Spotify, PayPal, Verizon, Comcast, and the PlayStation Network. Beyond these high-profile sites, it is likely that thousands of enterprises depending on SaaS players for their IT solutions were disrupted. Even customers accessing Internet-facing services in AWS weren’t safe, with the Cloud giant reporting customers’ services and applications hosted in its East Coast US and West Europe data centers as having been impacted.
US President Barack Obama speaking on Jimmy Kimmel Live commented that future Presidents face the challenge of “how do we continue to get all the benefits of being in cyberspace but protect our finances, protect our privacy. What is true is that we are all connected. We’re all wired now.”
Prior to the recent spate of botnet-based attacks, Internet security specialist VeriSign estimated that service-denying attacks can cost an enterprise up to $300,000 per hour in lost revenue alone. And as the scale of attacks grows, so will the potential cost. When critical network systems are shut down, productivity grinds to a halt, and even the biggest brands can suffer when customers can’t access a website or, worse still, become casualties of a data breach.
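The per-hour figure compounds quickly over the duration of an attack. A rough illustration, using the VeriSign estimate above (the outage durations are hypothetical examples, not figures from the article):

```python
# Rough downtime-cost arithmetic based on VeriSign's estimate of
# US$300,000 per hour in lost revenue. Durations are hypothetical.
COST_PER_HOUR_USD = 300_000

for hours in (1, 4, 12):
    print(f"{hours:>2}h outage: ~${hours * COST_PER_HOUR_USD:,} in lost revenue")
```

Even a half-day outage at that rate runs into millions of dollars, before reputational damage or breach-related costs are counted.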
But it’s not only DDoS that the Cloud-adopting enterprise needs to protect against. Businesses operating over the Internet face additional threats, with the likes of Man-in-the-Middle (MitM) attacks and Address Spoofing further compounding the risk.
For the enterprise moving or considering moving critical applications out of private data centers or private Clouds, the risk of falling victim to any of these cyber-attacks is amplified enormously. In private data centers or private Clouds, services are commonly protected via a range of expensive specialist appliances, systems and protocols arranged strategically as layers of protection at the “gateway” between the inherently secure private network (typically MPLS-based or IPSec-encrypted Internet) and the “Wild West”, the public Internet.
But order eventually came to even the Wild West, and the same can be said of the Cloud.
A Multi-Layered Approach to Protection
In today’s Cloud-connected world, there’s a range of methods and techniques that can be deployed to add sophisticated protection to even the most complex of Cloud services. Dedicated and Cloud-based DDoS mitigation services, the use of Content Delivery Networks, Web Application Firewalls (WAFs), Cloud-based Anti-Virus systems, and Application Delivery Controllers all add layer upon layer of protection.
“Even without considering complex and sophisticated appliances or Cloud-based security services, there are ways to reduce the risk of cyber-attack for any application or service that need not be public-facing despite being virtualized on a public Cloud platform, for example a corporate intranet site or ERP platform,” said Braham Singh, SVP, Global Product Management at Reliance Communications (Enterprise) and Global Cloud Xchange.
Avoiding the Risk from the Network
Increasingly, Cloud Service Providers like AWS, Microsoft Azure and SoftLayer have introduced connectivity options that allow enterprises to connect their private, secure corporate networks to the CSP’s “public” Cloud. To make these inter-connections scalable, they are typically only available in large-scale bandwidth options, so the most common way for the enterprise to connect is via an “accredited partner” like Global Cloud Xchange through its CLOUD X Fusion offering.
The benefits of connecting to public Clouds in this way are multi-fold.
“Because the underlying corporate network technology (for example MPLS or Ethernet) is extended all the way into the Cloud data center, there are inherent network performance benefits like Round Trip Delay (RTD) predictability, Quality of Service (QoS) and even Service Level Agreements (SLAs) to better guarantee the end-to-end performance of a Cloud application across the network,” Singh added.
Accessing Cloud services via private MPLS connectivity services also protects against some attack types, as internal network addressing and infrastructure are “hidden” from the external world. It’s as if a firmly locked door stands between the application and unauthorized users on the Internet. Internal core routing information is not even disclosed within a client VPN; the only addresses visible to the Customer Edge (CE) devices are those of the MPLS Provider Edge (PE) routers, not the core Provider (P) routers. Without a clear Internet-facing target, a DDoS onslaught becomes almost impossible to initiate, while the use of MPLS labelling and secure VRFs renders the WAN almost impervious to Address Spoofing and MitM attacks.
So as you can see, as a “private” VPN technology, MPLS is inherently secure, offering in-built protection from cyber-attacks like DDoS and Address Spoofing. “We chose CLOUD X Fusion (from Reliance / GCX) because of the stability and security it offers in comparison to Cloud connectivity over the open Internet,” said Rohit Ambosta, CIO of Angel Broking, one of India’s leading stock broking and wealth management firms. All of this means that applications hosted in public Clouds and accessed over private networks can be protected from the majority of malicious security attacks.
And in addition, most corporate networks already have sophisticated security perimeters protecting locations and users within that perimeter, meaning these defenses can be further used to secure activity between users and applications hosted in public Cloud platforms.
With Internet usage set to grow and grow, and the Internet of Things expected to top 20 billion devices by 2020, applications and services accessible via the Internet, whether hosted on-premise or in a public Cloud platform, are open to an ever-increasing threat of malicious attack. The Internet of Things is the new frontier for botnets launching SSDP-based (Simple Service Discovery Protocol) reflection attacks. In effect, any network-connected device with a public IP address and a vulnerable operating system or improper configuration can now be used as an unwitting participant in an attack.
“We are also looking at DDoS protection to further enhance our security,” continued Ambosta. Security risk mitigation and protection strategies can be complex (and let’s face it, expensive) using layer upon sophisticated layer of appliances and services to build a defense perimeter as impermeable to attack as possible. But that should not deter enterprises from taking full advantage of the benefits offered by migrating critical applications to the Cloud. So why not exploit the inherent security of the corporate network too, by connecting public Cloud services directly and privately to the enterprise WAN?
How to be Prepared for Facing an Attack
Internet and web-based services have become the lifeline for most organizations. BCG expects the Internet economy of the G-20 to reach $4.2 trillion by 2016, representing 5.3% of total GDP and growing at approximately 12.7% annually. DDoS is one of the most prominent threats to this ecosystem. The verticals hardest hit by DDoS have been Media and Entertainment (including OTTs), Gaming, Software and Technology (including SaaS providers), and Education.
With the correct risk assessment and a multi-layered mitigation strategy in place, there is a good chance that the damage can be limited. In addition to private Cloud connectivity, organizations should consider the following when devising an effective risk mitigation strategy:
DDoS Mitigation Solutions and Services
DDoS mitigation can be achieved through on-premise appliances or Cloud-based services. The benefit of deploying protection on-premise, in close proximity to the protected applications, is the ease of fine-tuning for greater awareness of changes in network traffic flows in and out of the application servers, which in turn will lead to more effective detection of suspicious traffic at the application layer. However, on-premise protection cannot handle volumetric network floods that saturate the connectivity between the application and the public Internet; by the time the DDoS attack reaches the on-premise protection, it is too late. This is where Cloud-based DDoS mitigation from a specialist connectivity provider comes into play, as a Cloud-based service will effectively absorb and deflect known DDoS traffic within the provider’s network, before it reaches its target.
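As a toy illustration of the kind of flow-rate anomaly detection described above, the sketch below flags traffic that spikes far above a sliding-window baseline. The window size and threshold multiplier are arbitrary assumptions for illustration, not any vendor’s algorithm:

```python
from collections import deque

class RateAnomalyDetector:
    """Toy volumetric-flood detector: flags a sample when it exceeds a
    multiple of the recent baseline rate. Parameters are illustrative
    assumptions only, not a production detection algorithm."""

    def __init__(self, window=10, threshold_multiplier=5.0):
        self.window = deque(maxlen=window)   # recent per-second byte counts
        self.threshold_multiplier = threshold_multiplier

    def observe(self, bytes_this_second):
        # Baseline is the mean of the samples seen so far in the window.
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(bytes_this_second)
        if baseline is None:
            return False  # not enough history to judge yet
        return bytes_this_second > baseline * self.threshold_multiplier

detector = RateAnomalyDetector()
normal_traffic = [100, 120, 90, 110, 105]   # steady background load
flood = 5_000                                # sudden volumetric spike
alerts = [detector.observe(b) for b in normal_traffic] + [detector.observe(flood)]
print(alerts)  # only the flood sample raises an alert
```

A real mitigation service would act on flow telemetry (e.g. NetFlow/sFlow) and divert or scrub traffic upstream; the point here is simply that on-premise detection reacts to what has already arrived, which is why volumetric floods must be absorbed in the provider’s network first.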
In reality, the ideal solution is a hybrid of both approaches. It is important that the key connections to the data center use a flexible, on-demand bandwidth service that can scale during an attack, while the on-premise solution should be compatible with the services of a number of Cloud-based providers and support standards such as VeriSign OpenHybrid or the Arbor Cloud Signaling Coalition (CSC) to integrate effectively.
Content Delivery Networks
For web-based applications, hosting them within a Content Delivery Network (CDN) is an effective way of leveraging large-scale protection. Mass-scale CDNs offer multi-layered protection that can easily absorb 100 Gbps of attack traffic, maybe even more. However, enterprises do not route all objects and webpages through a CDN as part of their application architecture, so this protection is limited to those addresses or ports that are. Services such as DNS, which is not routed through the CDN, and other components not included in the CDN service remain at risk of attack. A CDN alone, therefore, will not provide full protection from malicious attack, but should be considered when designing a scalable threat mitigation architecture.
Web Application Firewall
Depending upon the risk assessment and the IT architecture, a Web Application Firewall (WAF) can be used to thwart not just application-layer DDoS attacks but also other breaches that exploit vulnerabilities inherent in web applications, such as session hijacking, SQL injection, cross-site scripting (XSS) and the wider OWASP Top 10.
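To make the idea concrete, here is a deliberately simplified, regex-based sketch of WAF-style request inspection. Real WAFs (such as ModSecurity with the OWASP Core Rule Set) use far richer parsing and hundreds of rules; these two signatures are illustrative assumptions only:

```python
import re

# Toy signature set: one classic SQL-injection pattern, one XSS pattern.
# A production WAF rule set is vastly larger and context-aware.
SIGNATURES = {
    "sql_injection": re.compile(
        r"(\bUNION\b.+\bSELECT\b|'\s*OR\s+'1'\s*=\s*'1)", re.IGNORECASE),
    "xss": re.compile(r"<\s*script\b", re.IGNORECASE),
}

def inspect(query_string: str):
    """Return the names of signatures matched by a request's query string."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(query_string)]

print(inspect("id=1' OR '1'='1"))               # flags SQL injection
print(inspect("q=<script>alert(1)</script>"))   # flags XSS
print(inspect("q=hello+world"))                 # clean request, no matches
```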
Application Delivery Controllers and Global Load Balancers
Load-balancing HTTP(S) and SSL proxies that present a single anycast IP in front of back-end instances deployed across hosting or Cloud Service Providers add a further line of defense. Application Delivery Controllers (ADCs) and Global Load Balancers use “spare” capacity to direct user traffic to an application or website back-end during a DDoS attack. This has the advantage of increasing the surface area available to absorb an attack, moving traffic between various Cloud or data center deployments depending on the available capacity. ADCs also provide a secondary line of defense by leveraging TCP SYN Cookie options, basic HTTP inspection, HTTP Cookie injection and sophisticated ‘human check’ scripts.
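The capacity-steering behaviour described above can be sketched in a few lines: requests go to whichever back-end currently has the most spare capacity. The back-end names and capacity figures below are hypothetical, and a real global load balancer would also weigh health checks, geography and latency:

```python
# Minimal sketch of capacity-aware traffic steering across deployments.
# Back-end names and Gbps figures are hypothetical examples.
backends = {
    "us-east-dc":    {"capacity_gbps": 10, "load_gbps": 2},
    "eu-west-cloud": {"capacity_gbps": 40, "load_gbps": 5},
    "apac-cloud":    {"capacity_gbps": 20, "load_gbps": 18},
}

def pick_backend(backends):
    """Choose the back-end with the most spare (unused) capacity."""
    return max(backends,
               key=lambda name: backends[name]["capacity_gbps"]
                                - backends[name]["load_gbps"])

print(pick_backend(backends))  # the deployment with the most headroom wins
```

During an attack, traffic saturating one deployment simply raises its load figure, and new requests drain toward the deployments with headroom, which is what increases the effective absorption surface.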
Many of these mitigation techniques are available as a service, with an overall SLA for the mitigation service rather than individual, unrelated SLAs covering different techniques. A managed DDoS mitigation service provider such as GCX / RCOM, which understands the overall application architecture and provides a single point of contact for managing the various mitigation mechanisms, is best suited to the multi-vector, layered threat landscape.
However, it is also important to understand that a solid architecture based on the overall design and backed by a strong underlying process is the best defense against a DDoS attack. Response tactics should change as attack volumes increase, and it is essential that there is a documented Incident Response process for ‘before’, ‘during’ and ‘after’ an attack to solidify the response.