

This way, we can focus our efforts on ensuring each system has sufficient capacity to withstand attacks, as measured by the relevant metrics. Our next task is to determine the capacity needed to withstand the largest DDoS attacks for each key metric. Getting this right is a necessary step for efficiently operating a reliable network: overprovisioning wastes costly resources, while underprovisioning can result in an outage. To do this, we analyzed hundreds of significant attacks we received across the listed metrics, and included credible reports shared by others. We then plot the largest attacks seen over the past decade to identify trends. (Several years of data prior to this period informed our choice of the first data point for each metric.)
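To make the trend-fitting step concrete, here is a minimal sketch of the kind of log-linear fit one could run over yearly attack maxima. The earlier values below are placeholders for illustration only; just the 623 Gbps and 2.5 Tbps figures echo attacks described later in this post, and the actual dataset and tooling are not shown here.

```python
import numpy as np

# Illustrative yearly maxima for the bandwidth (bps) metric. Only the
# 2016 and 2017 points echo figures from this post; the earlier values
# are placeholders, not actual measurements.
years = np.array([2010.0, 2012.0, 2014.0, 2016.0, 2017.0])
peak_gbps = np.array([50.0, 120.0, 400.0, 623.0, 2500.0])

# Exponential growth is a straight line in log space, so fit
# log2(peak) against year with ordinary least squares.
slope, intercept = np.polyfit(years, np.log2(peak_gbps), 1)
print(f"Peak attack size roughly doubles every {1 / slope:.1f} years")

# Extrapolate the fitted trend forward to size spare capacity.
for year in (2019, 2021, 2023):
    projected_gbps = 2.0 ** (slope * year + intercept)
    print(f"{year}: projected largest attack ~{projected_gbps:,.0f} Gbps")
```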

The exponential growth across all metrics is apparent, often generating alarmist headlines as attack volumes grow. But we need to factor in the exponential growth of the internet itself, which provides bandwidth and compute to defenders as well. After accounting for the expected growth, the results are less concerning, though still problematic. Given the data and observed trends, we can now extrapolate to determine the spare capacity needed to absorb the largest attacks likely to occur.

Our infrastructure absorbed a 2.5 Tbps DDoS in September 2017, the culmination of a six-month campaign that utilized multiple methods of attack. Despite simultaneously targeting thousands of our IPs, presumably in hopes of slipping past automated defenses, the attack had no impact. The attacker used several networks to spoof 167 Mpps (millions of packets per second) to 180,000 exposed CLDAP, DNS, and SNMP servers, which would then send large responses to us. This demonstrates the volumes a well-resourced attacker can achieve: it was four times larger than the record-breaking 623 Gbps attack from the Mirai botnet a year earlier. It remains the highest-bandwidth attack reported to date, leading to reduced confidence in the extrapolation.
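As a rough sanity check on those figures, the quoted numbers imply the back-of-the-envelope arithmetic below; the per-reflector rate and per-request response size are derived estimates, not measured values.

```python
# Rough arithmetic on the reflection attack described above.
# All derived quantities are estimates implied by the quoted figures,
# not measurements.
spoofed_pps = 167e6      # spoofed requests per second (quoted)
reflectors = 180_000     # exposed CLDAP/DNS/SNMP servers (quoted)
peak_bps = 2.5e12        # 2.5 Tbps of reflected traffic (quoted)

requests_per_server = spoofed_pps / reflectors
response_bytes_per_request = peak_bps / 8 / spoofed_pps

print(f"~{requests_per_server:,.0f} spoofed requests/s per reflector")
print(f"~{response_bytes_per_request:,.0f} bytes of response traffic per spoofed request")
print(f"vs. Mirai 2016: {peak_bps / 623e9:.1f}x the 623 Gbps record")
```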

We’ve observed a consistent growth trend in packet rates, with a 690 Mpps attack generated by an IoT botnet this year. A notable outlier was a 2015 attack on a customer VM, in which an IoT botnet ramped up to 445 Mpps in 40 seconds, a volume so large we initially thought it was a monitoring glitch!

In March 2014, malicious JavaScript injected into thousands of websites via a network man-in-the-middle attack caused hundreds of thousands of browsers to flood YouTube with requests, peaking at 2.7 Mrps (millions of requests per second). That remained the largest attack known to us until recently, when a Google Cloud customer was attacked with 6 Mrps. The slow growth is unlike the other metrics, suggesting we may be under-estimating the volume of future attacks.

While we can estimate the expected size of future attacks, we need to be prepared for the unexpected, and thus we over-provision our defenses accordingly.
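One way to turn that request-rate history into a provisioning target is sketched below. The growth rate is derived from the 2.7 Mrps and 6 Mrps figures above, assuming roughly six years separate the two events; the planning horizon and 2x safety margin are arbitrary illustrative choices, not our actual provisioning policy.

```python
# Sketch: derive a growth rate from the request-rate data points above and
# apply a safety margin when provisioning. The six-year gap, projection
# horizon, and 2x margin are illustrative assumptions.
start_mrps, end_mrps = 2.7, 6.0   # 2014 YouTube flood vs. recent Cloud attack
years_between = 6                 # assumed gap between the two events

annual_growth = (end_mrps / start_mrps) ** (1 / years_between) - 1
print(f"Implied request-rate growth: ~{annual_growth:.0%}/year")

horizon_years = 3                 # how far ahead we plan capacity
safety_margin = 2.0               # headroom for the unexpected
projected_mrps = end_mrps * (1 + annual_growth) ** horizon_years
provision_mrps = projected_mrps * safety_margin
print(f"Projected peak in {horizon_years} years: ~{projected_mrps:.1f} Mrps")
print(f"Provision for at least ~{provision_mrps:.1f} Mrps")
```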
