Short of changing the website’s URL, how does a site overcome a DoS attack emanating from a diffuse network of cloud computers (i.e. one without a central command center)? I am thinking about the recent attack on US banks which, according to the New York Times, was orchestrated by Iran.
Filtering at the upstream network boundary - basically, you identify the characteristics of the DDoS packets, then set up a filter (preferably at your upstream provider’s boundary) so the DDoS packets just get dropped aggressively. For simplicity (laziness, bad coding, deliberate choice), DDoS attack software tends not to obey the normal TCP/IP rules, so even requesting a packet resend may be enough to distinguish a legitimate request.

Your filter should also reject spoofed packets whose source IP addresses don’t match the incoming connection - actually, ISPs should do this as a matter of course. A spoofed packet should never exit an end-node network or cross a correctly configured peering connection.

You can also use upstream filtering proxies that validate the connection request before connecting to the application server proper (many DDoS attacks rely on exhausting connection limits with non-responsive half-open connections).
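Here is a minimal sketch of that last idea - a filtering proxy that only spends a backend connection once the client has proven it can complete the handshake and actually send data. The backend address, port and timeout below are invented for illustration, and a real deployment would do this in the kernel or on dedicated hardware rather than in Python:

```python
# Minimal filtering-proxy sketch (assumptions: hypothetical backend at
# BACKEND_HOST:BACKEND_PORT, arbitrary first-byte timeout). Idle or
# half-open connections never reach the application server.
import asyncio

BACKEND_HOST, BACKEND_PORT = "10.0.0.5", 8080   # hypothetical app server
LISTEN_PORT = 80
FIRST_BYTE_TIMEOUT = 3.0                        # drop clients that stall

async def handle_client(reader, writer):
    try:
        # Require the client to send something quickly; floods of silent
        # or half-open connections are discarded here, cheaply.
        first_chunk = await asyncio.wait_for(reader.read(4096),
                                             timeout=FIRST_BYTE_TIMEOUT)
        if not first_chunk:
            return
        # Only now do we spend a real backend connection.
        backend_reader, backend_writer = await asyncio.open_connection(
            BACKEND_HOST, BACKEND_PORT)
        backend_writer.write(first_chunk)
        await backend_writer.drain()

        async def pipe(src, dst):
            # Relay bytes until the source side closes.
            while data := await src.read(4096):
                dst.write(data)
                await dst.drain()
            dst.close()

        await asyncio.gather(pipe(reader, backend_writer),
                             pipe(backend_reader, writer))
    except (asyncio.TimeoutError, ConnectionError):
        pass
    finally:
        writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```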
Also, rent more bandwidth and weather the storm - usually these attacks only last a short time, as it costs money to rent and expose a botnet in this fashion.
Wiki covers the prevention options
Si
Wow. Very enlightening. Still, “weather the storm” suggests that there’s not always a great set of solutions. Given that this one is sponsored by a government (ostensibly Iran), there may be an extended period of bad weather.
Yeah, but the fact that those attacks used datacenters (good connectivity for flood attacks) means they can more easily be blackholed (a limited range of IP addresses/peering connections to drop). Good IPS systems with upstream blackholing will deal with that sort of threat pretty quickly, and the datacenters/cloud providers will be quick to respond so they don’t lose peering links onto the backbone.
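To make the “limited range of IP addresses” point concrete, here’s a toy sketch: once the offending datacenter’s announced prefixes are known, dropping its traffic is just a cheap set-membership test. The CIDR blocks below are documentation ranges standing in for real ones:

```python
# Toy blackholing check - prefixes are illustrative only.
import ipaddress

BLACKHOLED_NETS = [ipaddress.ip_network(n) for n in (
    "203.0.113.0/24",   # example ranges standing in for the
    "198.51.100.0/24",  # offending datacenter's announced prefixes
)]

def should_drop(src_ip: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLACKHOLED_NETS)

print(should_drop("203.0.113.42"))  # True  - inside a blackholed prefix
print(should_drop("192.0.2.7"))     # False - unrelated source
```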
Si
This latest round of DDoS attacks on financial institutions is really upping the game. One recent attack on an unnamed FI was clocked at 77 Gbps. The days of using a thousand or however many “botted” home PCs on DSL lines may be behind us, at least as far as attacks carried out by well-funded nation-states are concerned.
Unfortunately, “black-holing” a datacenter is not as easy as it may sound. The individual servers in the building will not necessarily have IP addresses in the same range, so your network admins can’t just throw a rule like “block everything from 26.103.72.*” on the routers, and unless a Tier 1 ISP knows that the traffic is malicious, they’re unlikely to cut it off. Dropping a peering link may slow down an attack on a bank, but it will also raise hell with everything else.
I agree that the attack data volumes are staggering, and the defensive responses will be interesting. And while you are right about the IP ranges, they will be distinct from consumer ISP ranges (already well known as they are often used for SMTP filters to prevent spam).
As the Tier 1 providers catch up, they may start asking datacenter providers to increase their internal network monitoring - it isn’t that hard to spot anomalous traffic patterns on a bunch of shared servers.
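As a rough illustration of what that monitoring could look like (not any particular vendor’s product - the thresholds and figures below are made up), all you really need is a per-server baseline and a flag for samples that sit far above it:

```python
# Toy anomaly detector for per-server outbound traffic rates.
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # samples of history kept per server (e.g. one per minute)
SIGMA_LIMIT = 4.0  # how far above baseline counts as anomalous

class TrafficMonitor:
    def __init__(self):
        self.history = {}  # server_id -> deque of recent rates (Mbit/s)

    def observe(self, server_id: str, rate_mbps: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        hist = self.history.setdefault(server_id, deque(maxlen=WINDOW))
        anomalous = False
        if len(hist) >= 10:  # need some baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            anomalous = rate_mbps > mu + SIGMA_LIMIT * max(sigma, 1.0)
        hist.append(rate_mbps)
        return anomalous

monitor = TrafficMonitor()
for minute in range(30):
    monitor.observe("vm-042", 20.0)          # normal background traffic
print(monitor.observe("vm-042", 950.0))      # True - sudden outbound flood
```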
However, interesting times.
Si
I have worked for a semiconductor company that makes the chips used in routers. One way of at least partially combating DoS is to have multiple output queues, with traffic sorted into them by type (I’m wildly over-simplifying here). That way a DoS of one message type will block only that queue, while other message types still get through OK via the other queues.
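For anyone who wants to see the idea rather than the silicon, here’s a heavily simplified software sketch of that multi-queue approach - the traffic classes, queue depth and classifier are all invented, and real hardware would key on protocol/port fields in the packet header:

```python
# Simplified multi-queue forwarding: a flood of one traffic class fills
# only its own queue; other classes keep flowing.
from collections import deque

QUEUE_DEPTH = 1024  # per-class buffer depth, like a hardware output queue

class MultiQueueForwarder:
    def __init__(self, traffic_classes):
        self.queues = {cls: deque() for cls in traffic_classes}

    def classify(self, packet) -> str:
        # Stand-in classifier: real hardware keys on protocol/port/etc.
        return packet.get("type", "other")

    def enqueue(self, packet) -> bool:
        q = self.queues.get(self.classify(packet), self.queues["other"])
        if len(q) >= QUEUE_DEPTH:
            return False          # tail-drop: only this class suffers
        q.append(packet)
        return True

    def dequeue_round_robin(self):
        # Service each class in turn so no single queue starves the rest.
        for cls, q in self.queues.items():
            if q:
                yield q.popleft()

fwd = MultiQueueForwarder(["dns", "http", "icmp", "other"])
for _ in range(2000):                              # ICMP flood...
    fwd.enqueue({"type": "icmp"})
fwd.enqueue({"type": "http", "payload": "GET /"})  # ...but HTTP still fits
print([p["type"] for p in fwd.dequeue_round_robin()])  # -> ['http', 'icmp']
```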
Custom ASICs for network filtering are awesome, and are becoming essential for preventing DDoS (particularly at these new volumes).
If you don’t mind me asking, UncleFred, are these chips at least semi-programmable - can new rules/queue filters be applied as a response to new threats?
Si