Could DDOS attacks be made a thing of the past with smarter network switches?

A DDOS attack is a stream of packets containing service requests arriving at a target computer that either
1. Lacks the CPU/memory to process those requests
2. Lacks the bandwidth to handle them

Just thinking about it, these packets wouldn’t be arriving if network switches upstream were not sending them. They are part of the problem.

What if the computer getting DDOSed could send a command upstream blacklisting each and every IP that is DDOSing it? The command packet would block messages to a specific IP from another specific IP. These packets would get forwarded through the network, all the way to the colocation centers or even the user’s IP.
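To make the idea concrete, here is a minimal sketch of what such a "block request" message might look like. No such protocol actually exists; the class name, fields, and validation logic are all invented for illustration.

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class BlockRequest:
    """Hypothetical message a DDoS victim might send upstream."""
    victim_ip: str      # the host being attacked
    attacker_ip: str    # the source whose traffic should be dropped
    ttl_seconds: int    # how long the block should last

    def validate(self) -> bool:
        # A switch would at minimum sanity-check both addresses
        # before acting on (or forwarding) the request.
        try:
            ipaddress.ip_address(self.victim_ip)
            ipaddress.ip_address(self.attacker_ip)
            return self.ttl_seconds > 0
        except ValueError:
            return False

req = BlockRequest("65.32.215.154", "213.245.64.234", 3600)
print(req.validate())  # True
```

Even this toy version hints at the trust problem discussed below: nothing in the message proves the sender really is the victim.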

Once you have switches smart enough to do that, they’d be expending enough resources on each packet that the switches themselves would be the ones getting DDoSed.

Moreover, how often do you want the Internet’s core routers to fail? We have enough problems with bogus BGP announcements causing all of India’s traffic to flow into a T1 line connected to a BeBox. (Over-dramatization for effect, but the underlying problem is real.)

And those switches’ response to bogus or forged anti-DDOS packets would be what exactly?

Well, it’s common practice to only accept such a thing from interfaces declared as local or trusted…
But anyway, there are appliances that provide that kind of protection in front of a server… e.g. a firewall appliance.

The DDOS is distributed. So, if you were in control of a botnet with 300,000 computers all across the world, good luck.

But could they stop a botnet in its tracks? Yes. However, the underlying infrastructure of the internet is based on peering agreements, where everyone mutually agrees to connect without impeding each other’s traffic. That’s already a tenuous situation, and it plays out at an international scale. You could shut down the points where the command-and-control servers connect to the internet, and that has been done successfully, only for the botnet operators to respond by distributing the command-and-control servers. Which is exactly what they did!

So, protecting a public-facing web page, like for a news agency or the like, is very difficult. If the zombie machines are mostly localized to a geographic IP range, then yes, you could just block those. And a lot of the time the IPs are lumped together, but not all the time.

I guess maybe the carriers could monitor the traffic of their users, and when there’s an abnormal occurrence, such as the same web request being made for the same page at the same time from a large customer base, they could start blocking outgoing traffic. That would alleviate the peering issue and turn it into a customer service issue. But then it’s a customer service issue with hundreds, maybe thousands, maybe more people wanting to know why their internet was turned off!

However, another twist: what about legitimate requests? Jeremy Clarkson was fired from Top Gear. Everyone is hitting up Jalopnik and other sites to read up on this and post directly to the pages, post via linked Facebook accounts, etc. All of this requires back-and-forth communication and server and network resources just to read and post a message. These are all legitimate users doing nothing malicious, and those web sites are essentially being DDOS’d right now by the activity.

Two more examples:

Michael Jackson’s funeral procession hit record-breaking levels of internet usage due to everyone streaming the live feed.

Barack Obama’s inauguration ceremony was a close second, and the same thing happened.

Both are groups of legitimate users doing nothing malicious, swamping networks.

Quite right.

But the OP’s idea was to have a DDOS victim send out a packet saying “Help; I (65.32.215.154) am being attacked by 213.245.64.234”. And this packet would fan out across the internet until the entire infrastructure was blocking nasty old 213.245.64.234. IOW, all the intermediate infrastructure would have to trust this unattributed packet.

That’s the point of the OP I was debunking.

Yes, smart reactive firewalls are in front of each and every corporate & government server worthy of the name. The OP’s idea was not that.

One solution that can help with a lot of the typical DDOS attacks going on nowadays would be for providers to filter at their border points to only transmit packets from IPs that they own.

This would prevent a lot of the forged source IPs that are used for reflection and other attack types.
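This kind of source-address filtering at the provider edge is the idea behind BCP 38 (RFC 2827). A minimal sketch of the check, using made-up example prefixes in place of a real provider’s address space:

```python
import ipaddress

# Prefixes the provider legitimately originates traffic from.
# These are documentation ranges used purely as examples.
OWNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_forward(source_ip: str) -> bool:
    """Forward an outbound packet only if its source address
    falls inside a prefix the provider actually owns."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in OWNED_PREFIXES)

print(should_forward("203.0.113.7"))    # True: legitimate customer source
print(should_forward("65.32.215.154"))  # False: forged source, drop it
```

A packet with a forged source address from outside the owned ranges never leaves the provider’s network, which is exactly what breaks reflection attacks at their origin.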

However, because this was not common practice for so long, it can be painful to implement if you’re not careful (it can be surprisingly hard for large providers to completely know what IPs legitimately flow through their network if they haven’t had to historically track that information).

And it’s a process, because there are thousands of network providers out there. We put it in last year, I know other providers are doing it, but it will be a while.

So, the new problem would be (or is) that your IP gets blocked or locked out because some IP claims your IP is a DDOS attacker?

Your emails don’t work, websites don’t work properly, etc., since the IP your ISP randomly assigned you got blacklisted.

Many routers can support Deny ACLs on an egress interface with only a minor hit in performance – if it’s a reasonable number of ACLs.

As noted above, the problem is that the attack is distributed, so you’d have to deny access to thousands of source IP addresses, and that might be an issue. Not a performance issue, though: a well-designed router does one lookup per packet to see if an ACL applies, even if thousands of ACLs might apply to a given packet. A resource issue: how many ACLs does the data plane support? It’s mostly a memory limitation in a TCAM, for routers where packets are handled by special-purpose packet processing engines.
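A toy model of that distinction, with an invented capacity number standing in for TCAM size: the per-packet lookup cost stays constant no matter how many deny entries exist, but the table itself can fill up.

```python
import ipaddress

MAX_ACL_ENTRIES = 4096        # invented stand-in for TCAM capacity
deny_list: set[int] = set()

def add_deny(ip: str) -> bool:
    """Install a deny entry; fails when the table is full
    (the resource limit, not a performance limit)."""
    if len(deny_list) >= MAX_ACL_ENTRIES:
        return False
    deny_list.add(int(ipaddress.ip_address(ip)))
    return True

def permit(ip: str) -> bool:
    """One O(1) membership test per packet, regardless of
    how many deny entries have been installed."""
    return int(ipaddress.ip_address(ip)) not in deny_list

add_deny("213.245.64.234")
print(permit("213.245.64.234"))  # False: dropped
print(permit("8.8.8.8"))         # True: forwarded
```

With thousands of distributed attack sources, the question is whether the table holds them all, not whether the lookup keeps up.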

It would apply only to a specific site of the target company of the DDOS attack.

The bigger problem is that an end system doesn’t know which router the packet came from. It could assume that if it were to send a packet to the same destination it would take the reverse path (generally true), try to do a traceroute on that path, and send to the first upstream hop. But if it’s dropping most incoming packets due to being overloaded, it’s likely to also drop the traceroute responses.

So, for a feature like this to work quickly, potentially targeted systems would have to somehow know (or find) all their first-hop routers.

To complicate matters further, sites that are likely targets of DDOS attacks aren’t single systems. They’re vast arrays of systems in data centers, with distributed arrays of systems redirecting each individual service request to a different server.

Finally, it can be difficult to detect the problem, differentiating it from simply more requests than the system was engineered to handle.

I’m sure there are hordes of engineers working on different approaches to this issue. If there was a simple solution, it wouldn’t be a problem.

Not so: your IP gets blacklisted.
Many services are blocked to your IP until it’s cleared again by your ISP - but that takes time, and not every web server gets updated.

Sure, you can turn your modem off and on again and hope you get a new IP from your ISP, letting someone else deal with the problem.

Negative. I specifically meant that these blocking packets would contain both the sender and the destination. So “(65.32.215.154) am being attacked by 213.245.64.234” is the blocking packet.

It wouldn’t fan out across the network; it would follow the route the attack packets take, in reverse. Each node back toward the attacker gets the message, forwards it to the node that sent it the attack traffic, and so on.

Ideally, it makes it all the way back to the user’s ISP. The ISP would then have objective data on which of its users are likely running infected machines that are part of a botnet, because you’d expect those machines to accumulate a large number of “complaint messages” from servers reporting they were DDOSed.
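The ISP-side tally described here could be as simple as counting complaints per subscriber IP. A sketch with invented addresses, counts, and threshold:

```python
from collections import Counter

# Invented threshold: how many distinct complaints before an
# ISP flags a subscriber's machine as a likely botnet member.
COMPLAINT_THRESHOLD = 100

complaints = Counter()
# Simulate incoming complaint messages naming subscriber IPs.
for _ in range(250):
    complaints["10.20.30.40"] += 1   # suspected bot: many victims complain
complaints["10.20.30.99"] += 2       # normal user: a stray report or two

suspected_bots = [ip for ip, n in complaints.items()
                  if n > COMPLAINT_THRESHOLD]
print(suspected_bots)  # ['10.20.30.40']
```

Of course, as the next reply points out, this only works if the complaints themselves can be trusted.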

Again, remember that the “source IP address” on a packet is about as secure as the “From” line on an email. You simply cannot trust it.

In many cases, “botnets” are either unneeded, or are just an intermediary for the attack.

A lot of current attacks are amplification attacks. Here’s a basic rundown on how they work.

Find a protocol, say NTP, the network time protocol. It has an innocuous feature where, for example, anyone can send a “status” command and it returns a batch of data about time offsets or whatnot.

The important thing is not what the data contains. The important thing is that I send 10 characters and the server returns 200. That would be a 20-1 amplification.

NTP is an extremely common protocol, and not many people bothered to lock it down because it’s not a “security” risk, there’s no sensitive or interesting information being returned. So I, as a bad guy, have a list of thousands of devices that return a 20-1 traffic rate. So if I have a 1-gig uplink, I can run a program that just runs through the list and sends “status” commands to all of them, and it returns 20 gigs of data.

Part of my program forges my source IP to be my target’s. So that 20 gigs of data goes to the target and obliterates them. The only reasons I’d use a botnet are to gain more outgoing bandwidth, avoid saturating my own connection, and obscure the trail a bit. But the actual target will never see the botnet/attacker.
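The arithmetic in this attack can be sketched directly, using the post’s own illustrative figures (10 bytes out, 200 bytes back, a 1-gig uplink):

```python
# Amplification: the reply is much larger than the request,
# and the forged source address aims the reply at the victim.
request_bytes = 10
reply_bytes = 200
amplification = reply_bytes / request_bytes
print(amplification)          # 20.0

uplink_gbps = 1.0             # attacker's own outgoing bandwidth
traffic_at_victim_gbps = uplink_gbps * amplification
print(traffic_at_victim_gbps) # 20.0 gigs of traffic hitting the target
```

The attacker’s cost is the small requests; the reflectors pay for the large replies, and the victim absorbs all of them.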

Blocking the servers that don’t have this seemingly innocuous command locked down yet? Difficult, because again it’s not one or two servers sending you gigs of data, it’s thousands of them each sending you small amounts. You would very quickly turn the internet into a whitelisting scenario where everything is blocked by default.