DDOS network issue - very large DDOS - resolved

Re: Possible network issue - checking

Both Tanmaya and I are in and checking on this matter now.
 
Re: Possible network issue - checking

We are on the phone with the NOC now, as there may also be a switchport issue on a core switch we have a link to.
 
Re: Possible network issue - checking

There is almost 80% packet loss at times, which is what is causing this. Restoring full access is the top, and only, priority right now.
 
Re: Possible network issue - checking

It is a multiple-source attack coming in at well over 1 gigabit. We are working to get what is being attacked shut down.
 
Re: Possible network issue - checking

Graphs show a bit of legitimate traffic making it through, but most is being drowned out by this DOS, which we are still on the call working to get blocked before it hits us.
 
Re: Possible network issue - checking

Over 6 gigabits of traffic is hitting the upstream :(
 
Re: Possible network issue - checking

We are allowing a temporary null route of one Hyper-V VPS range in the 199. segment.
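For anyone curious, a null route simply tells the router to discard all traffic to a destination. A minimal sketch of what that looks like on a Linux box with iproute2 (the prefix shown is the RFC 5737 documentation range standing in for the real one, and root access is assumed; in practice this was done at the router/upstream level):

    # Minimal sketch: install a blackhole (null) route with iproute2.
    # Assumes root on a Linux router; the prefix is a documentation
    # placeholder, not the range actually involved here.
    import subprocess

    PLACEHOLDER_PREFIX = "192.0.2.0/24"

    def null_route(prefix: str) -> None:
        # The kernel silently discards all traffic destined for the prefix.
        subprocess.run(["ip", "route", "add", "blackhole", prefix], check=True)

    def remove_null_route(prefix: str) -> None:
        # Withdraw the blackhole once the attack subsides.
        subprocess.run(["ip", "route", "del", "blackhole", prefix], check=True)

    null_route(PLACEHOLDER_PREFIX)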
 
Re: DDOS network issue - checking - Multigig DDOS

This is a very complex attack, 6+ gigabit and hard to trace.
 
Re: DDOS network issue - checking - Multigig DDOS

We have null routed a range that is important, but it worked. We are now working to fine-tune the block to exactly the IPs being attacked, as we were able to get more information once the network started working better.
 
Re: DDOS network issue - checking - Multigig DDOS

All is resolved now, and has been for a while. I was posting updates on Twitter while on the phone and making progress.
We first blocked a large range, then narrowed it down to the point we had the exact destination. All of our monitoring was off the charts, and sorting through it was killing CPUs, making the target complex to find.
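To illustrate the narrowing step, here is a rough sketch of ranking per-destination traffic inside a blackholed range to find the real target; the range, flow records, and numbers are all hypothetical:

    # Rough sketch: rank destinations inside a blackholed range by
    # observed volume to find the actual attack target. All data is
    # hypothetical example data.
    from collections import Counter
    from ipaddress import ip_address, ip_network

    BLACKHOLED = ip_network("192.0.2.0/24")  # placeholder range

    # (dst_ip, bytes) pairs as they might come from flow samples
    flows = [
        ("192.0.2.45", 1_400_000_000),
        ("192.0.2.45", 1_200_000_000),
        ("192.0.2.7", 90_000),
        ("198.51.100.3", 12_000),
    ]

    bytes_per_dst = Counter()
    for dst, nbytes in flows:
        if ip_address(dst) in BLACKHOLED:  # only look inside the wide block
            bytes_per_dst[dst] += nbytes

    # The heaviest destinations become candidates for narrow /32 null
    # routes, letting the rest of the range come back online.
    for dst, total in bytes_per_dst.most_common(2):
        print(f"{dst}: {total:,} bytes -> candidate for /32 blackhole")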
 
Re: DDOS network issue - checking - Multigig DDOS

DDOS Followup
Currently there are two targeted IPs still fully null routed for all incoming traffic. Around 3am Central Daylight Time, one of the two became the recipient of a very large DDOS attack from hundreds of sources. It was a UDP flood; in and of itself that is not uncommon and has been blocked many times before, but the sheer size of this one simply flooded out switches on multiple levels, including some outside of our network or control. Once we were able to properly identify the target IP, we got just that IP and the other one on the same server blocked and filtered out with the help of upstream routers, preventing the traffic from coming in further and causing these issues.
Due to its size, the attack impacted our network as well as some others before filtering took place.
This kind of situation is hard to control fully, but we have plans in motion to help speed the recovery process should it ever occur again. While we certainly hope it will not, in this age you cannot just hope; you must plan for it to happen again.
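To give a sense of why the target took time to pin down, here is a small sketch of the shape a UDP flood takes in sampled traffic: hundreds of sources converging on a single destination. All addresses and counts here are invented:

    # Small sketch: spot a UDP flood by the share of sampled packets
    # converging on one destination. All sample data is invented.
    from collections import Counter

    # (protocol, src_ip, dst_ip) tuples as a packet sampler might emit
    samples = [
        ("udp", f"203.0.113.{i % 250 + 1}", "192.0.2.45") for i in range(9500)
    ] + [
        ("tcp", "198.51.100.8", "192.0.2.7") for _ in range(500)
    ]

    udp_dsts = Counter(dst for proto, _src, dst in samples if proto == "udp")

    dst, hits = udp_dsts.most_common(1)[0]
    srcs = {src for proto, src, d in samples if proto == "udp" and d == dst}
    # Many distinct sources, one destination: the classic flood shape.
    print(f"{dst} gets {hits / len(samples):.0%} of sampled packets "
          f"from {len(srcs)} sources")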
 