Network connection aggregation.

We have a Dell PowerConnect 5524 with two 10 gig fiber ports. Our server has an Intel X520 dual-port 10 gig fiber NIC. I wish to create a static LAG.

24 connections come into the switch, and we have two cables out to the NIC. I have read all the manuals, online tips, etc., but I still fail to aggregate the ports successfully. Sometimes the NIC reports that the switch is not set up properly.

I suspect that I am missing an overall concept that the manuals take for granted that I know.

I LAG the two fiber ports on the switch in various ways. No luck.
Do I have to actually include all 24 ports that are connected to other switches as well as the two fiber ports in the same LAG?

The NIC teams successfully, but because it is not happy with the switch setup, it reverts to failover teaming instead of a static LAG. We are feeding a 16-SSD RAID 0, and speed is required to utilize it fully.

What OS are you running and are you using native teaming or driver provided?

I’m running Win2016 and using native teaming. You need to edit the team and change the properties from failover to both ports active.
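For reference, on Server 2012 and later the same change can be made from PowerShell with the built-in LBFO cmdlets; a minimal sketch, with the team and adapter names as placeholders:

```powershell
# Create a switch-dependent static team (both members active, no LACP).
# "NIC1"/"NIC2" and the team name are placeholders for your adapters.
New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Static -LoadBalancingAlgorithm Dynamic

# Or convert an existing failover team to static mode:
Set-NetLbfoTeam -Name "Team10G" -TeamingMode Static
```

Static mode here still requires the matching LAG on the switch; if the switch side is wrong, the team will not pass traffic correctly.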

You have to both create the LAG group on the switch and set a compatible hashing method on both the switch and the bond/team.
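And to the earlier question: no, only the two 10 Gb fiber ports that run to the server go in the LAG — a LAG only bundles links between the same two devices, so the 24 edge ports stay as ordinary ports. A hedged sketch of what the switch side might look like (the port names, channel number, and exact syntax are assumptions — check the 5524 CLI guide):

```
configure
interface range tengigabitethernet 1/0/25-26
channel-group 1 mode on
```

Here `mode on` is typically the static (no-LACP) variant and `mode auto` the LACP variant; the mode must match what the NIC team is set to.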

If you are using iSCSI on this link you should not use link aggregation and should use iSCSI multipathing.

Also, if you’re not aware of this, bonding does not increase throughput to single clients*. It increases total bandwidth.

  • unless you are using XOR balancing, which becomes inefficient with more than three or so clients and which you should avoid
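The reason a single client doesn’t go faster: the switch picks one physical link per flow with a deterministic hash, so all frames between the same pair of hosts ride the same wire. A toy sketch of the idea (not any vendor’s actual algorithm):

```python
def pick_link(src_mac: int, dst_mac: int, num_links: int) -> int:
    """Toy XOR-based link selection: every frame between the same
    pair of hosts always hashes to the same physical link."""
    return (src_mac ^ dst_mac) % num_links

# One client talking to the server always lands on the same link,
# so its ceiling is one link's bandwidth; multiple clients spread out.
server = 0xAABBCCDDEE01                            # made-up MAC
clients = [0x112233440000 + i for i in range(4)]   # made-up MACs
links = [pick_link(c, server, 2) for c in clients]
```

With many clients the flows spread across both links; with one client you only ever use one of them.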

Note: if you aren’t using a protocol that supports multichannel, like modern CIFS/SMB or iSCSI, you will want to use the X520’s internal Link Aggregation Control Protocol (LACP) support if performance, not availability, is your concern.

While there will still be problems like the hashing method etc., most “static LAG” configurations run within the OS kernel and add enough latency that even reaching the wire rate of a single 10GbE port becomes difficult.

This is because 10GbE tends to be limited by the bandwidth-delay product, i.e. by latency rather than bandwidth. The hash map creates a spinlock dependency which adds latency, and while the NIC’s internal support mitigates some of this latency, it still adds some.
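To put numbers on the bandwidth-delay product point: at 10 Gb/s even sub-millisecond latency demands a large in-flight TCP window, so any added per-packet latency hurts more than raw bandwidth does. A quick back-of-the-envelope sketch (the latency figures are illustrative, not measured):

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: how many bytes must be in flight
    to keep the pipe full at the given link speed and RTT."""
    return link_bps * rtt_s / 8

# 10 GbE with a 0.5 ms RTT needs ~625 KB in flight; add another
# 0.5 ms of stack latency and the required window doubles.
base = bdp_bytes(10e9, 0.0005)
with_extra_latency = bdp_bytes(10e9, 0.001)
```

If the TCP window can’t grow that large (or the stack adds latency), the link simply never fills, no matter how many ports are bonded.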

As Cleophus mentioned, aggregation has numerous issues, and its placement even on slower links often leads to TCP window collapse.

If you can add information about what the network is being used for and what platforms you are using we may be able to suggest a configuration which will most likely reach your goals.

Sorry I have not replied.
I was in the field with sketchy Wi-Fi.

Unfortunately we are using 2008 R2 on the server. I am experimenting to see if we can update it and still be compatible with certain vendor software. So the OS does not set up the NIC teaming itself; that sets up nicely with newer versions of Server.

We must use the Intel PROSet software, which is quite good. The team is easy to initiate, and various configurations for the team are quick to set up and try. But the software informs us that the switch is not configured for a LAG connection.

The full setup: 10 to 12 Dell PowerConnect 24-port switches, each with 24 devices attached. Each device can have up to 14 GB of files to transfer, spread across very many files. The switches all feed into one 24-port PowerConnect with two 10 Gb ports. We use Microsoft’s FTP server set up via IIS, on Server 2008 R2 running on single- or dual-CPU systems with 8 to 13 TB SSD RAIDs composed of 8 to 24 drives. One system has a Mellanox NIC; the rest are Intel X520.

The devices are downloaded to the server simultaneously due to the nature of the operation. It could be done in various numbers, but is not. If the data is downloaded successfully, the devices are erased (after the data is transferred off the SSD RAID, which is a RAID 0). More than a terabyte is usually downloaded in a session, sometimes more than 3 TB.

We can manage this in less than 2 hours if things are going smoothly. But time is money, so we want to see if a LAG can increase throughput.
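It may be worth doing the arithmetic first to see whether the single 10 GbE link is even the bottleneck. A rough sketch using the session figures from the post (decimal TB, protocol overhead ignored):

```python
def avg_throughput_gbps(total_tb: float, hours: float) -> float:
    """Average line rate needed to move total_tb terabytes
    in the given number of hours (no protocol overhead)."""
    bits = total_tb * 1e12 * 8
    return bits / (hours * 3600) / 1e9

# 3 TB in 2 hours averages roughly 3.3 Gb/s -- well under one
# 10 GbE link, so bursts, concurrency, the disks, or FTP itself
# may be the real limit rather than the wire.
rate = avg_throughput_gbps(3, 2)
```

If the sustained average is only a third of one port, a LAG would mainly help with the simultaneous-burst case, not the overall session time.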

Another question.
The FTP server has its own IP. Can I feed the teamed NIC from two switches, each one with a 10 Gb port? I have not tried that. We have the spare switches.