is internet backbone bandwidth limited by RAM speed? or do multiple servers control one laser?

When I read about the huge bandwidth of the internet backbone, or about, say, Google planning gigabit-per-second connections for low-key business use, I wonder how that squares with the speed limits of the computers on either end, especially the speed of RAM.

So how do they do it? Do they actually have multiple computers connected to the same cable, transmitting data through a single shared laser, or even through multiple lasers? Or is this gigabit-per-second figure really a matter of lots of thin fibers bundled together that are, once again, divvied up between different machines in the same locale?

RAM is easily able to keep up. The latest chipsets are capable of over 50 GB/s (gigabytes per second) of memory I/O, so a 1 Gb/s (gigabit per second) link is no problem. The fastest widely deployed networks are 10 Gb/s, and even at that rate commodity hardware is able to keep up.
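The comparison hides a classic unit trap: RAM bandwidth is quoted in gigabytes per second, network line rates in gigabits per second. A quick back-of-the-envelope check (using the 50 GB/s and 10 Gb/s figures above) shows how much headroom RAM actually has:

```python
# Back-of-the-envelope comparison of memory bandwidth vs. network line rate.
# Watch the units: RAM is quoted in gigaBYTES/s, networks in gigaBITS/s.

ram_bandwidth_GBps = 50           # ~50 GB/s for a modern chipset (figure above)
link_rate_Gbps = 10               # a 10 Gb/s network link

# 8 bits per byte: a 10 Gb/s link moves at most 1.25 GB/s of payload.
link_rate_GBps = link_rate_Gbps / 8

headroom = ram_bandwidth_GBps / link_rate_GBps
print(f"10 Gb/s = {link_rate_GBps} GB/s; RAM has {headroom:.0f}x headroom")
```

So even a saturated 10 Gb/s link consumes only a fortieth of the memory bandwidth, before counting any protocol overhead.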

What you typically have is 10G ports plugged into optical aggregators. These platforms take the various inputs (anything from 1G or 10G Ethernet to FC-100, FC-200, etc.), wrap them up with some framing bits, optically multiplex them together, and ship them off across a fiber-optic network. Wikipedia has a decent explanation of optical multiplexing. From an optical-transport perspective, 40G and 100G links are the new shiny toys coming out.
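The aggregation step above can be sketched as a toy model: each client signal gets framing overhead and its own wavelength, and all the wavelengths share a single fiber. The client names are taken from the answer; the framing format and wavelength values are purely illustrative, not any real hardware's behavior:

```python
# Toy model of an optical aggregator (wavelength-division multiplexing).
# Framing markers and wavelengths are made up for illustration only.

clients = ["1G Ethernet", "10G Ethernet", "FC-200"]

def frame(payload: str) -> str:
    """Wrap a client signal with hypothetical framing bits."""
    return f"[SOF]{payload}[EOF]"

# Each tributary is assigned its own wavelength (channel) on the fiber;
# the spacing loosely mimics a DWDM channel grid.
wavelengths_nm = [1550.12, 1550.92, 1551.72]

# The "fiber" carries all framed signals at once, one per wavelength.
fiber = {lam: frame(c) for lam, c in zip(wavelengths_nm, clients)}

for lam, signal in sorted(fiber.items()):
    print(f"{lam} nm -> {signal}")
```

The point of the model: the laser isn't shared by taking turns on one channel; each input rides its own wavelength, and the demultiplexer at the far end separates them again.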