What do I need for 1Gbit Ethernet?

Rob Williams

Editor-in-Chief
Staff member
Moderator
I'm going to be taking a look at Thecus' latest NAS that I posted about a couple of weeks ago, and because it can use 1Gbit Ethernet, I figured now would be a good time to look at how to go about upgrading my network to support it. I often transfer files from one PC to another as well, and sometimes I'm dealing with 10GB+, so having the increased network speed would be great.

I am not a networking guru, but I reckon that most of what I need is a 1Gbit-capable router, better cables (Cat 6) and of course, a network card that can handle it (fortunately, most of today's motherboards do). So is that it? I keep hearing about using a router that supports Jumbo Frames, which mine doesn't (ASUS RT-N16), but I've been told that it can be flashed with a non-ASUS firmware that can make it happen.

I'm just looking to make sure I have all of the angles covered here. I'd love to be able to copy and move files both to the NAS and to other PCs at higher-than-10Mbit speeds, so I'm anxious to make sure I have everything I need before that NAS arrives.

Thanks for the help!
 

Tharic-Nar

Senior Editor
Staff member
Moderator
For simple Gigabit networks it's mostly a case of plug and play; if you want the most out of it, though, you'll need to do some tweaking.

CAT6 cable is not needed; you can get away with CAT5e for 1Gbit speeds. CAT6 just has tighter twisting and better protection against interference and crosstalk. You only need CAT6 if you're going over 1Gbit, like with teaming or something, in which case it's good for up to 10Gbit (supposedly).

You don't need a Gbit router either; you can use a switch that supports jumbo frames, since all transfers will go through the switch anyway if both the NAS and the PC are connected to it. I'll try and dig up the firmware replacement for you if I can find it again, so you don't need to buy a new switch.

Jumbo frames are the trickier part of the link. Every part of the chain between the two end points needs to support jumbo frames, and at the right size as well.
[Attached image: JFrame001.png]
The NAS, the NIC in your PC and the switch/router all need to be set to use the same jumbo frame size. Most switches auto-sense the packet size, so you shouldn't need to worry too much, but if there are options available, it's best to set the same size at every point. The most common sizes are 4k, 7k, 9k and 16k packets; a higher packet size increases maximum throughput, but can increase latency on smaller transfers as the buffer waits to fill before transmitting.

The jumbo frame option is set through the Advanced panel under the properties/configuration for your NIC driver. Some NICs expose a higher MTU setting instead of a "jumbo frames" option, since naming schemes vary. The biggest problem will be making sure they all line up, and that the OS doesn't interfere. In most cases the NIC driver will override the OS MTU on that NIC, but you may need to edit it manually...

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<interface-ID>
Edit > New > DWORD Value, named MTU
Edit > Modify > set the value to 4074, 7418, 9024, etc.
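If you'd rather check or change the IP-layer MTU without touching the registry, a rough sketch from an elevated command prompt (the interface name here is just an example, and this doesn't replace the jumbo frame setting in the NIC driver itself):

Command Console:

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent

The first command lists the current MTU per interface; the second sets it persistently on the named interface.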

You might be able to rely on Path MTU Discovery as well....

All points need to be set to the same size, so if you have one NIC that does 7k and 4k, but the other doesn't support 7k, only 9k and 4k, then you will have to use 4k. Then there are the different ways of counting sizes: some will include the header bytes in the count, others will ignore them, etc., which makes the whole process very difficult. On top of this, some NASes, like the ReadyNAS, will only support jumbo frames on write, not read, so you won't get the full benefit.


So, to use a gigabit network without jumbo frames, it's easy, everything is plug and play: gigabit NICs, a gigabit switch and CAT5e or CAT6 cables. Most NICs and switches will auto-negotiate and enable gigabit without you having to worry. You'll get about 30-50MB/s transfers. With jumbo frames and all the complications etc., you can get about 80-90MB/s+; any more is unlikely under real-world use, though it can peak at 120MB/s. Also, some NASes will have very slow gigabit connections, typically about 20-30MB/s, because the software RAID, hard drive or some such causes a bottleneck, so you may want to test with another PC first to make sure all is OK.
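If you want to take the drives and the NAS out of the picture entirely, a synthetic tool like iperf between two PCs is the quickest sanity check (just one option, and the address below is only an example; iperf needs to be installed on both machines):

Command Console:

On the first PC (acts as the server): iperf -s
On the second PC (points at the first): iperf -c 192.168.1.100 -t 30

The reported bandwidth is the raw TCP throughput of the NICs, cables and switch over a 30-second run, with no disks involved.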

A quick test to check packets are being sent correctly would be to do a fragmentation check...

Command Console:

ping 192.168.1.100 -f -l 1472

Change the IP address to your target. The switches are -f (set the Don't Fragment bit) and -l (payload size in bytes). With a standard 1500-byte MTU, the largest payload that will pass is 1472 (1500 minus 28 bytes of IP and ICMP headers). With jumbo frames on at 7k, you could raise the payload to 6500 or so; if you get 'Packet needs to be fragmented but DF set', then jumbo frames probably aren't working. If it works at 1472 but fails anywhere above that, you're still running standard frames.
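If one end happens to be a Linux box, the equivalent check (assuming a 9000-byte MTU, so an 8972-byte payload after the 28 bytes of headers) would be something like:

Terminal:

ping -M do -s 8972 192.168.1.100

-M do forbids fragmentation and -s sets the payload size; if the frame is too big for any hop, you'll get a "message too long" style error instead of a reply.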

Even more that could complicate matters, especially with Windows, are the LanManServer and LanManWorkstation settings, with buffering and queue depths and all sorts of evil options that can break a computer very easily, but can significantly improve network performance when modified correctly. Those tweaks are... a little beyond my scope and are more server oriented. I've dabbled with them before, but usually ended up causing nonpaged pool errors with too many connections or excessive buffers... and they largely have nothing to do with getting a Gbit network working, lol.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
Fair bit of reading involved, but this is one of the links for the Firmware Replacement for the RT-N16 - Tomato ND USB Mod with kernel 2.6.

The other option would be to use DD-WRT, but that doesn't support jumbo frames, which is a shame since it is far easier to use.
 

Glider

Coastermaker
You don't need jumbo frames for Gbps...

All you need is a decent NIC ( I recommend Intel ones! ), a gigabit switch and decent (cat 5e) cables.
 

Tharic-Nar

Senior Editor
Staff member
Moderator
They're not needed, agreed, but they can make a difference, especially when handling very large files. With the NAS, you're more likely to hit hardware limitations (hard drive or NIC) before you see any benefit from jumbo frames, but since this Thecus NAS supports them, we might as well test it.

The integrated NICs in most motherboards are questionable in terms of performance, and I do agree with using Intel NICs; you just get better performance and reduced CPU load. Most of the integrated NICs from Marvell and Realtek I've used can cause massive CPU usage, not to mention the buffering problems that inevitably arise with large file transfers.
 

Glider

Coastermaker
That is because cheap Realtek and Marvell NICs push the work onto the CPU, while Intel NICs don't. But do not expect them to perform like real TCP-offloading NICs (which cost A LOT, but you don't need them). We have quad-port TCP-offloading cards (one card, four ports) in the labs at work, and they cost ~1000 euro each...

The main problem with jumbo frames is that every device has to support them, and the settings are confusing across platforms. For instance, for "regular" frames, in Linux you set the MTU to 1500 (header included), while in Windows you set it to 14XX (header not included)... It only gets more confusing with jumbo (6k or 9k) frames...
 

Psi*

Tech Monkey
Rob ... what did you end up doing?

I am interested in a 1Gbit network as I routinely transfer 500MB to a few GB of files between machines. To augment this I am thinking about SSDs as the source and "sink" of the files. I haven't attempted to figure out whether there is any measure of the transfer rate directly from/to SSDs across a network, but I am looking for any significant improvement I can find.

So what is the fastest setup? SSDs & network parts?

Any "neck of the funnels" that I am over looking?
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
I ended up taking the overkill route for the network card (Intel I-340, ~$325), although a ~$100 Intel NIC should do just fine. For the router, I went with NETGEAR's WNDR3700 (~$130), which I love. I didn't know what a reliable router was until I got this thing. With a previous router I had, gigabit speeds would work when they wanted to, but with this thing, they always work, and are very reliable. I copied over a 30GB folder from one PC to the other in about six minutes the other day... which I find impressive. On the old network I had, it would have been quicker to copy to a USB device and carry that over to the other PC.

I wouldn't recommend SSDs for the sole purpose of having faster network transfers, to be honest. While massive transfers could benefit from SSDs on either side, the gains are minimal and don't scale with the investment. Gigabit networks top out at around ~125MB/s (after overhead it's more like 100MB/s), which is about what some of the faster mechanical hard drives on the market can already sustain. SSDs would be better at handling prolonged transfers (like 100GB+), but unless you are transferring such mega-sizes all the time, the price premium on SSDs just isn't worth it.

In case you are interested, I am actually planning to do this kind of testing in the next couple of weeks, as I plan to introduce network performance to our motherboard reviews going forward.
 

Psi*

Tech Monkey
Also, thanks for the comment about the SSDs not being worth it. I have "felt" that since they were introduced, but have been getting drawn to the dark side. :eek:

Is network performance evaluation about a program trying to stuff the network pipe? And/or about throughput between a drive and the network? I would be interested in all variations, as well as educated opinions, since the testers often get a sense of trends.
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Is network performance evaluation about a program trying to stuff the network pipe? And/or about throughput between a drive and the network? I would be interested in all variations, as well as educated opinions, since the testers often get a sense of trends.

This is something I am in the process of figuring out. While for one test I'd like to do real-world copies, I think some synthetic benchmark should be introduced that can properly saturate the pipe. I've talked to Brett about a program before, but I forgot the name of it, and I'm sure Glider will be able to offer some recommendations as well.

The problem with real-world testing is that stray variables can cause fluctuations, but I'd still like to try it, because if one board copies at 100MB/s and another at 60MB/s, that raises an obvious issue.
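For the real-world side, one low-effort option (just a sketch; the folder, address and share name are made up) is to let robocopy do the copy and read the throughput from its summary:

Command Console:

robocopy C:\TestData \\192.168.1.50\TestShare /E /NP

The summary at the end reports the bytes copied and the average speed, so the same folder can be replayed against each board for a like-for-like comparison.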
 

Glider

Coastermaker
How we do it at work is impossible for any tech site to reproduce. We use packet generators (like Ixia and Spirent). We are more interested in protocol handling and throughput than in "useful" transfers. For us it makes no sense to see how long a file takes to transfer, but it does matter how many packets we can cram through, regardless of source, destination or content.

More realistically for you... take as much out of the equation as possible. I would put two Linux systems back to back with a crossover network cable.

Why back to back? It takes the switch out of the equation. Keep the cables short and free of EM influences (so keep power cables away); longer cables degrade signal quality (even with the twisted pairs). Use off-the-shelf cables, not home-brewed ones (the shielding, especially at the plug, is way better).

Why Linux? For starters, Linux's networking stack is superior to what Windows calls a networking stack.

But more importantly, Linux can easily do RAM disks (which takes the HD out of the equation).
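Setting one up is a one-liner with tmpfs; a minimal sketch, where the size and mount point are arbitrary examples:

Terminal:

mkdir /mnt/ramdisk
mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk

Anything written under /mnt/ramdisk now lives purely in RAM, so the hard drives never enter the picture.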

How would I set up both systems? In a client/server role, of course. I would use the three main network file system protocols (listed from most to least efficient): iSCSI, NFS and CIFS/SMB (Windows file sharing). I would not dig into FTP and HTTP because they have too much protocol overhead.

On the server, first off, I would set a static IP to eliminate the DHCP/PPP overhead. You would be surprised how much bandwidth is wasted on those requests. Also use IPv4; while it is fun to use v6, it adds about 1-2% protocol overhead (on our 100Gbps ports at work you really notice this ;)). I would set up a beefy RAM disk which contains the "data".

On the client, of course, also a static IP. Make the receiving end a RAM disk as well, or /dev/null.
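For static addressing on a back-to-back link, something as simple as this will do (the addresses and interface name are just examples):

Terminal:

On the server: ip addr add 192.168.100.1/24 dev eth0
On the client: ip addr add 192.168.100.2/24 dev eth0

With no DHCP server on the cable, that is all the configuration the link needs.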

For the iSCSI service, use ietd (the iSCSI Enterprise Target); it is very easy to install and performs VERY well.
For NFS and Samba I would use the standard Linux ones.
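As a rough sketch of the NFS piece (the paths and addresses carry on from the examples above, and the options are only a starting point):

Terminal:

On the server, add to /etc/exports: /mnt/ramdisk 192.168.100.2(rw,async,no_subtree_check)
Then run: exportfs -ra
On the client: mkdir /mnt/nfs && mount -t nfs -o proto=tcp 192.168.100.1:/mnt/ramdisk /mnt/nfs

Swapping proto=tcp for proto=udp on the client mount is an easy way to compare the two transports mentioned below.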

For throughput, just copy something from the server to the local RAM disk (and back). For measurements, I would run ntop or bwmon on both the server and the client. Although they should give identical results (except for traffic direction, of course), the results will probably vary.
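A simple way to generate and time such a copy (the file size and paths are arbitrary, carrying on from the sketches above):

Terminal:

On the server: dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=2048
On the client: time dd if=/mnt/nfs/testfile of=/dev/null bs=1M

The first command creates a 2GB test file in the server's RAM disk; the second pulls it over NFS and throws it away, with the elapsed time giving the average throughput.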

I would also try to show the protocol overheads. Switch NFS between UDP and TCP mode; you'd be surprised how much more overhead TCP adds.

I would definitely vary the MTU sizes, from the standard 1500 up to 9k. Using decent network cards is critical for this; I recommend true offloading cards, but regular Intel ones will do too. But do realize that using jumbo frames will not benefit connections to the internet; many ISPs don't have customer-facing equipment (mainly DSLAMs) that can or will do jumbo frames.
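Changing the MTU for each run is a one-liner on Linux (the interface name is just an example, and the NICs at both ends have to accept whatever size you pick):

Terminal:

ip link set eth0 mtu 9000
ip link show eth0

The second command just confirms the new MTU actually took.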

I think by doing tests like this you get the most "reproducible" results. But these aren't the "real-life" ones. It just ensures a stable test.
 

Psi*

Tech Monkey
I think Glider's thoughts could establish a baseline and maybe provide a means for multiple follow-up comparisons. The suggestion is that this would establish a maximum possible rate for the pair of NICs plus cable.

Then add various switches back in to compare them, boot into Windows to see the "Windows hit", maybe even tell me what effect all of those NIC options have on throughput :confused:, and even compare different CAT cables. Being in a related industry, I can say there is no way that all of these cables and RJ connectors perform as well as they are rated. In real companies that produce quality cable assemblies, extensive analysis is done, but cable assemblies and the receptacles in the cards are way too easy to crank out as widgets. End of rant.

Perhaps in the back-to-back PCs, any measurable CPU load could be noted per NIC?
 

Rob Williams

Editor-in-Chief
Staff member
Moderator
Glider, thanks a ton for the great information!

I have to admit, though, that while I do appreciate being able to deliver the best possible results, this methodology is a little excessive and would prove far too time-consuming. Our adding network testing to our motherboard content is simply meant to be another check to make sure there are no bottlenecks due to the board's design; it isn't meant to be an exhaustive look at its network performance.

Our readers do appreciate good network performance, but not to the extent of what this testing could provide. They care about throughput and also latencies, neither of which ever seem to be a real problem for most people. Some boards, though, may handle transfers and negotiations worse than others, and that's what we want to find out, without sinking too much time into it.

For us, things have to be evaluated from an ROI perspective, and to dedicate a ton of time to testing the NIC on a motherboard just isn't the best idea. It'd be a completely different story if we were testing and reviewing different network cards, but to be honest, those who care a LOT about network performance aren't going to use the on-board NIC anyway... they'd go discrete.

THAT SAID, I do have a couple of questions.

Which Linux distro is best for network testing, in terms of ease-of-use and things being done right?

RAM disks are fine, but what's wrong with just going the SSD route? The SSDs we'd use for such testing would have twice the throughput capability of 1Gbit Ethernet and are low-latency.

I know you don't like Windows, but what would the best kind of test be for that?

Again, thanks a ton for all the information, but where our target audience and time we're able to dedicate to testing is concerned, the "perfect" method is a little overwhelming.
 

Glider

Coastermaker
Which Linux distro is best for network testing, in terms of ease-of-use and things being done right?
I would go for Debian... businesscard install, no X, no fuss, no overhead.

RAM disks are fine, but what's wrong with just going the SSD route? The SSDs we'd use for such testing would have twice the throughput capability of 1Gbit Ethernet and are low-latency.
I have never used SSDs, but that might be an option too ;)

I know you don't like Windows, but what would the best kind of test be for that?
I hope every new version of Windows brings an improved networking stack... but I seriously don't know. Doing anything related to networking on Windows is just asking for trouble (dual-homing, anyone?).
 