I was recently listening to the Packet Pushers podcast and the topic of 1Gbps Ethernet switching to the desktop came up. I believe the argument was that most organizations or enterprises don’t really need 1Gbps to the desktop and would be fine with 100Mbps to the desktop. I don’t agree with that opinion and thought I’d make my argument here and sample what everyone else thinks on the subject.
I like to use the following analogy when explaining the difference between 100Mbps and 1000Mbps to executive leadership:
100Mbps – Is equivalent to a 1 lane highway with traffic allowed to move at 10 mph.
1000Mbps – Is equivalent to a 10 lane highway with traffic allowed to move at 100 mph.
You don’t like that analogy? Most people immediately recognize the difference in bandwidth/throughput (the lanes of the highway) but neglect to consider the difference in latency (the speed limit). It’s that latency and speed that really benefits the desktop environment, especially with applications that turn out a lot of packets.
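To put rough numbers on the "speed" half of the analogy, here is a back-of-envelope sketch; the message count and size are my own assumptions for illustration, not figures from any real application:

```python
# Back-of-envelope sketch with assumed numbers: a "chatty" app sending
# 10,000 sequential 500-byte messages. Only serialization time (the time
# the bits spend going onto the wire) is counted.
MESSAGES = 10_000
MESSAGE_BITS = 500 * 8

for name, bps in [("100Mbps", 100e6), ("1000Mbps", 1e9)]:
    total_ms = MESSAGES * MESSAGE_BITS / bps * 1e3
    print(f"{name}: {total_ms:.0f} ms on the wire")  # 400 ms vs. 40 ms
```

A tenfold link-speed increase cuts the per-packet wire time tenfold, which is exactly where a packet-heavy application feels the difference.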
Desktops or VDI or Terminal Services
It’s almost standard to find 1Gbps NICs on every desktop and laptop manufactured today. However, if you’re only deploying VDI or Terminal Services (Citrix) you might be tempted to stay with 100Mbps, as there would be no real benefit to running 1000Mbps to a thin client or similar device.
It’s no surprise that cost is the leading factor in this debate. Let’s admit it: if 1000Mbps cost the same as 100Mbps there would be no argument. Let’s look at some pricing:
- Avaya Ethernet Routing Switch 4550T-PWR $2,564 ($53/port)
- Avaya Ethernet Routing Switch 4548GT-PWR $4,476 ($93/port)
I’m going to assume that most vendors are in the same general ballpark. So there’s roughly a $40/port premium for 1000Mbps over 100Mbps on an enterprise-class switch.
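For reference, the per-port figures above come straight from dividing each list price by the 48 ports on the switch:

```python
# Reproducing the per-port arithmetic from the prices quoted above
# (both Avaya switches are 48-port models).
fe_price, ge_price, ports = 2564, 4476, 48

print(f"100Mbps:  ${fe_price / ports:.0f}/port")               # ~$53
print(f"1000Mbps: ${ge_price / ports:.0f}/port")               # ~$93
print(f"Premium:  ${(ge_price - fe_price) / ports:.0f}/port")  # ~$40
```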
A major consideration in any enterprise or organization is future-proofing your investment. The life cycle of any network infrastructure deployment should be at least five years, in my opinion.
Let’s not forget that there are significant cabling requirements for Gigabit Ethernet. All eight wires (four pairs) are required for 1000BASE-T, whereas 100BASE-TX and 10BASE-T require only four wires (two pairs). Additionally, the cabling plant must be CAT5 or better.
There are many factors involved in making a decision whether to deploy 100Mbps or 1000Mbps to the desktop. Hopefully I’ve covered the major decision points above. The bandwidth, throughput and latency are obvious pros. The cost per port and cabling requirements can be significant cons.
What do you think? What are you deploying in your network?
The street/network analogy is perfect for all the non-tech guys.
Everybody today wants Gigabit Ethernet to the desktop.
But if you take a deeper look at the traffic flowing on desktop ports, you will find only a very few ports whose utilization is really high enough to require Gigabit Ethernet.
I don’t know if I totally agree with the idea of desktops needing gigabit connections. As you said, a lot of companies (like my own, with over 3,500 employees) use Citrix desktops on thin clients. The benefit of gigabit here would be nil.
I tell people it’s like comparing a coffee stir, garden hose, and a fire hose. But I think it conveys the same idea.
For the gig-to-the-desktop debate, I think the answer is “it depends”. For our 1,300 node network, 100Mbps is king, mostly. What we are finding is that one of our apps is very packet hungry. The problem isn’t the amount of data, but the inefficiency of the application and its need for a lot of data in very small sizes. And since the vendor says “it works fine for us”, obviously it must be our network, right? So what we found is that if we put these nodes on a gig port, the app worked “just like” the vendor expected.
I would agree that most applications, barring GIS/CAD, Graphic Art/Video, and poorly designed APPs, need only 100Mbps.
One argument that I have heard from others, is that GigE is more efficient overall, because the packet, no matter what size, is on the “wire” for a shorter amount of time. I personally think this is a good argument for uplinks back to the Core and for all verticals, but for the desktop, I think that is only applicable to real-time apps that would saturate a 10/100 Mbps connection.
I am curious to what others have to say. Sean
We have run 1000Mbps to the desktop via ERS 4548GT-PWR switches for three years now and have a couple more closets to go before our ~2300 node network is completely at 1000Mbps.
We wring every last dollar out of our hardware and I can see the ERS4548GT’s being used for at least another 6-8 years.
Now if I could just make the Avaya Tech Support web-site run faster so downloads of software updates don’t take so darn long. What a dog…
In our case we have a 1000 node 100Mbps edge Network where half the users are Citrix terminals and the other half are a mixture of desktops and laptops, but predominantly use Citrix emulation. Also VoIP is used by all the users. The utilisation on our edge switch Gigabit uplinks is typically 1/2%.
I can’t see us ever using Gigabit at the edge unless the price is comparable with 100Mbps.
“because the packet, no matter what size, is on the “wire” for a shorter amount of time.”
Technically, no. The propagation delay, the time it takes for the electrical pulses to travel from one end of the wire to the other, is the same regardless of the bandwidth. It is a function of physics, roughly a fixed fraction of the speed of light in copper.
Their comment, however, is true when you consider the “wire” time to include the serialization delay. The rate at which data is placed on the wire (serialization) does depend on the bandwidth. Effectively, the combined serialization and propagation delay decreases as the bandwidth increases; the gains, however, come entirely from the decreased serialization.
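The distinction is easy to see with a quick sketch; the cable length and propagation speed below are textbook assumptions, not measurements:

```python
# Propagation delay is fixed by physics; serialization delay scales with
# link speed. Assumed values: a 100 m run, signal travelling at roughly
# 2/3 the speed of light in copper, one 1500-byte frame.
PROPAGATION_M_PER_S = 2e8   # ~0.67c, typical for twisted pair
CABLE_METRES = 100          # the structured-cabling horizontal limit
FRAME_BITS = 1500 * 8

prop_us = CABLE_METRES / PROPAGATION_M_PER_S * 1e6
for name, bps in [("100Mbps", 100e6), ("1Gbps", 1e9)]:
    ser_us = FRAME_BITS / bps * 1e6
    print(f"{name}: {prop_us:.1f} us propagation + {ser_us:.0f} us serialization")
```

Propagation stays at half a microsecond either way; serialization drops from 120 us to 12 us, which is where the entire gain comes from.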
So when you guys migrate to 1Gb/s to the clients what do you do with your uplink(s) to the backbone?
We currently have the standard 2 x 1Gb/s MLT uplinks from the backbone to the edge for 100Mb/s client connectivity but we’re wondering if we’d also need to increase the uplinks when moving to 1Gb/s to the clients? 2 x 10Gb/s in the foreseeable future maybe when the equipment becomes cheaper (XFP, SFP+, …)?
Our ESX farm would have to be migrated to 2 x 10Gb/s uplinks first, of course ;-).
In our case our edge user stacks currently have 4x Gigabit uplinks using 2x MLTs from the two backbone ‘routeswitches’ and in our environment we will never exceed the bandwidth of the uplinks, but if we ever did, we can just add more Gigabit uplinks to the MLT’s.
Depending on your environment, check your current uplink utilisation to see whether you need to upgrade to 10 Gigabit or can just add more Gigabit links to your MLT uplinks. Even though your users are only at 100Mb/s, you should still be able to plan what the expected utilisation will be when migrating to 1Gb/s at the edge.
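One way to do that planning is a worst-case oversubscription calculation; a minimal sketch, assuming a 48-port edge stack and the 2 x 1Gb/s MLT uplinks mentioned above:

```python
# Worst-case oversubscription of the edge uplinks before and after the
# client migration. The 48 edge ports and 2 x 1Gb/s uplinks are
# assumptions for illustration, not anyone's actual deployment.
EDGE_PORTS = 48
UPLINK_BPS = 2 * 1e9  # 2 x 1Gb/s MLT

for name, client_bps in [("100Mb/s clients", 100e6), ("1Gb/s clients", 1e9)]:
    ratio = EDGE_PORTS * client_bps / UPLINK_BPS
    print(f"{name}: {ratio:.1f}:1 oversubscription")
```

In practice the measured utilisation matters far more than the theoretical ratio, but the calculation shows how quickly the edge migration changes the uplink picture.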
I would like to add two remarks and post my personal opinion:
1) The hidden cost: power consumption. A Gigabit port is roughly 60% more power hungry than a 100Mbps port, even when idle.
At both sides of the line…
(actually the Avaya Power Saver sets Gbps ports to 100Mbps when activated)
More power consumption results in higher cooling costs.
2) It’s easy to be fooled when using MRTG/Cacti/etc. to monitor link usage. They poll every 5 minutes, which flattens out the peaks. One download taking 10 s at 1Gbps vs. 1 min at 100Mbps will show up exactly the same, though users will definitely notice the speed difference. Then we are talking radiologists, HD video, CAD, DTP, scanning/archiving, …
A normal desktop, phone, thin client, or AP will hardly feel the 1Gbps advantage.
Currently I would make the following choice (for bigger networks):
new switches = full Gbps and PoE.
Depending on needs: set ports to a 100Mbps autonegotiation maximum.
The initial CapEx is higher, but it results in a uniform network that can provide all required connectivity where needed.
OpEx will definitely be lower in the long term because of simplicity and flexibility.
Try calculating the cost per day over the lifetime, versus the cost of swapping to 1Gbps switches in the near future, with some probability factor.
Michael McNamara says
Thanks for all the responses…
As mentioned, I think the circumstances are different for almost everyone, hence the broad array of responses. In my organization we’ve been 100Mbps switched to the desktop since about 2007, and we’re pushing a lot of rather ugly and inefficient client/server applications where application performance (and acceptability) is directly tied to link speed. While people frown on the old line, “just throw bandwidth at it”, that solution still applies today, since the people selecting the applications and signing the contracts aren’t really interested in the challenges IT will face trying to implement their monstrosity. So the solution, more often than not, is to upgrade the edge switch to 1000Mbps because “it’s a network problem”.
@Jon, I wasn’t that comfortable myself reading that statement… thanks for pointing out the flaw.
Just one follow-up to @Thomas’ comment regarding link utilization and graphing (MRTG/Cacti). You do need to recognize that there is a big difference between a 1 minute polling interval and a 5 minute polling interval. You might have an application that fills the 100Mbps pipe for 10 seconds, but on a 5 minute average graph it might not even show up as more than a little blip. So in essence you think that the link is only 2% utilized, when in fact it might be 100% utilized dozens of times during that window; the average over the 5 minute period still only accounts for 2%. I have a number of interfaces that we monitor on both 1 minute and 5 minute intervals and the difference can be dramatic.
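The flattening effect is easy to demonstrate with made-up traffic: a link that is completely saturated for a short burst still averages out to almost nothing over a 5-minute polling window.

```python
# Toy illustration (made-up traffic): a 100Mbps link saturated for
# 10 seconds and otherwise idle, averaged over a 5-minute SNMP poll.
LINK_BPS = 100e6
BURST_SECONDS = 10
POLL_SECONDS = 300  # a typical MRTG/Cacti polling interval

bits_sent = BURST_SECONDS * LINK_BPS
avg_utilization = bits_sent / (POLL_SECONDS * LINK_BPS) * 100
print(f"5-minute average: {avg_utilization:.1f}% utilized")  # ~3.3%
```

A graph showing 3% utilization can therefore hide a link that users experience as saturated.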
@Jon Thank you for the explanation!
When we shifted to a new building we had the same conversation. Finally we settled on 100Mbps. The utilisation is very low, and we saved a lot of money by not opting for Gigabit.
One year has passed…
Do you still have the same opinion?
We are facing a similar discussion for a medical imaging department… Though the application is quite inefficient, it is a fact that 1Gbps access with a recent workstation will reduce the transfer time of medical images quite a bit…
But should we push the apps guys to optimize their applications, or just give them what they want (bandwidth)?
Michael McNamara says
We’ve been all digital for a few years now from an imaging perspective. It can be hard pleasing those radiologists! ;)