Orbits of global information

When it comes to moving mission-critical systems to a data center for colocation, a major concern for IT and users alike is ensuring application responsiveness and reducing latency.  While this concern is valid, it shouldn't be a showstopper.  A good test case is Virtual Desktop Infrastructure (VDI) hosted in a third-party data center.

What is VDI?

Virtual Desktop Infrastructure is becoming increasingly popular for many reasons: it allows for centralized administration and storage, easier backups, and better fault tolerance for desktop users.

Either via thin clients or converted desktops, users access their desktops on a shared server.  The user's mouse and keyboard inputs stream to the server, and the desktop video is sent back.  This technology is a boon for systems engineers, who otherwise spend too much time chasing down remote PCs, especially when geographic separation is involved.  There's always a tradeoff with technology though, isn't there?

Managing VDI via Colocation

With VDI, you are now dependent on a handful of servers to provide desktops for your distributed workforce.  One proactive step you can take is to colocate those servers in a Tier 4 managed data center.

Not only will you have reliable A/B power feeds with redundant power sources, but you will also generally have connectivity options from a whole host of providers. Your equipment stays up and your connectivity is always available, which is what you should demand from your facility.

Bandwidth For VDI

Another factor that plays a large role here is connectivity.  Not just from a bandwidth perspective (how large the pipe should be), but also latency (how long it takes traffic to travel from you to the server and back).

The Citrix blog has a decent chart that helps you estimate per-user traffic in terms of bandwidth.

Using those numbers, we can say the average user demands around 100 Kbps, or 150 Kbps to be safe.  Let's say we have 100 users. How large a bandwidth pipe do you require?

150 Kbps × 100 users = 15 Mbps

Realistically, we know we'll move more than just VDI traffic across this pipe.  The systems admin will still need to access resources and transfer files to and from the environment.  That means we should size the pipe as large as the budget allows.

IT always wants more bandwidth, while accounting wants a lower price. I would shoot for 100 Mbps on a point-to-point circuit, which gives you quite a bit of headroom on the connection.  To protect the user experience, your WAN router should apply QoS to prioritize VDI and VoIP traffic over standard file transfers.
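The sizing arithmetic above can be sketched in a few lines. The per-user figure, user count, and 100 Mbps circuit size all come from the discussion; only the variable names are mine.

```python
# Rough VDI bandwidth sizing sketch using the article's numbers.
PER_USER_KBPS = 150   # conservative per-user VDI estimate
USERS = 100

vdi_mbps = PER_USER_KBPS * USERS / 1000   # 15 Mbps of VDI traffic
circuit_mbps = 100                        # suggested point-to-point circuit size
headroom_mbps = circuit_mbps - vdi_mbps   # left over for file transfers, admin access

print(f"VDI load: {vdi_mbps:.0f} Mbps; "
      f"headroom on a {circuit_mbps} Mbps circuit: {headroom_mbps:.0f} Mbps")
```

On a 100 Mbps circuit that leaves roughly 85 Mbps of headroom for everything else crossing the pipe.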

Bandwidth vs Latency

The good thing about bandwidth is that you can always order more; carriers will generally sell you as much as you can afford. Latency, on the other hand, isn't something data centers can adjust as easily.

If you order a point-to-point circuit, you will generally get the lowest latency.  For example, from Houston to College Station you can expect round-trip latency to average 2-3 milliseconds.  From Houston to Connecticut, you can expect about 28 milliseconds.

Why is latency on a point-to-point circuit fairly fixed? SCIENCE.  It really comes down to the fact that light travels through fiber-optic glass at roughly two-thirds of its speed in free space.  The signal will also need to be regenerated depending on distance, and will often pass through several pieces of active equipment, each adding varying delays to the packets.  This is just a given.
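A quick back-of-the-envelope calculation shows why those numbers are what they are. Light in fiber covers roughly 200 km per millisecond (two-thirds of its free-space speed); the route distances below are my own rough assumptions, and real fiber paths run longer than the straight line, with active equipment adding a bit more delay.

```python
# Propagation delay only: no regeneration or equipment delay included.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 the speed of light in free space

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation time over a fiber route of the given length."""
    return 2 * route_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(150))   # Houston-College Station, assumed ~150 km route -> 1.5 ms
print(round_trip_ms(2600))  # Houston-Connecticut, assumed ~2600 km route -> 26.0 ms
```

The propagation-only figures land just under the observed 2-3 ms and 28 ms averages, with equipment delay accounting for the rest.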

Failover Options

An alternative, and often a failover option, is to route traffic through tunnels over the Internet.  You can expect to add 10-20 milliseconds of latency to the same traffic.  Round trip from Houston to New York is approximately 45-50 milliseconds.  The Internet is considered a "best effort" medium: it will do its best to get the packets there, but there are no guarantees on speed, path, or reliability.  This is often a great option for smaller branch offices or as a backup to a point-to-point circuit.

How does latency affect traffic?

Latency slows traffic down; how much it slows it down is what matters.  With a point-to-point circuit from CT to TX, you are looking at approximately 30 milliseconds to click your mouse on your thin client, have that register on the VDI server, and receive the response video stream back at your client.  Anything at 250 milliseconds (a quarter second) or more is perceived as slow by users.  Our client sees only 30 milliseconds of latency, which is about 1/33rd of a second.
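The comparison above is simple enough to check in two lines; both figures (the 30 ms round trip and the 250 ms "feels slow" threshold) come straight from the discussion.

```python
# Measured CT-TX round trip vs. the threshold users perceive as slow.
rtt_ms = 30
slow_threshold_ms = 250

print(f"{rtt_ms} ms is 1/{1000 // rtt_ms} of a second")
print("feels slow" if rtt_ms >= slow_threshold_ms else "well within budget")
```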

But don't take my word for it. You can simulate the WAN delay yourself on any PC that has two NIC cards and a CD-ROM drive, using WANBridge.

WANBridge Simulation and Latency

WANBridge is based on Knoppix, so it boots cleanly from a CD and then bridges all attached NIC cards.  It lets you easily adjust the bandwidth, latency, and packet loss of traffic traversing the device.
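If you'd rather not burn a CD, any Linux box can impose similar impairments with the built-in netem queuing discipline via `tc`. This is a sketch, not WANBridge itself; the interface name and the delay, rate, and loss values below are illustrative examples, and the commands require root.

```shell
# Emulate a WAN on an interface: 30 ms added delay, 100 Mbps cap, 0.1% loss.
tc qdisc add dev eth0 root netem delay 30ms rate 100mbit loss 0.1%

# Inspect the emulated impairment, then remove it when done testing.
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Run your VDI session through a box configured this way and you can feel what 30 ms (or 250 ms) actually does to the user experience before committing to a circuit.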

Bandwidth and latency are certainly factors to consider when planning a move to a colocation facility, but they shouldn't be a serious obstacle when working with an experienced Tier 4 data center.
