Tag Archives: leostream

VDI: Operational and stable

Everything on the VDI project works as expected and we are in stable production at the moment.

There have been two problems in the past two months:

  • Users complaining of slow screen refresh
    • This is inevitable because of the distance: we see around 160ms of latency, so some slowdown is to be expected. It is nothing critical, however, and the advantages outweigh this minor issue. In the future we may try to optimize graphics performance so the sessions use even less bandwidth (see the sketch after this list).
  • Sudden packet loss on the connection
    • We route this connection over a trunk port on a Cisco switch to the external datacenter we have just moved to, and we suspect the problem originates somewhere along this path.
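
On the bandwidth point, here is a minimal sketch (Python, purely illustrative) of the kind of display options that can be tightened in the .rdp files the connection broker hands out. The option names are just the standard Microsoft RDP client settings, and the file name at the bottom is hypothetical; nothing here is specific to our Leostream setup.

    # Illustrative sketch: override display options in an .rdp file with
    # low-bandwidth values. Option names are standard RDP client settings;
    # the file name used at the bottom is hypothetical.
    LOW_BANDWIDTH_SETTINGS = [
        "session bpp:i:16",              # 16-bit colour instead of 24/32-bit
        "compression:i:1",               # enable protocol compression
        "bitmapcachepersistenable:i:1",  # reuse cached bitmaps between sessions
        "disable wallpaper:i:1",
        "disable full window drag:i:1",
        "disable menu anims:i:1",
        "disable themes:i:1",
    ]

    def tune_rdp_file(path: str) -> None:
        """Rewrite an .rdp file so the settings above win over existing ones."""
        with open(path, encoding="utf-8") as f:
            lines = [line.rstrip("\n") for line in f]
        # Drop any existing lines for the same settings, then append ours.
        prefixes = tuple(s.rsplit(":", 1)[0] + ":" for s in LOW_BANDWIDTH_SETTINGS)
        kept = [line for line in lines if not line.startswith(prefixes)]
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(kept + LOW_BANDWIDTH_SETTINGS) + "\n")

    # tune_rdp_file("mumbai-desktop.rdp")  # hypothetical file name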

System specs

These are the specifications that we have determined to be right for us:

  • 1x HP ProLiant DL585R01 O2.4-1MB Model 4-8GB (4x Opteron 880 Dual Core CPUs / 8GB PC3200 memory)
  • 6x 4 GB memory module (PC3200/ECC/DDR/SDRAM/2 x 2 GB Modules)
  • 2x 36GB disk option (15K/Ultra320/hot-pluggable/1-inch high/universal)
  • 2x FC2143 4Gb PCI-X 2.0 Fibre Channel Host Bus Adapter
  • 2x Gigabit PCI-X NC7170 Dualport 1000T Server adapter
  • 1x Hot-pluggable Redundant Power Supply Unit (PSU) for ProLiant DL585R01

Additional information:

Memory
32GB of PC3200 internal memory is the maximum for this chassis. We chose memory speed over total memory size, because we ended up needing four cluster nodes anyway for the number of virtual machines we want to run (80 – 100), which gives us 128GB (4 x 32GB) of PC3200 memory across the cluster.

Disks
Whether you use local disks for your ESX / VI installation or boot from SAN is a matter of preference. We like to have the OS disks in the chassis.

SAN connection
Because a component failure here has a high impact, we use two single-port Fibre Channel HBAs, each connected to its own SAN switch, which in turn connect to the dual controllers in the disk arrays. That way the failure of any single component in the chain does not affect production availability.

Network
The same goes for the network interface cards (NICs). A quad-port Gb NIC gives a higher port density, but a failure of that one card also means losing all four ports at once, which is why we use two dual-port cards instead.

Chassis

The decision on the chassis specs is hardly rocket science. HP offers a factory-built configuration with four Opteron 880s (2.4GHz dual cores), and the premium for the 2.6GHz model was a bit too much for my liking. We also liked this model because it saves us a lot of assembly work on the server chassis.

The new models:

Have a look at my post for more explanation. The specs of the DL585 G2s are as follows:

  • HP ProLiant DL585R02 O/2,4-2P 2 GB (2x Opteron DC 8216 CPUs / 1 MB cache / SA P400i - 512 MB - BBWC)
  • 2x AMD Opteron 8216 2.4 GHz / 1 MB Dual Core (95 Watt) processor option
  • 32 GB Dual Rank memory (PC2-5300 / ECC / DDR / SDRAM / 2 x 2 GB modules)
  • 2x 36.4 GB SFF SAS hard disk (10K rpm, hot-pluggable)
  • 3x HP NC360T PCI Express Dual Port Gigabit Server Adapter
  • Hot Plug Redundant PSU for ProLiant DL580R03/R04, ML570 G3/G4 and DL585 G2
  • 2x Fibre Channel PCI-X 2.0 4Gb single-port HBA

VDI project is on the move

Since the last update on February 8th a lot of progress has been made, and we now have an almost fully functional VDI environment for our offshore development work in Mumbai, India. Other parts of the organization have already expressed interest in implementing this for their outsourced or offshore/nearshore projects (Spain and two major Dutch clients).

The VDI cluster of two HP DL585s is up and running with ESX 3 and 125 VDI seats. The VMs themselves run on a dedicated 2TB allocation on the EMC Clariion SAN. After assigning key users in both Holland and India, the templates for the various expertise areas (Java, Microsoft, Oracle, Software Testing) have been created and approved by the process managers in Holland and India.

The modelling of the templates varies: some expertise areas, like Software Testing, have created one template for all users, while others (MS, Java) create templates on a per-project basis. For this last group a template lifecycle management process was created. Basically, we deploy a standard expertise template in a "staging" area, the project leader brings the template up to the project standard, the template is stored in the library, and the project members are tagged to this template. At the end of the project the template is "degraded" to the template archive. A rough sketch of this lifecycle follows below.
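
For illustration only, here is a minimal Python sketch of that lifecycle. The state and method names are mine; they do not correspond to any real VirtualCenter or Leostream API.

    # Sketch of the per-project template lifecycle described above.
    # States and method names are illustrative, not an existing API.
    from dataclasses import dataclass, field

    STAGING, LIBRARY, ARCHIVE = "staging", "library", "archive"

    @dataclass
    class ProjectTemplate:
        name: str                         # e.g. "java-project-x" (hypothetical)
        expertise: str                    # base expertise template it was cloned from
        state: str = STAGING              # deployed into the staging area first
        members: set = field(default_factory=set)

        def approve(self) -> None:
            """Project leader has brought the template up to the project standard."""
            assert self.state == STAGING
            self.state = LIBRARY          # stored in the template library

        def tag_member(self, user: str) -> None:
            """Tag a project member to this template."""
            assert self.state == LIBRARY
            self.members.add(user)

        def degrade(self) -> None:
            """At the end of the project the template goes to the template archive."""
            self.state = ARCHIVE
            self.members.clear()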

On the Leostream side all went well: 75 seats of the Connection Broker were ordered directly via the Leostream site, and the correct license key was mailed the next working day. Configuration was pretty straightforward, with only one issue, which related to putting the virtual desktop into a suspended state after a user logs off. This was solved in an update of the software, a process that is easy to apply; being able to back up the configuration to a single file is also a nice feature.

In Leostream, Active Directory group membership (one group in AD for each expertise) is matched against a Leostream policy, with only one match possible. Within Leostream, groups of VDs receive a "tag" which can be linked to a policy, and a policy can hold multiple tags. With this we can assign users to multiple types of VDs (for instance when someone works on multiple projects or cross-competence) simply by adding a tag to the policy, without extra fiddling in the Active Directory. After a user logs in at the VDI site, they can choose a VD from a list controlled by their policy/tag settings, after which they receive an .rdp file that they can open with Remote Desktop on their own desktop. We decided, however, to use the Leostream Connect tool, which has to be installed on the user's physical desktop. With it the user simply starts the tool and fills in his credentials; Leostream automatically picks a VD from the user's available group of VDs, creates the correct .rdp file, starts the VD, performs the user logon via Remote Desktop and suspends the VD after logoff. A rough sketch of this mapping follows below.
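
To make the mapping concrete, here is a rough Python sketch of the resolution logic as described above: one AD group matches one policy, the policy holds tags, and the tags select the group of VDs the user may receive. All names and data are made up, and the .rdp fields written at the end are only the generic Remote Desktop ones; this is not Leostream's actual API.

    # Sketch: AD group -> policy -> tags -> pool of virtual desktops.
    # All names and data are illustrative; this is not Leostream's real API.
    POLICIES = {                      # one AD group matches exactly one policy
        "VDI-Java": {"tags": {"java-project-a", "java-project-b"}},
        "VDI-Testing": {"tags": {"testing"}},
    }
    DESKTOPS = [                      # each group of VDs carries a tag
        {"name": "vd-java-01", "tag": "java-project-a", "in_use": False},
        {"name": "vd-test-07", "tag": "testing", "in_use": False},
    ]

    def pick_desktop(ad_group: str):
        """Return a free VD whose tag is linked to the user's policy, or None."""
        tags = POLICIES[ad_group]["tags"]
        for vd in DESKTOPS:
            if vd["tag"] in tags and not vd["in_use"]:
                vd["in_use"] = True
                return vd
        return None

    def write_rdp(vd: dict, user: str, path: str) -> None:
        """Emit a minimal .rdp file pointing Remote Desktop at the chosen VD."""
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"full address:s:{vd['name']}\n")
            f.write(f"username:s:{user}\n")

    # vd = pick_desktop("VDI-Java")             # hypothetical group name
    # if vd:
    #     write_rdp(vd, "jdoe", "desktop.rdp")  # hypothetical user and file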

So basically all configuration of the cluster, Leostream and the templates is finished; we are now focusing on completing the documentation and test plan and, most of all, waiting for the 20Mb NL-Mumbai connection to be delivered. We had some unexpected delays because the development center was relocated, without our knowledge, to a different building on the same compound. That specific building is off-net (no fibre coming into the building), so an extra fibre extension was required.

So, we still love VMware and VDI 😉

VDI progress

In my last post in January I wrote about the VMware VDI project that got the OK to go ahead as planned. Progress has been slow, mostly because of the lead time on the long-distance broadband connection to our development center in India.

The two servers that we will begin with (more of the same HP DL585s) arrive next week and will take the place of the G1 585s that we started our virtualization project with. The G1 machines are the ones we will begin our VDI implementation on.

We are currently running a trial with the Leostream Connection Broker, and I hope I can write more about the results of that trial next week. The project currently has one engineer working full time on the Leostream trial and on preparations for producing the VDI templates. Ideally we want to use Microsoft Active Directory groups to distinguish between the different VDI users, so that assigning a virtual desktop to a user is automatic and the number of templates can be kept small. At the moment it looks like we will end up with about five different templates for the different employee types (Java developer, Java architect, Microsoft .Net developer, Microsoft .Net architect and general project lead); a rough sketch of this mapping follows below.
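
As a rough illustration of that idea, here is a minimal Python sketch of mapping AD group membership to one of the five templates. Group and template names are placeholders, not our final naming.

    # Sketch: map an employee's AD group membership to a VDI template.
    # Group and template names are placeholders.
    GROUP_TO_TEMPLATE = {
        "VDI-Java-Developer": "tpl-java-dev",
        "VDI-Java-Architect": "tpl-java-arch",
        "VDI-DotNet-Developer": "tpl-dotnet-dev",
        "VDI-DotNet-Architect": "tpl-dotnet-arch",
        "VDI-Project-Lead": "tpl-project-lead",
    }

    def template_for(user_groups):
        """Return the template for the first matching AD group, or None."""
        for group in user_groups:
            if group in GROUP_TO_TEMPLATE:
                return GROUP_TO_TEMPLATE[group]
        return None  # user has no VDI entitlement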

Through the virtual desktops these developers will get access to our internal development environments. The advantage is that we only have one development environment to manage (thus avoiding potentially costly errors caused by version differences between development and testing systems) and that we can centrally manage the third party's access into our network infrastructure.

The disadvantage is mostly cost related, as the necessary bandwidth is not cheap, especially on the India side. Another disadvantage is the lack of widely published knowledge about these types of implementations.

A new addition to this blog

Just a quick note to let all our readers know that as of this week this blog will no longer be limited to our experiences migrating to VMware Virtual Infrastructure: we have started a project to implement VMware Virtual Desktop Infrastructure (VDI) for our outsourced development team in India.

As it has been very hard to find solid business-case information for a proper-scale VDI implementation (which was the reason for starting this blog in the first place), we will be pioneering some. I have had some nice and informative replies on a topic on the VMware forums, but also found that others looking into VDI run into the same problem we do.

Hopefully the VDI project will be a worthy addition to this blog.