Tag Archives: vdi

How easy is Wyse PocketCloud?

Very easy! And keep in mind it is still going for the reduced price because of the launch of the iPad version.

After you download the app you can set up a connection to, for example, your home PC in a couple of easy steps:

Step 1: The app is installed on the iPhone/iPad

Step 2: The connection to the desktop is configured. In this example I am connecting to a Windows 7 Enterprise desktop. Make sure you allow incoming Remote Desktop connections (under System in the Control Panel) and double-check that Remote Desktop access is allowed through the (Windows) firewall.

Step 3: Connect and browse the system. I used Firefox to browse to the Wyse.com website.

Some screenshots to show the steps:

It is truly amazing how easy this app is to use. I am looking forward to configuring it in combination with a VMware View cloud.

About Guest Customization and OU’s

Found these two posts while searching for easy ways to get a virtual desktop into the right OU:

http://www.jasemccarty.com/blog/2008/10/organizational-units-and-virtual.html

http://communities.vmware.com/message/1123380

If you need to generate lots of virtual machines, for example with a non-persistent pool in VMware View, it is very convenient to integrate this neatly into the standard VirtualCenter guest customization.
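If I read those posts correctly, the trick comes down to the MachineObjectOU key in the [Identification] section of the Sysprep answer file, which places the computer account in a specific OU when the machine joins the domain. A minimal sketch (the domain, account and OU names below are just examples):

```ini
; Fragment of a sysprep.inf as used by guest customization.
; MachineObjectOU puts the computer account in a specific OU at
; domain join time -- the distinguished name here is an example.
[Identification]
JoinDomain = EXAMPLE.LOCAL
DomainAdmin = EXAMPLE\joinaccount
DomainAdminPassword = ********
MachineObjectOU = "OU=VDI Desktops,OU=Workstations,DC=example,DC=local"
```

Note that the account used for the join needs rights to create computer objects in that OU.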

Preparing a VDI desktop: custom default user profile

Ever notice that some settings (like performance and visual effects) do not carry over to other users logging into a virtual desktop? That’s because some of these settings are stored in the user profile.

This is a step-by-step guide on creating a custom default user profile, so you can configure everything once and have every user logging into the machine pick up these settings automatically from the default profile: http://support.microsoft.com/kb/319974

Deployment follow up

I would have liked to have a lot to write about today, but the reality was that a lot of configuration still needed to be done on the part of the ICT outsourcing partner. The connection server installation was the proverbial next-next-finish job, and adding the license and connecting to the VC server went without a hitch. Composer can be installed after the VC and VI are upgraded to Update 3, but that has to be planned by the outsourcing partner.

Now it is a matter of waiting for access to the right VLAN so I can start working on the templates and desktop configurations in View. The time was not entirely wasted, because I could check things like the state of the multipathing, the network configuration etc. and perform a mini-audit for the client, which resulted in a couple of good HA and configuration recommendations.

I like the new pool features in View, as they give some extra flexibility in the way desktops are provisioned. For this installation, though, we will mostly be working with the automatic non-persistent desktops that already existed in VDM2. They will be configured to be destroyed after first use, so the student always gets a nice clean training desktop.

VMware View customer case

Tomorrow I’ll start with the actual installation of VMware View 3 at a client. The case for this project is that the client needs to be able to quickly provision a sizable number (30-50 concurrent) of workspaces for training its workforce in two new applications. There was also a need to provide a solution for the workstations of the functional and technical application managers and the software testers, because one of the new applications (Microsoft Dynamics CRM) integrates into Outlook on the client side, which makes it difficult to switch between the production and testing environments.

After we implement the VMware View infrastructure we will also start working on virtualizing a client/server application that is used for personnel assessments (the ability to destroy a VM after every assessment is important with this one), and there is also a desire to virtualize the desktops of the office-bound employees (about 25% of the workforce). Hopefully the ability to take a virtual desktop offline will be production-certified soon, as the other 75% of the workforce use laptops.

The environment in question consists of Dell PowerEdge 2950 servers that already run VI3. Because these servers have enough spare capacity, the decision was made to host the virtual desktops on them as well. The virtual desktops will run a basic XP SP3 image with Word, Excel, PowerPoint, Dynamics CRM and a .NET 2.0 application. We will start with 384 MB and expect to have to size up to 512 MB per desktop. The images will be based on an XP image made with nLite (http://www.nliteos.com/).
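As a back-of-the-envelope check of those numbers (a sketch using the figures from this post; the 25% overhead allowance is my own rough assumption, not a measured value):

```python
# Rough memory sizing for the training-desktop pool described above.
# Figures from the post: 30-50 concurrent desktops, 384 MB per desktop
# now, possibly 512 MB later. The overhead factor is an assumption.

def pool_memory_gb(desktops: int, mb_per_desktop: int, overhead: float = 0.25) -> float:
    """Total host memory (GB) needed for the pool, including overhead."""
    return desktops * mb_per_desktop * (1 + overhead) / 1024

# Worst case: 50 concurrent desktops.
print(round(pool_memory_gb(50, 512), 1))   # 31.2 GB at 512 MB per desktop
print(round(pool_memory_gb(50, 384), 1))   # 23.4 GB at 384 MB per desktop
```

Either way the pool fits comfortably in the spare capacity of a few PowerEdge 2950 hosts.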

Microsoft to buy Kidaro

Microsoft’s PR firm mailed out the following press release today:

“Microsoft Corp. today announced its intended acquisition of Kidaro, a leading provider of desktop virtualization solutions for enterprises. In combining Kidaro’s virtualization technology with its suite of desktop management tools, known as the Microsoft Desktop Optimization Pack for Software Assurance, Microsoft will enable IT professionals to optimize their desktop infrastructure by providing management capabilities for Virtual PCs, streamlining deployments and easing application compatibility issues.”

It was a great coincidence that I had been looking at the Kidaro website and product just this morning. You can find the full release here: http://www.microsoft.com/Presspass/press/2008/mar08/03-12ExpandVirtualizationPR.mspx

I have no practical experience with Kidaro, but from the website it looks comparable to VMware’s ACE product. I also noticed that you can base your deployment on VMware Workstation images as well as Virtual PC images. Seeing as Kidaro will now become a Microsoft product, I suspect support for the VMware images will be dropped soon.

VDI: Operational and stable

Everything on the VDI project works as expected and we are in stable production at the moment.

There have been two problems in the past two months:

  • Users complaining of slow screen refresh
    • This is inevitable given the distance: at 160 ms latency some slowdowns are to be expected. It is nothing critical, however, and the advantages outweigh this minor issue. Maybe in the future we will try to optimize graphics performance so it takes up even less bandwidth.
  • Sudden packet loss on the connection
    • We route this connection over a trunk port on a Cisco switch to the external datacenter we have just moved to, and we suspect the problem lies somewhere in that path.

System specs

These are the specifications that we have determined to be right for us:

  • 1x HP ProLiant DL585R01 O2.4-1MB Model 4-8GB (4x Opteron 880 dual-core CPUs / 8 GB PC3200 memory)
  • 6x 4 GB memory option (PC3200 / ECC / DDR SDRAM / 2x 2 GB modules)
  • 2x 36 GB disk option (15K / Ultra320 / hot-pluggable / 1-inch high / universal)
  • 2x FC2143 4 Gb PCI-X 2.0 Fibre Channel host bus adapter
  • 2x NC7170 dual-port PCI-X Gigabit (1000T) server adapter
  • 1x hot-pluggable redundant power supply unit (PSU) for ProLiant DL585R01

Additional information:

Memory
32 GB of PC2-3200 internal memory is the maximum for this chassis. We chose memory speed over total memory size because we ended up needing four cluster nodes anyway for the number of virtual machines we want to run (80 – 100), which means we end up with 128 GB of PC2-3200 memory across the cluster.
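The reasoning, in numbers (a quick sketch using only the figures from this post):

```python
# Cluster memory math from the post: the chassis maxes out at 32 GB
# with PC2-3200 modules, and four nodes are needed for 80-100 VMs anyway.
nodes = 4
gb_per_node = 32          # chassis maximum with PC2-3200 memory
vms_low, vms_high = 80, 100

total_gb = nodes * gb_per_node
print(total_gb)                         # 128 GB across the cluster
print(total_gb * 1024 // vms_high)      # ~1310 MB per VM even at 100 VMs
```

So the faster memory costs nothing in practice: the VM count forces four nodes, and four nodes at the 32 GB chassis maximum leave ample headroom per VM.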

Disks
Whether you use local disks for your ESX / VI installation or boot from SAN is a matter of personal preference. We like to have the OS disks in the chassis.

SAN connection
Because a component failure has a higher impact here, we use two single-port Fibre Channel HBAs connected to two SAN switches, which in turn link to the dual controllers in the disk arrays. This way, failure of any single component in the chain does not affect production availability.

Network
The same goes for the network interface cards (NICs). A four-port Gb NIC makes for higher density, but a failure of the card also means failure of all four ports.

Chassis

The decision on the chassis specs is hardly rocket science. HP has a factory-built configuration with four Opteron 880s (2.4 GHz dual cores), and the premium for the 2.6 GHz model was a bit too much for my liking. We also liked this model because it spared us a lot of assembly work on the server chassis.

The new models:

Have a look at my post for more explanation. The specs of the DL585 G2’s are as follows:

  • HP ProLiant DL585R02 O/2.4-2P 2 GB (2x Opteron DC 8216 CPUs / 1 MB cache / SA P400i / 512 MB / BBWC)
  • 2x AMD Opteron 8216 2.4 GHz / 1 MB dual-core (95 W) processor option
  • 32 GB dual-rank memory (PC2-5300 / ECC / DDR SDRAM / 2x 2 GB)
  • 2x 36.4 GB SAS hard disk (10K, SFF, hot-pluggable)
  • 3x HP NC360T PCI Express dual-port Gigabit server adapter
  • Hot-plug redundant PSU for ProLiant DL580R03/R04, ML570 G3/G4 and DL585 G2
  • 2x Fibre Channel PCI-X 2.0 4 Gb single-port HBA

VDI: First connection

Today we made the first connection through the Leostream Connection Broker from a workstation in Mumbai to a Virtual Desktop in The Netherlands. The desktop was correctly assigned and the user could log into it.

Other connections still fail, however, so more checking will be necessary, but seeing it work for the first time was great.

VDI: Mumbai is connected

Another VDI update already, because the 20 Mbit connection was delivered yesterday.

A point-to-point check of the connection revealed no problems, and the latency was 158 ms, which, considering that the cable is around 9000 kilometres (5600 miles) long, is pretty stunning performance.
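A quick sanity check on that number (a sketch; the refractive index is a typical textbook value for optical fibre, not something we measured on this link):

```python
# Theoretical minimum round-trip time over ~9000 km of fibre.
# Light in fibre travels at roughly c / n, with n ~ 1.47 (assumption).
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47        # typical for optical fibre

distance_km = 9000
speed_km_s = C_VACUUM_KM_S / REFRACTIVE_INDEX
min_rtt_ms = 2 * distance_km / speed_km_s * 1000

print(round(min_rtt_ms))       # ~88 ms theoretical floor
```

So the physical floor for this distance is somewhere around 88 ms; measuring 158 ms including all the routing and switching in between really is not bad at all.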

Next, the connection will be hooked up to the engineers’ workstations, and then testing can start.