Posted by martijnl on November 10, 2010
The title of this post reflects a blog post on the Exchange Team Blog (aptly named: You Had Me At EHLO). You can find it here: http://msexchangeteam.com/archive/2010/11/09/456851.aspx. While the wording is a bit strong, I agree with the points raised in the post.
The main point of the post, in my opinion, is this:
Microsoft does not support combining Exchange high availability (DAGs) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically failover mailbox servers that are members of a DAG between clustered root servers. Microsoft recommends using Exchange DAGs to provide high availability, when deploying the Exchange Mailbox Server role. Because hypervisor HA solutions are not application aware, they cannot adapt to non-hardware failures. Combining these solutions adds complexity and cost, without adding additional high availability. On the other hand, an Exchange high availability solution does detect both hardware and application failures, and will automatically failover to another member server in the DAG, while reducing complexity.
This is an important point to address in a multi-server Exchange design running in a virtualized environment. While features like VMware HA are great for protecting against host failures, they will not (as mentioned further along in the article) protect against application failures. If the availability requirements can be met with a single datastore on a single server, then using HA and vMotion is perfectly fine in my opinion. Go beyond that, however, and you'll soon see issues arising from using two types of clustering/HA measures for the same application. For instance, you'll be using anti-affinity rules to prevent cluster nodes from being migrated to the same host. Now think about having to do that for an Exchange cluster using multiple datastores running on multiple servers. It just makes things more complex than they need to be.
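As a toy illustration of the constraint that a DRS anti-affinity rule enforces, here is a plain Python sketch (not the vSphere API; all host and VM names are made up) that checks whether a given VM placement keeps DAG members on separate hosts:

```python
# Illustrative only: models the placement constraint behind a DRS
# anti-affinity rule. Hypothetical VM and host names.

def violates_anti_affinity(placement, anti_affinity_group):
    """Return True if two or more VMs from the group share a host.

    placement: dict mapping VM name -> host name
    anti_affinity_group: set of VM names that must not co-reside
    """
    hosts_used = [placement[vm] for vm in anti_affinity_group if vm in placement]
    return len(hosts_used) != len(set(hosts_used))

placement = {"exch-dag-1": "esx01", "exch-dag-2": "esx02", "fileserver": "esx01"}
dag_nodes = {"exch-dag-1", "exch-dag-2"}

print(violates_anti_affinity(placement, dag_nodes))  # False: nodes are separated

# After an HA restart lands exch-dag-2 on esx01, the rule is violated:
placement["exch-dag-2"] = "esx01"
print(violates_anti_affinity(placement, dag_nodes))  # True
```

The sketch shows why the extra layer adds operational burden: every automatic migration or HA restart is a placement change that has to be re-checked against rules like this one.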
Read the rest of the article for more information on the subject. Thanks to @douglasabrown for posting the article to Twitter.
Posted in BlogPosts | Tagged: best practice, exchange, vmware | 1 Comment »
Posted by martijnl on September 6, 2010
I was notified through Twitter of the existence of this Optimization Guide for Windows 7 deployments under VMware View: http://www.vmware.com/files/pdf/VMware-View-OptimizationGuideWindows7-EN.pdf.
The guide specifically focuses on preparing a clean and properly configured base Windows 7 image through the use of the Microsoft Deployment Toolkit or traditional installation methods. This image can then be used as a template for View full desktop or linked clone deployments. The guide also describes a number of guest optimizations and the necessary configuration of Windows services. It does not describe end-user performance optimization, however.
An excerpt of the Guide’s introduction:
The following documentation provides a guideline on configuring a standard Windows 7 image to be used within a VMware View Infrastructure. This guide provides administrators with the information necessary to create a standard image of Windows 7 leveraging the Microsoft Deployment Toolkit or by utilizing a script-based approach to optimize a traditionally installed Windows 7 virtual machine. The recommended configuration settings optimize Windows 7 to help enhance the overall scalability and performance within a VMware View Virtual Desktop Infrastructure. The first section of the paper will discuss the overall process of optimization and the optimization aids provided. In the next section, step-by-step procedural guidance is given for both methods of optimization. Afterward, the Windows 7 Operating System Customizations section provides background information on the specific optimizations and techniques used by the optimization aids. Finally, the Managing VMware View Desktops section provides guidance and considerations for optimizing the environmental aspects on an ongoing basis.
Posted in BlogPosts | 2 Comments »
Posted by martijnl on September 3, 2010
Something I ran into with both Citrix XenDesktop 4 and VMware View 4, and have not been able to find in the standard documentation: when installing a default Windows 7 virtual workstation on vSphere, the video drivers default to the WDDM drivers. Both XenDesktop and View, however, rely on XPDM drivers being available for display acceleration through Thinwire/HDX and PCoIP respectively.
The end result is that with both systems you will get a black screen when you boot up your first virtual desktop session. After about a minute of waiting, the session drops. Because this is documented in neither manual, it can be somewhat of a letdown. The fix is easy, however, and is described here (Citrix KB) and here (Jeremy Keen's blog, in more detail). In short: you need to switch to the VMware SVGA II driver, which is based on XPDM. See the explanation in the second link for a proper walkthrough on how to change the video driver.
Posted in BlogPosts | Leave a Comment »
Posted by martijnl on August 17, 2010
Very easy! And keep in mind it is still going for the reduced price because of the launch of the iPad version.
After you download the app, you can set up a connection to, for example, your home PC in a couple of easy steps:
Step 1: The app is installed on the iPhone/iPad
Step 2: The connection to the desktop is configured. In this example I am connecting to a Windows 7 Enterprise desktop. Make sure you allow incoming Remote Desktop connections (this is in the System menu in the Control Panel) and double check that Remote Desktop access is allowed in the (Windows) firewall.
Step 3: Connect and browse the system. I used Firefox to browse to the Wyse.com website.
Some screenshots to show the steps:
It is truly amazing how easy this app is to use. I am looking forward to configuring it in combination with a VMware View cloud.
Posted in BlogPosts | Tagged: pocketcloud, vdi, wyse | Leave a Comment »
Posted by martijnl on May 20, 2010
As mentioned in an earlier post, we have clients running SAP finance applications. One of them is now going to upgrade to the latest version, and because we host the application platform for them, we will expand the infrastructure to hold the test environment and also upgrade from ESX 3.5 to vSphere. As was the case last time, we again have some anti-virtualization recommendations being made by the application consultants. As the solution has now been proven to work for almost a year without fault (with an actual HA event, no less), we fortunately have some leverage with regard to how we want to run the environment.
This time the discussion focuses on the use of RDM LUNs versus VMFS LUNs for hosting the SQL Server database files. Currently RDMs are used, mostly because earlier predictions were that the total DB size would near 1TB and we wanted to use SAN-based backup. In the meantime, though, there has been a switch to Veeam Backup & Replication for backup and DR purposes, and that solution does not work with RDM LUNs. Not a problem as such, but as we are preparing this upgrade/migration it is something to look at with regard to optimization. Also, the size of the largest DB turned out to be about half the earlier prediction.
During my research on the subject I was fortunate to have recently followed the VMware Design course, as RDM vs. VMFS is a topic in the training. It was especially interesting to hear how views on this topic differed depending on what kind of SAN people were using or selling (NetApp, EqualLogic, and HP/LeftHand were mentioned most).
I also found some excellent papers and posts on the subject.
As soon as we have our final recommendation I’ll write another post explaining what we have chosen to recommend and why.
Posted in BlogPosts | Tagged: rdm, sap, vmfs | 2 Comments »
Posted by martijnl on April 9, 2010
While reworking our demo servers, I ran into an old issue with Openfiler regarding static IP configuration. At first boot, the Openfiler VM had networking and received an IP address via DHCP. It was not possible, however, to change this to a static IP. In the Openfiler forums I found this topic from 2008, which describes the issue and the resolution: https://forums.openfiler.com/viewtopic.php?id=2575.
In short: remove the NIC from the VM and add a new one. Also note that the virtual hardware version is 4, so you might want to upgrade that as well if you are using vSphere. When you boot the VM again it will see the new NIC, and it is then possible to change the IP address.
Posted in BlogPosts | Tagged: iscsi, openfiler | Leave a Comment »
Posted by martijnl on February 23, 2010
With a press release (http://blogs.vmware.com/view/2010/02/vmware-welcomes-rto-software.html) and a Twitter post from CTO Stephen Herrod (twitter.com/herrod), VMware has announced the purchase of the assets and staff of RTO Software.
Today we are proud to announce that VMware has acquired the assets of Alpharetta, GA based RTO Software adding their technology and talented people to the VMware View team. For those of you not familiar with RTO software they are well known for their Virtual Profiles, PinPoint and Discover products which help IT organizations simplify desktop deployments while providing end-users with a rich, robust and flexible experience.
The core technology in play here is called Virtual Profiles, and it is used for so-called 'persona' management. Think AppSense and similar products.
More information about the technology and VMware's plans for it in View can be found here: http://blogs.vmware.com/view-point/2010/02/vmware-to-acquire-rto-software.html.
A snippet from that article:
We’ve been talking about provisioning users, not devices, and the importance of composition or layering in a desktop virtual machine – that a desktop VM is comprised of independent virtualized component parts that are dynamically brought together on demand into an encapsulated VM. One of those critical parts is the user persona, a user’s profile, data files and settings. Clean, efficient user persona virtualization is vital to our vision and that is precisely what RTO’s industry-leading Virtual Profiles will deliver for VMware. With persona management, end-user specific information such as user data files, settings and application access is separated from the desktop image and centrally stored, enabling increased flexible access, greater portability and seamless file management and backup.
Posted in BlogPosts | Tagged: vmware | Leave a Comment »
Posted by martijnl on February 22, 2010
Last year we got a request from a client who wanted an application environment for his SAP BusinessObjects Financial Consolidation (formerly Cartesis Magnitude) application. The new environment would be used for group consolidation for a number of companies with most of them located in EMEA. Because of the relative importance of the availability of this environment it was our goal to realize a 100% virtualized environment so we could use VMware HA, vMotion etc.
The reason for this blog post is to explain that this can be done, and that it does work and perform up to and above specification. This is in contrast to popular belief among product specialists and third-party consultants for the product itself, who were trying to convince the client that this product would not run (or perform) in a VM.
What we ended up doing was the following:
- Based on product usage characteristics and an existing design for the environment in physical servers we proposed an alternative in the form of a Virtual Infrastructure
- This VI (then based on VMware VI3.5 Enterprise) was housed in an external datacenter with the right facilities (connections, power, cooling, building design etc.) to guarantee sufficient uptime
- Used Veeam Backup & Replication to replicate the entire environment to a secondary location for DR purposes
The complete application environment uses Citrix XenApp and Citrix Access Gateway SSL VPN to deliver the application to the users. Apart from security appliances such as the Access Gateway and firewall devices, only the Citrix servers are physical. This choice was made based on the expected high utilization of the Citrix servers and our decision to host only the application-related servers in the VI. All other servers (file, web, application, SQL 2005 database and analysis, domain controllers) are virtual machines.
To get the necessary storage performance for the SQL 2005 (64-bit) database, we created dedicated LUNs on the FC SAN for the database log files and datastores. VMware has published some excellent whitepapers on SQL Server and SAP sizing: http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf and http://www.vmware.com/files/pdf/SAP-Best-Practices-White-Paper-2009.pdf, for example. The total size of the datastores is well over 500GB, by the way.
Almost all of the virtual machines are SMP machines (either two- or four-way SMP) because of the performance peaks caused by consolidation and reporting runs, and this works well. To avoid performance issues, we did adopt the rule that CPU overcommit would be kept to a minimum, because we knew that come peak time several servers on the same host would be fighting for resources.
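Keeping CPU overcommit in check comes down to a simple ratio of assigned vCPUs to physical cores per host. A quick Python sketch (with hypothetical numbers, not our actual host inventory) of the kind of sanity check involved:

```python
# Illustrative sketch with made-up numbers: compute the vCPU-to-core
# overcommit ratio for a host running SMP virtual machines.

def overcommit_ratio(vcpus_per_vm, physical_cores):
    """Total assigned vCPUs divided by physical cores on the host."""
    return sum(vcpus_per_vm) / physical_cores

# Example: four 2-way and two 4-way SMP VMs on a host with 8 cores
vms = [2, 2, 2, 2, 4, 4]
print(overcommit_ratio(vms, 8))  # 2.0, i.e. 2:1 overcommit
```

With peak loads hitting several VMs on the same host at once, a ratio like the 2:1 above is worth reconsidering, whereas a ratio near 1:1 means each vCPU can be backed by a physical core during those runs.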
After implementation there was a performance and acceptance test, and the whole environment performed as well as or better than the environment the application was migrated from. Plus, we now have the added availability benefits of virtualization. Another benefit of having the whole environment fully virtualized is that with Veeam Backup & Replication we are now able to replicate it to a target host in a second datacenter and switch over to that environment should there be a problem in the primary location. I am not completely unbiased about Veeam, but if you compare cost and value, it keeps amazing me how much you can do with it for a relatively low investment compared with traditional backup software solutions.
Posted in BlogPosts | Tagged: design, sap, vmware | 3 Comments »
Posted by martijnl on February 22, 2010
Links have started to float around on Twitter and in a blog post from Doug Hazelman over at VeeamMeUp: http://veeammeup.com/2010/02/is-veeam-the-best-backup-solution-for-vsphere-4.html.
Does all this mean that Veeam is just taking a break to let our competitors catch up to us? Of course not! Over the next few months Veeam is preparing to change the way you think about backups…FOREVER! I can’t disclose anything yet, but watch this space, the countdown has begun!
UPDATE: In case you’re wondering, the next version of Veeam Backup & Replication will not require a complete reinstall/re-architecture. SureBackup is an entirely new feature that will easily integrate in with existing Veeam Backup & Replication installations.
Working for a Veeam partner, I am hoping for some more information ahead of time. I am looking forward to learning more about this product.
Posted in BlogPosts | Tagged: backup, recovery, veeam | 2 Comments »
Posted by martijnl on July 31, 2009
We ran into this problem with two new DL180 G6 servers combined with Western Digital (WD) 750GB SATA drives. The description matches everything in this thread on the HP ITRC forums:
-tried different mediums (centos,redhat)
-same behavior with different RAIDs (tried 1 and RAID 5)
-yes, I also used the smart array linux driver disk
-tried graphical and linux text mode
It fails always at “formatting filesystem”. The only difference I’ve seen, is that it stops around 11% with 2 disks in RAID1. With RAID 5 it fails around 98%.
We have seen this behaviour with regular Windows Server 2003 installations as well. Further on in the thread, someone posts the solution:
I have found the problem!
The problem is the Western Digital drive. We have received 2 hdd types. 1 Seagate and 1 WD. I created a RAID 0 volume with only the Seagate drive and the OS install fine. When I create a RAID 0 or RAID 0 + 1 with the WD drive the OS install was freezing when it try to format the drive.
I think it is an issue between the Smart Array 212 and the WD drive. I have installed the latest firmware of the Smart Array and of the WD drive but it doesn’t fix this problem.
I have opened a case at HP support and I hope that the HDD will be replaced with a Seagate or a soon firmware update for the 212 or / and the WD drive
We have put this information in our own support call with HP, as it seems the issue is not yet known throughout the service organization.
Update 10/8: The problem seems to be fixed with new firmware for the controller and backplane. Make sure you get it, or ask HP for it.
Posted in BlogPosts | Tagged: dl180, hp, issue, proliant | 2 Comments »