Posted by martijnl on November 10, 2010
The title of this post reflects a blog post on the Exchange Team Blog (aptly named: You Had Me At EHLO). You can find it here: http://msexchangeteam.com/archive/2010/11/09/456851.aspx. While the wording is a bit strong, I agree with the points raised in the blog post.
The main point of the post, in my opinion, is this:
Microsoft does not support combining Exchange high availability (DAGs) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically failover mailbox servers that are members of a DAG between clustered root servers. Microsoft recommends using Exchange DAGs to provide high availability, when deploying the Exchange Mailbox Server role. Because hypervisor HA solutions are not application aware, they cannot adapt to non-hardware failures. Combining these solutions adds complexity and cost, without adding additional high availability. On the other hand, an Exchange high availability solution does detect both hardware and application failures, and will automatically failover to another member server in the DAG, while reducing complexity.
This is an important item to address in a multi-server Exchange design running in a virtualized environment. While features like VMware HA are great for protecting against host failures, they will not (as mentioned further along in the article) protect against application failures. If the availability requirements stay within what can be done with a single datastore on a single server, then using HA and vMotion is perfectly fine in my opinion, but go beyond that and you will soon see issues arising from using two types of clustering/HA measures for the same application. You will be using anti-affinity rules to prevent cluster nodes from being migrated to the same host, for instance. Now think about having to do that for an Exchange cluster using multiple datastores running on multiple servers. It just makes things more complex than they need to be.
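The anti-affinity requirement boils down to a placement invariant: no two members of the same application cluster (such as a DAG) may end up on the same host. As a minimal sketch of what those rules enforce (VM and host names here are hypothetical, not from the article):

```python
# Hypothetical sketch: check that no two members of the same cluster
# group have been placed on the same host -- the invariant that DRS
# anti-affinity rules are meant to enforce.
from collections import defaultdict

def find_antiaffinity_violations(placement, groups):
    """placement: {vm_name: host_name}; groups: {group_name: [vm, ...]}.
    Returns {group_name: {host: [colocated vms]}} for violated groups."""
    violations = {}
    for group, vms in groups.items():
        hosts = defaultdict(list)
        for vm in vms:
            hosts[placement[vm]].append(vm)
        bad = {h: v for h, v in hosts.items() if len(v) > 1}
        if bad:
            violations[group] = bad
    return violations

placement = {"dag-node1": "esx01", "dag-node2": "esx01", "web1": "esx02"}
groups = {"exchange-dag": ["dag-node1", "dag-node2"]}
print(find_antiaffinity_violations(placement, groups))
# -> {'exchange-dag': {'esx01': ['dag-node1', 'dag-node2']}}
```

With multiple DAGs spread over multiple datastores and hosts, the number of such constraints grows quickly, which is exactly the complexity the article warns about.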
Read the rest of the article for more information on the subject. Thanks to @douglasabrown for posting the article to Twitter.
Posted in BlogPosts | Tagged: best practice, exchange, vmware | Leave a Comment »
Posted by martijnl on February 23, 2010
With a press release (http://blogs.vmware.com/view/2010/02/vmware-welcomes-rto-software.html) and a Twitter post from CTO Stephen Herrod (twitter.com/herrod), VMware has announced the purchase of the assets and staff of RTO Software.
Today we are proud to announce that VMware has acquired the assets of Alpharetta, GA based RTO Software adding their technology and talented people to the VMware View team. For those of you not familiar with RTO software they are well known for their Virtual Profiles, PinPoint and Discover products which help IT organizations simplify desktop deployments while providing end-users with a rich, robust and flexible experience.
The core technology in play here is called Virtual Profiles and it is used for so-called 'persona' management. Think AppSense and similar products.
More information about the technology and VMware’s plans for it in View can be found here: http://blogs.vmware.com/view-point/2010/02/vmware-to-acquire-rto-software.html
A snippet from that article:
We’ve been talking about provisioning users, not devices, and the importance of composition or layering in a desktop virtual machine – that a desktop VM is comprised of independent virtualized component parts that are dynamically brought together on demand into an encapsulated VM. One of those critical parts is the user persona, a user’s profile, data files and settings. Clean, efficient user persona virtualization is vital to our vision and that is precisely what RTO’s industry-leading Virtual Profiles will deliver for VMware. With persona management, end-user specific information such as user data files, settings and application access is separated from the desktop image and centrally stored, enabling increased flexible access, greater portability and seamless file management and backup.
Posted in BlogPosts | Tagged: vmware | Leave a Comment »
Posted by martijnl on February 22, 2010
Last year we got a request from a client who wanted an application environment for his SAP BusinessObjects Financial Consolidation (formerly Cartesis Magnitude) application. The new environment would be used for group consolidation for a number of companies, most of them located in EMEA. Because of the relative importance of the availability of this environment, it was our goal to realize a 100% virtualized environment so we could use VMware HA, vMotion etc.
The reason for this blog post is to explain that this can be done and that it works and performs up to and above specification. This is in contrast to popular belief among product specialists and third-party consultants for the product itself, who were trying to convince the client that this product would not run (perform) in a VM.
What we ended up doing was the following:
- Based on product usage characteristics and an existing design for the environment on physical servers, we proposed an alternative in the form of a Virtual Infrastructure
- This VI (then based on VMware VI 3.5 Enterprise) was housed in an external datacenter with the right facilities (connections, power, cooling, building design etc.) to guarantee sufficient uptime
- We used Veeam Backup & Replication to replicate the entire environment to a secondary location for DR purposes
The complete application environment uses Citrix XenApp and Citrix Access Gateway SSL VPN to deliver the application to the users. Apart from security appliances such as the Access Gateway and firewall devices, only the Citrix servers are physical. This choice was based on the expected high utilization of the Citrix servers and our decision to host only the application-related servers in the VI. All other servers (file, web, application, SQL 2005 database and analysis, domain controllers) are virtual machines.
To get the necessary storage performance for the SQL 2005 (64bit) database we created dedicated LUNs for the database logfiles and datastores on the FC SAN. VMware has published some excellent whitepapers on SQL Server and SAP sizing: http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf and http://www.vmware.com/files/pdf/SAP-Best-Practices-White-Paper-2009.pdf for example. The total size of the datastores is well over 500GB by the way.
Almost all of the virtual machines are SMP machines (either two-way or four-way SMP) because of the peak load generated by consolidation and reporting runs, and this works well. To avoid performance issues we adopted the rule that CPU overcommit would be kept to a minimum, because we knew that come peak time several servers on the same host would be fighting for resources.
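The overcommit rule above can be made concrete with a quick calculation of the vCPU-to-physical-core ratio per host; the VM and host sizes below are hypothetical examples, not the client's actual configuration:

```python
# Hypothetical sketch: compute the vCPU-to-physical-core overcommit
# ratio for one host -- the figure we aimed to keep close to 1:1.
def overcommit_ratio(vm_vcpus, physical_cores):
    """vm_vcpus: list of vCPU counts for the VMs on one host."""
    return sum(vm_vcpus) / physical_cores

# e.g. three two-way VMs and one four-way VM on an 8-core host:
print(overcommit_ratio([2, 2, 2, 4], 8))  # -> 1.25
```

A ratio well above 1 means that during a consolidation run several vCPUs will be contending for the same physical cores, which is exactly the peak-time fighting for resources we wanted to avoid.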
After implementation of the environment there was a performance and acceptance test, and the whole environment performed as well as or better than the environment the application was migrated from. Plus we now have the added availability benefits of virtualization. Another benefit of having the whole environment fully virtualized is that with Veeam Backup and Replication we are now able to replicate the whole environment to a target host in a second datacenter and switch over to that environment should there be a problem in the primary location. I am not completely unbiased about Veeam, but if you compare cost and value it keeps amazing me how much you can do with it for a relatively low investment compared with traditional backup software solutions.
Posted in BlogPosts | Tagged: design, sap, vmware | 3 Comments »
Posted by martijnl on April 21, 2009
With a press release and subsequent webcast tonight VMware has announced vSphere 4 as their next generation datacenter virtualization solution. Here is an excerpt from the press release:
PALO ALTO, CA, April 21, 2009 — VMware, Inc. (NYSE: VMW), the global leader in virtualization solutions from the desktop to the datacenter, today announced VMware vSphere™ 4, the industry’s first operating system for building the internal cloud, enabling the delivery of efficient, flexible and reliable IT as a service. With a wide range of groundbreaking new capabilities, VMware vSphere 4 brings cloud computing to enterprises in an evolutionary, non-disruptive way – delivering uncompromising control with greater efficiency while preserving customer choice.
As the complexity of IT environments has continued to increase over time, customers’ share of IT budgets are increasingly spent on simply trying to “keep the lights on.” With the promise of cloud computing, customers are eager to achieve the benefits, but struggle to see the path to getting there. Leveraging VMware vSphere 4, customers can take pragmatic steps to achieve cloud computing within their own IT environments. With these “internal” clouds, IT departments can dramatically simplify how computing is delivered in order to help decrease its cost and increase its flexibility, enabling IT to respond more rapidly to changing business requirements.
Read it in its entirety over here: http://blogs.vmware.com/vmtn/2009/04/introducing-vmware-vsphere-4-the-industrys-first-cloud-operating-system.html
Posted in BlogPosts | Tagged: vmware, vsphere | Leave a Comment »
Posted by martijnl on April 6, 2009
VMware has posted an announcement on its website for a webcast on April 21, in which the next generation of virtualization software will be presented by Paul Maritz (President and CEO of VMware):
Join us for an exclusive peek at how VMware is bringing cloud computing to businesses of all sizes.
VMware is once again leading the virtualization industry by bringing cloud computing to the datacenter. Transform your IT infrastructure into a private cloud—a collection of internal clouds federated on-demand to external clouds—delivering IT infrastructure as an easily accessible service.
On April 21, 2009, we’ll be unveiling how VMware is taking IT to new heights of efficiency, choice and control through service-level automation—dramatically reducing capital and operating costs and maximizing IT efficiency—with the freedom to choose any application, OS, or hardware.
Join Paul Maritz, President and CEO of VMware, as he and other VMware leaders officially unveil the next generation of virtualization technology from VMware.
Don’t miss your chance to participate in one of the most groundbreaking events this year.
The announcement can be found on the VMware website, where there is also an option to register for this webcast.
Posted in BlogPosts | Tagged: dutch, vmware, vsphere | Leave a Comment »
Posted by martijnl on February 28, 2009
This is my video of the mobile virtualization demo from VMworld Europe 2009.
This is the device that was demoed: http://www.nokiausa.com/find-products/phones/nokia-n800-r6. While this is not a phone like the one mentioned in the video, it is still very impressive. There are hurdles to overcome, though, because you will probably want to have different phone numbers attached to different environments, and that usually means multiple SIM cards (in the case of GSM).
Posted in BlogPosts | Tagged: mobile, vmware, vmworld | 1 Comment »
Posted by martijnl on January 16, 2009
Found these two posts while searching for easy ways to get a virtual desktop into the right OU:
If you need to generate lots of virtual machines, for example with a non-persistent pool in VMware View, it is very nice to have the option to integrate this neatly into the standard VirtualCenter guest customization.
Posted in BlogPosts | Tagged: customization, vdi, view3, vmware | Leave a Comment »
Posted by martijnl on January 12, 2009
Ever notice that some settings (like performance and visual effects) do not carry over to other users logging into a virtual desktop? That’s because some of these settings are stored in the user profile.
This is a step-by-step guide on how to create a custom default user profile so you can set everything once, after which all users logging into the machine get these settings automatically from the default profile: http://support.microsoft.com/kb/319974
Posted in BlogPosts | Tagged: vdi, view3, vmware | 1 Comment »
Posted by martijnl on January 7, 2009
Got this error while trying to upload some ISOs to a datastore ISO folder via a VirtualCenter client.
While looking for causes I found these possibilities:
- Not enough space in the datastore
- Port 901/902 not open between the machine running the VI Client and the destination host (and/or the VC Server; I could not find out whether that is also a factor)
- DNS configuration for the host servers
As a possible solution I also found that restarting the VC Server service could help.
In this particular case it turned out that DNS was not configured for the host servers, so adding the servers to the hosts file of the machine running the VI Client did the trick.
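A quick pre-flight check for this failure mode is to verify that the host names resolve from the machine running the VI Client. A minimal sketch (the ESX host name below is a placeholder, not a real server):

```python
# Hypothetical sketch: check that each ESX host name resolves from
# the machine running the VI Client before attempting an upload.
import socket

def check_resolution(hostnames):
    """Return a dict mapping each host name to its resolved IP,
    or None if the name does not resolve."""
    results = {}
    for name in hostnames:
        try:
            results[name] = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = None
    return results

print(check_resolution(["localhost", "esx01.example.invalid"]))
```

Any host that comes back as None needs either proper DNS records or, as the workaround above, an entry in the client machine's hosts file.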
Posted in BlogPosts | Tagged: error, virtualcenter, vmware | 4 Comments »
Posted by martijnl on January 29, 2008
VMware’s stock price is down significantly in pre-market trading this morning following yesterday’s fourth quarter financial results announcement. It’s currently trading at roughly $59, down from $83 at closing yesterday. Virtualization.info has some more info on the announcement and its reception among financial market analysts in this post: http://www.virtualization.info/2008/01/vmware-doubles-profits-but-misses.html
Posted in BlogPosts | Tagged: stocks, vmware | Leave a Comment »