Tag Archives: hardware

New hosts online, project cluster complete

And with this I wish you all a very happy New Year and good luck on all your virtualization projects. May it work out as well for you as it has for us.

To continue the festive spirit, we received some gifts from HP just before Christmas: the new DL585 G2's we ordered. Yesterday and today they were fully configured and found a place in one of our racks next to their older brothers.

Truly impressive machines, these G2's, and HP has made some great strides in the layout. They are also much more engineer-friendly for maintenance, as they don't have to come all the way out of the rack to replace a CPU or add memory. You can see some of it in the pictures below (click for larger view) or have a look at the QuickSpecs @ HP.com. In effect, the whole CPU / memory assembly slides forward out of the machine after removing two large coolers. This way you can pull the server out halfway, instead of all the way as with the G1, which needs to be fully extended to open the top lid. That also means less risk of stress on the rail kit.

585 G2 open                            DL585 open on case

I know it was mentioned on the VMware forums: you cannot VMotion between a G1 and a G2 without CPU masking. We will work around this for a short time while we get another two 585 G2's and move the G1 servers to a VDI cluster that we are planning.
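For anyone curious what such a workaround looks like: on ESX 3.x a per-VM CPUID mask can be set in the VM's .vmx file to hide features the newer CPUs expose. The example below is only a sketch — the register, bit and value shown (hiding RDTSCP, which the newer Opterons have and the 880's lack) are an assumption; the exact bits to mask depend on your specific CPU pair, so check the VMware forums or KB for your hardware.

```
# Hypothetical per-VM CPUID mask (ESX 3.x .vmx syntax, sketch only).
# Each string is 32 bits, most significant bit first, grouped in nibbles;
# '-' keeps the host's value, '0' forces the bit off for the guest.
# Here bit 27 of CPUID 0x80000001 EDX (RDTSCP) is masked off as an example.
cpuid.80000001.edx = "----:0---:----:----:----:----:----:----"
```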

Additional NIC’s installed

During the SAN migration we placed an extra dual-port NIC in each server, for a total of eight Gigabit ports per server, so we can build a virtual DMZ on two dedicated ports, separate from the six ports that we use for the Service Console, VMotion and data center traffic.

It's not essential to bring a DMZ into your VI this way (you could put it on a VLAN and just make a vSwitch that only carries that specific VLAN traffic), but we simply feel more comfortable having this traffic physically separated. A side effect is that it's easier to explain to someone with less knowledge of virtualization, which helps in certain situations (audits, for example).
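For reference, from the ESX 3.x service console both layouts could be sketched roughly as below. The vSwitch names, port group names, vmnic numbers and the VLAN tag are all made-up examples, not our actual configuration.

```
# Hypothetical commands, sketch only -- names and NIC numbers are examples.

# Physically separated DMZ: a dedicated vSwitch with its own two uplinks.
esxcfg-vswitch -a vSwitchDMZ
esxcfg-vswitch -L vmnic6 vSwitchDMZ
esxcfg-vswitch -L vmnic7 vSwitchDMZ
esxcfg-vswitch -A "DMZ Network" vSwitchDMZ

# VLAN alternative: a port group on an existing vSwitch that only
# carries the DMZ VLAN (tag 100 is an arbitrary example).
esxcfg-vswitch -A "DMZ Network" vSwitch1
esxcfg-vswitch -v 100 -p "DMZ Network" vSwitch1
```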

New host delivery confirmed

I received a short update from our supplier to let us know that the hosts we ordered in November will be delivered this week. Kudos go to them for securing a replacement DL585 G2 chassis within a matter of days when one of the servers was diagnosed as being DOA (Dead On Arrival). The replacement unit was flown in today from the States so they could still meet their projected delivery date.

Because of the holiday season we probably won't be able to work on them next week, so we'll kick off the New Year with the completion of our planned cluster. Because our experiences have been so good, we have also decided to include an expansion of the cluster with another two hosts in next year's budget, as we expect this year's business growth to continue and accelerate next year as well.

New VM hosts ordered

As I wrote in my other post, we came to the point of ordering the other two hosts. After some deliberation we have decided to buy the new DL585 G2's. I haven't been able to find a conclusive answer to the question of VMotion support between the old hosts (using Opteron 880's) and the new hosts (using Opteron 8216's), but CPU masking will enable that anyway.

More important to us is that Generation 2 of the DL585 can hold up to 64 GB of PC2-5300 memory (running at 667 MHz), compared to a 32 GB maximum of PC3200 (400 MHz) in Generation 1.
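As an aside, the module names encode peak bandwidth per module: transfer rate times the 64-bit (8-byte) bus width. A quick back-of-the-envelope check:

```python
def peak_mb_per_s(mega_transfers_per_s, bus_width_bytes=8):
    """Peak bandwidth of one DDR module: transfer rate x 64-bit bus width."""
    return mega_transfers_per_s * bus_width_bytes

print(peak_mb_per_s(667))  # 5336 MB/s, marketed (rounded) as PC2-5300
print(peak_mb_per_s(400))  # 3200 MB/s, marketed as PC3200
```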

Our new config looks like this (and is added to the server specs page):

  • HP ProLiant DL585R02 O/2,4-2P 2 GB (2x Opteron DC 8216 CPUs / 1 MB cache / SA P400i with 512 MB BBWC)
  • 2x AMD Opteron 8216 2.4 GHz / 1 MB Dual Core (95 Watt) processor option
  • 32 GB Dual Rank memory (PC2-5300 / ECC / DDR2 SDRAM / 2 x 2 GB kits)
  • 2x 36.4 GB SAS hard disk (10K rpm, SFF, hot-pluggable)
  • 3x HP NC360T PCI Express Dual Port Gigabit Server Adapter
  • Hot Plug Redundant PSU for ProLiant DL580R03/R04 and ML570 G3/G4 and DL585 G2
  • 2x Fibre Channel PCI-X 2.0 4Gb single-port HBA

Estimated delivery time is three to four weeks so it will be a nice Christmas gift.

Deliveries (2)

These photos show the servers with full memory banks and the Fibre Channel HBAs in place.

full memory banks hba and nics

Our supplier has corrected the problems and we’ll have the rest of the memory by tomorrow so we are good to go.

First thing is to get the servers mounted and attached to the network and storage.

Deliveries

During my short absence last week we took delivery of the PowerConvert licenses and the hardware.

Our supplier made a mistake with the memory configuration, so one box now has 32 GB and the other only 16 GB. This will be corrected during the week. As you can see in the second picture, they put 24 GB (12 x 2 GB) in a 3-3-3-3 configuration. Because this is dual-channel memory, which has to be installed in matched pairs, that will never work.
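As a toy illustration of why: dual-channel operation needs a whole number of DIMM pairs in every bank, so any layout that leaves an odd module somewhere is invalid. The four-bank model below is an assumption based on the 3-3-3-3 description above, purely for illustration:

```python
def valid_dual_channel(population):
    """population: number of DIMMs per memory bank. Dual-channel mode
    requires every bank to hold complete pairs (an even DIMM count)."""
    return all(dimms % 2 == 0 for dimms in population)

# The supplier's 3-3-3-3 layout: three DIMMs per bank -- an odd module
# in every bank, so dual-channel mode is impossible.
print(valid_dual_channel([3, 3, 3, 3]))  # False
# A 24 GB layout that keeps pairs together, e.g. 4-4-2-2, would be fine.
print(valid_dual_channel([4, 4, 2, 2]))  # True
```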

They also forgot to put in the Fibre Channel HBAs, so we had to plug them in ourselves as well (why do we still pay them for assembly…..).

For the technically interested:
DL585 in box dl585 internals the dynamic duo
The VMware licenses are also expected today, so we can start installing and configuring the servers.

Host servers confirmed

Our order for the DL585's now has a confirmed delivery date: in the first week of October we'll get our hands on the production VM hosts. For the more technically interested readers I will try to shoot some photos of these babies arriving.

It's exciting for us, as they are the biggest investment we have ever made in single servers and they will outperform anything we currently have by a huge margin. (Our "biggest" servers currently are dual-CPU IBM x346's and dual-CPU ProLiant DL380's.)

Another short update: no decision yet on the storage. We didn't anticipate it, but it's going to be a decision for the board of directors, so it will take more time than expected. The next opportunity for a decision is 18 September or the week after that.