Tag Archives: hp

HP DL180 G6 RAID controller issue with WD hard disks

We ran into this problem with two new DL180 G6 servers combined with Western Digital (WD) 750GB SATA drives. The description matches everything in this thread on the HP ITRC forums:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1249041708681+28353475&threadId=1357085

  • Tried different installation media (CentOS, Red Hat)
  • Same behaviour with different RAID levels (tried RAID 1 and RAID 5)
  • Yes, I also used the Smart Array Linux driver disk
  • Tried both graphical and Linux text mode

It always fails at “formatting filesystem”. The only difference I’ve seen is that it stops around 11% with two disks in RAID 1, while with RAID 5 it fails at around 98%.

We have seen this behaviour with regular Windows Server 2003 installations as well. Further on in that thread, someone posts the solution:

Hi there,

I have found the problem!

The problem is the Western Digital drive. We received two HDD types: one Seagate and one WD. I created a RAID 0 volume with only the Seagate drive and the OS installed fine. When I created a RAID 0 or RAID 0+1 volume with the WD drive, the OS install froze when it tried to format the drive.

I think it is an issue between the Smart Array 212 and the WD drive. I have installed the latest firmware for the Smart Array and for the WD drive, but it doesn’t fix this problem.

I have opened a case with HP support and I hope that the HDDs will be replaced with Seagates, or that a firmware update for the 212 and/or the WD drive will be released soon.

We have added this information to our own support call with HP, as it does not yet seem to be known throughout their service organization.

Update 10/8: the problem seems to be fixed with new firmware for the controller and the backplane. Make sure you get it, or ask HP for it.
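If you want to verify which controller firmware a host is actually running before and after the update, here is a minimal sketch, assuming HP's hpacucli utility is installed on the Linux host (the exact output format varies with controller model and tool version):

    import re
    import subprocess

    # Query all Smart Array controllers; requires HP's hpacucli tool and root privileges.
    output = subprocess.run(
        ["hpacucli", "ctrl", "all", "show", "config", "detail"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Pull the reported firmware version(s) out of the detail listing.
    for line in output.splitlines():
        match = re.search(r"Firmware Version:\s*(\S+)", line)
        if match:
            print("Controller firmware:", match.group(1))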

Lesson Learned: memory management in new HP DL360/380

I can’t say I shouldn’t have known this (it is in the Quick Specs of both servers), but the new DL360 G6 and DL380 G6 handle memory population differently than the previous generation.

This means there are strict limitations on the amount and type of memory you can use, and strict guidelines on how memory must be populated for a given CPU/memory combination.

From the Quick Specs document of the DL360 G6:

DDR3 memory population guidelines

Some DIMM installation guidelines are summarized below (a small sanity-check sketch follows the list):

  • For servers with eighteen (18) memory slots:
    • There are three (3) channels per processor; six (6) channels per server
    • There are three (3) DIMM slots for each memory channel; eighteen (18) total slots
    • Memory channel 1 consists of the three (3) DIMMs that are closest to the processor
    • Memory channel 3 consists of the three (3) DIMMs that are furthest from the processor
  • DIMM slots that are white should be populated first
  • Do not mix Unbuffered memory (UDIMMs) with Registered memory (RDIMMs)
  • Do not install DIMMs if the corresponding processor is not installed
  • If only one processor is installed in a 2CPU system, only half of the DIMM slots are available
  • To maximize performance, balance the total memory capacity between all installed processors
  • It is not required, but it is recommended to load the channels similarly if possible
  • You can only have up to eight (8) ranks installed per channel
  • You can only install two quad-rank DIMMs per channel
  • You can only install two UDIMMs per channel; if available, the third slot in the channel must remain empty
  • Populate DIMMs from heaviest load (quad-rank) to lightest load (single-rank) within a channel
  • Heaviest load (DIMM with most ranks) within a channel goes furthest from the chipset
  • For memory mirroring mode, channel 3 must be unpopulated. Channels 1 and 2 are populated identically
  • For lock-step mode, channel 3 must be unpopulated. DIMMs in channels 1 and 2 will be installed in pairs. The paired slots will be 1,4; 2,5; 3,6 on a 3DPC system or 1,4; 2,5 on a 2DPC system
  • No mixing DIMM voltage; all DIMMs must be the same voltage
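
To make a few of these rules concrete, here is a minimal sanity-check sketch for a single memory channel. The representation (a list of (type, ranks) tuples ordered from the slot furthest from the chipset inward) and the function name are my own illustration, not anything from HP:

    # Hypothetical sketch: check one memory channel against a few of the
    # Quick Specs rules above (no UDIMM/RDIMM mixing, max 8 ranks,
    # max 2 quad-rank DIMMs, max 2 UDIMMs, heaviest DIMM furthest out).
    from typing import List, Tuple

    def check_channel(dimms: List[Tuple[str, int]]) -> List[str]:
        """dimms: (dimm_type, ranks) per populated slot, furthest from the chipset first."""
        errors = []
        types = {dimm_type for dimm_type, _ in dimms}
        if "UDIMM" in types and "RDIMM" in types:
            errors.append("Do not mix UDIMMs and RDIMMs")
        if sum(ranks for _, ranks in dimms) > 8:
            errors.append("More than 8 ranks on one channel")
        if sum(1 for _, ranks in dimms if ranks == 4) > 2:
            errors.append("More than two quad-rank DIMMs on one channel")
        if sum(1 for dimm_type, _ in dimms if dimm_type == "UDIMM") > 2:
            errors.append("More than two UDIMMs on one channel")
        # Heaviest load (most ranks) should sit furthest from the chipset.
        ranks = [r for _, r in dimms]
        if ranks != sorted(ranks, reverse=True):
            errors.append("Populate from heaviest (quad-rank) to lightest (single-rank)")
        return errors

    # Example: two quad-rank RDIMMs plus a dual-rank RDIMM is 10 ranks, so it fails.
    print(check_channel([("RDIMM", 4), ("RDIMM", 4), ("RDIMM", 2)]))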

The worldwide QuickSpecs for the DL360 G6 are here: http://h18004.www1.hp.com/products/quickspecs/13235_div/13235_div.HTML#Memory. You can find the QuickSpecs documents (US and worldwide) on the server information pages on HP.com.

Give the gift of Mini Note

My contact at ETC was kind enough to loan me an HP 2133 Mini-Note for the VMworld conference. I aim to use the Mini-Note as my primary machine for blogging during the conference, so I don’t have to lug around my business laptop (it’s 3,5 kilos).

Because it came with Vista installed by default, I looked around for a more lightweight OS and tried to get Xubuntu on it. There is one catch with the 2133 Mini-Note (compared to the Mini 1000, for instance), and that’s the graphics driver. The 2133 uses a VIA Chrome9 graphics adapter, and you have to use the “live xforcevesa” boot option to force the installer to fall back to the generic VESA driver. That is easier said than done when you can’t get a working installer onto your USB key via UNetbootin; my 2133 stubbornly refused to boot from USB that way. I was eventually able to build a working live USB drive with Linux Mint and Xubuntu using the excellent instructions and tools from PendriveLinux.

Apart from a small issue getting the wireless to work, this runs much faster than Vista, although there are some issues with windows being too large for the screen. I have also stripped Vista of every performance-hogging feature I could find (indexing off, System Restore off, all visual effects off), and its performance has improved to an acceptable level. If you are in the market for a machine like this, I would recommend going for a unit with a slightly higher resolution so you don’t run into the window-size problem, and for a unit that either has an option to downgrade to Windows XP or ships with Linux natively.

The system specs of this particular Mini-Note are:

  • VIA C7-M CPU @ 1,6 GHz
  • 1 GB memory
  • 120 GB hard drive
  • Vista Home Basic (out of the box; this unit runs Vista Business now)
  • Resolution: 1024×600
  • Weight: 1,2 kg

HP to offer ESX3i on Proliant

One of the more interesting hardware announcements coming out of VMworld Europe is HP’s announcement that they will be offering ESX 3i on 10 models of ProLiant servers. According to the press release, “ESX 3i for HP ProLiant features VMware virtualization capabilities with full support for HP System Insight Manager (SIM)” and will also feature integration with HP’s Insight Control Environment.

System specs

These are the specifications that we have determined to be right for us:

  • 1x HP Proliant DL585R01 O2.4-1MB Model 4-8GB (4x Opteron 880 Dual Core CPUs / 8 GB PC3200 memory)
  • 6x 4 GB memory module (PC3200 / ECC / DDR / SDRAM / 2 x 2 GB modules)
  • 2x 36 GB disk option (15K / Ultra320 / hot-pluggable / 1-inch high / universal)
  • 2x FC2143 4Gb PCI-X 2.0 (Fibre Channel host bus adapter)
  • 2x Gigabit PCI-X NC7170 Dual Port 1000T Server Adapter
  • 1x Hot-pluggable redundant power supply unit (PSU) for ProLiant DL585R01

Additional information:

Memory
32 GB of PC3200 internal memory is the maximum for the chassis. We chose memory speed over total memory size because we ended up needing four cluster nodes anyway for the number of virtual machines we want to run (80 to 100), which means we end up with 128 GB of PC3200 memory across the cluster.
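
As a quick back-of-the-envelope check of that sizing, here is the arithmetic from the figures above (overcommit and virtualization overhead deliberately ignored):

    # Rough cluster sizing check based on the figures above.
    hosts = 4
    memory_per_host_gb = 32        # PC3200 maximum for a DL585 G1 chassis
    vms_low, vms_high = 80, 100    # planned number of virtual machines

    total_memory_gb = hosts * memory_per_host_gb
    print(f"Total cluster memory: {total_memory_gb} GB")  # 128 GB
    print(f"Memory per VM at {vms_high} VMs: {total_memory_gb / vms_high:.1f} GB")  # ~1.3 GB
    print(f"Memory per VM at {vms_low} VMs:  {total_memory_gb / vms_low:.1f} GB")   # 1.6 GB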

Disks
It is a personal choice whether you want to use local disks for your ESX / VI installation or boot from SAN. We like to have the OS disks in the chassis.

SAN connection
Because a component failure has a higher impact here, we use two single-port Fibre Channel HBAs, connected to two SAN switches, which in turn are linked to the dual controllers in the disk arrays. This way, the failure of any single component in the chain does not affect production availability.

Network
The same goes for the network interface cards (NICs). A quad-port Gigabit NIC gives higher port density, but a failure of that one card also means the failure of all four ports.
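
To illustrate that trade-off, here is a toy comparison with made-up annual failure probabilities (the numbers are purely illustrative and not vendor figures):

    # Toy redundancy comparison with illustrative (made-up) failure probabilities.
    p_card = 0.02  # assumed chance that a single HBA or NIC fails in a given year

    # One consolidated quad-port card: a single card failure takes out all ports.
    p_single_card_outage = p_card

    # Two independent cards: both must fail before all ports are lost.
    p_dual_card_outage = p_card * p_card

    print(f"One quad-port card:    {p_single_card_outage:.4%} chance of losing all ports")
    print(f"Two independent cards: {p_dual_card_outage:.4%} chance of losing all ports")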

Chassis

The decision on the chassis specs is hardly rocket science. HP has a factory-built configuration with four Opteron 880s (2,4 GHz dual cores), and the premium for the 2,6 GHz model was a bit too much for my liking. We also liked this model because it saves us a lot of assembly work on the server chassis.

The new models:

Have a look at my post for more explanation. The specs of the DL585 G2s are as follows:

  • HP ProLiant DL585R02 O/2,4-2P 2 GB (2x Opteron DC 8216 CPUs / 1 MB cache / SA P400i with 512 MB BBWC)
  • 2x AMD Opteron 8216 2,4 GHz / 1 MB Dual Core (95 Watt) processor option
  • 32 GB Dual Rank memory (PC2-5300 / ECC / DDR2 / SDRAM / 2 x 2 GB modules)
  • 2x 36.4 GB 10K SFF SAS hard disk (hot-pluggable)
  • 3x HP NC360T PCI Express Dual Port Gigabit Server Adapter
  • Hot Plug Redundant PSU for ProLiant DL580R03/R04, ML570 G3/G4 and DL585 G2
  • 2x Fibre Channel PCI-X 2.0 4Gb single-port HBA

New hosts online, project cluster complete

And with this I wish you all a very happy New Year and good luck with all your virtualization projects. May they work out as well for you as they have for us.

To continue the festive spirit, we received some gifts from HP just before Christmas: the new DL585 G2s we ordered. Yesterday and today they were fully configured and found a place in one of our racks, next to their older brothers.

These G2s are truly impressive machines, and HP has made great strides in their layout. They are also much more engineer-friendly for maintenance, as they don’t have to come all the way out of the rack to replace a CPU or add memory. You can see some of this in the pictures below (click for a larger view), or have a look at the QuickSpecs at HP.com. In effect, the whole CPU/memory assembly slides forward out of the machine after removing two large coolers. This way you only need to pull the server out halfway, instead of all the way as with the G1, which has to be fully extended to open the top lid; that also means less risk of stress on the rail kit.

(Photos: 585 G2 open; DL585 open on case)

I know it was mentioned on the VMware forums: you cannot VMotion between a G1 and a G2 without CPU masking. We will work around this for a short while until we get another two 585 G2s and move the G1 servers to a VDI cluster that we are planning.

New host delivery confirmed

I received a short update from our supplier letting us know that the hosts we ordered in November will be delivered this week. Kudos to them for securing a replacement DL585 G2 chassis within a matter of days when one of the servers was diagnosed as DOA (Dead On Arrival). The replacement unit was flown in from the States today, so they could still keep their projected delivery date.

Because of the holiday season we probably won’t be able to work on them next week, so we’ll kick off the New Year by completing our planned cluster. Because our experiences have been so good, we have also decided to budget an expansion of the cluster with another two hosts for next year, as we expect this year’s business growth to continue and accelerate next year as well.

New VM hosts ordered

As I wrote in my other post, we came to the point of ordering the other two hosts. After some deliberation we have decided to buy the new DL585 G2s. I haven’t been able to find a conclusive answer to the question of VMotion support between the old hosts (using Opteron 880s) and the new hosts (using Opteron 8216s), but CPU masking will enable that anyway.

More important to us is that the Generation 2 DL585 can hold up to 64 GB of PC2-5300 memory (running at 667 MHz), compared to the 32 GB PC3200 (400 MHz) maximum of the Generation 1.

Our new config looks like this (and is added to the server specs page):

  • HP ProLiant DL585R02 O/2,4-2P 2 GB (2x Opteron DC 8216 CPUs / 1 MB cache / SA P400i with 512 MB BBWC)
  • 2x AMD Opteron 8216 2,4 GHz / 1 MB Dual Core (95 Watt) processor option
  • 32 GB Dual Rank memory (PC2-5300 / ECC / DDR2 / SDRAM / 2 x 2 GB modules)
  • 2x 36.4 GB 10K SFF SAS hard disk (hot-pluggable)
  • 3x HP NC360T PCI Express Dual Port Gigabit Server Adapter
  • Hot Plug Redundant PSU for ProLiant DL580R03/R04, ML570 G3/G4 and DL585 G2
  • 2x Fibre Channel PCI-X 2.0 4Gb single-port HBA

Estimated delivery time is three to four weeks so it will be a nice Christmas gift.

Host servers confirmed

Our order for the DL585s now has a confirmed delivery date: in the first week of October we’ll get our hands on the production VM hosts. For the more technically interested readers, I will try to shoot some photos of these babies arriving.

It’s exciting for us, as they are the biggest investment we have ever made in single servers, and they will outperform anything we currently have by a huge margin. (Our “biggest” servers currently are dual IBM x346s and dual ProLiant DL380s.)

Another short update: no decision yet on the storage. We didn’t anticipate it, but it’s going to be a decision for the board of directors, so it will take more time than expected. The next opportunity for a decision is 18 September or the week after.