HP is touring a 2-day show on convergence. I attended the second day, mostly to get an update on ProLiant blades. I was impatient with all the HP boosterism, but perhaps that material was appreciated by the HP employees and vendors present. There were a lot of dense slides on blades & VMware, but unfortunately they aren't yet cleared for publication so we didn't get copies. Hopefully the presentations will be available on the roadshow site next month.

The most interesting parts for me were an update on HP BladeSystem, and a competitive comparison (mostly trash-talking) of HP BladeSystem c-Class against Dell, Cisco, & IBM blades -- particularly as VMware hosts. There was really no mention of Itanium, except as a bullet point: HP offers Itanium blades (no mention of Cell).

DIMMs

Intel Nehalem (the Xeon 5500 series) still only supports 2-socket configurations ("2P"), so HP continues to recommend AMD Istanbul (6 cores per socket) for 4P and larger systems. Interestingly, there was only a single mention of boxes larger than 4P. IBM presentations I have attended, by contrast, tended to focus on their Hurricane & X3 chipsets, which are only available in 4P and larger systems. I wonder how much of this is because IBM is proud of Hurricane, and how much is due to HP's focus on blades (which don't make much sense beyond 4P).

Each Nehalem CPU has its own 3-channel memory controller, and each channel supports up to 3 DIMMs. In a 2-way box, this maxes out at 2 (CPUs) * 3 (channels/CPU) * 3 (slots/channel) = 18 DIMM slots. Unfortunately, Nehalem cannot use all DIMM slots at full speed. The highest speed is 1333MHz, which requires a single 1333MHz DIMM per channel (DPC) and a 95W CPU. Utilizing the second DIMM slot reduces memory access speed to 1066MHz (although HP apparently has a trick to retain the 1333MHz speed at 2DPC); a 3rd DIMM reduces speed to 800MHz. DIMM mismatches within or across channels can also reduce speed.
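To keep the speed rules straight, here's a minimal sketch of them as I understood them from the talk. The function name is mine, the lower-wattage/mismatch cases are simplified, and the 2DPC exception is HP's claimed trick, so treat this as illustrative rather than authoritative.

    def nehalem_memory_speed(dimms_per_channel, cpu_watts=95, hp_2dpc_trick=False):
        """Effective DDR3 speed in MHz for a given DIMM population (rough model)."""
        if dimms_per_channel == 1 and cpu_watts >= 95:
            return 1333
        if dimms_per_channel == 2 and hp_2dpc_trick:
            return 1333          # HP's claimed 2DPC exception
        if dimms_per_channel == 2:
            return 1066
        if dimms_per_channel == 3:
            return 800
        return 1066              # lower-wattage CPUs at 1 DPC; a rough guess

    for dpc in (1, 2, 3):
        print(dpc, "DPC ->", nehalem_memory_speed(dpc), "MHz")
    # 1 DPC -> 1333 MHz, 2 DPC -> 1066 MHz, 3 DPC -> 800 MHz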

4gb DIMMs are common, but 8gb DIMMs are uncommon and still often prohibitively expensive. This means the fastest possible configuration is normally 2 95W 5500s with 6 4gb 1333MHz DIMMs: 24gb (or 6gb or 12gb with smaller DIMMs). HP's 2DPC trick offers the same speed up to 48gb with 12 DIMMs. Maximum Nehalem RAM capacity is 18 8gb DIMMs: 144gb at 800MHz.
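The same arithmetic, spelled out (2 CPUs * 3 channels * N DIMMs per channel, times DIMM size); the labels are just mine:

    CPUS, CHANNELS = 2, 3

    configs = [
        ("fastest, 1 DPC",      1, 4, 1333),   # 6 x 4gb
        ("HP 2DPC trick",       2, 4, 1333),   # 12 x 4gb
        ("max capacity, 3 DPC", 3, 8, 800),    # 18 x 8gb
    ]

    for name, dpc, dimm_gb, mhz in configs:
        slots = CPUS * CHANNELS * dpc
        print(f"{name}: {slots} x {dimm_gb}gb = {slots * dimm_gb}gb at {mhz}MHz")
    # fastest, 1 DPC: 6 x 4gb = 24gb at 1333MHz
    # HP 2DPC trick: 12 x 4gb = 48gb at 1333MHz
    # max capacity, 3 DPC: 18 x 8gb = 144gb at 800MHz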

Blades

HP's bread & butter blade is the BL460c (they claim it's the most popular server in the world, eclipsing the 2U DL380). The BL460c offers 2 (dual or quad core) Xeon 5500s, 12 DIMM slots, 2 hot-swap 2.5" SAS/SATA drives (with embedded RAID controller), and 2 Flex-10 ports.

The BL490c only accepts quad-core Nehalems (no dual-core) and gets 6 more DIMM slots, suiting it better to large VM loads. But it also gives up the BL460c's pair of hot-swap RAID SAS/SATA bays for a couple of non-hot-swap non-RAID SATA SSD bays. Presumably the BL490c doesn't have enough cooling for spinning disks, and they expect you to put it on some kind of SAN anyway.

Flex-10 sounds very slick. Physically they're 10GE ports, but when uplinked to an HP Flex-10 Virtual Connect switch module, each Flex-10 connection appears to the host as 4 independent 10GE interfaces. The administrator can carve up the 10gbps of real bandwidth between the virtual interfaces -- something like NIC trunking/bonding/teaming and OS-based virtual interfaces, but implemented below the OS level. This should be extremely useful for clustering and VMware hosts, where the vendor requires (Microsoft) or runs faster with (VMware) more network devices, or as a simple way of implementing QoS.

It's now easy to get a 2P 12-core system with up to 192gb of RAM, which can host a lot of VMs. In an iSCSI or NAS environment, the BL490c may not even need mezzanine cards to handle its IO.

The HP Virtual Connect Flex-10 Ethernet Module is really a 24-port 10GE switch with 16 internal ports (for blades) and 8 external ports (for uplink & inter-chassis crosslink). This means maximum uplink bandwidth is 80gbps/switch, while total internal bandwidth to blades is 160gbps/switch. If you have several blades which sustain >10gbps of traffic, they'll need to be scattered across chassis to avoid competing for uplink bandwidth -- or perhaps migrated to standalone DL rack servers instead.
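A toy model of the carving and the oversubscription math, just to make the numbers concrete; the FlexNIC names and the 1/2/3/4 split are invented for illustration, and Virtual Connect enforces the real limits in hardware, not in Python:

    PORT_GBPS = 10

    # One physical Flex-10 port carved into 4 virtual NICs (hypothetical split).
    flexnics = {"management": 1, "vmotion": 2, "vm_traffic": 3, "storage": 4}
    assert sum(flexnics.values()) <= PORT_GBPS, "can't allocate more than the physical 10gbps"

    # One Flex-10 module: 16 internal 10GE ports (blades), 8 external 10GE ports (uplinks).
    internal_gbps = 16 * 10   # 160gbps toward the blades
    uplink_gbps = 8 * 10      # 80gbps out of the chassis
    print("oversubscription ratio:", internal_gbps / uplink_gbps)   # 2.0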

They also talked about the BL2x220c high-density blades: 2 2P motherboards in one half-height blade module. Each independent motherboard has 2 (dual or quad core) 5500s, 6 DIMM slots (48gb max), 2 1GE interfaces, and a single non-hot-swap 2.5" SATA drive. To use all the interfaces you need 4 switch modules -- each blade module has 4 GE interfaces total (2 per motherboard), so 2 are routed through the mezzanine connectors. Since you have to remove the whole unit to service either side (including disks), you need to figure out how to handle failures. They look good for HPC clusters, where the job scheduler can work around missing nodes.
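Back-of-the-envelope density for a chassis full of these; the 16 half-height bays per c7000 enclosure are my assumption, while the per-module numbers come from the talk:

    BAYS = 16                 # half-height bays per enclosure (assumed)
    NODES_PER_MODULE = 2      # two independent 2P motherboards per module
    NICS_PER_NODE = 2         # 2 x 1GE per motherboard

    nodes = BAYS * NODES_PER_MODULE        # 32 independent 2P servers per enclosure
    nic_count = nodes * NICS_PER_NODE      # 128 GE links to terminate
    print(nodes, "nodes and", nic_count, "GE interfaces per enclosure")
    # Using all 4 GE interfaces on every module means populating 4 interconnect bays.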

Apparently iSCSI boot and acceleration (CPU offload to NICs) are expected in the G7 Flex-10 NICs, which should be very useful for HPC clusters.

Since this was HP boosterism, there was plenty of poking fun at IBM's less-well-endowed motherboards, with fewer or asymmetrical DIMM slots. And no mention of when 6-core Nehalems will be available in HP blades.

Matrix & Insight Dynamics

HP sells bundles under the BladeSystem Matrix name. This is good, as HP quoting is painfully arcane. Building a specification is a tedious process of tracking down many different subcomponents, some of which have unintelligible names, and then getting pricing from a rep. I have quoted both IBM BladeCenter and HP BladeSystem gear; IBM was complicated, but I eventually built up a spreadsheet that could calculate complete configurations, with part numbers, for one or more entire chassis, letting me fiddle with processor speeds or RAM configuration and get real pricing for review by a reseller.

With HP, I have to give something much vaguer to a rep, who adds lots of unintelligible line items (without which the thing won't work) and sends back pricing. When I complained to an HP rep years ago about how complicated the process was, he explained that HP had once lost a bid (to IBM?) by 25 cents, and decided to unbundle everything it possibly could so the base price would be as low as possible. Once you get the low bid from HP, you add back all the things you actually need (like access to the KVM features of the included iLO hardware) and end up with a higher real price. The unbundling means only professionals can quote moderately complex HP systems, so hopefully Matrix will help. I don't yet know whether purchasing Matrix bundles would require us to buy management software we don't want and won't use.

They also talked quite a bit about Insight Dynamics, a management system for BladeSystem. ID apparently makes it easy to download a template for a moderately complicated constellation of systems (like a multi-server Exchange installation) and have ID work out where to deploy the components (to a combination of physical blades and VMs). I believe someone claimed ID can migrate physical machines to VMs (P2V) and vice-versa (V2P). This competes directly against VMware VirtualCenter.

The idea is that HP Blades and specifically ID get you partway to cloud computing. Amazon & Google do basically effortless provisioning, so HP needed to improve the process of setting up new blades & VMs. Insight Dynamics can provision blades & VMs, although I'm not sure how many people trust it to do that yet...

Miscellany

They also talked about FCoE (Fibre Channel over Ethernet, which Cisco promotes). Fibre Channel protocols are intended to run over lossless SANs, while one of the main purposes of TCP is to compensate for the lossy nature of Ethernet. Apparently CEE (Converged Enhanced Ethernet) provides a lossless layer that enables FCoE to work across longer distances and more hops. It sounds like CEE will be available in 2010/2011. In the meantime, iSCSI looks interesting, especially if you can provide QoS controls (Flex-10?) to keep it from swamping everything else.

They also talked about LeftHand Networks, which HP bought last year. The LeftHand Virtual SAN Appliance is a VMware image that presents any locally accessible storage as iSCSI devices.