A few thoughts and observations from a longtime blade admin…

Depending on the blade manufacturer you're talking about (it sounds like HP), the 
likelihood of the passive enclosure backplane causing a failure is near zero. 
We have 4+ years of ~20,000 blades in c-Class enclosures without a single 
instance of this.

There are pain points in grouping that many servers that tightly, however. Power 
and cooling, as you note, require careful planning.

The newer Cisco enclosure switches (3120G) allow stacking, creating a single 
switch fabric across 8 switches (4 enclosures = 40U). The network folks need to 
design the topology to ensure uptime, since a single stack is also a single 
failure domain. Similarly, the SAN fabric design (for Fibre Channel 
environments) needs to account for these factors. I don't think the SAN 
switches support a merged fabric the way the network side does. We use 
relatively little Fibre Channel in blades and use passthrough modules instead 
of the in-enclosure switches.
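
To make the uptime trade-off concrete, here's a minimal sketch (my own 
illustration, not anything HP or Cisco ships) comparing the worst-case blast 
radius of one big stack versus one stack per switch bay. It assumes 16 blades 
per enclosure, two switch bays per enclosure, and blades dual-homed with one 
NIC to each bay; those assumptions are mine, so adjust for your own layout.

# Rough blast-radius comparison for stacked enclosure switches.
# Assumptions (mine, not from the thread): 16 blades per enclosure,
# two switch bays per enclosure, each blade dual-homed with one NIC per bay.

BLADES_PER_ENCLOSURE = 16
ENCLOSURES = 4

def blades_cut_off(stacks):
    """stacks: list of sets of (enclosure, bay) switch positions per stack.
    Returns the worst-case number of blades that lose *all* connectivity
    if one entire stack (control plane and all) goes down."""
    worst = 0
    for failed in stacks:
        lost = 0
        for enc in range(ENCLOSURES):
            bays_alive = [bay for bay in (1, 2) if (enc, bay) not in failed]
            if not bays_alive:  # both NIC paths were in the failed stack
                lost += BLADES_PER_ENCLOSURE
        worst = max(worst, lost)
    return worst

# One 8-switch stack spanning both bays of all 4 enclosures:
one_big_stack = [{(e, b) for e in range(ENCLOSURES) for b in (1, 2)}]

# Two 4-switch stacks, one per bay, so every blade keeps a path if either dies:
per_bay_stacks = [{(e, 1) for e in range(ENCLOSURES)},
                  {(e, 2) for e in range(ENCLOSURES)}]

print("single stack, worst-case blades offline:", blades_cut_off(one_big_stack))    # 64
print("per-bay stacks, worst-case blades offline:", blades_cut_off(per_bay_stacks)) # 0

The numbers themselves don't matter much; the point is that how you group the 
stacks determines whether a stack-wide fault costs you one NIC path or an 
entire 4-enclosure block.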

Finally, the integrated Lights-Out (iLO) management cards require connectivity 
to the "Onboard Administrator" (OA) enclosure management module. We've had a 
few edge cases where hardware problems in one OA caused outages of the iLOs, 
even with a redundant module installed. Not a service outage, fortunately, but 
HP uses the OA for power management. That's a great feature if you're 
power/cooling constrained (see the power/cooling point above!), since you can 
underclock less-utilized servers to reduce power consumption. However, it's 
unclear to me what happens to that power management when the OA fails.
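
As an aside, you don't need the OA just to see where the power is going. Below 
is a rough sketch that polls each blade's iLO over IPMI/DCMI and flags low-draw 
candidates for capping or underclocking. The hostnames, credentials, and the 
150 W threshold are made up for illustration, and it assumes the iLOs expose 
DCMI power readings over IPMI-on-LAN; check whether yours do before relying on 
it.

# Rough sketch: poll per-blade power draw via ipmitool's DCMI power reading
# and flag lightly loaded blades as candidates for power capping.
# HOSTS, credentials, and the 150 W threshold are illustrative only.
import re
import subprocess

HOSTS = ["blade01-ilo.example.com", "blade02-ilo.example.com"]  # hypothetical iLO names
USER, PASSWORD = "monitor", "secret"   # example read-only IPMI account
CAP_CANDIDATE_WATTS = 150              # arbitrary "lightly loaded" threshold

def power_reading(host):
    """Return instantaneous power draw in watts, or None if unreadable."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, timeout=30,
    ).stdout
    m = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(m.group(1)) if m else None

for host in HOSTS:
    watts = power_reading(host)
    if watts is None:
        print(f"{host}: no DCMI power reading (older firmware?)")
    elif watts < CAP_CANDIDATE_WATTS:
        print(f"{host}: {watts} W - candidate for a power cap")
    else:
        print(f"{host}: {watts} W")

In principle those readings come from the iLO itself rather than the OA, so 
they should survive an OA failure, though I haven't tested that failure mode.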

-Adrian


On Apr 3, 2013, at 11:46 AM, Matt Lawrence <m...@technoronin.com> wrote:

> This is a bit of a rant, so adjust your filters accordingly.
> 
> I'm currently doing some work in a not-really-production datacenter (unless 
> you ask the developers) that has a variety of systems.  Some of the systems 
> I'm dealing with are the 4-servers-in-2U variety.  It's a neat idea, but great 
> care needs to be taken to avoid problems.  One of the big issues is cable 
> density: the systems I'm managing have a BMC connection, 2x 1Gb connections, 
> 2x 10Gb connections and a KVM dongle.  That's 7 things plugged into the back 
> of each server (KVM is VGA + USB).  Multiply by 4 and that's 28 cables per 
> 2U, plus 2x power cables.  That's a lot of cables, and they aren't running in 
> neat rows like a 48-port switch.
> 
> Adding to the problem is the fact that the disks plug in to the front of the 
> system and the server electronics plug in from the back of the server, right 
> through that rat's nest of cabling.  It's a challenge.  If you consider 
> yourself OK at cabling, you don't have anywhere near the skills to do cabling 
> at this density.  Typical cabling standards are not adequate for these kinds 
> of setups.  Mediocre cabling also really blocks the air flow; the systems I'm 
> dealing with are nice and toasty at the back of the cabinet.
> 
> Another option I'm dealing with is blade enclosures.  They manage to get 16 
> servers into 10U of rack space and (at least here) they have switches built 
> in.  This means I only have 7 network cables running to a top-of-rack 
> switch/patch panel.  So much easier to deal with.  The blades are accessible 
> via the front of the rack, which is also much easier.  The enclosures have 
> built-in management, which again makes things easier.  A downside is that 
> certain failures require taking down the entire enclosure to fix, so you lose 
> 16 servers instead of the 4 in the other kind of high-density server.  I have 
> never been a big fan of blade enclosures, but I'm starting to come around.
> 
> Of course, one issue that too few people think about until it is too late is 
> the issue of power density and cooling capacity.  Being able to put 4 servers 
> in 2U sounds really nifty until you discover you can only power and cool half 
> a rack of them.
> 
> This concludes my rant for today.  Maybe.
> 
> -- Matt
> It's not what I know that counts.
> It's what I can remember in time to use.

_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/
