> 
> I'm not saying the problems are the same, but I am saying that a
> backplane making cooling "hard" is not a good excuse, especially when
> the small empty chassis costs $10K+.


And some vendors already do it, at least sometimes.

"The 9-slot Cisco Catalyst 6509 Enhanced Vertical Switch (6509-V-E) provides 
[stuff]. It also provides front-to-back airflow that is optimized for hot and 
cold aisle designs in colocated data center and service provider deployments 
and is compliant with Network Equipment Building Standards (NEBS) deployments."

It only took 298 years from the inception of the 6509 to get a front-to-back 
version. If it can be done with that oversized thing, it can certainly be done 
on a 7200, an XMR, a Juniper box, or whatever else you fancy.

There is no good excuse. The datacenter of today (and yesterday) really needs 
front-to-back cooling; the datacenter of tomorrow outright demands it.

If vendors cared, they'd do it. The problem is the disconnect between 
datacenter designer, datacenter builder, datacenter operator, IT operator, and 
IT manufacturer. No one is smart enough, yet, to say, "if you want to put that 
hunk of crap in my datacenter, it needs to suck in the front and blow out the 
back, otherwise my PUE will be 1.3 instead of 1.2 and you will be to blame for 
my oversized utility bills."
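
To put rough numbers on that PUE difference, here's a back-of-the-envelope 
sketch in Python. The 1 MW IT load and $0.10/kWh rate are assumptions for 
illustration, not figures from anywhere in this thread:

    # Back-of-the-envelope: annual cost delta between PUE 1.2 and 1.3.
    # PUE = total facility power / IT equipment power, so
    # facility power = IT load * PUE. All inputs are assumed examples.
    it_load_kw = 1000.0    # assumed 1 MW of IT load
    rate_per_kwh = 0.10    # assumed $0.10/kWh utility rate
    hours_per_year = 8760

    def annual_cost(pue):
        return it_load_kw * pue * hours_per_year * rate_per_kwh

    delta = annual_cost(1.3) - annual_cost(1.2)
    print(f"Extra utility spend at PUE 1.3 vs 1.2: ${delta:,.0f}/year")
    # -> Extra utility spend at PUE 1.3 vs 1.2: $87,600/year

At an assumed megawatt of IT load, that tenth of a PUE point is real money.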

Perhaps when a bean-counter paying the power bill sees the difference, it will 
matter. I dunno.

I'll crawl back under my rock now.






