Tom Ammon wrote:
> David,
>
> I'm very interested in hearing more details on this, if you want to
> share...
>
> Tom
>
> -----Original Message-----
> From: David [mailto:da...@davidswafford.com]
> Sent: Wednesday, February 22, 2012 4:07 PM
> To: Jack Morgan
Our reason, btw, was to cut down on cabling/switch costs; it starts to add up
when you consider how many blades get eaten by 1Gb copper. We're going to DL580s and
a few HP chassis. A chassis used to eat nearly 64 copper 1Gb and 32 Fibre
Channel connections. On FCoE/CNAs, we're literally talking 4 x 1
Yep, we're doing FCoE in an EMC Symmetrix, ESX, Nexus environment. All of our
FCoE is over 10Gb CNAs. We are having good results, though we hit an odd bug
on QLogic cards initially on our HP DL580s (affected twinax only -- if you
dropped an uplink, i.e. testing failover, throughput dropped to
> FCoE was until very recently the only way to do centralized block storage
> to the Cisco UCS server blades, so I'd imagine it's quite widely adopted.
> That said, we don't run FCoE outside of the UCS - its uplinks
> to the SAN are just regular FC.
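For anyone who hasn't touched it yet, the switch-side setup behind deployments like the ones above is pretty small. Below is a minimal sketch of the usual Nexus 5000-style config for binding a virtual Fibre Channel (vfc) interface to the 10Gb port a server CNA plugs into -- the VLAN/VSAN numbers and interface are made up for illustration, not taken from anyone's actual setup:

feature fcoe
!
! dedicated FCoE VLAN mapped to a VSAN
vlan 100
  fcoe vsan 100
vsan database
  vsan 100
!
! virtual FC interface bound to the CNA-facing 10Gb port
interface vfc10
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 100 interface vfc10
!
! the Ethernet port trunks the FCoE VLAN alongside normal data VLANs
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  no shutdown

From there the CNA's FC side shows up on the fabric like any other initiator, and zoning/masking to the array works the same as with native FC.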
Agreed, very much the only implementation I ha
On 2/22/2012 7:02 AM, Jack Morgan wrote:
Does anyone know of any company or organization deploying FCoE[1] in a
production environment? I'm curious how widely adopted this technology is.
[1] http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
http://fcoe.com/
Thanks,
I do
what wou