Twofold, really. Firstly, I remember the headaches I used to have
configuring Broadcom cards properly under Debian/Ubuntu, and the sweetness
of using an Intel NIC by comparison. The bottom line for me was that Intel
drivers have been around longer than Broadcom drivers, so it made sense to
ensure we had Intel NICs on the server. Secondly, I asked Andy Bennett from
Nexenta, who told me it would make sense - always good to get a second
opinion :-)

There were/are reports all over Google about Broadcom issues with
Solaris/OpenSolaris, so I didn't want to risk it. A couple of hundred for a
quad-port gigabit NIC is worth it when the entire solution is 90K+.

Sometimes (like the issue with bus resets when certain brands/firmware
revisions of SSDs are used) the knowledge comes from people you work with
(Nexenta rode to the rescue here again - plug! plug! plug!) :-)

These are deployed at a couple of universities and at a very large data
capture/marketing company I used to work for, so I know they work really
well, and (plug! plug! plug!) I know the dedicated support I got from the
Nexenta guys.

The difference, as I see it, is that OpenSolaris/ZFS/DTrace/FMA let you
build your own solution to your own problem. Thinking of storage in a
completely new way - instead of "just a block of storage", it becomes an
integrated part of performance engineering - it certainly has been for the
last two installs I've been involved in.

I know why folks want a "certified" solution from the likes of Dell/HP etc.,
but from my point of view (and all points of view are valid here), I know I
can deliver a cheaper, more focused (and when I say that I'm not just doing
some marketing BS) solution for the requirement at hand. It's sometimes a
struggle to get customers/end-users to think of storage as more than just
storage. There's quite a lot of entrenched thinking to get around/over in
our field (try getting a Java dev to think clearly about thread handling and
massive SMP drawbacks, for example).

Anyway - I'm not trying to start an argument, but it's always interesting to
find out why someone went for certain solutions over others.

My 2p. YMMV.

*goes off to collect cheque from Nexenta* ;-)

---
W. A. Khushil Dep - khushil....@gmail.com -  07905374843
Windows - Linux - Solaris - ZFS - Nexenta - Development - Consulting &
Contracting
http://www.khushil.com/ - http://www.facebook.com/GlobalOverlord

On 6 January 2011 13:28, Edward Ned Harvey <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> > From: Khushil Dep [mailto:khushil....@gmail.com]
> >
> > I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
> > R610/R710's and I've not found any issues so far. I always make a point
> of
> > installing Intel chipset NIC's on the DELL's and disabling the Broadcom
> ones
> > but other than that it's always been plain sailing - hardware-wise
> anyway.
>
> "not found any issues," "except the broadcom one which causes the system to
> crash regularly in the default factory configuration."
>
> How did you learn about the broadcom issue for the first time?  I had to
> learn the hard way, and with all the involvement of both Dell and Oracle
> support teams, nobody could tell me what I needed to change.  We literally
> replaced every component of the server twice over a period of 1 year, and I
> spent mandays upgrading and downgrading firmwares randomly trying to find a
> stable configuration.  I scoured the internet to find this little tidbit
> about replacing the broadcom NIC, and randomly guessed, and replaced my nic
> with an intel card to make the problem go away.
>
> The same system doesn't have a problem running RHEL/centos.
>
> What will be the new problem in the next line of servers?  Why, during my
> internet scouring, did I find a lot of other reports, of people who needed
> to disable c-states (didn't work for me) and lots of false leads indicating
> firmware downgrade would fix my broadcom issue?
>
> See my point?  Next time I buy a server, I do not have confidence to simply
> expect solaris on dell to work reliably.  The same goes for solaris
> derivatives, and all non-sun hardware.  There simply is not an adequate
> qualification and/or support process.
>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss