> 
> Nicholas Harring wrote:
> >> Hi,
> >>
> >> when going the "Maildir on NFS for clustering" route, is using
> >> NetApp Filers still considered "state of the art", or has something
> >> better emerged?
> >>
> > There are plenty of other NAS options; see EMC for one vendor (also
> > not cheap). Dell offers NAS, and I believe HP does as well. I'm not
> > sure how much clustering they offer, or what sort of feature set
> > they have compared to NetApp.
> >
> 
> 
> These are all basically W2K3 servers ("Windows Storage Server").
> (EMC uses Windows even in the high-end gear, IIRC, but not
> necessarily WSS.)
> 
> I'm not going to gamble with the NFS performance and compatibility
> issues of "Microsoft-flavoured NFS".
Yep, they're all based on the SAK (Microsoft's Server Appliance Kit)
and all share the same flaws, NFS being the biggest of the bunch; even
stability isn't entirely where it should be compared to the
competition. I figured I'd throw it out there because some people seem
to be having good luck with them, and I wasn't sure how familiar you
were with them.

We came to the same conclusion before making our purchasing decision on
the NetApp gear, for pretty much the same reasons you cited.
> 
> 
> >> From a price standpoint, I'd rather use FreeBSD, but the fact that
> >> there's no real volume manager makes it unusable for our purposes.
> >> I've actually mailed BlueArc about their hardware, but despite not
> >> being in the black, they didn't feel it necessary to answer my
> >> query.
> >>
> > For a smaller cluster, or one that doesn't have hard uptime
> > commitments in the 4- or 5-nines range, I'd say that a *nix solution
> > would work just fine. If you laid something like Veritas Clustering
> > on top of it, then moving into the "real" HA range should also be
> > quite possible and supportable.
> >
> >
> 
> I've also thought about buying an X4100, fitting it with a dual QLA,
> and then exporting the mail storage via NFS from that (using our HP
> SAN as the backend).
> But using a NetApp would allow us to have our hosting operations
> spread over two completely independent technologies (Web -> HP EVA,
> Mail -> NetApp), avoiding a complete loss of service should one of
> the two fail for whatever reason (as a competitor who recently put
> all their eggs in a single basket learned the hard way...).
That sounds like a good division of labor and mitigation of risk. While
I normally don't advocate consolidation, the NetApps really do make it a
safe and inviting option. Our filers currently have 797 days of uptime
without a single hiccup in service (the only devices we own that have
been more solid are a pair of Cisco Catalyst 6500 switches). This
obviously means I've not been keeping up with ONTAP releases or firmware
upgrades, but since none has addressed anything I'd need, this has also
been quite safe.
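
If you do end up going the self-built NFS route, the export side itself
is pretty simple; the main thing is to export sync and mount hard, so a
failover stalls clients rather than corrupting mail state. A minimal
sketch, assuming a Linux NFS server and clients (the hostnames and
paths below are made up for illustration, not from any real setup):

    # /etc/exports on the NFS head
    /export/mailstore  mx1(rw,sync) mx2(rw,sync) pop1(rw,sync) pop2(rw,sync)

    # client-side /etc/fstab entry on the mail servers
    filer:/export/mailstore  /var/mailstore  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0

Maildir's rename-based delivery is what makes this workable over NFS in
the first place, since new mail never depends on NFS locking.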
> 
> >> Does anybody have any sizing information? NetApp offers a lot of
> >> hardware, and even the entry-level stuff is not cheap.
> >> I'd like to know how many deliveries/h one can make, e.g. with a
> >> small FAS270.
> >>
> > I'm running 8 servers (4 SMTP, 4 POP/IMAP) on an F820c cluster doing
> > around 600k messages daily. I don't have any hourly stats at the
> > moment, but about 80% of that load falls within a 10-12 hour window,
> > with the remainder spread evenly across the other 12-14 hours. I'm
> > currently upgrading my cluster to FAS3050s, not for performance
> > reasons but rather for storage consolidation throughout my network.
> >
> >
> 
> 600k deliveries/day?
> How much room is there until the NetApp is maxed out?
I'm peaking at around 70% CPU during my busiest periods, doing around
8k NFS ops/second. To me that means I'm nearly tapped out, but only
because I'm not willing to load these up to the point where a head
failure would mean performance degradation when the other head takes
over. I believe the FAS line is a good bit faster across the board than
the old F8xx line, so I'd expect even the lowest-end model to handle
more than this load without buckling.
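
For anyone wanting to put those figures on a per-second basis, a quick
back-of-the-envelope (the daily total and traffic shape are from above;
the 11-hour busy window is just my midpoint of the 10-12 figure):

    # rough peak delivery rate from the daily numbers
    daily_msgs = 600000                 # deliveries/day (from above)
    busy_frac  = 0.80                   # ~80% of traffic in the busy window
    busy_hours = 11                     # midpoint of the 10-12 hour window
    peak_rate  = daily_msgs * busy_frac / (busy_hours * 3600.0)
    print("%.1f msgs/sec at peak" % peak_rate)   # ~12.1

Set against the ~8k NFS ops/second, that suggests the vast majority of
those ops are POP/IMAP reads and Maildir directory scans rather than
the deliveries themselves.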

We did some back-of-the-envelope figuring when buying the FAS3050 and
reckoned that if we moved to GigE and bonded our Ethernet connections to
get a 3Gbit link from the filers to the core switches, we could probably
scale to ~8M deliveries/day. That number should be taken with a big
grain of salt, though, because it's a lot of scaling without any
empirical data between where we are now and where we'd end up.
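
To show why it's the op rate and head CPU rather than the wire that
worry me, here's the bandwidth side of that estimate with some assumed
inputs (average message size and read counts are guesses on my part,
not measurements):

    # bandwidth sanity check on the ~8M/day target
    msgs_per_day  = 8000000
    peak_rate     = msgs_per_day * 0.8 / (11 * 3600.0)  # same shape as today: ~162 msgs/sec
    avg_msg_bytes = 32 * 1024                           # assume ~32KB average message
    reads_per_msg = 2                                   # assume each message is read about twice
    peak_bps      = peak_rate * avg_msg_bytes * (1 + reads_per_msg) * 8
    print("%.2f Gbit/s at peak" % (peak_bps / 1e9))     # ~0.13

That would saturate a single 100Mbit uplink, which is what drives the
GigE move, but it's nowhere near the 3Gbit trunk; the real unknown is
whether the filer heads can sustain the corresponding NFS op rate,
hence the grain of salt.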

Cheers,
Nick
