Thanks guys. It seems the problem is even more difficult than I thought, and
that there is no real measure of software quality for the ZFS stack vs.
others when the hardware underneath is held constant. I will be using ECC
RAM, since you mentioned it, and I will shift to "enterprise" disks (I had
initially thought ZFS would always recover from cheap SATA disks, making
other disks only faster but not also safer), so I am now moving to
10k RPM SAS disks.

So I am changing my question to: "Do you see any obvious problems with the
following setup I am considering?"

- CPU: 1x Xeon quad-core E5410, 2.33GHz, 12MB cache, 1333MHz FSB
- RAM: 16GB ECC FB-DIMM 667MHz (8 x 2GB)
- Disks: 10x Seagate 400GB 10K RPM 16MB-cache SAS HDD

The 10 disks will be allocated as: 2 hot spares + 2 parity (raidz2) + 6
data => 2.4TB usable space
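The capacity arithmetic above can be sketched as a quick check (numbers
taken from the layout in this post; this is raw usable space before any
filesystem overhead):

```python
# Rough usable-capacity check for the proposed 10-disk raidz2 layout.
disk_gb = 400
total_disks = 10
spares = 2          # hot spares, not part of the vdev
parity = 2          # raidz2 tolerates two failed disks per vdev
data_disks = total_disks - spares - parity
usable_gb = data_disks * disk_gb
print(data_disks, usable_gb)  # 6 2400 -> ~2.4TB
```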

* Do I need more CPU power? How do I measure that? What about RAM?
* Now that I'm using ECC RAM and enterprise disks, does this put the
solution on par with, say, a low-end NetApp 2020?

I will be replicating the important data daily to a Linux box, just in case
I hit a wonderful zpool bug. Any final advice before I take the blue pill ;)
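For the daily replication to the Linux box, one possible shape is to freeze
a snapshot and rsync its contents over ssh (the pool name, paths, and host
below are hypothetical; since the receiver is Linux, rsync is assumed rather
than zfs send/receive, which would need a ZFS-capable target):

```shell
#!/bin/sh
# Sketch of a nightly push to a Linux backup host -- names are placeholders.
SNAP="backup-$(date +%Y%m%d)"
SRC_FS="tank/data"
DEST="backup@linuxbox:/srv/backups/data/"

# zfs snapshot "${SRC_FS}@${SNAP}"   # take a consistent point-in-time view
# rsync -a --delete "/tank/data/.zfs/snapshot/${SNAP}/" "${DEST}"
# zfs destroy "${SRC_FS}@${SNAP}"    # optionally prune after a successful sync

# Dry-run output so the sketch is runnable without a pool or remote host:
echo "would sync ${SRC_FS}@${SNAP} to ${DEST}"
```

Run from cron once a night; the snapshot keeps rsync from copying files that
change mid-transfer.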

Thanks a lot


On Tue, Sep 30, 2008 at 8:40 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "ak" == Ahmed Kamal <[EMAIL PROTECTED]> writes:
>
>    ak> I need to answer and weigh against the cost.
>
> I suggest translating the reliability problems into a cost for
> mitigating them: price the ZFS alternative as two systems, and keep
> the second system offline except for nightly backup.  Since you care
> mostly about data loss, not availability, this should work okay.  You
> can lose 1 day of data, right?
>
> I think you need two zpools, or zpool + LVM2/XFS, some kind of
> two-filesystem setup, because of the ZFS corruption and
> panic/freeze-on-import problems.  Having two zpools helps with other
> things, too, like if you need to destroy and recreate the pool to
> remove a slog or a vdev, or change from mirroring to raidz2, or
> something like that.
>
> I don't think it's realistic to give a quantitative MTDL for loss
> caused by software bugs, from netapp or from ZFS.
>
>    ak> The EMC guy insisted we use 10k Fibre/SAS drives at least.
>
> I'm still not experienced at dealing with these guys without wasting
> huge amounts of time.  I guess one strategy is to call a bunch of
> them, so they are all wasting your time in parallel.  Last time I
> tried, the EMC guy wanted to meet _in person_ in the financial
> district, and then he just stopped calling so I had to guesstimate his
> quote from some low-end iSCSI/FC box that Dell was reselling.  Have
> you called netapp, hitachi, storagetek?  The IBM NAS is netapp so you
> could call IBM if netapp ignores you, but you probably want the
> storevault which is sold differently.  The HP NAS looks weird because
> it runs your choice of Linux or Windows instead of
> WeirdNASplatform---maybe read some more about that one.
>
> Of course you don't get source, but it surprised me these guys are
> MUCH worse than ordinary proprietary software.  At least netapp stuff,
> you may as well consider it leased.  They leverage the ``appliance''
> aspect, and then have sneaky licenses, that attempt to obliterate any
> potential market for used filers.  When you're cut off from support
> you can't even download manuals.  If you're accustomed to the ``first
> sale doctrine'' then ZFS with source has a huge advantage over netapp,
> beyond even ZFS's advantage over proprietary software.  The idea of
> dumping all my data into some opaque DRM canister lorded over by
> asshole CEO's who threaten to sick their corporate lawyers on users on
> the mailing list offends me just a bit, but I guess we have to follow
> the ``market forces.''
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>