Darren J Moffat wrote:
Torrey McMahon wrote:
Darren J Moffat wrote:
So everything you are saying seems to suggest you think ZFS was a waste of engineering time, since hardware RAID solves all the problems?

I don't believe it does, but I'm no storage expert and maybe I've drunk too much Kool-Aid. I'm a software person, and for me ZFS is brilliant: it is so much easier than managing any of the hardware RAID systems I've dealt with.


ZFS is great... for the systems that can run it. However, any enterprise datacenter is going to be made up of many, many hosts running many different OSes. In that world you're going to consolidate on large arrays and use the features of those arrays where they cover the most ground. For example, if I have 100 hosts all running different OSes and apps, and I can perform my data replication and redundancy (in most cases RAID) in one spot, then it will be much more cost-efficient to do it there.

But you still need a local file system on those systems in many cases.

So, back to where we started, I guess: how do we effectively use ZFS to benefit Solaris (and the other platforms it gets ported to) while still using hardware RAID, because you have no choice but to use it?
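To make that concrete, one way people seem to handle it is to take two RAID-protected LUNs from the array and let ZFS mirror them, so ZFS's checksums can not only detect bad blocks but repair them from the other side. A rough sketch (the device names here are made up for illustration):

    # Two LUNs exported by the array; each is already RAID-protected
    # internally, and ZFS mirrors across them.
    zpool create tank mirror c2t0d0 c2t1d0

    # ZFS checksums every block; because the pool is a mirror it can
    # repair any block that fails its checksum during a scrub or read.
    zpool scrub tank
    zpool status -v tank

If you can only afford to hand ZFS a single LUN, you still get end-to-end checksums and the easy administration, just not the self-healing.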



Too many variables in an overall storage environment. This is why I always jump on people who say, "Dude! You've got ZFS. Just use JBODs." That advice isn't based in any reality beyond a brand-new workstation or an SMB server... and we don't really target that market these days.

You need to clearly define what the environment is, what the data growth will look like, what apps are going to be deployed, what the replication requirements are, etc. That's the way things have been for years. ZFS just changes a couple of the variables. It doesn't eliminate them or turn the equation into anything easier to solve.


