Thank you all for the information; things are much clearer to me now.
"Sequential reads" should scale with the number of disks in the entire
zpool (regardless of the number of vdevs), while "random reads" scale with
just the number of vdevs (i.e., the idea I had before only applies to
random reads), which I am much happier with. Everything on my system should
be mostly sequential, since files rarely get edited in place (no virtual
machine type workloads); when something changes it usually means deleting
the old file and adding the "updated" file.
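
As a back-of-the-envelope check on that scaling, here is the rough
arithmetic I'm working from (the per-drive figures are my own guesses,
roughly 100 MB/s streaming and 100 IOPS for a consumer drive, not
measurements):

    # Rough scaling sketch: sequential reads scale with data disks,
    # random reads scale with vdevs. Per-drive numbers are assumptions.
    PER_DISK_SEQ_MB = 100   # assumed streaming MB/s per drive
    PER_DISK_IOPS   = 100   # assumed random-read IOPS per drive

    def raidz_estimate(vdevs, disks_per_vdev, parity):
        data_disks = vdevs * (disks_per_vdev - parity)
        seq_mb  = data_disks * PER_DISK_SEQ_MB   # streaming read estimate
        rand_io = vdevs * PER_DISK_IOPS          # random read estimate
        return seq_mb, rand_io

    # Same 20 drives, two different layouts:
    print(raidz_estimate(2, 10, 2))   # 2 x 10-disk raidz2 -> (1600, 200)
    print(raidz_estimate(4, 5, 1))    # 4 x 5-disk raidz1  -> (1600, 400)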

I read, though, that ZFS does not have a "defragmentation" tool; is this
still the case? With such a performance difference between sequential
reads and random reads for raidzN vdevs, it would seem a defragmentation
tool would be very high on ZFS's TODO list ;).

>    Is the enclosure just a JBOD? If it is not, can it present drives
&
> I assume your other hardware won't be a bottleneck?
> (PCI buses, disk controllers, RAM access, etc.)

A little more information about my system: it is a JBOD. The disks connect
to a SAS-2 expander (RES2SV240), which has a single connection to a Tyan
motherboard with an LSI SAS2008 controller chip built in. The CPU is an i3
with a 5 GT/s DMI link (DMI is new to me vs. FSB). RAM is server-grade
unbuffered ECC DDR3-1333 in 8GB sticks. It is a dedicated machine that will
do nothing but serve files over the network.

To my understanding the network or the disks themselves should be my
bottleneck: the SAS-2 connection between the SAS expander and the mobo
should be 24Gbit/s or 3GB/s raw (1.5GB/s if it falls back to SAS-1), and
from what I read the 5 GT/s DMI link should be good for roughly 2 GB/s in
each direction, so I don't think that affects me. I was also curious,
though: does anyone know if the mobo <-> SAS expander link will still
establish a SAS-2 connection (if they both support SAS-2) when the
backplanes only support SAS-1 / SATA 3Gb/s? I never looked up the backplane
part numbers in my Norco, but I think they support SATA 6Gb/s, so I assume
they support SAS-2. In essence the SAS expander <-> HDD links won't run
over 3Gb/s per port anyway, so as long as the SAS expander <-> mobo link
establishes its own SAS-2 connection regardless of what the expander <->
HDD links do, I don't even have to think about it. 1.5GB/s (SAS-1) is still
above my optimal max anyway.
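
Those 3GB/s / 1.5GB/s figures are raw line rates; since SAS-1/SAS-2 use
8b/10b encoding, my rough usable numbers for a 4-lane wide port look like
this:

    # SAS wide-port bandwidth, assuming 4 lanes and 8b/10b line encoding.
    def wide_port_gbytes(gbit_per_lane, lanes=4):
        raw_gbit = gbit_per_lane * lanes      # raw line rate, Gbit/s
        return raw_gbit * (8.0 / 10.0) / 8.0  # usable GB/s after encoding

    print(wide_port_gbytes(6))   # SAS-2: 24 Gbit/s raw, ~2.4 GB/s usable
    print(wide_port_gbytes(3))   # SAS-1: 12 Gbit/s raw, ~1.2 GB/s usable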

In essence, if the drives can provide it (and ignoring the network
interface), I think the theoretical limit is 3GB/s. I mentioned 1.25GB/s
for 10GigE as the max I am looking at, but I'd be happy with anywhere
between 500MB/s and 1GB/s for sequential reads of large files (I don't
really care much about writes, and hopefully random reads do not happen
very often; I will test with iopattern).
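
The drive-count arithmetic behind that target, again assuming (my guess,
not a measurement) roughly 100 MB/s sequential per consumer drive:

    import math

    # Data disks needed to sustain a target streaming read rate,
    # assuming ~100 MB/s sequential per consumer drive.
    def data_disks_needed(target_mb_s, per_disk_mb_s=100):
        return math.ceil(target_mb_s / per_disk_mb_s)

    print(data_disks_needed(500))    # 5 data disks
    print(data_disks_needed(1000))   # 10 data disks
    print(data_disks_needed(1250))   # 13 data disks (~10GigE line rate)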

> What is the uncorrectable
> error rate on these 3TB drives? What is the real random I/Ops
> capability of these 3TB drives?

I'm unsure of these myself; all the other parts have arrived or are en
route, but I have not actually bought the HDDs yet, so I can still choose
almost anything. It will probably be the cheapest consumer drives I can
get, though (probably "Seagate Barracuda Green ST3000DM001"s or "Hitachi
5K3000 3TB"s). The 1.5TB drives I have in my old system are pretty much the
same thing.

>    How much space do you _need_, including reasonable growth?

My old system is 9.55TB and almost full, and I have about 3TB spread out
elsewear. This was setup about 5years ago. With the 20 disk enclosure I'm
thinking about 30TB usuable space (but maybe only usign 15 disks at first),
and hoping it'll last for another 5 years.
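
The usable-space arithmetic I'm playing with looks roughly like this (the
layouts are only examples, nothing is decided, and this ignores ZFS
metadata overhead and the TB vs. TiB difference):

    # Usable capacity for a few candidate layouts of 3TB drives.
    def usable_tb(vdevs, disks_per_vdev, parity, drive_tb=3):
        return vdevs * (disks_per_vdev - parity) * drive_tb

    print(usable_tb(2, 10, 2))   # 20 disks as 2 x 10-disk raidz2 -> 48 TB
    print(usable_tb(3, 5, 1))    # 15 disks as 3 x 5-disk raidz1  -> 36 TB
    print(usable_tb(3, 5, 2))    # 15 disks as 3 x 5-disk raidz2  -> 27 TB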

>    How did you measure this?

The ATTO benchmark is what I used on the local machine for the 500MB/s
number. For small transfer sizes (1kB to 16kB) throughput is low (50MB/s to
150MB/s); for larger sizes (256kB and up) it reads at ~550MB/s. This is
hardware RAID5, though. Over the 1Gbit network, Windows 7 consistently gets
up to ~100MB/s when writing to or reading from the RAID5 share.

>    What OS? I have a 16 CPU Solaris 10 SPARC server with 16 GB of RAM

The new ZFS system's OS will probably be OpenIndiana with the v28 zpool. I
have also been looking at FreeNAS (FreeBSD) and am a little up in the air
on which to choose.


Thank you all for the information. I will very likely create two zpools
(one for the 1.5TB drives and one for the 3TB drives). Initially I thought
that down the road, if the pool ever fills up (probably 5+ years out), I
would start swapping the 1.5TB drives for 3TB drives and let the small vdev
"expand" once all were replaced, but I didn't realize there could be
performance problems from the block-size differences between today's 1.5TB
drives and the 3TB+ drives of ~5 years from now.
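
My (possibly wrong) understanding of that block-size issue: a vdev created
on today's 512-byte-sector drives ends up aligned for 512-byte blocks
(ashift=9, as I understand it), and a future 4K-sector replacement drive
then has to emulate those small writes with read-modify-write. A toy sketch
of what that costs, purely illustrative:

    # Toy illustration: physical 4K-sector operations needed when a
    # 4K-native drive services writes from a vdev aligned for 512B blocks.
    def physical_4k_ops(write_bytes, aligned):
        sectors = -(-write_bytes // 4096)   # ceil division to 4K sectors
        # an unaligned sub-sector write forces a read-modify-write cycle
        return sectors if aligned else sectors * 2

    print(physical_4k_ops(512, aligned=False))   # 2 ops for a 512B write
    print(physical_4k_ops(4096, aligned=True))   # 1 op for an aligned 4K write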