Re: [zfs-discuss] Data balance across vdevs

2009-11-23 Thread Jesse Stroik
Erik and Richard: thanks for the information -- this is all very good stuff. Erik Trimble wrote: "Something occurs to me: how full is your current 4-vdev pool? I'm assuming it's not over 70% or so." Yes, by adding another 3 vdevs, any writes will be biased towards the "empty" vdevs, but that…
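
A quick way to see how full each vdev is, and how skewed the pool has become after the expansion, is zpool iostat -v, which reports allocated and free space per top-level vdev. A minimal sketch, assuming a pool named tank (the pool name is a placeholder, not from the thread):

    # Per-vdev capacity; comparing 'alloc' vs 'free' across the
    # original 4 vdevs and the 3 new ones shows the imbalance.
    zpool iostat -v tank

    # Overall pool usage; write allocation tends to degrade once
    # the pool passes roughly 70-80% full.
    zpool list tank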

Re: [zfs-discuss] Data balance across vdevs

2009-11-20 Thread Jesse Stroik
Bruno, Bruno Sousa wrote: "Interesting, at least to me, the part where 'this storage node is very small (~100TB)' :)" Well, that's only as big as two x4540s, and we have lots of those for a slightly different project. Anyway, how are you using your ZFS? Are you creating volumes and pres…
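
The "creating volumes" question presumably distinguishes zvols (block devices, typically exported over iSCSI) from plain ZFS filesystems. A minimal sketch of both, assuming a pool named tank and a 2009-era OpenSolaris system where the shareiscsi property was still available (dataset names are hypothetical):

    # Plain filesystem dataset
    zfs create tank/scratch

    # A 1 TB zvol, then exposed as an iSCSI target
    zfs create -V 1T tank/vol01
    zfs set shareiscsi=on tank/vol01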

Re: [zfs-discuss] Data balance across vdevs

2009-11-20 Thread Jesse Stroik
There are, of course, job types where you use the same set of data for multiple jobs, but having even a small amount of extra memory seems to be very helpful in that case, as you'll have several nodes reading the same data at roughly the same time. Yep. More, faster memory closer to the cons…
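
The "extra memory" benefit here is the ZFS ARC: once the first job has read a file, subsequent readers are served from RAM rather than disk. On Solaris-era systems the ARC can be inspected via kstat; a minimal sketch (the egrep filter is only illustrative):

    # ARC size and hit/miss counters; a high hit rate confirms
    # repeated reads are being absorbed by memory, not the vdevs.
    kstat -m zfs -n arcstats | egrep 'size|hits|misses'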

Re: [zfs-discuss] Data balance across vdevs

2009-11-20 Thread Jesse Stroik
Thanks for the suggestions thus far. Erik: "In your case, where you had a 4-vdev stripe and then added 3 vdevs, I would recommend re-copying the existing data to make sure it now covers all 7 vdevs." Yes, this was my initial reaction as well, but I am concerned with the fact that I do not k…
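
ZFS has no rebalance command; the usual workaround is exactly this re-copy, since rewriting the data lets the allocator spread it over all seven vdevs. A minimal sketch using send/receive into a fresh dataset and then swapping names (dataset names are hypothetical, and this assumes enough free space for a second copy):

    # Snapshot and rewrite the dataset within the same pool
    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data-new

    # Swap the datasets once the copy is verified
    zfs rename tank/data tank/data-old
    zfs rename tank/data-new tank/data
    zfs destroy -r tank/data-old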

[zfs-discuss] Data balance across vdevs

2009-11-20 Thread Jesse Stroik
…utting the data evenly on all vdevs is suboptimal because it is likely the case that different files within a single domain from a single instrument may be used by 200 jobs at once. Because this particular data is 100% static, I cannot count on reads/writes automatically balancing the pool. B…
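
One way to confirm that static data stays pinned to the original vdevs is to sample per-vdev activity while jobs run: since reads never relocate blocks, the operations will concentrate on the four old vdevs. A minimal sketch (pool name again a placeholder):

    # Sample per-vdev read/write ops every 5 seconds; with static,
    # unbalanced data, read ops pile up on the original 4 vdevs
    # while the 3 new vdevs sit mostly idle.
    zpool iostat -v tank 5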