Erik and Richard: thanks for the information -- this is all very good stuff.
Erik Trimble wrote:
Something occurs to me: how full is your current 4 vdev pool? I'm assuming it's not over 70% or so.
Yes, by adding another 3 vdevs, any writes will be biased towards the "empty" vdevs, but that's for less-than-full-stripe-width writes (right, Richard?). That is, if I'm doing a write that w
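For anyone who wants to watch that bias in practice, per-vdev I/O statistics make it visible while a workload is running. A minimal sketch, assuming a pool named "tank" (the name is only a placeholder):

  # Report I/O broken down by top-level vdev, refreshed every 5 seconds.
  # After new vdevs are added, most write bandwidth should land on the
  # emptier vdevs until allocation evens out.
  zpool iostat -v tank 5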
Bruno,
Bruno Sousa wrote:
Interesting, at least to me, the part where "this storage node is very small (~100TB)" :)
Well, that's only as big as two x4540s, and we have lots of those for a slightly different project.
Interesting, at least to me, the part where "this storage node is very small (~100TB)" :)
Anyway, how are you using your ZFS? Are you creating volumes and presenting them to end-nodes over iSCSI/Fibre Channel, NFS, or something else? It could be helpful to use some sort of cluster filesystem to have some more control
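Not an answer to Bruno's question, but for context, both styles he mentions are simple to set up on the ZFS side. A rough sketch only -- the dataset names and the 100 GB size are invented for illustration, and the iSCSI part depends on the OS release (legacy shareiscsi versus COMSTAR):

  # File-level access: share a filesystem to the compute nodes over NFS.
  zfs set sharenfs=on tank/static
  # Block-level access: create a 100 GB zvol that can be exported as an iSCSI LUN.
  zfs create -V 100g tank/vol1
  # On older builds the legacy iSCSI target daemon can export it directly:
  zfs set shareiscsi=on tank/vol1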
On Nov 20, 2009, at 12:14 PM, Jesse Stroik wrote:
There are, of course, job types where you use the same set of data for multiple jobs, but having even a small amount of extra memory seems to be very helpful in that case, as you'll have several nodes reading the same data at roughly the same time.
Yep. More, faster memory closer to the consumer.
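One way to check whether that shared read set is actually being served from memory is the ARC kstats (Solaris-specific; counter names can vary slightly between releases):

  # Current ARC size and its target:
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
  # Hit/miss counters; a high hit rate suggests the hot data already fits in RAM.
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses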
On Fri, 20 Nov 2009, Richard Elling wrote:
Buy a large, read-optimized SSD (or several) and add it as a cache device :-)
But first install as much RAM as the machine will accept. :-)
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
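As a concrete sketch of both suggestions (pool and device names are placeholders, not taken from the thread):

  # See how much RAM is installed; by default most of it is available to the ARC.
  prtconf | grep Memory
  # Add a read-optimized SSD as an L2ARC cache device.
  zpool add tank cache c1t5d0
  # The cache vdev should now appear under its own "cache" section.
  zpool status tank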
Thanks for the suggestions thus far,
Erik:
In your case, where you had a 4 vdev stripe, and then added 3 vdevs, I would recommend re-copying the existing data to make sure it now covers all 7 vdevs.
Yes, this was my initial reaction as well, but I am concerned with the fact that I do not know how zfs populates the vdevs. My naive guess is that it either fills the most empty, or (and more likely) fills them at a rate relative to their amount of free space.
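For what it's worth, one way to do the re-copy Erik suggests without leaving the pool is a local send/receive into a fresh dataset, which rewrites every block and lets the allocator spread it across all seven vdevs. A sketch only -- the dataset names are made up, and it assumes the data really is static while the copy runs:

  # Snapshot the existing dataset and replicate it locally into a new one.
  zfs snapshot tank/data@rebalance
  zfs send tank/data@rebalance | zfs receive tank/data_new
  # Once the copy is verified, swap the names (quiesce clients first).
  zfs rename tank/data tank/data_old
  zfs rename tank/data_new tank/data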
Buy a large, read-optimized SSD (or several) and add it as a cache
device :-)
-- richard
On Nov 20, 2009, at 8:44 AM, Jesse Stroik wrote:
I'm migrating to ZFS and Solaris for cluster computing storage, and have
some completely static data sets that need to be as fast as possible.
One of the scenarios I'm testing is the addition of vdevs to a pool.
Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more
vdevs and
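For readers following along, the operation being tested looks roughly like this; the device names are placeholders and raidz groups are only assumed for the example, since the message doesn't say what the vdevs are:

  # Grow the pool by adding three more top-level (raidz) vdevs in one step.
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 \
                 raidz c2t3d0 c2t4d0 c2t5d0 \
                 raidz c2t6d0 c2t7d0 c2t8d0
  # Per-vdev capacity and activity, to see how unevenly old and new vdevs are used.
  zpool iostat -v tank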