Simple test - mkfile 8gb now and see where the data goes... :)
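For example, a rough sketch (assuming the pool is mounted at /database; adjust the path to wherever your filesystem actually lives):

    # write an 8GB test file into the pool, then watch where the blocks land
    mkfile 8g /database/testfile
    zpool iostat -v database 5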
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some reason, it's only using 2
LM> of the 3 disks in my pool:
LM>                capacity     operations    bandwidth
LM> pool         used  avail   read  write   read  write
LM> ----------  -----  -----  -----  -----  -----  -----
LM> database    8.48G  1.35T    202      0  12.4M      0
LM>   c0t1d0    4.30G   460G    103      0  6.21M      0
LM>   c0t3d0    4.12G   460G     96      0  6.00M      0
LM>   c0t2d0    54.9M   464G      2      0   190K      0
LM> ----------  -----  -----  -----  -----  -----  -----
LM> I've added all the disks at the same time, so it's not like the
LM> last disk was added later. Any ideas on what might be causing this?
LM> I'm using Solaris Express b62.
LM>
Your third disk is 4GB larger than the first two, and ZFS tries to
"load-balance" data so that all devices fill up evenly. As you already
have about 4GB on each of the first two disks, ZFS should start to use
the third disk after you copy additional data.
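You can double-check the raw device sizes, for example (device names taken from your iostat output above; exact output format may vary):

    # print the reported size of each disk in the pool
    iostat -En c0t1d0 c0t2d0 c0t3d0 | grep -i size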
No, it is not - the other two disks have 4G out of 464G used, and the
disk in question has only 55M used. So to me this does not look like a
weighting problem. I believe this is something else.
I'm not sure, but I suspect this may be somehow related to metadata
allocation, given that ZFS stores two copies of filesystem metadata.
But this is nothing more than a wild guess.
Leon, what kind of data is stored in this pool? What Solaris version are
you using? How is your pool configured?
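If you can, please post the output of something like the following (adjust the pool name if it is not "database"):

    cat /etc/release          # exact Solaris build
    zpool status database     # pool layout / vdev configuration
    zfs list -r database      # datasets and how much space they use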
Cheers,
Victor
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss