ce the NFS share is the culprit here? If so, how to
avoid it?
Thanks,
Eduardo Bragatto
find more detailed documentation.
I believe there are very knowledgeable people on this list. Could
someone be kind enough to take a look and at least point me in the
right direction?
Thanks,
Eduardo Bragatto.
showing ZERO for the third raidz from the list above (not sure if that means something, but it does look odd).
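For reference, the per-vdev figures above would come from something along these lines (the pool name "tank" is just a placeholder):

    zpool iostat -v tank 5

which prints capacity and read/write statistics broken down per raidz vdev every five seconds.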
I'm really on a dead end here, any help is appreciated.
Thanks,
Eduardo Bragatto.
and the
entire list of threads taken from 'echo "::threadlist -v" | mdb -k'.
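In case it helps anyone reproducing this, the thread list was captured roughly like so (the output path is just an example):

    echo "::threadlist -v" | mdb -k > /var/tmp/threadlist.txt

mdb -k attaches to the live kernel, and ::threadlist -v dumps every kernel thread along with its stack.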
Thanks,
Eduardo Bragatto
problem, you go into failsafe mode and remove the file; then, in your tests, you attempt the import using -R so the cache is not re-created and you don't need to go into failsafe mode ever again.
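A minimal sketch of that import step, assuming a pool named "tank" and /a as the alternate root:

    zpool import -R /a tank

With -R the pool comes in under an alternate root and zpool.cache is not updated, so the stale cache problem doesn't come back on the next boot.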
best regards,
Eduardo Bragatto.
while mounting some of the ZFS
filesystems, make sure you try to import that pool on the newest
stable system before wasting too much time debugging the problem.
Thanks,
Eduardo Bragatto.
845GB free.
Is there any way to re-stripe the pool, so I can take advantage of all spindles across the raidz1 volumes? Right now it looks like the newer volumes are doing the heavy lifting while the other two just hold old data.
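From what I've read there is no rebalance command, and the usual workaround people suggest is simply to rewrite the data so new blocks get allocated across all vdevs. Something along these lines (dataset names made up):

    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data.new
    # verify the copy, then:
    zfs destroy -r tank/data
    zfs rename tank/data.new tank/data

Since the receive writes everything from scratch, the new blocks end up striped across all the raidz1 vdevs instead of only the emptier ones.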
Thanks,
Eduardo Bragatto
the data from the nearly
full volumes?
Thanks,
Eduardo Bragatto
have to duplicate some data and erase the old copy, for example.
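At the file level that duplicate-then-erase approach would look roughly like this (paths are hypothetical):

    cp -rp /tank/data/archive /tank/data/archive.new   # fresh copy gets written across all vdevs
    rm -rf /tank/data/archive
    mv /tank/data/archive.new /tank/data/archive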
Thanks,
Eduardo Bragatto
899G 22.0M 625G zfs
However, I did have CPU spikes at 100% where the kernel was taking all the CPU time.
I have reduced my zfs_arc_max parameter, as it seemed the applications were struggling for RAM, and things are looking better now.
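For the archives, the cap goes in /etc/system and needs a reboot to take effect; the value below is only an example (2 GB), not the one I actually used:

    set zfs:zfs_arc_max=0x80000000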
Thanks for your time,
Eduardo Bragatto.
While I would gain some space from the many text files that I also have, those are not the majority of my content, and the effort would probably not justify the small gain.
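If anyone wants to check what compression is actually buying them before deciding, the ratio is easy to query (the dataset name is just a placeholder):

    zfs get compression,compressratio tank/data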
Thanks
Eduardo Bragatto
Yes, the disks are the same, no problems there.
On Aug 4, 2010, at 2:11 PM, Bob Friesenhahn wrote:
On Wed, 4 Aug 2010, Eduardo Bragatto wrote:
Checking with iostat, I noticed the average wait time is between 40ms and 50ms for all disks, which doesn't seem too bad.
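(Those numbers came from something along the lines of the following, with an arbitrary interval:

    iostat -xn 5

where the wsvc_t and asvc_t columns show the average wait-queue and active service times per device, in milliseconds.)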
Actually, th
Hi everyone,
I just joined the list after finding an unanswered message from Ray
Van Dolson in the archives.
I'm reproducing his question here, as I'm wondering about the same issue and have not found an answer for it anywhere yet.
Can anyone shed any light on this subject?
-- Original Message --
On Mar 1, 2010, at 4:04 PM, Tim Cook wrote:
The primary concern as I understand it is performance. If they're
close in size, it shouldn't be a big deal, but when you've got
mismatched rg's it can cause quite the performance troubleshooting
nightmare. It's the same reason you don't want to