> If anyone has any ideas be it ZFS based or any useful scripts that
> could help here, I am all ears.
Something like this one-liner will show what would be allocated by everything
if hardlinks weren't used:
# size=0; for i in `find . -type f -exec du {} \; | awk '{ print $1 }'`; do
    size=$(( $size + $i )); done; echo $size
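A rough cross-check, assuming GNU du is available (its -l/--count-links flag counts hardlinked files once per link instead of only once):

du -sk .     # actual allocation; each set of hardlinks counted once
du -skl .    # counted once per link, i.e. roughly what the loop above adds up

The difference between the two figures is approximately the space the hardlinks are saving.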
On 6/12/2011 5:08 AM, Dimitar Hadjiev wrote:
>I can lay them out as 4*3-disk raidz1, 3*4-disk-raidz1
>or a 1*12-disk raidz3 with nearly the same capacity (8-9
>data disks plus parity). I see that with more vdevs the
>IOPS will grow - does this translate to better resilver
>and scrub times as well?
Yes, it would translate into better resilver times.
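For reference, the three layouts being compared, sketched as zpool create commands (the pool name "tank" and the cXtYdZ device names are placeholders, not from the original post):

# 4 x 3-disk raidz1: four vdevs, most IOPS, one parity disk per vdev
zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 raidz1 c0t3d0 c0t4d0 c0t5d0 \
                  raidz1 c1t0d0 c1t1d0 c1t2d0 raidz1 c1t3d0 c1t4d0 c1t5d0

# 3 x 4-disk raidz1: three vdevs, one parity disk per vdev
zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                  raidz1 c0t4d0 c0t5d0 c1t0d0 c1t1d0 \
                  raidz1 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# 1 x 12-disk raidz3: a single vdev, three parity disks, fewest IOPS
zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
                         c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0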
Anyone know what this means? After a scrub I apparently have an error reported
against a file name that I don't understand:
zpool status -v pumbaa1
pool: pumbaa1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
A reboot and then another scrub fixed this. The reboot alone made no difference,
so after the reboot I started another scrub and now the pool shows clean.
So the sequence was like this:
1. zpool reported ioerrors after a scrub with an error on a file in a snapshot
2. destroyed the snapshot with the error
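Roughly, that sequence maps to commands like these (only the pool name pumbaa1 is from the report above; the dataset/snapshot name is a placeholder):

zpool status -v pumbaa1              # scrub had flagged a permanent error on a file in a snapshot
zfs destroy pumbaa1/somefs@badsnap   # placeholder name for the snapshot holding the bad file
zpool scrub pumbaa1                  # re-scrub after removing the affected data
zpool status -v pumbaa1              # should eventually report "No known data errors"

Note that zpool status can keep listing an already-deleted file (sometimes only as a hex object identifier) until a subsequent scrub completes, which fits the report that the pool only showed clean after the second scrub.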
On Jun 12, 2011, at 1:53 PM, James Sutherland wrote:
> A reboot and then another scrub fixed this. The reboot alone made no difference,
> so after the reboot I started another scrub and now the pool shows clean.
>
> So the sequence was like this:
> 1. zpool reported ioerrors after a scrub with an error on a file in a snapshot
On Mon, Jun 13, 2011 at 5:50 AM, Roy Sigurd Karlsbakk wrote:
>> If anyone has any ideas be it ZFS based or any useful scripts that
>> could help here, I am all ears.
>
> Something like this one-liner will show what would be allocated by everything
> if hardlinks weren't used:
>
> # size=0; for i in `find . -type f -exec du {} \; | awk '{ print $1 }'`; do
>     size=$(( $size + $i )); done; echo $size
On Mon, Jun 13, 2011 at 12:59 PM, Nico Williams wrote:
> Try this instead:
>
> (echo 0; find . -type f \! -links 1 | xargs stat -c " %b %B *+" $p; echo p) |
> dc
s/\$p//
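(That is, with the stray $p dropped the pipeline becomes:

(echo 0; find . -type f \! -links 1 | xargs stat -c " %b %B *+"; echo p) | dc

which seeds dc with 0, then for every file with more than one link multiplies its block count (%b) by the block size (%B) and adds the product to the running total, and finally prints the sum.)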
And, without a sub-shell:
find . -type f \! -links 1 | xargs stat -c " %b %B *+p" /dev/null | dc 2>/dev/null | tail -1
(The stderr redirection is because otherwise dc whines once that the
stack is empty, and the tail is because we print interim totals as we
go.)
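(As a concrete, made-up example: a file occupying 8 blocks of 512 bytes makes stat emit " 8 512 *+p", so dc pushes 8 and 512, multiplies them to 4096, adds that to the running total, and prints the interim sum — which is why only the last line of output, kept by tail -1, is the final figure.)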
Also, this doesn't quite work, since