Hi,
Does anyone know if there has been any progress on bp_rewrite? It's much
awaited for solving the redistribution issue and for moving vdevs.
Regards,
Ram
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Roy,
You are right, so it looks like a redistribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year. After that
we added 24 more disks and created additional vdevs. The initial vdevs are
filled up, so write speed declined. Now how to find files
tha
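Until bp_rewrite lands there is no in-place rebalance: blocks written before the expansion stay on the old vdevs, and `zpool list -v` shows the per-vdev allocation that reveals the skew. Below is a minimal sketch of computing per-vdev fill fractions from that kind of output. The sample text and vdev names are made up for illustration, and the column layout is an assumption that varies between ZFS versions, so adjust the parsing for your platform.

```python
# Sketch: compute per-vdev fill fraction from `zpool list -v`-style output.
# SAMPLE is fabricated for illustration; real column layouts differ between
# ZFS versions, so treat the parsing below as an assumption.
SAMPLE = """\
NAME        SIZE  ALLOC   FREE
tank       43.5T  21.8T  21.7T
  raidz2-0 10.9T  10.2T   0.7T
  raidz2-1 10.9T  10.1T   0.8T
  raidz2-2 10.9T   0.8T  10.1T
  raidz2-3 10.8T   0.7T  10.1T
"""

UNITS = {"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def to_bytes(field):
    """Convert a value like '10.9T' to a float byte count."""
    return float(field[:-1]) * UNITS[field[-1]]

def vdev_fill(text):
    """Return {vdev_name: fill_fraction} for the indented per-vdev rows."""
    fill = {}
    for line in text.splitlines()[1:]:       # skip the header row
        if not line.startswith("  "):        # skip the pool summary row
            continue
        name, size, alloc, _free = line.split()
        fill[name] = to_bytes(alloc) / to_bytes(size)
    return fill

if __name__ == "__main__":
    for name, frac in vdev_fill(SAMPLE).items():
        print(f"{name}: {frac:.0%} full")
```

Once the imbalance is visible, the only workaround today is rewriting the data (for example `zfs send | zfs recv` into a new dataset, or copying files and deleting the originals), which lets the allocator spread the new blocks across all vdevs.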
> root@host:~# fmadm faulty
> --------------- ------------------------------------  -------------  ---------
> TIME            EVENT-ID                              MSG-ID         SEVERITY
> --------------- ------------------------------------  -------------  ---------
> Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a  ZFS-8000-HC    Major
>
On Feb 11, 2013, at 4:56 PM, Tim Cook wrote:
> The zpool iostat output has all sorts of statistics I think would be
> useful/interesting to record over time.
Yes, thanks :) I think I will add them, I just started with the esoteric ones.
Anyway, still there's no better way to read it than runn
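For recording those statistics over time, one option is to poll `zpool iostat` on an interval and parse each data row into named metrics for the collector. A minimal sketch is below; the sample line is fabricated, and the field order (capacity alloc/free, operations read/write, bandwidth read/write) is assumed to match stock `zpool iostat` output, so verify it against your version.

```python
# Sketch: parse one data row of `zpool iostat`-style output into a dict
# suitable for feeding a collector such as Orca. SAMPLE_LINE is
# illustrative, not real output; the field order is an assumption.
SAMPLE_LINE = "tank  21.8T  21.7T  1.2K  341  98.3M  12.4M"

FIELDS = ("alloc", "free", "ops_read", "ops_write", "bw_read", "bw_write")
UNITS = {"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def parse_value(field):
    """Convert a value like '98.3M' or '341' to a float."""
    if field[-1] in UNITS:
        return float(field[:-1]) * UNITS[field[-1]]
    return float(field)

def parse_iostat_line(line):
    """Return (pool_name, {field: value}) for one data row."""
    pool, *values = line.split()
    return pool, dict(zip(FIELDS, map(parse_value, values)))

if __name__ == "__main__":
    pool, stats = parse_iostat_line(SAMPLE_LINE)
    print(pool, stats)
```

Note that `zpool iostat` without an interval reports averages since boot, so a recorder should run it with an interval (or diff successive samples) to get current rates.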
Hello,
I'm updating Devilator, the performance data collector for Orca and FreeBSD, to
include ZFS monitoring. So far I am graphing the ARC and L2ARC size, L2ARC
writes and reads, and several hit/misses data pairs.
Any suggestions to improve it? What other variables can be interesting?
An exam
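On FreeBSD the ARC counters are exposed via sysctl under `kstat.zfs.misc.arcstats` (e.g. `hits`, `misses`, `size`), and a derived hit ratio is often more useful on a graph than the raw counters. A minimal sketch of the computation is below; the counter values are made up, and a real collector would read the sysctls and diff successive samples rather than use lifetime totals.

```python
# Sketch: derive an ARC hit ratio from kstat.zfs.misc.arcstats-style
# counters. SAMPLE values are fabricated; on a real FreeBSD host you would
# read sysctl kstat.zfs.misc.arcstats.hits / .misses / .size and diff
# successive polls so the ratio reflects recent activity.
SAMPLE = {
    "kstat.zfs.misc.arcstats.hits": 123456789,
    "kstat.zfs.misc.arcstats.misses": 2345678,
    "kstat.zfs.misc.arcstats.size": 8589934592,
}

def arc_hit_ratio(stats, prefix="kstat.zfs.misc.arcstats."):
    """Fraction of ARC lookups that were hits; 0.0 when there is no traffic."""
    hits = stats[prefix + "hits"]
    misses = stats[prefix + "misses"]
    total = hits + misses
    return hits / total if total else 0.0

if __name__ == "__main__":
    print(f"ARC hit ratio: {arc_hit_ratio(SAMPLE):.2%}")
    print(f"ARC size: {SAMPLE['kstat.zfs.misc.arcstats.size'] / 2**30:.1f} GiB")
```

Other candidates worth graphing from the same sysctl tree include demand vs. prefetch hit counters and the ARC target size (`c`) alongside the actual size.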
Hi,
My OmniOS host is experiencing slow ZFS writes (around 30 times slower than
usual). iostat reports the error below, though the pool is healthy. This has
been happening for the past 4 days, though no changes were made to the
system. Are the hard disks faulty? Please help.
root@host:~# zpool status -v
pool: t