Hi,
Actually, it seems to be a common problem with WD "EARS" (Advanced Format) drives!
Please see this other OpenSolaris thread:
https://opensolaris.org/jive/thread.jspa?threadID=126637
It is worth investigating!
I quote:
> Just replacing back, and here is the iostat for the new EARS drive:
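One thing worth checking with these 4K-sector EARS drives is the alignment the pool was created with: a pool built with ashift=9 (512-byte alignment) on a drive that emulates 512-byte sectors forces read-modify-write cycles and can produce service times like the ones in this thread. A read-only way to look, assuming this build's zdb accepts a pool name with -C ('zfs_raid' is the pool from this thread):

   # Show the ashift recorded for the pool's top-level vdev (9 = 512B, 12 = 4K)
   zdb -C zfs_raid | grep ashift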
Hi,
I know that ZFS is aware of I/O errors, and can alert on or disable a failing disk.
However, ZFS didn't notice these "service time" problems at all.
I think it would be a good idea to integrate service-time triggers into ZFS!
What do you think?
Best regards!
Philippe
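Until such a trigger exists, the same numbers can at least be watched by hand with the standard tool; asvc_t is the average service time in milliseconds, and the error columns match the iostat output quoted later in this thread:

   # Extended per-device statistics plus error counters, sampled every 30 seconds
   iostat -xne 30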
> Now, I just have to do the same drive replacement for
> the 2 other failing drives...
For information, current iostat results:

                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
> ...and let the resilver complete.
> -- richard
Hi!

  pool: zfs_raid
 state: ONLINE
 scrub: resilver completed after 16h34m with 0 errors on Fri May 21 05:39:42 2010
config:

        NAME         STATE     READ WRITE CKSUM
        zfs_raid     ONLINE       0     0     0
          raidz1     ONLINE
On Thu, 20 May 2010, Edward Ned Harvey wrote:
> Also, since you've got "s0" on there, it means you've got some
> partitions on that drive.  You could manually wipe all that out via
> format, but the above is pretty brainless and reliable.

The "s0" on the old disk is a bug in the way we're formatting
On 20/05/2010 12:46, Edward Ned Harvey wrote:
> Also, since you've got "s0" on there, it means you've got some partitions on
> that drive.

There are always partitions once the disk is in use by ZFS, but there
may be 1 or more of them, and they may be SMI or EFI partitions.
So just because there is
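For reference, a non-destructive way to see which label and slices a disk currently carries (device name taken from this thread; I'm assuming prtvtoc on this release prints both SMI and EFI layouts):

   # Print the disk's label / partition table without modifying anything
   prtvtoc /dev/rdsk/c7t2d0s0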
> Any idea ?
> action: Wait for the resilver to complete.
> -- richard
Very good! And thank you very much for your answers!
Philippe
On May 20, 2010, at 4:24 AM, Philippe wrote:
> Current status :
>
> pool: zfs_raid
> state: DEGRADED
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
> scrub: resil
On May 20, 2010, at 4:46 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Philippe
>>
>> c7t2d0s0/o FAULTED 0 0 0 corrupted data
>>
>> When I've done the "zpool replace", I had to a
On May 20, 2010, at 4:12 AM, Philippe wrote:
>> I'm starting with the replacement of the very bad
>> disk, and hope the resilvering won't take too long !!
>
> Replacing c7t2d0, I get the following :
>
>    NAME          STATE     READ WRITE CKSUM
>    zfs_raid      DEGRADED     0
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Philippe
>
> c7t2d0s0/o FAULTED 0 0 0 corrupted data
>
> When I've done the "zpool replace", I had to add "-f" to force, because
> ZFS told me that there was a ZFS label
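For context, the replacement step being discussed boils down to a single command; the pool and device names are the ones from this thread, and -f is what overrides the label complaint:

   # Replace the failing disk in place; ZFS then resilvers onto the new device
   zpool replace -f zfs_raid c7t2d0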
Current status:

  pool: zfs_raid
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h17m, 3,72% done, 7h22m to go
config:
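The resilver progress shown above can simply be re-polled while it runs:

   # Re-check resilver progress and the estimated time remaining
   zpool status zfs_raid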
> I'm starting with the replacement of the very bad
> disk, and hope the resilvering won't take too long !!
Replacing c7t2d0, I get the following:

        NAME           STATE     READ WRITE CKSUM
        zfs_raid       DEGRADED     0     0     0
          raidz1       DEGRADED     0
> > One question: if I halt the server and change the order of the disks on
> > the SATA array, will RAIDZ still detect the array fine?
>
> Yes, it will.
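ZFS finds pool members by the labels written on the disks, not by controller position, which is why the shuffle is safe. If the pool ever failed to come back cleanly after such a move, an export/import cycle would force a rescan; this is a generic fallback, not something the thread itself required:

   # Re-discover the pool's disks regardless of their new SATA port order
   zpool export zfs_raid
   zpool import zfs_raid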
Hi!
I've done the moves this morning, and the high service times followed the disks!
So, I have 3 disks to replace urgently!
> It looks like your 'sd5' disk is performing horribly bad and, except for the
> horrible performance of 'sd5' (which bottlenecks the I/O), 'sd4' would look
> just as bad. Regardless, the first step would be to investigate 'sd5'.
Hi Bob!
I've already tried the pool without the sd5 disk
On 05/19/10 09:34 PM, Philippe wrote:
Hi!
It is strange, because I've checked the SMART data of the 4 disks, and
everything seems really OK! (on other hardware/another controller, because I needed
Windows to check it). Maybe it's a problem with the SAS/SATA controller?!
One question: if I halt the server and change the order of the disks on the
SATA array, will RAIDZ still detect the array fine?
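SMART aside, the per-device error counters can also be read directly on the OpenSolaris side, which avoids moving the disks to a Windows box just to check them:

   # Cumulative soft/hard/transport error counts and drive identity, per device
   iostat -En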
> How full is your filesystem?  Give us the output of "zfs list".
> You might be having a hardware problem, or maybe it's extremely full.
Hi Edward,
The "_db" filesystems have a recordsise of 16K (the others have the default
128K) :
NAME USED AVAIL REFER MOUNTPOIN
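The columns above are the default 'zfs list' output; the recordsize can be checked per dataset the same way. A minimal sketch, assuming the pool name from this thread:

   # Space usage for every dataset in the pool
   zfs list -r zfs_raid
   # recordsize per dataset (16K on the _db filesystems, 128K elsewhere)
   zfs get -r recordsize zfs_raid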
> Mm.. Service times of sd3..5 are waay too high to be good working disks.
> 21 writes shouldn't take 1.3 seconds.
>
> Some of your disks are not feeling well, possibly doing block-reallocation
> like mad all the time, or block recovery of some form. Service times should
> be closer to what s
How full is your filesystem? Give us the output of "zfs list"
You might be having a hardware problem, or maybe it's extremely full.
Also, if you have dedup enabled, on a 3TB filesystem, you surely want more
RAM. I don't know if there's any rule of thumb you could follow, but
offhand I'd say 16G
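Whether dedup is actually enabled is easy to confirm, at least on builds recent enough to have the feature (OpenSolaris 2009.06 itself predates dedup, so this only applies after an upgrade):

   # Show the dedup property for every dataset in the pool (off = not in use)
   zfs get -r dedup zfs_raid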
On Tue, 18 May 2010, Philippe wrote:
The 4 disks are Western Digital ATA 1TB (one is slightly different):
1 x ATA-WDC WD10EACS-00D-1A01-931.51GB
3 x ATA-WDC WD10EARS-00Y-0A80-931.51GB
I've done lots of tests (speed tests + SMART reports) with each of these 4 disks
on another system (another computer
Sent: Tuesday, May 18, 2010 8:11 AM
To: OpenSolaris ZFS discuss
Subject: Re: [zfs-discuss] Very serious performance degradation
Howdy,
Is dedup on? I was having some pretty strange problems including slow
performance when dedup was on. Disabling dedup helped out a whole bunch. My
system only has 4gig of ram
On 18 May, 2010 - Philippe sent me these 6,0K bytes:
> Hi,
>
> The 4 disks are Western Digital ATA 1TB (one is slightly different):
> 1 x ATA-WDC WD10EACS-00D-1A01-931.51GB
> 3 x ATA-WDC WD10EARS-00Y-0A80-931.51GB
>
> I've done lots of tests (speed tests + SMART reports) with each of these 4
Howdy,
Is dedup on? I was having some pretty strange problems including slow
performance when dedup was on. Disabling dedup helped out a whole bunch. My
system only has 4gig of ram, so that may have played a part too.
Good luck!
John
On May 18, 2010, at 7:51 AM, Philippe wrote:
> Hi,
>
Hi,
The 4 disks are Western Digital ATA 1TB (one is slightly different):
1 x ATA-WDC WD10EACS-00D-1A01-931.51GB
3 x ATA-WDC WD10EARS-00Y-0A80-931.51GB
I've done lots of tests (speed tests + SMART reports) with each of these 4 disks
on another system (another computer, running Windows 2003 x64)
On Tue, 18 May 2010, Philippe wrote:
> The usage of the pool is daily backups with rsync. Some big files are
> updated simultaneously, in different filesystems. So, I suspect heavy
> fragmentation of the files! Or maybe... a need for more RAM??

You forgot to tell us what brand/model of disks you are using.
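Whether more RAM would really help can be judged from the ARC statistics; a minimal check using the stock arcstats kstats (nothing specific to this thread is assumed beyond the standard kstat names):

   # Current ARC size and its target size, in bytes
   kstat -p zfs:0:arcstats:size zfs:0:arcstats:c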
Hi,
I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with
ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:

        zfs_raid     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c7t2d0   ONLINE       0     0     0
            c7t3d0
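To see how the load spreads across the raidz1 members (as opposed to the per-device numbers from the system iostat), the pool's own statistics can be sampled as well:

   # Per-vdev operations and bandwidth for this pool, every 30 seconds
   zpool iostat -v zfs_raid 30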