> > By the way, in HD Tune I saw that the C7: Ultra DMA CRC
> > error count is a little high, which indicates a
> > potential connection issue. Maybe they are all caused by
> > the enclosure?
>
> Bingo!
You are right. I've done a lot of tests and the defect is narrowed down to the
"problem hardware". The two pool wo...
On Sep 9, 2010, at 5:55 PM, Fei Xu wrote:
> Just to update the status and findings.
Thanks for the update.
> I've checked TLER settings and they are off by default.
>
> I moved the source pool to another chassis and did the 3.8TB send again. This
> time, no problems at all! The differences are
>
Just to update the status and findings.
I've checked TLER settings and they are off by default.
I moved the source pool to another chassis and did the 3.8TB send again. This
time, no problems at all! The differences are:
1. New chassis.
2. Bigger memory: 32GB vs. 12GB.
3. Although the wdidle time is dis...
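If it helps to compare the two chassis on point 2, the installed memory and the
current ARC footprint on the OpenSolaris/Nexenta side can be checked roughly
like this (a sketch; the kstat name assumes the stock ZFS arcstats module):

  # total physical memory
  prtconf | grep -i 'memory size'

  # current ARC size in bytes
  kstat -p zfs:0:arcstats:size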
> "ml" == Mark Little writes:
ml> Just to clarify - do you mean TLER should be off or on?
It should be set to ``do not have an asvc_t of 11 seconds and <1 io/s''.
...which is not one of the settings of the TLER knob.
This isn't a problem with the TLER *setting*. TLER does not even
apply unl...
On Thu, 9 Sep 2010 14:05:51, Markus Kovero wrote:
> On Sep 9, 2010, at 8:27 AM, Fei Xu wrote:
>
>
>> This might be the dreaded WD TLER issue. Basically the drive keeps retrying
>> a read operation over and over after a bit error, trying to recover from the
>> read error by itself. With...
On Sep 9, 2010, at 8:27 AM, Fei Xu wrote:
> This might be the dreaded WD TLER issue. Basically the drive keeps retrying a
> read operation over and over after a bit error, trying to recover from the
> read error by itself. With ZFS one really needs to disable this and have the
> drives fail i...
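On drives whose firmware still exposes it, the TLER knob is reachable through
SCT Error Recovery Control, so it can at least be inspected from the host; a
hedged sketch (many desktop WD Green/EARS drives ignore or lack this command,
and the device path is only a placeholder):

  # show the current read/write error-recovery timeouts (TLER)
  smartctl -l scterc /dev/rdsk/c0t3d0

  # cap recovery at 7.0 seconds so the drive reports the error quickly
  # and lets ZFS redundancy deal with it instead of retrying forever
  smartctl -l scterc,70,70 /dev/rdsk/c0t3d0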
On Sep 9, 2010, at 8:27 AM, Fei Xu wrote:
> Service times here are crap. Disks are malfunctioning in some way. If your
> source disks can take seconds (or 10+ seconds) to reply, then of course your
> copy will be slow. The disk is probably having a hard time reading the data
> or something.
>
Yeah, that should not go over 15ms. I...
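For anyone reproducing this, the service time being discussed is the asvc_t
column of the Solaris iostat; a minimal capture (interval and count are
arbitrary):

  # extended stats per device, skip idle devices, 5-second samples, 12 rounds
  iostat -xnz 5 12

A healthy 7200rpm disk under load should show asvc_t well under ~15ms;
multi-second values combined with <1 io/s are the signature described above.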
On 08 September, 2010 - Fei Xu sent me these 5,9K bytes:
> I dug deeper into it and might have found some useful information.
> I attached an X25 SSD for ZIL to see if it helps, but no luck.
> I ran iostat -xnz for more details and got an interesting result, as
> below (maybe too long).
> Some explanatio...
I dug deeper into it and might have found some useful information.
I attached an X25 SSD for ZIL to see if it helps, but no luck.
I ran iostat -xnz for more details and got an interesting result, as below
(maybe too long).
Some explanation:
1. c2d0 is the SSD for ZIL.
2. c0t3d0, c0t20d0, c0t21d0, c0t22d0 are the so...
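For completeness, adding the X25 as a dedicated log device is normally just the
following (the pool name is taken from the iostat output elsewhere in the
thread and may not match the real setup; note a slog only accelerates
synchronous writes, so it would not be expected to speed up a send/receive
much):

  # attach the SSD as a separate ZIL (slog) device
  zpool add sh001a log c2d0

  # confirm it shows up under "logs"
  zpool status sh001a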
>
> Have you got dedup enabled? Note the read bandwidth is
> much higher.
>
> --
> Ian.
>
No, dedup is not enabled, since it's still not stable enough even for a test
environment.
Here is a JPG of the read/write indicator. The RED line is read and the GREEN
line is write.
You can see, because the destination...
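Checking is cheap either way; a quick way to confirm dedup really is off
(dataset and pool names are placeholders):

  # per-dataset property; "off" is the default
  zfs get -r dedup sh001a

  # pool-wide ratio stays at 1.00x when nothing has been deduplicated
  zpool get dedupratio sh001a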
On 09/ 9/10 02:42 PM, Fei Xu wrote:
Now it gets extremely slow at around 400G sent.
The first iostat result is captured when the send operation starts.
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Now it gets extremely slow at around 400G sent.
The first iostat result is captured when the send operation starts.
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sh001a      37.6G  16...
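The table above is plain zpool iostat output; adding -v breaks the numbers
down per vdev and per disk, which makes a single slow member stand out (pool
name taken from the output above, interval arbitrary):

  # pool-level and per-device throughput, refreshed every 5 seconds
  zpool iostat -v sh001a 5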
Thank you, Ian. I've rebuilt the pool as a 9*2TB raidz2 and started the zfs
send command. The result will come out after about 3 hours.
thanks
fei
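For the record, rebuilding the destination as a single 9-wide raidz2 is one
command; a sketch with made-up pool and disk names:

  # recreate the destination pool as one 9-disk raidz2 vdev
  zpool create newpool raidz2 c0t20d0 c0t21d0 c0t22d0 c0t23d0 c0t24d0 \
      c0t25d0 c0t26d0 c0t27d0 c0t28d0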
On 09/ 9/10 01:14 PM, Fei Xu wrote:
Hi all:
I'm a new guy who has only been using ZFS for half a year. We are using
Nexenta in a corporate pilot environment. These days, when I was trying to move
around 4TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB
raidz2), it seems it will never e...
Hi all:
I'm a new guy who has only been using ZFS for half a year. We are using
Nexenta in a corporate pilot environment. These days, when I was trying to move
around 4TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB
raidz2), it seems it will never end up successfully.
1. I used cp first.
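For anyone following along, the usual alternative to cp for moving a whole pool
is a recursive snapshot plus send/receive; a hedged sketch with placeholder
pool names:

  # take a consistent, recursive snapshot of the source
  zfs snapshot -r oldpool@migrate

  # stream all datasets, properties and snapshots into the new pool
  zfs send -R oldpool@migrate | zfs receive -Fd newpool

Unlike cp, this preserves snapshots and dataset properties, and a later
incremental send (zfs send -R -i) can pick up whatever changed during the
first pass.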