I have both the EVDS and EARS 2TB green drives, and I have to say they are not
a good choice for building storage servers.
The EVDS has a compatibility issue with my Supermicro appliance: it hangs when
doing a huge data send or copy. From iostat I can see the data throughput is
stuck on the green disks with extremely
> > by the way, in HDTune I saw the C7: Ultra DMA CRC
> > error count is a little high, which indicates a
> > potential connection issue. Maybe it's all caused by
> > the enclosure?
>
> Bingo!
You are right. I've done a lot of tests and the defect has been narrowed down to the
"problem hardware". The two pool wo
Just to update the status and findings.
I've checked TLER settings and they are off by default.
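For anyone who wants to check this themselves: on drives that expose it, the
TLER / error recovery control timeouts can be read and set through smartctl.
The device path is a placeholder, and these Green drives may simply refuse the
setting:

  smartctl -l scterc /dev/rdsk/c0t3d0s0          # show current read/write ERC timeouts
  smartctl -l scterc,70,70 /dev/rdsk/c0t3d0s0    # request 7.0-second timeouts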
I moved the source pool to another chassis and did the 3.8TB send again. This
time, no problems at all! The differences are:
1. New chassis
2. Bigger memory: 32GB vs. 12GB (see the ARC check sketched below)
3. although wdidle time is dis
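On the memory point, one way to see how much of the extra RAM ZFS is actually
using on each chassis is to compare the ARC kstats (values are in bytes):

  kstat -p zfs:0:arcstats:size     # current ARC size
  kstat -p zfs:0:arcstats:c_max    # ARC size ceiling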
>
> Service times here are crap. Disks are malfunctioning
> in some way. If your source disks can take seconds
> (or 10+ seconds) to reply, then of course your copy
> will be slow. Disk is probably having a hard time
> reading the data or something.
>
Yeah, that should not go over 15ms. I
I dug deeper into it and may have found some useful information.
I attached an X25 SSD as ZIL to see if it helps, but no luck.
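For reference, attaching the SSD as a separate log device boils down to a
single zpool command; I'm assuming it was added to the source pool sh001a, and
c2d0 is the SSD listed below:

  zpool add sh001a log c2d0     # use c2d0 as a dedicated ZIL (slog) device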
I ran iostat -xnz for more details and got an interesting result, shown below
(maybe too long).
Some explanation:
1. c2d0 is the SSD for ZIL
2. c0t3d0, c0t20d0, c0t21d0, c0t22d0 are the so
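For anyone who wants to reproduce the capture, the per-device stats come from
iostat's extended view; the 5-second interval is my assumption:

  iostat -xnz 5     # -x extended stats, -n descriptive device names, -z skip idle devices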
>
> Have you got dedup enabled? Note the read bandwidth is
> much higher.
>
> --
> Ian.
>
No, dedup is not enabled, since it's still not stable enough even for a test
environment.
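This is easy to double-check per pool; sh001a is the pool name that appears
later in this message:

  zfs get dedup sh001a      # reports "off" when dedup is disabled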
Here is a JPG of the read/write indicator. The RED line is read and the GREEN
line is write.
You can see, because the destination
Now it gets extremely slow at around 400GB sent.
The first iostat result was captured when the send operation started.
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sh001a      37.6G  16.
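That layout is the standard zpool iostat output; a command along these lines
(the 5-second interval is just an example) produces it:

  zpool iostat sh001a 5        # pool-level stats every 5 seconds
  zpool iostat -v sh001a 5     # same, broken down per vdev/disk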
Thank you Ian. I've rebuilt the pool as a 9*2TB raidz2 and started the zfs send
command. The result will come out in about 3 hours.
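For reference, the rebuild and transfer amount to something like the following;
the pool, dataset, and device names here are placeholders, not the actual ones:

  zpool create tank2 raidz2 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 \
      c0t15d0 c0t16d0 c0t17d0 c0t18d0
  zfs snapshot oldpool/data@move
  zfs send oldpool/data@move | zfs receive -F tank2/data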
thanks
fei
Hi all:
I'm a new guy who has only been using ZFS for half a year. We are using
Nexenta in a corporate pilot environment. These days, when I try to move
around 4TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB
raidz2), it seems it will never finish successfully.
1. I used cp first.
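(The exact invocation isn't preserved here; a typical form, with placeholder
mount points, would be something like:)

  cp -rp /oldpool/data /newpool/      # copy recursively, preserving modes and timestamps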