Harley Gorrell wrote:
I do wonder what accounts for the improvement -- seek
time, transfer rate, disk cache, or something else? Does
anyone have a dtrace script to measure this which they
would share?
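A minimal sketch of such a script, using the io provider to bucket per-I/O service times by device. Note it measures total latency (queueing, seek, and transfer together) rather than separating those components:

  #!/usr/sbin/dtrace -s

  /* stamp each buffer as its I/O is issued */
  io:::start
  {
          start[arg0] = timestamp;
  }

  /* on completion, aggregate elapsed microseconds per device */
  io:::done
  /start[arg0]/
  {
          @svc_us[args[1]->dev_statname] =
              quantize((timestamp - start[arg0]) / 1000);
          start[arg0] = 0;
  }

Running it while the mkfile/dd test is going and comparing the latency distributions for the old and new drives would show whether per-I/O latency or something else accounts for the difference.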
You might also be seeing the effects of defect management. As
drives get older, they tend to accumulate remapped sectors…
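If defect management is a suspect, the per-device error counters are a quick first check; on Solaris,

  # iostat -En

reports soft/hard/transport error totals per drive, and a disk that is busy remapping sectors may show those counts growing.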
On Mon, 25 Sep 2006, Roch wrote:
It looks like on the second run you had lots more free
memory, so mkfile completed at near-memcpy speed.
Both times the system was near idle.
Something is awry on the first pass, though. Running
    zpool iostat 1
can shed some light on this. IO will keep…
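For example, started in a second window just before the test (pool name tank assumed here):

  # zpool iostat tank 1 &
  # time mkfile -v 1g zeros-1g

If mkfile returns while iostat keeps reporting writes for several more seconds, the timed run mostly measured memory, not the disks.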
Harley Gorrell writes:
> On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
> > Are you just trying to measure ZFS's read performance here?
>
> That is what I started looking at. We scrounged around
> and found a set of 300GB drives to replace the old ones we
> started with. Comparing these new drives to the old ones:
Harley:
> Old 36GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    2m31.991s
> | user    0m0.007s
> | sys     0m0.923s
>
> Newer 300GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    0m8.425s
> | user    0m0.010s…
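For scale, working those numbers out (1g = 1073741824 bytes):

  old 36GB drives:  1073741824 bytes / 151.991 s  ~   7 MB/s
  new 300GB drives: 1073741824 bytes /   8.425 s  ~ 127 MB/s

~7 MB/s is far below what even an old 36GB drive should sustain sequentially, which fits the suggestion above that something other than the disks dominated the first run, while ~127 MB/s looks more like a memory-buffered write than a single spindle of that era.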
On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
Are you just trying to measure ZFS's read performance here?
That is what I started looking at. We scrounged around
and found a set of 300GB drives to replace the old ones we
started with. Comparing these new drives to the old ones:
Old 36GB drives:…
Harley:
> I had tried other sizes with much the same results, but
> hadn't gone as large as 128K. With bs=128K, it gets worse:
>
> | # time dd if=zeros-10g of=/dev/null bs=128k count=102400
> | 81920+0 records in
> | 81920+0 records out
> |
> | real    2m19.023s
> | user    0m0.105s
> | sys     …
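Before reading too much into "worse", note the record counts: the 128k run hit end-of-file at 81920 records, so the two runs moved very different amounts of data (the bs=8k figures are quoted further down):

  bs=128k: 81920 x 131072 bytes = 10737418240 bytes / 139.023 s  ~ 77 MB/s
  bs=8k:  102400 x   8192 bytes =   838860800 bytes /  68.763 s  ~ 12 MB/s

Per byte transferred, the 128k run was actually about six times faster.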
On Fri, 22 Sep 2006, johansen wrote:
ZFS uses a 128k block size. If you change dd to use a
bs=128k, do you observe any performance improvement?
I had tried other sizes with much the same results, but
hadn't gone as large as 128K. With bs=128K, it gets worse:
| # time dd if=zeros-10g of=/dev/null bs=128k count=102400 …
ZFS uses a 128k block size. If you change dd to use a bs=128k, do you observe
any performance improvement?
> | # time dd if=zeros-10g of=/dev/null bs=8k count=102400
> | 102400+0 records in
> | 102400+0 records out
>
> | real    1m8.763s
> | user    0m0.104s
> | sys     0m1.759s
It's also worth…
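One related caveat when timing ZFS reads: a file that fits in RAM may be served largely from the ARC on a repeat read, so the test times the cache rather than the disks. Using a file larger than memory, or exporting and re-importing the pool to empty its cache first, avoids that (pool name and path here are illustrative):

  # zpool export tank && zpool import tank
  # time dd if=/tank/zeros-10g of=/dev/null bs=128k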