On Tue, 26 Feb 2013, hagai wrote:
For what it's worth, I had the same problem and found the answer here -
http://forums.freebsd.org/showthread.php?t=27207
Given enough sequential I/O requests, zfs mirrors behave very much
like RAID-0 for reads. Sequential prefetch is very important in order
to achieve this.
Be careful when testing ZFS with iozone. I ran a bunch of tests many
years ago that produced results that did not pass a basic sanity check. There
was *something* about the iozone test data that ZFS either did not like or liked
very much, depending on the specific test.
I eventual
For what it's worth, I had the same problem and found the answer here -
http://forums.freebsd.org/showthread.php?t=27207
On Tue, 17 Jul 2012, Michael Hase wrote:
To work around these caching effects, just use a file more than 2 times the size
of RAM; iostat then shows the numbers really coming from disk. I always test
like this. A re-read rate of 8.2 GB/s is really just memory bandwidth, but
quite impressive ;-)
Ok, th
On Tue, 17 Jul 2012, Michael Hase wrote:
The below is with a 2.6 GB test file, but with a 26 GB test file (just add
another zero to 'count' and wait longer) I see an initial read rate of 618
MB/s and a re-read rate of 8.2 GB/s. The raw disk can transfer 150 MB/s.
To work around these caching effects, just use a file more than 2 times the
size of RAM; iostat then shows the numbers really coming from disk.
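(A sketch of this kind of test; the pool name and sizes below are assumptions,
not from the original posts. The idea is to write a file well over twice the
machine's RAM and then read it back while watching the disks:)

    # assumes a pool mounted at /tank and roughly 8 GB of RAM
    dd if=/dev/zero of=/tank/bigfile bs=1024k count=26000
    # read it back; run 'iostat -xn 5' in another terminal to see how
    # much of the read really comes from disk rather than the ARC
    dd if=/tank/bigfile of=/dev/null bs=1024k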
On Tue, 17 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
If you were to add a second vdev (i.e. stripe) then you should see very
close to 200% due to the default round-robin scheduling of the writes.
My expectation would be > 200%, as 4 disks are involved. It may
On Tue, 17 Jul 2012, Michael Hase wrote:
If you were to add a second vdev (i.e. stripe) then you should see very
close to 200% due to the default round-robin scheduling of the writes.
My expectation would be > 200%, as 4 disks are involved. It may not be the
perfect 4x scaling, but imho it s
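(For reference, a second vdev here means a second mirror pair in the same
pool; a hypothetical example, with placeholder device names:)

    # create a pool striped across two mirrors (4 disks total)
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    # or add a second mirror vdev to an existing mirror pool
    zpool add tank mirror c0t2d0 c0t3d0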
Sorry to insist, but still no real answer...
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
So only one thing left: mirror should read 2x
I don't think that mirror should necessarily read 2x faster even though the
potential is there to do so. Last I heard, zfs did not include a special read
scheduler for sequential reads from a mirrored pair.
> From: Michael Hase [mailto:mich...@edition-software.de]
> Sent: Monday, July 16, 2012 6:41 PM
>
>
> So only one thing left: mirror should read 2x
>
That is still weird -
But all your numbers so far are coming from bonnie. Why don't you do a test
like this? (below)
Write a big file to mirror
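(A minimal sketch of this style of test; pool and file names are assumed, as
the original message's exact commands are truncated above. Write the file,
export/import the pool so the cache is cold, then time the read:)

    dd if=/dev/zero of=/tank/testfile bs=1024k count=8000
    # export/import empties the ARC so the read back comes from disk
    zpool export tank && zpool import tank
    time dd if=/tank/testfile of=/dev/null bs=1024k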
On Tue, 17 Jul 2012, Michael Hase wrote:
So only one thing left: mirror should read 2x
I don't think that mirror should necessarily read 2x faster even
though the potential is there to do so. Last I heard, zfs did not
include a special read scheduler for sequential reads from a mirrored
pair.
On Mon, 16 Jul 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael Hase
got some strange results, please see
attachments for exact numbers and pool config:
seq write factor seq read factor
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Michael Hase wrote:
This is my understanding of zfs: it should load balance read requests even
for a single sequential reader. zfs_prefetch_disable defaults to 0. And
I can see exactly this scaling behaviour with sas disks and with scsi disks,
just not on this sata pool.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Hase
>
> got some strange results, please see
> attachments for exact numbers and pool config:
>
>seq write factor seq read factor
>MB/sec            MB/sec
On Mon, 16 Jul 2012, Michael Hase wrote:
This is my understanding of zfs: it should load balance read requests even
for a single sequential reader. zfs_prefetch_disable defaults to 0. And I
can see exactly this scaling behaviour with sas disks and with scsi disks,
just not on this sata pool.
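(For what it's worth, on Solaris-derived systems this tunable can be
inspected and flipped at runtime with mdb; a sketch, useful for comparing
benchmark runs with and without prefetch:)

    # show the current value (0 = prefetch enabled, the default)
    echo zfs_prefetch_disable/D | mdb -k
    # disable prefetch until next boot, e.g. for an A/B comparison
    echo zfs_prefetch_disable/W0t1 | mdb -kw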
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Stefan Ring wrote:
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding
On Mon, 16 Jul 2012, Stefan Ring wrote:
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers.
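(One way to see the effect being described, as a sketch with assumed file
names: start two sequential readers so the mirror actually has concurrent
requests to schedule, and compare iostat with the single-reader case:)

    dd if=/tank/bigfile1 of=/dev/null bs=1024k &
    dd if=/tank/bigfile2 of=/dev/null bs=1024k &
    # with two readers, both halves of the mirror should show activity
    iostat -xn 5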
> It is normal for reads from mirrors to be faster than for a single disk
> because reads can be scheduled from either disk, with different I/Os being
> handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers.
On Mon, 16 Jul 2012, Stefan Ring wrote:
I wouldn't expect mirrored read to be faster than single-disk read,
because the individual disks would need to read small chunks of data
with holes in-between. Regardless of whether the holes are read or not, the
disk will spin at the same speed.
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
> 2) in the mirror case the write speed is cut by half, and the read
> speed is the same as a single disk. I'd expect about twice the
> performance for both reading and writing, maybe a bit less, but
> definitely more than measured.
I wouldn't expect mirrored read to be faster than single-disk read,
because the individual disks would need to read small chunks of data
with holes in-between.
On Jul 16, 2012, at 2:43 AM, Michael Hase wrote:
> Hello list,
>
> did some bonnie++ benchmarks for different zpool configurations
> consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
> bytes/sector, 7.2k), and got some strange results, please see
> attachments for exact numbers and pool config:
Hello list,
did some bonnie++ benchmarks for different zpool configurations
consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
bytes/sector, 7.2k), and got some strange results, please see
attachments for exact numbers and pool config:
seq write factor seq read factor
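(A typical bonnie++ invocation for this kind of comparison, with directory
and size as assumptions since the exact command line is in the attachments:)

    # -d: test directory on the pool, -s: file size in MB (> 2x RAM),
    # -u: user to run as when started as root
    bonnie++ -d /tank -s 16384 -u root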