> From: Michael Hase [mailto:mich...@edition-software.de]
> Sent: Monday, July 16, 2012 6:41 PM
>
>
> So only one thing left: mirror should read 2x
>
That is still weird.
But all your numbers so far are coming from bonnie++. Why don't you do a test
like this? (below)
Write a big file to the mirror, then time a sequential read of it.
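A minimal sketch of such a test, assuming a pool named tank mounted at /tank
(hypothetical names), with a file well beyond RAM size so the read-back cannot
be served from the ARC:

  # write a big file (here 16 GB) so the read-back must come from disk
  dd if=/dev/zero of=/tank/bigfile bs=1024k count=16384

  # export/import the pool to drop cached data before reading
  zpool export tank && zpool import tank

  # time the sequential read back
  time dd if=/tank/bigfile of=/dev/null bs=1024k

A single dd stream gives a plain sequential-throughput number, which answers
the "should mirror read 2x" question more directly than bonnie++'s mixed
workload.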
On Tue, 17 Jul 2012, Michael Hase wrote:
So only one thing left: mirror should read 2x
I don't think that mirror should necessarily read 2x faster even
though the potential is there to do so. Last I heard, zfs did not
include a special read scheduler for sequential reads from a mirrored
pair.
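One way to check what the scheduler actually does (a sketch; device names are
hypothetical) is to watch per-disk activity while a single sequential read
runs. If reads were balanced across the mirror, both halves would show roughly
equal read throughput:

  # terminal 1: a single sequential reader
  dd if=/tank/bigfile of=/dev/null bs=1024k

  # terminal 2: per-device statistics at one-second intervals;
  # compare the r/s and kr/s columns of the two mirror halves,
  # e.g. c0t1d0 and c0t2d0 (hypothetical device names)
  iostat -xn 1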
On Mon, 16 Jul 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael Hase
got some strange results, please see
attachments for exact numbers and pool config:
             seq write   factor   seq read   factor
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Michael Hase wrote:
This is my understanding of zfs: it should load balance read requests even
for a single sequential reader. zfs_prefetch_disable is at its default of 0.
And I can see exactly this scaling behaviour with sas disks and with scsi
disks, just not on this sata pool.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Hase
>
> got some strange results, please see
> attachments for exact numbers and pool config:
>
>             seq write   factor   seq read   factor
>             MB/sec               MB/sec
On Mon, 16 Jul 2012, Michael Hase wrote:
This is my understanding of zfs: it should load balance read requests even
for a single sequential reader. zfs_prefetch_disable is at its default of 0.
And I can see exactly this scaling behaviour with sas disks and with scsi
disks, just not on this sata pool.
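For reference, the current prefetch setting can be read from the live kernel,
and pinned across reboots via /etc/system (standard Solaris tunable handling,
shown as a sketch):

  # read zfs_prefetch_disable from the running kernel (0 = prefetch enabled)
  echo zfs_prefetch_disable/D | mdb -k

  # to disable prefetch at boot, add to /etc/system:
  #   set zfs:zfs_prefetch_disable = 1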
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Mon, 16 Jul 2012, Stefan Ring wrote:
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel.
On Mon, 16 Jul 2012, Stefan Ring wrote:
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers.
> It is normal for reads from mirrors to be faster than for a single disk
> because reads can be scheduled from either disk, with different I/Os being
> handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers.
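The point is easy to demonstrate by comparing one reader against several
concurrent readers on the same mirror (a sketch; the file names and mount
point are hypothetical):

  # one stream: few outstanding I/Os beyond what prefetch issues
  time dd if=/tank/bigfile of=/dev/null bs=1024k

  # four streams: enough outstanding I/Os to keep both mirror halves busy
  for i in 1 2 3 4; do
      dd if=/tank/bigfile.$i of=/dev/null bs=1024k &
  done
  wait

If the aggregate of the four streams approaches 2x a single disk while one
stream stays at 1x, the limit is outstanding requests, not the mirror itself.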
I speak for myself... :-)
If the real bug is in procfs, I can file a CR.
When xattrs were designed right down the hall from me,
I don't think /proc interactions were considered, which
is why I mentioned an RFE.
Thanks,
Cindy
On 07/15/12 15:59, Cedric Blancher wrote:
On 14 July 2012 02:33, Cindy Swearingen wrote:
On Mon, 16 Jul 2012, Stefan Ring wrote:
I wouldn't expect mirrored read to be faster than single-disk read,
because the individual disks would need to read small chunks of data
with holes in-between. Regardless of the holes being read or not, the
disk will spin at the same speed.
It is normal for reads from mirrors to be faster than for a single disk
because reads can be scheduled from either disk, with different I/Os being
handled in parallel.
> 2) in the mirror case the write speed is cut by half, and the read
> speed is the same as a single disk. I'd expect about twice the
> performance for both reading and writing, maybe a bit less, but
> definitely more than measured.
I wouldn't expect mirrored read to be faster than single-disk read,
because the individual disks would need to read small chunks of data
with holes in-between.
On Jul 16, 2012, at 2:43 AM, Michael Hase wrote:
> Hello list,
>
> did some bonnie++ benchmarks for different zpool configurations
> consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
> bytes/sector, 7.2k rpm), and got some strange results, please see
> attachments for exact numbers and pool config:
Hi all,
this is a follow-up to some help I was soliciting with my corrupted pool.
The short story is that, for various reasons, I can have no confidence in the
labels on 2 of the 5 drives in my RAIDZ array.
There is even a possibility that one drive carries the label of another (a
mirroring accident).
A
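For what it's worth, the labels themselves can be dumped and compared with
zdb (a sketch; the device path is hypothetical):

  # print all four vdev label copies on a disk; comparing the pool GUID
  # and per-disk GUIDs across the five drives shows whether a label
  # really belongs to another drive
  zdb -l /dev/rdsk/c0t2d0s0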
Hello list,
did some bonnie++ benchmarks for different zpool configurations
consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
bytes/sector, 7.2k rpm), and got some strange results, please see
attachments for exact numbers and pool config:
             seq write   factor   seq read   factor
             MB/sec               MB/sec
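The attachments are not reproduced here; for context, a run of this kind
might look like the following sketch (pool layout, directory, size, and user
are hypothetical; -s should exceed RAM so reads hit the disks):

  # pools under test: a single disk and a two-way mirror
  zpool create single c0t1d0
  zpool create mirror2 mirror c0t2d0 c0t3d0

  # throughput-oriented bonnie++ run; -n 0 skips the small-file tests
  bonnie++ -d /mirror2 -s 16g -n 0 -u nobody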
On 14 July 2012 02:33, Cindy Swearingen wrote:
> I don't think that xattrs were ever intended or designed
> for /proc content.
>
> I could file an RFE for you if you wish.
So Oracle Newspeak now calls it an RFE if you want a real bug fixed, huh? ;-)
This is a real bug in procfs. Problem is, proc
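For anyone wanting to reproduce it: Solaris reaches extended attributes
through runat(1), and the behaviour can be compared between a normal file and
a /proc entry (a sketch):

  # on a regular file, runat enters the extended-attribute directory
  touch /tmp/f && runat /tmp/f ls -l

  # the same operation against procfs is where the reported trouble lies
  runat /proc/$$/psinfo ls -l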