sorry to insist, but still no real answer...
On Mon, 16 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
So only one thing left: mirror should read 2x
I don't think that a mirror should necessarily read 2x faster even though the
potential is there to do so. Last I h
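One way to actually see whether reads get spread over both halves of a mirror is to stream a large sequential read and watch the per-disk counters at the same time; the pool and file names below are only placeholders:

  # read a large file off the mirrored pool
  dd if=/tank/bigfile of=/dev/null bs=1024k &

  # per-device statistics every 5 seconds; if both sides of the mirror
  # show similar r/s and kr/s, reads are being balanced across the disks
  iostat -xn 5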
On Tue, 17 Jul 2012, Michael Hase wrote:
If you were to add a second vdev (i.e. stripe) then you should see very
close to 200% due to the default round-robin scheduling of the writes.
My expectation would be > 200%, as 4 disks are involved. It may not be the
perfect 4x scaling, but imho it s
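For reference, adding that second vdev is a one-liner; 'tank' and the c*t*d* names below are placeholders for the real pool and disks:

  # pool currently has a single two-way mirror, e.g.:
  #   zpool create tank mirror c0t2d0 c0t3d0

  # add a second mirror vdev; ZFS stripes new writes across both vdevs
  zpool add tank mirror c0t4d0 c0t5d0
  zpool status tank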
Hi all,
I'm using OpenSolaris snv_134 with LSI controllers and a Supermicro
motherboard, with 20 SATA disks and ZFS in a RAID-10 configuration. I mounted
this zfs_storage via NFS.
I'm not an OpenSolaris specialist. What are the commands to show hardware
information? Something like 'lshw' on Linux, but for OpenSolaris.
The s
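There is no exact 'lshw' equivalent, but on (Open)Solaris these standard commands cover most of the same ground:

  prtconf -v    # device tree with properties
  prtdiag -v    # platform, CPU and memory summary
  cfgadm -al    # attachment points, handy for SATA/SAS disks
  iostat -En    # per-device model/serial and error counters
  format        # lists the disks the OS sees (Ctrl-C to exit)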
On Tue, 17 Jul 2012, Roberto Scudeller wrote:
Hi all,
I'm using OpenSolaris snv_134 with LSI controllers and a Supermicro
motherboard, with 20 SATA disks and ZFS in a RAID-10 configuration. I mounted
this zfs_storage via NFS.
I'm not an OpenSolaris specialist. What are the commands to show hardware
information? Something like 'lshw' on Linux, but for OpenSolaris.
On Tue, 17 Jul 2012, Bob Friesenhahn wrote:
On Tue, 17 Jul 2012, Michael Hase wrote:
If you were to add a second vdev (i.e. stripe) then you should see very
close to 200% due to the default round-robin scheduling of the writes.
My expectation would be > 200%, as 4 disks are involved. It may
On Tue, 17 Jul 2012, Michael Hase wrote:
The below is with a 2.6 GB test file, but with a 26 GB test file (just add
another zero to 'count' and wait longer) I see an initial read rate of 618
MB/s and a re-read rate of 8.2 GB/s. The raw disk can transfer 150 MB/s.
To work around these caching effects just use a file > 2 times the size of
RAM; iostat then shows the numbers really coming from disk.
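As a rough sketch of the test being described (pool and file names made up), the 2.6 GB vs 26 GB difference is just the 'count' argument, and the fast re-read comes from the ARC rather than the disks:

  # ~2.6 GB test file (note: with compression enabled, /dev/zero data
  # compresses away and is not a useful read test)
  dd if=/dev/zero of=/tank/testfile bs=1024k count=2600
  # ~26 GB: add another zero to count and wait longer
  dd if=/dev/zero of=/tank/testfile bs=1024k count=26000

  # first read comes from disk, a second read is mostly served by the ARC
  dd if=/tank/testfile of=/dev/null bs=1024k
  dd if=/tank/testfile of=/dev/null bs=1024k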
While I'm aware of the published maximums of zfs, I was wondering if
anyone could share information on the largest zpool (both total usable
capacity and vdevs) they've seen/deployed?
I know that generally speaking more vdevs = more iops, but I would think
that there is a point of diminishing r
On Tue, 17 Jul 2012, Michael Hase wrote:
To work around these caching effects just use a file > 2 times the size of
RAM; iostat then shows the numbers really coming from disk. I always test
like this. A re-read rate of 8.2 GB/s is really just memory bandwidth, but
quite impressive ;-)
Ok, th
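A minimal version of that procedure, assuming the RAM size is not already known (the 2x factor keeps the ARC from serving the whole file):

  # installed memory, e.g. "Memory size: 8192 Megabytes"
  prtconf | grep Mem

  # while the read test runs, this shows what really comes off the disks
  iostat -xn 5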
Hi Bob,
Thanks for the answers.
How do I test your theory?
In this case, I use common SATA 2 disks, not Nearline SAS (NL SATA) or SAS.
Do you think the SATA disks are the problem?
Cheers,
2012/7/17 Bob Friesenhahn
> On Tue, 17 Jul 2012, Roberto Scudeller wrote:
>
> Hi all,
>>
>> I'm using
On Tue, 17 Jul 2012, Roberto Scudeller wrote:
Hi Bob,
Thanks for the answers.
How do I test your theory?
I would use 'dd' to see if it is possible to transfer data from one of
the problem devices. Gain physical access to the system and check the
signal and power cables to these devices cl
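A sketch of that dd check; the device path is only an example, take the real name from 'format' or 'iostat -En' and use the raw (rdsk) node:

  # read 1000 MiB straight from the suspect disk
  dd if=/dev/rdsk/c0t5d0p0 of=/dev/null bs=1024k count=1000

  # cumulative soft/hard/transport error counters per device
  iostat -En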
We have a running zpool with a 12 disk raidz3 vdev in it ... we gave ZFS the
full, raw disks ... all is well.
However, we built it on two LSI 9211-8i cards and we forgot to change from IR
firmware to IT firmware.
Is there any danger in shutting down the OS, flashing the cards to IT firmware,
a
Hi Jason,
I have done this in the past. (3x LSI 1068E - IBM BR10i).
Your pool is not tied to the hardware used to host it (including your
HBA). You could change all your hardware and still import your pool
correctly.
If you really want to be on the safe side, you can export your pool before
th
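If it helps, the cautious sequence would look roughly like this ('tank' is a placeholder for the real pool name):

  # before shutting down to flash the HBAs
  zpool export tank

  # ... flash IR -> IT firmware, reboot ...

  # list pools available for import, then import by name
  zpool import
  zpool import tank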
Ok, and your LSI 1068E also had alternate IR and IT firmwares, and you went
from IR -> IT?
Is that correct?
Thanks.
--- On Tue, 7/17/12, Damon Pollard wrote:
From: Damon Pollard
Subject: Re: [zfs-discuss] Has anyone switched from IR -> IT firmware on the
fly ? (existing zpool on LSI 921
Correct.
LSI 1068E has IR and IT firmwares + I have gone from IR -> IT and IT -> IR
without hassle.
Damon Pollard
On Wed, Jul 18, 2012 at 8:13 AM, Jason Usher wrote:
>
> Ok, and your LSI 1068E also had alternate IR and IT firmwares, and you
> went from IR -> IT?
>
> Is that correct?
>
> Thanks.