Robert Milkowski wrote:
ps. however I'm really concerned with ZFS behavior when a pool is
almost full, there are a lot of write transactions to that pool, and
the server is restarted forcibly or panics. I observed that file
systems on that pool will each take 10-30 minutes to mount during
zfs mount -a, and o...
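A minimal way to quantify that behavior (assuming the pool imports with its
file systems still unmounted) is to time each mount separately instead of
relying on zfs mount -a; the loop below is only a sketch:

# Time each mount individually so the slow file systems stand out.
for fs in `zfs list -H -o name -t filesystem`; do
    echo "mounting $fs"
    time zfs mount "$fs"
done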
The test case was build 38, Solaris 11, a 2 GB file, initially created
with 1 MB SW (sequential writes) and a recsize of 8 KB, on a pool with
two raid-z 5+1, accessed with 24 threads of 8 KB RW (random writes),
for 500,000 ops or 40 seconds, whichever came first. The result at the
pool level was 78% of the operations ...
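A rough way to reproduce that kind of workload with filebench (the tool used
elsewhere in this thread) might look like the sketch below; the pool layout,
device names and dataset name are made up, and it assumes the stock
randomwrite profile with its $dir/$filesize/$iosize/$nthreads variables and
that filebench accepts commands on stdin:

# Two raid-z 5+1 vdevs and an 8 KB recordsize filesystem (illustrative names).
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs create tank/test
zfs set recordsize=8k tank/test

# 24 threads of 8 KB random writes to a 2 GB file for 40 seconds.
# filebench preallocates the file itself, so this only approximates the
# original 1 MB sequential-write creation step.
filebench <<'EOF'
load randomwrite
set $dir=/tank/test
set $filesize=2g
set $iosize=8k
set $nthreads=24
run 40
EOF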
Hello Dave,
Thursday, August 10, 2006, 12:29:05 AM, you wrote:
DF> Hi,
DF> Note that these are page cache rates and that if the application
DF> pushes harder and exposes the supporting device rates there is
DF> another world of performance to be observed. This is where ZFS
DF> gets to be a challenge ...
Hi Matthew,
In the case of the 8 KB Random Write to the 128 KB recsize filesystem,
the I/Os were not full block re-writes, yet the expected COW Random
Read (RR) at the pool level was somehow avoided. I suspect ZFS was able
to coalesce enough I/O in the 5 second transaction window to construct
128 KB records ...
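A quick back-of-the-envelope to make that coalescing theory concrete
(ksh-style arithmetic):

# 8 KB writes per full 128 KB record, records in a 2 GB file, and the
# number of 8 KB writes needed to rewrite every record as a full block.
echo $(( 128 / 8 ))                 # 16 writes fill one record
echo $(( 2048 * 1024 / 128 ))       # 16384 records in a 2 GB file
echo $(( 16 * 2048 * 1024 / 128 ))  # 262144 writes to cover them all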
On Wed, Aug 09, 2006 at 04:24:55PM -0700, Dave C. Fisk wrote:
> Hi Eric,
>
> Thanks for the information.
>
> I am aware of the recsize option and its intended use. However, when I
> was exploring it to confirm the expected behavior, what I found was the
> opposite!
>
> The test case was build ...
Hi Eric,
Thanks for the information.
I am aware of the recsize option and its intended use. However, when I
was exploring it to confirm the expected behavior, what I found was the
opposite!
The test case was build 38, Solaris 11, a 2 GB file, initially
created with 1 MB SW, and a recsize of 8 KB ...
On Wed, Aug 09, 2006 at 03:29:05PM -0700, Dave Fisk wrote:
>
> For example the COW may or may not have to read old data for a small
> I/O update operation, and a large portion of the pool vdev capability
> can be spent on this kind of overhead.
This is what the 'recordsize' property is for. If you ...
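In practice that suggestion presumably amounts to something like the
following, with 'tank/db' standing in for the real dataset; note that
recordsize only affects blocks written after the change:

# Match recordsize to the application's I/O size before loading the data.
zfs set recordsize=8k tank/db
zfs get recordsize tank/db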
Hi,
Note that these are page cache rates, and that if the application pushes
harder and exposes the supporting device rates, there is another world of
performance to be observed. This is where ZFS gets to be a challenge, as
the relationship between the application level I/O and the pool level is v...
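One rough way to watch the two layers side by side is to sample pool-level
throughput with zpool iostat while the application-level benchmark runs
('tank' is a placeholder pool name):

# Pool-level view in one-second samples; compare with the rate the
# benchmark itself reports at the application level.
zpool iostat -v tank 1 &
IOSTAT=$!
# ... run the application workload here ...
kill $IOSTAT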
Hello Matthew,
Tuesday, August 8, 2006, 7:25:17 PM, you wrote:
MA> On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
>> filebench/singlestreamread v440
>>
>> 1. UFS, noatime, HW RAID5 6 disks, S10U2
>> 70MB/s
>>
>> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) ...
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
> filebench/singlestreamread v440
>
> 1. UFS, noatime, HW RAID5 6 disks, S10U2
> 70MB/s
>
> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
> 87MB/s
>
> 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
> 130MB/s ...
Robert,
> LL> Most of my ZFS experiments have been with RAID10, but there were some
> LL> massive improvements to seq I/O with the fixes I mentioned - I'd expect
> LL> that this shows that they aren't in snv44.
>
> So where did you get those fixes?
From the fine people who implemented them!
Luke Lonergan wrote:
Robert,
On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS, atime=off, SW ...
Hello Luke,
Tuesday, August 8, 2006, 6:18:39 PM, you wrote:
LL> Robert,
LL> On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
>> 1. UFS, noatime, HW RAID5 6 disks, S10U2
>> 70MB/s
>> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
>> 87MB/s
>> 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 ...
Robert,
On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> 1. UFS, noatime, HW RAID5 6 disks, S10U2
> 70MB/s
> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
> 87MB/s
> 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
> 130MB/s
> 4. ZFS, atime=off, SW ...
Hello Luke,
Tuesday, August 8, 2006, 4:48:38 PM, you wrote:
LL> Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the
LL> prefetch logic?
LL> These are great results for random I/O, I wonder how the sequential I/O
LL> looks?
LL> Of course you'll not get great results for sequential I/O ...
dheld (www.good.com)
-Original Message-
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 08, 2006 10:15 AM Eastern Standard Time
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID
Hi.
This time some RAID5/RAID-Z benchmarks ...
Hi.
This time some RAID5/RAID-Z benchmarks.
This time I connected the 3510 head unit with one link to the same server
the 3510 JBODs are connected to (using the second link). snv_44 is used and
the server is a v440.
I also tried changing the max pending IO requests for the HW RAID5 LUN and
checked with DTrace that larger ...