On Dec 25, 2009, at 3:01 PM, Jeroen Roodhart wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Freddie, list,
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.
14 disks in a single rai
Richard Elling wrote:
On Dec 25, 2009, at 4:15 PM, Erik Trimble wrote:
I haven't seen this mentioned before, but the OCZ Vertex Turbo is
still an MLC-based SSD, and is /substantially/ inferior to an Intel
X25-E in terms of random write performance, which is what a ZIL
device does almost excl
On Dec 25, 2009, at 6:01 PM, Jeroen Roodhart
wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Freddie, list,
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.
14 disks in a single
On Dec 25, 2009, at 4:15 PM, Erik Trimble wrote:
I haven't seen this mentioned before, but the OCZ Vertex Turbo is
still an MLC-based SSD, and is /substantially/ inferior to an Intel
X25-E in terms of random write performance, which is what a ZIL
device does almost exclusively in the case
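For reference, attaching such an SSD as a dedicated log (slog) device is a
one-liner; the device names below are placeholders and the pool name "tank"
is assumed:

  # add a single SSD as a slog
  zpool add tank log c7t0d0

  # or mirror the slog if two SSDs are available
  zpool add tank log mirror c7t0d0 c7t1d0

  # watch the log vdev under load
  zpool iostat -v tank 5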
Jeroen Roodhart wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Freddie, list,
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.
14 disks in a single raidz2 vdev is going to give h
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Freddie, list,
> Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
> giving more vdevs to the pool, and thus increasing the IOps for the
> whole pool.
>
> 14 disks in a single raidz2 vdev is going to give horrible IO,
> regard
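As a rough sketch of what that re-layout looks like in practice (hypothetical
device names; the Thor's 42 data disks split into 6 x 7-disk raidz2 vdevs
instead of 3 x 14-disk ones, doubling the vdev count and therefore the pool's
aggregate random IOPS):

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 \
    raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 \
    raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0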
On Dec 24, 2009, at 5:34 PM, Freddie Cash wrote:
Mattias Pantzare wrote:
That would leave us with three options;
1) Deal with it and accept performance as it is.
2) Find a way to speed things up further for this
workload
3) Stop trying to use ZFS for this workload
Option 4 is to re-do your p
> Mattias Pantzare wrote:
> That would leave us with three options;
>
> 1) Deal with it and accept performance as it is.
> 2) Find a way to speed things up further for this
> workload
> 3) Stop trying to use ZFS for this workload
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Richard,
Richard Elling wrote:
> How about posting the data somewhere we can see it?
As stated in an earlier posting it should be accessible at:
http://init.science.uva.nl/~jeroen/solaris11_iozone_nfs2zfs
Happy holidays!
~Jeroen
- --
Jero
[revisiting the OP]
On Dec 23, 2009, at 8:27 AM, Auke Folkerts wrote:
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using a X4540 "Thor" with a zpool consisting of
3 14-disk RaidZ2 vdevs. This fileserver is connected to a
On Dec 24, 2009, at 12:44 AM, Jeroen Roodhart wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Mattias Pantzare wrote:
The ZIL is _not_ optional as the log is in UFS.
Right, thanks (also to Richard and Daniel) for the explanation. I was
afraid this was too good to be true, nice to s
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Jeroen Roodhart wrote:
>> Questions: 1. Client wsize?
>
> We usually set these to 342768 but this was tested with CentOS
> defaults: 8192 (we're doing this over NFSv3)
I stand corrected here. Looking at /proc/mounts I see we are in fact
using diff
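On the Linux client side, the values the mount actually negotiated can be
double-checked with something like the following (output format varies a bit
by kernel version):

  # what rsize/wsize the NFS mount really ended up with
  grep nfs /proc/mounts
  nfsstat -m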
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Mattias Pantzare wrote:
>
> The ZIL is _not_ optional as the log is in UFS.
Right, thanks (also to Richard and Daniel) for the explanation. I was
afraid this was too good to be true, nice to see it stated this clearly
though.
That would leave us
>> UFS is a totally different issue, sync writes are always sync'ed.
>>
>> I don't work for Sun, but it would be unusual for a company to accept
>> willful negligence as a policy. Ambulance chasing lawyers love that
>> kind of thing.
>
> The Thor replaces a geriatric Enterprise system running Sola
On Thu, Dec 24, 2009 at 12:07:03AM +0100, Jeroen Roodhart wrote:
> We are under the impression that a setup that serves NFS over UFS has
> the same assurance level as a setup using "ZFS without ZIL". Is this
> impression false?
Completely. It's closer to "UFS mount -o async", without the risk o
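For context, "ZFS without ZIL" in this era typically meant flipping the
zil_disable tunable, shown below only to make the comparison concrete; it
disables the ZIL pool-wide and silently drops the sync guarantees that NFS
clients rely on:

  # /etc/system (takes effect after reboot)
  set zfs:zil_disable = 1

  # or on a running Solaris system, as root
  echo zil_disable/W0t1 | mdb -kw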
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Hi Richard, ZFS-discuss.
> Message: 2
> Date: Wed, 23 Dec 2009 09:49:18 -0800
> From: Richard Elling
> To: Auke Folkerts
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS, us
Some questions below...
On Dec 23, 2009, at 8:27 AM, Auke Folkerts wrote:
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using a X4540 "Thor" with a zpool consisting of
3 14-disk RaidZ2 vdevs. This fileserver is connected t
Hello,
We have performed several tests to measure the performance
using SSD drives for the ZIL.
Tests are performed using a X4540 "Thor" with a zpool consisting of
3 14-disk RaidZ2 vdevs. This fileserver is connected to a CentOS 5.4
machine which mounts a filesystem on the zpool via NFS, over
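The numbers linked earlier in the thread were gathered with iozone over that
NFS mount; a roughly equivalent client-side run would look something like the
following (server name, paths, and sizes are placeholders, not the exact
parameters used):

  # CentOS 5.4 client: NFSv3 mount of a filesystem on the Thor pool
  mount -t nfs -o vers=3,proto=tcp thor:/tank/test /mnt/test

  # sequential write/rewrite and read/reread, including fsync/close in the
  # timing so the server's ZIL (and any slog SSD) actually gets exercised
  iozone -i 0 -i 1 -e -c -r 64k -s 1g -f /mnt/test/iozone.tmp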