t access from
multiple hosts (which of course is an additional license, aka $$$).
Cheers,
Tomer
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mike Gerdts
Sent: Monday, 24 November 2008 3:44 AM
To: Chris Greer
Cc: zfs-discuss@opensolaris.org
Subject: Re
On Sat, Nov 22, 2008 at 11:41 AM, Chris Greer <[EMAIL PROTECTED]> wrote:
> vxvm with vxfs we achieved 2387 IOPS
In this combination you should be using ODM (Oracle Disk Manager), which
comes as part of
the Storage Foundation for Oracle or Storage Foundation for Oracle RAC
products. It makes the database files on vxfs
Chris Greer wrote:
> Right now we are not using Oracle...we are using iorate so we don't have
> separate logs. When the testing was with Oracle the logs were separate.
> This test represents the 13 data LUNs that we had during those tests.
>
> The reason it wasn't striped with vxvm is that the o
On Sat, 22 Nov 2008, Chris Greer wrote:
> zfs with the datafiles recreated after the recordsize change was 3079 IOPS
> So now we are at least in the ballpark.
ZFS is optimized for fast bulk data storage and data integrity and not
so much for transactions. It seems that adding a non-volatile
ha
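The truncated suggestion above is presumably about adding non-volatile
hardware (an NVRAM card or SSD) as a dedicated intent-log device. A minimal
sketch, with the pool and device names invented for illustration:

    # Attach a dedicated log device to an existing pool; "tank" and
    # c3t0d0 are placeholders, not names from this thread.
    zpool add tank log c3t0d0

Synchronous writes (such as Oracle redo) are then committed to the dedicated
log device instead of to intent-log blocks spread across the main data disks.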
Right now we are not using Oracle...we are using iorate so we don't have
separate logs. When the testing was with Oracle the logs were separate. This
test represents the 13 data LUNs that we had during those tests.
The reason it wasn't striped with vxvm is that the original comparison test was
> For those interested, we are using the iorate command from EMC for
> the benchmark. For the different tests, we have 13 LUNs presented.
> Each one is its own volume and filesystem, with a single file on each of
> those filesystems. We are running 13 iorate processes in parallel (there
> is no cpu
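As an aside, the parallel launch described above needs nothing more than
backgrounded processes. A rough sketch, with the mount points and the iorate
invocation left as placeholders (the real arguments are specific to EMC's
tool and are not shown in this thread):

    # One benchmark process per LUN filesystem, run concurrently.
    IORATE_CMD="iorate"          # placeholder; substitute the real invocation
    for fs in /bench/lun*; do    # /bench/lun* mount points are assumed names
        ( cd "$fs" && $IORATE_CMD ) &
    done
    wait                         # block until all 13 runs have finished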
Are you putting your archive and redo logs in a separate zpool (not
just a different zfs filesystem within the same pool as your data files)?
Are you using direct io at all in any of the config scenarios you
listed?
/dale
On Nov 22, 2008, at 12:41 PM, Chris Greer wrote:
> So to give a little backgr
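For reference, the separate-pool layout Dale is asking about would look
roughly like this; the device and dataset names are made up for the example:

    # Keep redo and archive logs in their own pool so their synchronous,
    # sequential writes don't compete with datafile I/O in the data pool.
    # "oralog" and c4t0d0 are placeholder names.
    zpool create oralog c4t0d0
    zfs create oralog/redo
    zfs create oralog/arch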
zfs with the datafiles recreated after the recordsize change was 3079 IOPS
So now we are at least in the ballpark.
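The recordsize change mentioned here would have been along these lines,
assuming an 8 KB Oracle db_block_size and a dataset name that is only an
example:

    # Match the ZFS recordsize to the Oracle block size.
    # "tank/oradata" and the 8k value are assumptions for this sketch.
    zfs set recordsize=8k tank/oradata

recordsize only affects files written after the change, which is why the
datafiles had to be recreated, as noted above.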
that should be set zfs:zfs_nocacheflush=1
in the post above...that was my typo in the post.
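For anyone finding this later, the corrected tunable goes in /etc/system and
takes effect at the next reboot. It should only be used when the array's
write cache is non-volatile (battery-backed):

    * /etc/system entry; comment lines in this file start with '*'.
    * Only safe with a non-volatile (battery-backed) array write cache.
    set zfs:zfs_nocacheflush=1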
So to give a little background on this, we have been benchmarking Oracle RAC on
Linux vs. Oracle on Solaris. In the Solaris test, we are using vxvm and vxfs.
We noticed that the same Oracle TPC benchmark at roughly the same transaction
rate was causing twice as many disk I/Os to the backend DMX