GOKHAN wrote:
coreutils-4.5.3-26 is installed on RH3
It is a bit more work than just having it installed:
'man dd' will tell you this:
--o_direct
    Use O_DIRECT file access. Mainly for use with FSs that support
    O_DIRECT access. There are three forms:
      --o_direct                   (selects default block size)
      --o_direct=<r/w block size>  (use given block size for read and
                                    write operations)
      --o_direct=<read>,<write>    (use specific block sizes for reads
                                    and writes)
    For the purposes of handling data between FSs that do and do not
    handle O_DIRECT access, the special value 0 dictates that access
    will be via non-O_DIRECT means. This is mainly for use with
    stdin/stdout, e.g.:
      dd --o_direct=8192,0 if=/ocfs/file | gzip > /tmp/backup.gz
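The patched dd described above shipped only with the Oracle coreutils packages; stock GNU dd (coreutils 5.3 and later) exposes the same idea through iflag=direct / oflag=direct instead. A rough sketch of the man page's backup pipeline using stock dd — the file names and sizes here are illustrative, and the fallback covers filesystems that reject O_DIRECT:

```shell
# Equivalent of 'dd --o_direct=8192,0 if=... | gzip' with stock GNU dd:
# direct reads from the file, buffered writes into the pipe.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=512 count=8 2>/dev/null   # make a small test file
# iflag=direct requests O_DIRECT reads; fall back to a buffered read
# where the filesystem (e.g. tmpfs) refuses O_DIRECT.
( dd if="$src" bs=512 iflag=direct 2>/dev/null \
    || dd if="$src" bs=512 2>/dev/null ) | gzip > "$dst"
gzip -t "$dst" && echo ok    # verify the compressed stream is intact
rm -f "$src" "$dst"
```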
So, the command you would probably want to try would look like this:
'dd --o_direct of=/dev/zero if=./sill.t bs=1M count=1000'
But don't get too excited; I would still not expect it to be as fast as
ocfs2.
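For readers who want to reproduce the read-timing test above: the shape of the command is the same with any dd; only the --o_direct flag is specific to the patched coreutils. A minimal sketch using a small scratch file (sizes are illustrative, not the 1 GB used in the thread):

```shell
# Time a dd read the same way the thread does, at a size that runs
# anywhere. Writing to /dev/null discards the data, so only read
# throughput is measured.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null   # create an 8 MiB file
time dd if="$f" of=/dev/null bs=1M 2>/dev/null      # read it back
wc -c < "$f"    # prints 8388608 (8 MiB)
rm -f "$f"
```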
hth
Steffen
----------------
RH3:
more /etc/issue
Red Hat Enterprise Linux AS release 3 (Taroon Update 2)
uname -a
Linux dbcluster7 2.4.21-40.EL #1 SMP Thu Feb 2 22:12:47 EST 2006 ia64
ia64 ia64 GNU/Linux
---------------
Message: 7
Date: Wed, 17 Jan 2007 02:06:37 -0800 (PST)
From: Luis Freitas <[EMAIL PROTECTED]>
Subject: Re: [Ocfs2-users] ocfs Vs ocfs2
To: ocfs2-users@oss.oracle.com, [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset="iso-8859-1"
Joel,
It is only not using O_DIRECT if the coreutils package was not
installed on the RH3.0 machine (coreutils-4.5.3-41.i386.rpm).
http://oss.oracle.com/projects/coreutils/files/
If it is installed, then both tests are using O_DIRECT, and can
be compared.
I do not have both an OCFS and an OCFS2 environment to compare
here, but I too am seeing very slow performance with copy operations
on the OCFS2 volume, compared to what I was used to on OCFS.
Regards,
Luis
Joel Becker <[EMAIL PROTECTED]> wrote:
On Tue, Jan 16, 2007 at 01:28:41AM -0800, GOKHAN wrote:
> Hi everybody, this is my first post.
> I have two test servers (both of them are idle):
> db1 : RHEL4 OCFS2
> db2 : RHEL3 OCFS
>
> I tested the I/O on both of them.
> The results are below.
>
> Test (1GB)   db1 (time spent)  db2 (time spent)  OS test command
> dd write     0m0.796s          0m18.420s         time dd if=/dev/zero of=./sill.t bs=1M count=1000
> dd read      0m0.241s          8m16.406s         time dd of=/dev/zero if=./sill.t bs=1M count=1000
> cp           0m0.986s          7m32.452s         time cp sill.t sill2.t
You are using dd(1), which does not use O_DIRECT. The original
ocfs (on 2.4 kernels) does not really support buffered I/O well. What
you are seeing is ocfs2 taking much better care of your buffered I/Os.
They will be consistent across the cluster. In the ocfs case, you are
caching a lot more because these safety precautions aren't taken.
HOWEVER, the most important factor is that you are not using
O_DIRECT. When you actually run the database, you _will_ be using
O_DIRECT (make sure to mount ocfs2 with '-o datavolume'). Without the
OS caching in the way, both filesystems should run at the same speed.
The upshot is that buffered I/O operations (such as plain dd(1))
are often not good indicators of database speed.
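Joel's point — that a fair filesystem benchmark has to bypass the page cache the way the database does — can be sketched with stock GNU dd, which has supported O_DIRECT via oflag=direct / iflag=direct since coreutils 5.3. This is an assumption-laden sketch: the scratch file lives wherever mktemp puts it, and some filesystems (tmpfs, for one) refuse O_DIRECT, so a buffered fallback is included:

```shell
# Write a small file with O_DIRECT, the access pattern a database uses,
# so the OS cache does not inflate the measured speed.
f=$(mktemp)
# bs=512 satisfies the usual O_DIRECT alignment requirement; fall back
# to a buffered write if this filesystem rejects O_DIRECT.
dd if=/dev/zero of="$f" bs=512 count=4 oflag=direct 2>/dev/null \
  || dd if=/dev/zero of="$f" bs=512 count=4 2>/dev/null
wc -c < "$f"    # prints 2048 (4 x 512-byte blocks)
rm -f "$f"
```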
Joel
--
"To announce that there must be no criticism of the president, or
that we are to stand by the president, right or wrong, is not only
unpatriotic and servile, but is morally treasonable to the American
public."
- Theodore Roosevelt
Joel Becker
Principal Software Developer
Oracle
E-mail: [EMAIL PROTECTED]
Phone: (650) 506-8127
------------------------------------------------------------------------
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users