[EMAIL PROTECTED] writes:
> After delving into this a little, it seems to me that if you are going to
> do this:
> write(file, buffer, size);
> f[data]sync(file);
> Opening with O_SYNC seems to be an optimization specifically to this
> methodology.
What you are missing is that we don't necessari
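The two patterns being contrasted are a plain write() followed by an explicit
fsync()/fdatasync() call, versus opening the file with O_SYNC so that every
write() only returns once the data has reached the disk. A minimal C sketch
(file names are made up, error handling omitted):

/* Approach 1: write, then force it out with an explicit sync call.
 * Approach 2: O_SYNC folds the flush into the write() itself.       */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buffer[8192];
    memset(buffer, 0, sizeof buffer);

    /* write() + fdatasync() (or fsync() to also flush file metadata) */
    int fd = open("wal_test_a", O_WRONLY | O_CREAT, 0600);
    write(fd, buffer, sizeof buffer);
    fdatasync(fd);
    close(fd);

    /* O_SYNC: each write() blocks until the data is on stable storage */
    int fd2 = open("wal_test_b", O_WRONLY | O_CREAT | O_SYNC, 0600);
    write(fd2, buffer, sizeof buffer);
    close(fd2);
    return 0;
}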
[EMAIL PROTECTED] wrote:
I have been considering a full sweep in my test lab, off client time, later
on: ext2, ext3, jfs, xfs, and ReiserFS; fsync on with fdatasync or open_sync;
and fsync off.
Before you start: double-check that the disks are not lying.
At least the SuSE 2.4 kernel sends cache flu
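The concern is drives that acknowledge a cache flush while the data is still
only in their volatile write cache. A rough way to check, sketched below in C
(file name and iteration count are arbitrary): time a tight write+fsync loop;
a rate far above the drive's rotational rate (around 120 per second for a
7200 rpm disk) suggests the flush never reached the platters.

/* Rough probe for a lying write cache: time N write+fsync cycles.
 * If the reported rate vastly exceeds the drive's rotational rate,
 * the "synced" data is probably only in the disk's volatile cache. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const int n = 1000;
    char buf[512];
    memset(buf, 'x', sizeof buf);

    int fd = open("fsync_probe", O_WRONLY | O_CREAT, 0600);

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < n; i++) {
        lseek(fd, 0, SEEK_SET);     /* overwrite the same block each time */
        write(fd, buf, sizeof buf);
        fsync(fd);                  /* should have to wait for the platters */
    }
    gettimeofday(&t1, NULL);
    close(fd);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.0f fsyncs/sec\n", n / secs);
    return 0;
}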
> On Tue, 2004-08-10 at 07:48, [EMAIL PROTECTED] wrote:
>> Some more information:
>>
>> I started to perform the tests on one of the machines in my lab, and guess
>> what: almost no difference between fsync and open_sync, either on jfs or
>> ext2.
>>
>> The difference? Linux 2.6.3. My original t
[EMAIL PROTECTED] writes:
> Does it make sense, then, to say that WAL should use O_SYNC? If there are
> no reasons not to, doesn't it make sense to make this the default? It will
> give a boost for any 2.4 Linux machines and won't hurt anyone else.
You have got the terms of debate
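For context, the setting being debated is wal_sync_method in postgresql.conf;
the proposed change would amount to something like the line below (open_sync
is only offered on platforms whose open() supports O_SYNC):

wal_sync_method = open_sync    # instead of the platform-dependent default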
>
> In particular, you need to offer some evidence for that completely
> undocumented assertion that "it won't hurt anyone else".
It should be easy enough to prove whether or not O_SYNC hurts anyone.
OK, let me ask a few questions:
(1) what is a good sample set on which to run? Linux, FreeBSD,
Some more information:
I started to perform the tests on one of the machines in my lab, and guess
what: almost no difference between fsync and open_sync, either on jfs or
ext2.
The difference? Linux 2.6.3. My original tests were on Linux 2.4.25.
The good part is that open_sync wasn't worse.
Ju
Tom Lane wrote:
[EMAIL PROTECTED] writes:
The improvements were REALLY astounding, and I would like to know if other
Linux users see this performance increase, I mean, it is almost 8~10 times
faster than using fsync.
Furthermore, it seems to also have the added benefit of reducing the I/O
storm
[EMAIL PROTECTED] writes:
>> Just out of interest, what happens to the difference if you use *ext3*
>> (perhaps with data=writeback)
>
> Actually, I was working for a client, so it wasn't a general exploratory
> exercise, but I can say that early on we discovered that ext3 was about the
> worst file system
> Just out of interest, what happens to the difference if you use *ext3*
> (perhaps with data=writeback)
Actually, I was working for a client, so it wasn't a general exploratory
exercise, but I can say that early on we discovered that ext3 was about the
worst file system for PostgreSQL. We gave up on it an
Just out of interest, what happens to the difference if you use *ext3*
(perhaps with data=writeback)
regards
Mark
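For anyone who wants to try Mark's suggestion: data=writeback is an ext3 mount
option, so the test is just remounting the PostgreSQL data filesystem with it,
e.g. via an /etc/fstab entry along these lines (device and mount point are
placeholders):

/dev/sda2  /var/lib/pgsql  ext3  noatime,data=writeback  0 2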
[EMAIL PROTECTED] wrote:
I did a little test on the various options of fsync.
...
create table testndx (value integer, name varchar);
create index testndx_val on testndx (value);
for
[EMAIL PROTECTED] writes:
> The improvements were REALLY astounding, and I would like to know if other
> Linux users see this performance increase, I mean, it is almost 8~10 times
> faster than using fsync.
> Furthermore, it seems to also have the added benefit of reducing the I/O
> storm at checkp
[EMAIL PROTECTED] wrote:
> Furthermore, it seems to also have the added benefit of reducing the I/O
> storm at checkpoints over a system running with fsync off.
>
> I'm really serious about this: changing this one parameter had dramatic
> results on performance. We should have a general call to us
> [EMAIL PROTECTED] writes:
>> I did a little test on the various options of fsync.
>
> There were considerably more extensive tests back when we created the
> different WAL options, and the conclusions seemed to be that the best
> choice is platform-dependent and also usage-dependent. (In particu
[EMAIL PROTECTED] writes:
> I did a little test on the various options of fsync.
There were considerably more extensive tests back when we created the
different WAL options, and the conclusions seemed to be that the best
choice is platform-dependent and also usage-dependent. (In particular,
it ma
I did a little test on the various options of fsync.
I'm not sure my tests are scientific enough for general publication or
evaluation; all I am doing is performing a loop that inserts a value into
a table 1 million times.
create table testndx (value integer, name varchar);
create index testndx_val on testndx (value);
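The driver loop itself is cut off in the preview; a sketch of the kind of loop
described, written here with libpq in C (connection string and generated
values are assumptions; each statement runs as its own implicit transaction,
so every row forces a WAL flush):

/* Insert into testndx many times; each PQexec() outside a transaction
 * block commits on its own, so every row forces a WAL flush. */
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    char sql[256];
    for (long i = 0; i < 1000000; i++) {
        snprintf(sql, sizeof sql,
                 "INSERT INTO testndx VALUES (%ld, 'name_%ld')", i, i);
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}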