(For those not knowing - it's ReadFile/WriteFile where you pass an array
of "this many bytes to this address" as parameters)
Isn't that like the BSD writev()/readv() that Linux supports also? Is
that something we should be using on Unix if it is supported by the OS?
Nope, readv()/writev() read/w
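For readers unfamiliar with the POSIX calls being compared here, a minimal sketch of scatter-gather I/O using Python's `os.writev()`/`os.readv()` wrappers (available on POSIX systems; the pipe and buffer sizes are just illustration):

```python
import os

# Scatter-gather I/O: one syscall moves data to/from several buffers.
r, w = os.pipe()

# writev: gather two separate buffers into a single write.
written = os.writev(w, [b"hello ", b"world"])
os.close(w)

# readv: scatter the incoming bytes across two pre-allocated buffers,
# filling each in order.
buf1, buf2 = bytearray(6), bytearray(5)
nread = os.readv(r, [buf1, buf2])
os.close(r)

print(written, nread, bytes(buf1) + bytes(buf2))
# → 11 11 b'hello world'
```

The point of the thread is that these calls gather/scatter to *consecutive* offsets of one file descriptor, which is not the same thing as Windows' ReadFile/WriteFile scatter lists.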
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end). Basically
there will end up being a lot of historical data with litt
Sven Willenberger <[EMAIL PROTECTED]> writes:
> 3) Each month:
> CREATE newmonth_dynamically_named_table (like mastertable) INHERITS
> (mastertable);
> modify the copy.sql script to copy newmonth_dynamically_named_table;
> pg_dump 3monthsago_dynamically_named_table for archiving;
> drop table 3mont
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end). Basically
there will end up being a lot of historical data with little value
beyond archival
Cosimo Streppone <[EMAIL PROTECTED]> writes:
> The performance level of Pg 8 is at least *five* times higher
> (faster!) than 7.1.3 in "query-intensive" transactions,
> which is absolutely astounding.
Cool.
> In my experience, Pg8 handles far better non-unique indexes
> with low cardinality built
Cosimo Streppone wrote:
Merlin Moncure wrote:
> If everything is working the way it's supposed to, 8.0 should be faster
> than 7.1 (like, twice as fast) for what you are probably trying to do.
In the next few days I will be testing the entire application with the
same database, only changing the backend

Hi *,
I am looking for the fastest wal_sync_method
(postgres 8, Linux (Redhat) 2.4.29, ext3, SCSI HW-Raid 5).
Any experiences and/or tips?
Thanks in advance
Stefan
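For reference, the knob in question lives in postgresql.conf. A sketch of the setting; which of the listed methods are actually selectable depends on what the OS and libc expose (O_SYNC/O_DSYNC), so the usual advice is to benchmark each available method against your own workload:

```
# postgresql.conf -- wal_sync_method controls how WAL writes are flushed.
# On Linux the common candidates are fdatasync (the default) and open_sync.
wal_sync_method = fdatasync   # one of: fsync, fdatasync, open_sync, open_datasync
```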
On Mon, Feb 28, 2005 at 16:46:34 +0100,
Markus Schaber <[EMAIL PROTECTED]> wrote:
> Hi, Matthew,
>
> Matthew T. O'Connor wrote:
>
> > The version of pg_autovacuum that I submitted for 8.0 could be
> > instructed "per table", but it didn't make the cut. Aside from being moved
> > out of contrib and
On Fri, 2005-02-25 at 08:49 -0500, Jeff wrote:
> Also another thing I started working on back in the day and hope to
> finish when I get time (that is a funny idea) is having explain analyze
> report when a step required the use of temp files.
Sounds useful. Please work on it...
Best Regards, S
Hi, John,
John Allgood wrote:
> My question is what is the best way to set up
> postgres databases on different disks. I have set up multiple postmasters
> on this system as a test. The only problem was configuring each
> database's files, i.e. postgresql.conf and pg_hba.conf. Is there any way in
> postgres
Gaetano Mendola wrote:
Yes, I'm aware of it; indeed I need the ANALYZE because I usually run
selects on that table covering the last 24 hours, so I need to analyze it
in order to collect the statistics for that period.
Besides that, I tried to partition that table; I used both techniques
I know of:
1) A
Hi, Matthew,
Matthew T. O'Connor wrote:
> The version of pg_autovacuum that I submitted for 8.0 could be
> instructed "per table", but it didn't make the cut. Aside from being moved
> out of contrib and integrated into the backend, per-table autovacuum
> settings are probably the next highest priority
Hi, All
I'm trying to tune a software RAID 0 (striped) on a Solaris 9, SPARC box.
Currently I'm using a RAID 1 (mirrored) array on two disks for the data area,
and I put in 4 new drives last night (all are f-cal). On the new array I have
a width of 4, and used the default interleave factor of 3
Markus Schaber wrote:
> Hi, Gaetano,
>
> Gaetano Mendola wrote:
>
>
>>I have the same requirement too. Actually pg_autovacuum cannot be
>>instructed "per table", so sometimes the global settings are not good
>>enough. I have a table of logs with 6
Hello,
I'm experiencing performance problems with 7.4.3 on OpenBSD 3.6, at
least I think so. It is running on a Xeon 3 GHz with 2 GB RAM.
I have a table with 22 columns, all integer, timestamp or varchar, and
10 indexes on integer, timestamp and varchar columns.
The table has 8500 rows (but growi