Hi! Here are the specifications of my server.
I would really appreciate the best configuration of postgresql.conf for my
server.
I have tried so many values for the parameters, but it seems that I cannot
get the speed I want.
OS: Red Hat Linux
CPU: Dual Xeon
Memory: 6 GB
PostgreSQL Version 8.0
That seems like a pretty horrible way to do that query, given the table sizes.
What about something like:
SELECT count(*)
FROM fotos f
LEFT JOIN archivo a USING(archivo)
WHERE a.archivo IS NULL
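One caveat with this rewrite: `NOT IN` and the LEFT JOIN anti-join only agree when the subquery column contains no NULLs. A quick toy check of the equivalence (SQLite standing in for PostgreSQL here, and the table contents are made up; the thread spells the table both `archivo` and `archivos`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy stand-ins for the fotos / archivos tables from the thread.
cur.execute("CREATE TABLE fotos (archivo TEXT)")
cur.execute("CREATE TABLE archivos (archivo TEXT)")
cur.executemany("INSERT INTO fotos VALUES (?)",
                [("a",), ("b",), ("c",), ("d",)])
cur.executemany("INSERT INTO archivos VALUES (?)",
                [("a",), ("c",)])

# Original form: NOT IN subquery (planned poorly by 8.x on big tables).
not_in = cur.execute(
    "SELECT count(*) FROM fotos "
    "WHERE archivo NOT IN (SELECT archivo FROM archivos)"
).fetchone()[0]

# Rewritten form: LEFT JOIN anti-join (ON clause is equivalent to the
# USING (archivo) spelling above).
anti_join = cur.execute(
    "SELECT count(*) FROM fotos f "
    "LEFT JOIN archivos a ON f.archivo = a.archivo "
    "WHERE a.archivo IS NULL"
).fetchone()[0]

print(not_in, anti_join)  # both count the unmatched fotos rows: 2 2
```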
Incidentally, can someone explain what the "Materialize" subplan does? Is this
new in 8.1?
Dmitri
What happened with PostgreSQL 8.1b4 performance on this query?
Please help me!
Look at this:
select count(*) from fotos where archivo not in (select archivo from
archivos)
Aggregate (cost=4899037992.36..4899037992.37 rows=1 width=0)
-> Seq Scan on fotos (cost=22598.78..4899037338.07 rows=261716
Where can I find best practices for tuning PostgreSQL?
---(end of broadcast)---
I am creating a temporary table in every function that I execute,
which I think amounts to about 100,000 temporary tables a day.
What is the command for vacuuming these 3 tables?
Also, I read about PostgreSQL's autovacuum.
How can I run this autovacuum, or where are its settings in the configuration?
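For reference (hedged, since the thread doesn't say which version is in use): the manual command is simply `VACUUM ANALYZE tablename;` per table. On 8.0 autovacuum is the separate contrib/pg_autovacuum daemon started outside the server; from 8.1 it is built in and switched on in postgresql.conf, roughly as sketched below (parameter names differ in later releases). Note that autovacuum cannot reach other sessions' temporary tables, so temp tables still need an explicit VACUUM or, better, an explicit DROP.

```
# postgresql.conf fragment (8.1): built-in autovacuum needs row-level stats
stats_start_collector = on
stats_row_level       = on
autovacuum            = on
autovacuum_naptime    = 60    # seconds between autovacuum runs
```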
In this particular case both outer joins are to the same table, and
the where clause is applied to one or the other, so it's pretty easy
to prove that they should generate identical results. I'll grant that
this is not generally very useful; but then, simple test cases often
don't look very useful.
Rich Doughty <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The reason these are different is that the second case constrains only
>> the last-to-be-joined table, so the full cartesian product of t and h1
>> has to be formed. If this wasn't what you had in mind, you might be
>> able to rearrange
>> Am I correct in assuming that even though I'm passing my 52000
>> bytes as a (char *) to PQexecPrepared(), encoding/decoding is
>> happening (think 0 -> \000) somewhere in the transfer?
>
> Are you specifying it as a text or binary parameter? Have you looked to
> see if the stored data is what
>> Is there a better, faster way to do these inserts?
>
> COPY is generally the fastest way to do bulk inserts (see
> PQputCopyData).
Thanks :)
I'll give that a try and report the results here later.
Bye, Chris.
On Tue, 2005-10-25 at 14:46, Kishore B wrote:
> Hi All,
>
> We are executing a single query that returned very fast on the first
> instance. But when I executed the same query multiple times, it gave
> strange results: it is not coming back.
>
> When I checked with the processes runn
Hi All,
We are executing a single query that returned very fast on the first instance. But when I executed the same query multiple times, it gave strange results: it is not coming back.
When I checked with the processes running in the system, I observed that multiple instances of pos
Tom Lane wrote:
Rich Doughty <[EMAIL PROTECTED]> writes:
EXPLAIN SELECT *
FROM
tokens.ta_tokens t LEFT JOIN
tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN
tokens.ta_tokenhist h2 ON t.token_id = h2.token_id
WHERE
h1.histdate = 'now';
EXPLAIN SELECT *
F
Kishore B <[EMAIL PROTECTED]> writes:
>> I surmise that you are testing on toy tables and extrapolating to what
>> will happen on larger tables.
>>
> These tables participating here contain more than 8 million records as of
> now, and on every day, 200K records, will add to them.
In that case, ha
> If I turn on stats_command_string, how much impact would it have on
> PostgreSQL server's performance during a period of massive data
> INSERTs? I know that the answer to the question I'm asking will
> largely depend upon different factors so I would like to know in which
> situations it would b
On Thu, Oct 20, 2005 at 01:33:07PM -0700, [EMAIL PROTECTED] wrote:
> If I turn on stats_command_string, how much impact would it have on
> PostgreSQL server's performance during a period of massive data
> INSERTs?
Do you really need to be doing "massive data INSERTs"? Can you use
COPY, which is m
OK, thanks for the limits info, but I already have that from the manual.
But what I really want to know is this:
1) All large objects of all tables inside one database are kept in only one table. True or false?
Thanks =o)
Rodrigo
On 10/25/05, Nörder-Tuitje, Marcus <[EMAIL PROTECTED]> wrote:
Hi Tom,
Thank you for your response.
I surmise that you are testing on toy tables and extrapolating to what will happen on larger tables.
These tables participating here contain more than 8 million records as of now, and every day 200K records are added to them.
Thank you,
Kishore.
Markus Benne <[EMAIL PROTECTED]> writes:
> Our table in question is vacuumed every 4 minutes, and
> we are reindexing after each one.
That's pretty silly. You might need a reindex once in a while, but
not every time you vacuum.
The draft 8.1 docs contain some discussion of possible reasons for
pe
"Chris Mair" <[EMAIL PROTECTED]> writes:
> Am I correct in assuming that even though I'm passing my 52000
> bytes as a (char *) to PQexecPrepared(), encoding/decoding is
> happening (think 0 -> \000) somewhere in the transfer?
Are you specifying it as a text or binary parameter? Have you looked t
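For what it's worth, the overhead being asked about is most likely the text-format bytea escaping (the pre-9.0 "escape" format); a binary parameter avoids it entirely. A sketch of what that escaping does to the wire size, assuming a text parameter and the classic octal escape rules:

```python
def bytea_escape(data: bytes) -> str:
    """Render bytes as pre-9.0 'escape'-format bytea input, as sent in a
    text-format parameter: backslash doubled, non-printables as \\ooo octal."""
    out = []
    for b in data:
        if b == 0x5C:                  # backslash must be doubled
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:        # printable ASCII passes through
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)   # e.g. a zero byte becomes \000
    return "".join(out)

blob = bytes(52000)                    # all-zero blob like the test client's
escaped = bytea_escape(blob)
print(len(blob), "->", len(escaped))   # 52000 -> 208000: 4x on the wire
```

So a 52000-byte blob of zeros quadruples in size before it even reaches the server, which then has to decode it again.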
Markus Benne wrote:
We are reindexing frequently, and I'm wondering if
this is really necessary, given that it appears to
take an exclusive lock on the table.
Our table in question is vacuumed every 4 minutes, and
we are reindexing after each one.
I'm not a fan of locking this table that freque
Rich Doughty <[EMAIL PROTECTED]> writes:
> EXPLAIN SELECT *
> FROM
> tokens.ta_tokens t LEFT JOIN
> tokens.ta_tokenhist h1 ON t.token_id = h1.token_id LEFT JOIN
> tokens.ta_tokenhist h2 ON t.token_id = h2.token_id
> WHERE
> h1.histdate = 'now';
> EXPLAIN SELECT *
> FR
On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:
Is there a better, faster way to do these inserts?
COPY is generally the fastest way to do bulk inserts (see
PQputCopyData).
Mike Stone
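The COPY payload itself is easy to generate client-side: one line per row, tab-separated columns, \N for NULL, and backslash-escaped specials, then hand the buffer to PQputCopyData after issuing COPY ... FROM STDIN. A minimal sketch (the table and column names are made up for illustration):

```python
def copy_escape(value):
    """Render one value for PostgreSQL COPY text format."""
    if value is None:
        return r"\N"                   # NULL marker
    s = str(value)
    # Escape backslash first, then the characters COPY treats specially.
    s = s.replace("\\", "\\\\")
    s = s.replace("\t", "\\t").replace("\n", "\\n").replace("\r", "\\r")
    return s

def copy_line(row):
    """One COPY text-format line for a row (tuple of values)."""
    return "\t".join(copy_escape(v) for v in row) + "\n"

# Example rows for a hypothetical table blobs(id int, note text).
rows = [(1, "hello"), (2, None), (3, "tab\there")]
payload = "".join(copy_line(r) for r in rows)
# payload would be fed to the server via PQputCopyData (libpq) after
# issuing: COPY blobs (id, note) FROM STDIN;
print(payload)
```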
We are reindexing frequently, and I'm wondering if
this is really necessary, given that it appears to
take an exclusive lock on the table.
Our table in question is vacuumed every 4 minutes, and
we are reindexing after each one.
I'm not a fan of locking this table that frequently,
even if it is on
Hi,
I have the following test setup:
* PG 8.0.4 on Linux (Centos 4) compiled from source.
* DB schema: essentially one table with a few int columns and
one bytea column that stores blobs of 52000 bytes each, a
primary key on one of the int columns.
* A test client was written in C using lib
Hi!
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf
> of Richard Huxton
> Sent: Tuesday, 25 October 2005 12:07
> To: Markus Wollny
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Strange planner decision on quite simple
Kishore B <[EMAIL PROTECTED]> writes:
> Can you guys please take a look at the following query and let me know why
> the index is not considered in the plan?
"Considered" and "used" are two different things.
The two examples you give have the same estimated cost (within two
decimal places) so th
I tried on pgsql-general but got no reply, so I'm re-posting here as it's
probably the best place to ask.
I'm having some significant performance problems with left join. Can
anyone give me any pointers as to why the following 2 query plans are so
different?
EXPLAIN SELECT *
FROM
tokens.ta_tokens
Markus Wollny wrote:
Hello!
I've got a table BOARD_MESSAGE (message_id int8, thread_id int8, ...)
with pk on message_id and a non_unique not_null index on thread_id.
A count(*) on BOARD_MESSAGE currently yields a total of 1231171 rows,
the planner estimated a total of 1232530 rows in this ta
Hello!
I've got a table BOARD_MESSAGE (message_id int8, thread_id int8, ...)
with pk on message_id and a non_unique not_null index on thread_id.
A count(*) on BOARD_MESSAGE currently yields a total of 1231171 rows,
the planner estimated a total of 1232530 rows in this table. I've got
pg_autova
Hi All,
Thank you very much for your help in configuring the database.
Can you guys please take a look at the following query and let me know why the index is not considered in the plan?
Here is the extract of the condition string of the query that is taking the transaction_date in index con
Does creating temporary tables
in a function and NOT dropping them affect the performance of the database?
Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60
shared_buffers = 48000
sort_mem = 32168
fsync = false
Do you think this is enough? Or