> From: Matthew Wakeling
>
> Perhaps reading the other replies in the thread before
> replying yourself might be advisable, because this previous
> reply directly contradicts you:
>
> On Wed, 28 Oct 2009, Kevin Grittner wrote:
> > I recommend VACUUM ANALYZE of the table(s) after this step. With
> -Original Message-
> From: Denis BUCHER
>
> And each morning huge tables are DELETED and all data is
> INSERTed new from a script. (Well, "huge" is very relative,
> it's only 400'000 records)
If you are deleting ALL rows in the tables, then I would suggest using
TRUNCATE instead of DELETE.
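A sketch of the idea, with a hypothetical table name:

```sql
-- TRUNCATE reclaims the table's space immediately and does not
-- process rows one at a time, so it is far faster than DELETE:
TRUNCATE TABLE daily_import;
-- instead of the row-by-row version:
-- DELETE FROM daily_import;
```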
>From: keshav upadhyaya
>Subject: [PERFORM] What is the role of #fsync and #synchronous_commit in
configuration file .
>
>Hi ,
>I want to improve the performance of inserting huge data into my table.
>I have only one index in the table.
>
>First question - i want to know the role played b
>From: Shiva Raman
>Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak
times
>
>Andy Colson wrote:
>>Eww. I think that's bad. A connection that has a transaction open will
cause lots of row versions,
>>which use up ram, and make it slower to step through the table (even
> -Original Message-
> From: Brian Cox
> Subject: [PERFORM] select max() much slower than select min()
>
> seems like max() shouldn't take any longer than min() and
> certainly not 10 times as long. Any ideas on how to determine
> the max more quickly?
That is odd. It seems like max
>We have two machines. Both running Linux Redhat, both running postgres
8.2.5.
>Both have nearly identical 125 GB databases. In fact we use PITR Recovery
to
>Replicate from one to the other.
I have to ask the obvious question. Do you regularly analyze the machine
you replicate to?
Dave
> -Original Message-
> From: Alexander Staubo
>
>-> Nested Loop (cost=0.00..5729774.95 rows=10420 width=116)
> (actual time=262614.470..262614.470 rows=0 loops=1)
> Join Filter: ((photos.taken_at > (event_instances."time" +
> '-01:00:00'::interval)) AND (photos.taken_at < (e
> -Original Message-
> From: Anne Rosset
> Subject: Re: [PERFORM] Unexpected query plan results
>
> >> SELECT SUM(1) FROM item WHERE is_deleted = 'f';
> >>   sum
> >> ---------
> >>  1824592
> >> (1 row)
> >>
> >> SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641';
> >>  sum
> >> ------
> >>  1224
> From: Anne Rosset
> Subject: Re: [PERFORM] Unexpected query plan results
> >
> >
> Thank Dave. We are using postgresql-server-8.2.4-1PGDG and
> have work-mem set to 20MB.
> What value would you advise?
> thanks,
>
> Anne
Work_mem is kind of tricky because the right setting depends on how much
memory the server has and how many sorts may run at the same time.
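You can experiment per session before touching postgresql.conf; the value here is just a placeholder to try:

```sql
-- work_mem is allocated per sort/hash operation, per backend, so
-- many concurrent queries can multiply the total memory used
SET work_mem = '32MB';
-- then re-run the slow query under EXPLAIN ANALYZE and look for
-- sorts spilling to disk ("external merge") versus in-memory sorts
```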
> From: Anne Rosset
> Subject: [PERFORM] Unexpected query plan results
>
> Hi,
> We have one query which has a left join. If we run this query without
> the left join, it runs slower than with the left join.
[snip]
> I am having a hard time to understand why the query runs
> faster with the
> l
What I don't understand is the part where you talk about disabling hash
joins:
>* result: planner replaced hash join is replaced by merge join
>* execution time: 0.84ms !
>* NOTE: curiously planner is expecting to execute this query in 0.29ms
- so it's supposed from its logic to be
> -Original Message-
> From: Brian Cox
> Subject: [PERFORM] Deleting millions of rows
>
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
> delete from ts_defects;
> Result: out of memory/Can't allocate size: 32 I then did 10
> or so deletes to get rid of the rows. Afterwar
> -Original Message-
> From: SPMLINGAM
> Subject: [PERFORM] postgresql performance
>
> Dear Friends,
> I have a table with 50 lakhs records, the table has more
> then 10 fields, i have primary key, i have select query with
> count(*) without any condition, it takes 17 seconds.
17 s
> -Original Message-
> From: tmp
> We have primarily two tables of interest here: questions
> (~100k rows) and posts (~400k rows). Each post refers to a
> question, but only the "posts" rows for which the
> corresponding "question.status = 1" are relevant. This
> reduces the number of r
> -Original Message-
> From: cluster
>
> If I disable the nested loops, the query becomes *much* slower.
>
> A thing that strikes me is the following. As you can see I have the
> constraint: q.status = 1. Only a small subset of the data set
> has this status. I have an index on q.status
> -Original Message-
> From: cluster
>
> >> Probably by buying much faster disk hardware.
> > Or buy more RAM, so that the data can stay cached.
>
> So the only problem here is lack of RAM and/or disk speed?
I don't think you can reach that conclusion yet. Like everybody said the
reason
> -Original Message-
> From: Ow Mun Heng
> Subject: Re: [PERFORM] PostgreSQL vs MySQL, and FreeBSD
>
> Even for Postgresql, nested loops are still evil and hampers
> performance.
I don't know about that. There are times when it is the right plan:
explain analyze select * from table1
>From: Christian Rengstl
>Subject: [PERFORM] Optimizing PostgreSQL for Windows
>
>Hi list,
>
>I have the following query:
>select t.a1, t.a2 from table1 t inner join table2 s
>using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;
>
>"-> Bitmap Heap Scan on table1 t (cost=388.25..
> -Original Message-
> From: Gábor Farkas
>
>
> i see.
>
> will i achieve the same thing by simply dropping that table
> and re-creating it?
Yes. Or even easier (if you don't need the data anymore) you can use the
TRUNCATE command, which deletes everything in the table including dead rows.
>From: [EMAIL PROTECTED]
>Subject: Re: [PERFORM] Low CPU Usage
>
>I have no idea how to continue researching this problem. Now I'm going to
do some networks tests.
I would go back to the slow program and try to capture the slow queries in
the log file. Once you have some queries which are runn
>From: smiley2211
>Subject: Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!
>
>-> Seq Scan on encounters_questions_answers eqa
>(cost=1.00..17608.66 rows=464766 width=8) (actual
>time=0.003..735.934 rows=464766 loop
>s=7430)
I think Tom is talking about something like this:
explain select * from foo();
QUERY PLAN
--
Function Scan on foo (cost=0.00..12.50 rows=1000 width=50)
The planner is estimating the function will return 1000 rows.
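A minimal illustration of that default estimate (the function here is made up):

```sql
-- The planner cannot see inside a function body, so a set-returning
-- function gets a default row estimate regardless of what it returns:
CREATE OR REPLACE FUNCTION foo() RETURNS SETOF integer AS
$$ SELECT generate_series(1, 5) $$ LANGUAGE sql;

EXPLAIN SELECT * FROM foo();
-- Function Scan on foo  (cost=0.00..12.50 rows=1000 width=4)
```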
> From: Scott Feldstein
> Subject: [PERFORM] update, truncate and vacuum
>
> Hi,
> I have a couple questions about how update, truncate and
> vacuum would work together.
>
> 1) If I update a table foo (id int, value numeric (20, 6))
> with update foo set value = 100 where id = 1
>
> Would a va
> From: Christo Du Preez
> Sent: Tuesday, June 12, 2007 2:38 AM
>
> Where do I set the planner settings or are you referring to
> settings in postgres.conf that may affect the planner?
>
Yes, I'm referring to settings in postgresql.conf. I'm wondering if
enable_indexscan or something got turned off.
> -Original Message-
> From: Christo Du Preez
> Sent: Monday, June 11, 2007 10:10 AM
>
> I have narrowed down the problem (I think) and it's the query
> planner using different plans and I haven't got a clue why.
> Can anyone please shed some light on this?
Different plans can be caused
I think the manual is implying that if you store a value like "Sid" in a
field either of type varchar(128) or type text there is no performance
difference. The manual is not saying that you get the same performance
storing a 500k text field as when you store the value "Sid".
Dave
-Original
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Jeroen Kleijer
>
> The problems comes when I try to do a query without using a
> where clause
> because by then, it completely discards the indexes and does
> a complete
> table scan which takes over half an hour! (40.710.725
I think a database with all natural keys is unrealistic. For example, if you
have a table that refers to people, are you going to use their name as a
primary key? Names change all the time due to things like marriage,
divorce, or trouble with the law. We have tables with 20 million rows which
ref
In your first post you said that the query is taking much longer than a
second, and in your second post you say the performance is horrible, but
explain analyze shows the query runs in 219 milliseconds, which doesn't seem
too bad to me. I wonder if the slow part for you is returning all the rows
t
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Xiaoning Ding
>
>
> Hi all,
>
> When I run multiple TPC-H queries (DBT3) on postgresql, I
> found the system
> is not scalable. My machine has 8GB memory, and 4 Xeon Dual
> Core processor
> ( 8 cores in total). OS kernel
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Vincenzo Romano
>
> Is there any "workaround"?
>
> In my opinion the later the query planner decisions are taken the more
> effective they can be.
> It could be an option for the function (body) to delay any
> query planner
>
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Dan Harris
>
> After some digging, I've found that the planner is choosing
> to apply a necessary seq scan to the table. Unfortunately,
> it's scanning the whole table, when it seems that it could
> have joined it to a small
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Vincenzo Romano
>
> I thought that the query planner usually did a bad job on
> function bodies
> because they'd appear opaque to it.
> In this case it seems to me that the body is opaque only if I
> use the "like"
> operator.
-Original Message-
>From: [EMAIL PROTECTED] On Behalf Of Gauri Kanekar
>Subject: Re: [PERFORM] Nested Loop
>
>join_collapse_limit = 1 # JOINs
Is there a reason you have this set to 1? Postgres can't consider multiple
join orders when you do that. I would try setting that back to the default
of 8.
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Execution plan changed after upgrade
> from 7.3.9 to 8.2.3
>
>
> Increasing the default_statistics_target to 1000 did not help.
> It just make the vacuum full analyze to take longer
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Execution plan changed after upgrade
> from 7.3.9 to 8.2.3
>
> The following did not change anything in the execution plan
>
> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min S
Did you run ANALYZE on your data after importing it into 8.2.3? Is there an
index on the seqfil column? If so, you should post the output of EXPLAIN
ANALYZE from both systems if possible.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Glenn S
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
>
> I took another look and think I found the problem: 8.2's new code for
> flattening UNION ALL subqueries into "append relations" is failing to
> initialize all the fields of the appendrel, which confuses
> estimate_num
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
>
>
> In fact, since there isn't any "parent relation" in a UNION, I'm not
> sure that this patch actually changed your results ... but I'm not
> seeing what else would've ...
>
Thanks for looking into it. I thought I
_index (cost=0.00..4.26 rows=1 width=0)
(actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (process_state = 2)
Total runtime: 27.055 ms
Thanks,
Dave Dutcher
Telluride Asset Management
952.653.6411
---(end of broadcast)---
gres 8.2.1 and real world queries, and the two machines are
basically performing the same, but those seek numbers kinda bug me.
Thanks,
Dave Dutcher
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Rolf Østvik (HA/EXA)
Have you tried "set enable_sort=off" with 8.1.2? I'm not sure if that will
change anything because it has to do at least one sort. It's just a lot
faster to do a hashagg + small sort.
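For reference, the planner toggles are session-local and meant only for testing, not for production config:

```sql
SET enable_sort = off;
-- re-run the problem query under EXPLAIN ANALYZE here
RESET enable_sort;  -- put the planner back to normal
```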
Have you run vacuum and analyze on the table? What version of Postgres are
you running? What OS are you using?
This looks like a straight forward query. With any database the first time
you run the query its going to be slower because it actually has to read off
disk. The second time its fast
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
>
> [ fools around with it for awhile... ] I think this is already fixed
> for 8.2.1. Note the costs of the two related index scans:
I installed 8.2.1 this morning and it works much better.
comparison is still valid.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adam Rich
Sent: Thursday, January 04, 2007 7:19 PM
To: 'Dave Dutcher'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow Query on Postgres 8.2
Dave,
Is it me
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
>
> Um ... what indexes has this table got exactly? It's very
> unclear what
> alternatives the planner is being faced with.
>
Here is the table definition. Thanks.
d increasing my stats target to 1000 and analyzing, but that
didn't help so I put that back to 10. While the query is running the CPU is
at 100%. Is there a more efficient way to write a query like this? I've
attached the output from EXPLAIN ANALYZE in a file because it is somewhat
> -Original Message-
> From: [EMAIL PROTECTED] On Behalf Of Alexandru Coseru
> asterisk=> explain analyze SELECT * FROM destlist WHERE
> '0039051248787' ~
> prefix AND id_ent='-2' AND dir=0 ORDER by length(prefix) DESC;
>
>
> Q
> -Original Message-
> From: [EMAIL PROTECTED]
> Nikolay Samokhvalov
>
> What should I do to make Postgres work properly in such cases (I have
> a lot of similar queries; surely, they are executed w/o seqscans, but
> overall picture is the same - I see that starting from sub-selects
> dra
> Hello friends,
>
> I am responsible for maintaining a high volume website using
> postgresql
> 8.1.4. Given the amount of reads and writes, I vacuum full
> the server a
> few times a week around 1, 2 AM shutting down the site for a few
> minutes. The next day morning around 10 - 11 AM the serv
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
>
> lc_collate is C, as are all the other lc settings.
>
> I have run the analyze commands.
>
> Still the same.
That is strange. I figured it had to be related to the locale and the LIKE
operator
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Adnan DURSUN
> i want to be can read an execution plan when
> i look at it.
> So, is there any doc about how it should be read ?
You are asking how to read the output from EXPLAIN.
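The "Using EXPLAIN" chapter of the manual is the doc you want; the short version, with an arbitrary query:

```sql
EXPLAIN SELECT * FROM pg_class WHERE relname = 'pg_proc';
-- Read the plan bottom-up: inner (more indented) nodes run first.
-- Each node shows: cost=startup..total  rows=estimate  width=bytes.
-- With EXPLAIN ANALYZE you also get "actual time" and actual row
-- counts, which is what you compare against the estimates.
```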
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Carlo
Stonebanks
> Subject: [PERFORM] Performace Optimization for Dummies
>
> At this early stage in the project, we are initializing our portal's
> database with millions of rows of imported data in ove
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim Truman
>
> Hi,
>
> I have the following query which has been running very slowly
> and after a
> lot of testing/trial and error I found an execution plan that
> ran the query
> in a fraction of th
I would start by reading this web page:
http://powerpostgresql.com/PerfList
There are probably some other web pages out there with similar information,
or you can check the mailing list archives for a lot of info. If those
places don't help, then you should try to identify what queries are slow
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Jérôme BENOIS
>
explain analyze select distinct
> INTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_
> VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from (((
> select distinct e
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Piñeiro
> > TIP 6: explain analyze is your friend
> Well, yes, it is a friend, but as the select at postgre Sarge version
> never finished I can't use a explain analyze. I show you the explain,
> with the hope that someone has an
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Piñeiro
> Subject: [PERFORM] Performance problem with Sarge compared with Woody
> a week ago we migrate a Woody(postgre 7.2.1) server to Sarge(postgre
> 7.4.7). To migrate the database we use a dump, u
I would expect that the age of Template0 is increasing at the same rate as
every other database in your cluster. Transaction IDs are global across all
databases in the cluster, so as I understand it, executing a transaction in
any database will increase the age of all databases.
That's an interesting situation. Your CPUs are pegged, and you're hardly
doing any IO. I wonder if there is some inefficient query, or if it's just
very high query volume. Maybe you could try setting
log_min_duration_statement to try to track down the slowest of the queries.
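In postgresql.conf, something like this (the threshold is a guess; tune it to your workload):

```
log_min_duration_statement = 250   # log any statement slower than 250 ms
```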
It seems to me that what would work best is an index scan backward on the
eventtime index. I don't see why that wouldn't work for you; maybe the
planner is just estimating that the seq scan and sort are faster for some
reason.
What does EXPLAIN say if you use a small limit and offset like 10? Or what
d
I would guess that you are not running vacuumdb as a user with permission to
vacuum the postgres or template1 databases. Try telling vacuumdb to log in
as postgres or whatever your superuser account is called.
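For example (assuming the superuser account is named postgres):

```shell
# connect as the superuser so the postgres and template1
# databases can be vacuumed too
vacuumdb --all --analyze -U postgres
```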
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Luiz K. Matsumura
>
>
> Where I can see the current random_page_cost value ? There
> are some hint
> about what value I must set ?
> Thanks in advance.
> Luiz
On Linux the random_page_cost is s
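You can check and experiment with it per session before committing a value to postgresql.conf:

```sql
SHOW random_page_cost;      -- 4.0 is the default
SET random_page_cost = 2.0; -- worth trying only if your data is
                            -- mostly cached or on fast disks
```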
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Luiz K. Matsumura
> Well, in this case the queries with LEFT OUTER join and with
> inner join
> returns the same result set. I don´t have the sufficient knowledge to
> affirm , but I suspect that if the query plan used for
Could you post an explain analyze of the query? Just FYI, if you do an
explain analyze of the insert statement, it will actually do the insert. If
you don't want that, just post an explain analyze of the select part.
To me it would be interesting to compare just the select
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Greg Stark
> Sent: Tuesday, June 13, 2006 11:16 PM
> Subject: Re: [PERFORM] OT - select + must have from - sql
> standard syntax?
[SNIP]
>
> Well you could always create a "dual", it was always jus
Do you run analyze on the production server regularly?
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Ruben Rubio Rey
> Sent: Monday, June 12, 2006 9:39 AM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Posrgres speed problem
>
>
Actually, it looks to me like the sorting is the slow part of this query.
Maybe if you created an index on both kuupaev and kellaaeg it might
make the sorting faster. Or maybe you could try increasing the server's
work_mem. The sort will be much slower if the server can't do the whole
thing in memory.
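A sketch (the table name is made up; the column order should match the ORDER BY):

```sql
-- lets the sort be satisfied by the index instead of an explicit sort
CREATE INDEX idx_log_kuupaev_kellaaeg ON logtable (kuupaev, kellaaeg);
-- and/or give the sort more memory for this session:
SET work_mem = '32MB';
```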
What I do when I'm feeling lazy is execute a delete statement and then
an insert. I only do it when I'm inserting/updating a very small number
of rows, so I've never worried if it's optimal for performance. Besides,
I've heard that an update in postgres is similar in performance to a
delete/insert.
If you can live with possible database corruption, you could try turning
fsync off. For example, if you could just reinsert the data on the off
chance a hardware failure corrupts the database, you might get a decent
improvement.
Also, have you tried creating the index after you have inserted all your
data?
Are you using the Postgres JDBC driver? Or are you using an ODBC JDBC
driver? The Postgres-specific driver is usually faster.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Thomas Vatter
Sent: Wednesday, May 10, 2006
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Clemens Eisserer
> Sent: Wednesday, May 10, 2006 6:50 AM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Question about explain-command...
>
> What does the hash-lines mean, does that
For a standard config most of the memory used by Postgres is the shared
buffers. The shared buffers are a cache to store blocks read from the disk,
so if you do a query, Postgres will allocate and fill the shared buffers up
to the max amount you set in your postgresql.conf file.
> -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)
> (actual time=68.322..529472.026 rows=57925 loops=1)
>-> Seq Scan on ticketing_codes_played
> (cost=0.00..863.25 rows=57925 width=8) (actual time=0.042..473.881
> rows=57925 loops=1)
>-> Index
You are pulling a fair amount of data from the database and doing a lot of
computation in the SQL. I'm not sure how fast this query could be expected
to run, but I had one idea. If you've inserted and deleted a lot into this
table, you will need to run vacuum occasionally.
If you are using a locale other than the C locale, you need to create the
index with an operator class to get index scans with LIKE.
See here for details:
http://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html
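For example (table and column made up; use text_pattern_ops for text columns):

```sql
CREATE INDEX idx_name_prefix ON mytable (name varchar_pattern_ops);
-- now LIKE 'abc%' prefix searches can use the index even in a
-- non-C locale; LIKE '%abc' still cannot use it
```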
-Original Message-From:
[EMAIL P
Maybe you could post the query and an
EXPLAIN ANALYZE of the query. That would give more information for trying to
decide what is wrong.
So your question is basically why you get
a slower read rate on this query than on other queries? If I had to guess, maybe it could be that
you are s
I’ve never used a cursor in Postgres, but I don’t think it will help you a
lot. In theory cursors make it
easier to do paging, but your main problem is that getting the first page is
slow. A cursor isn’t going to
be any faster at getting the first page than OFFSET/LIMIT is.
Did you tr
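One thing worth trying instead of big OFFSETs is keyset paging; a sketch with made-up columns:

```sql
-- first page
SELECT id, title FROM items ORDER BY id LIMIT 50;
-- next page: continue from the last id the client saw (here 50),
-- instead of making the server skip OFFSET rows
SELECT id, title FROM items WHERE id > 50 ORDER BY id LIMIT 50;
```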
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Tarabas (Manuel Rorarius)
> Subject: [PERFORM] Problem with LIKE-Performance
>
> Hi!
>
> I am having trouble with like statements on one of my tables.
It looks like you are getti
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Brian Herlihy
> Sent: Thursday, April 06, 2006 6:56 PM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Query planner is using wrong index.
[Snip]
> I am really surpri
I use Npgsql, and the connection string I use is real simple:
Server=192.168.0.36;Database=mydb;User Id=myuserid;Password=123456
Hope that helps,
Dave
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Greg Quinn
> Sent: Wednesday,
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Steinar H. Gunderson
> A merge join requires sorted inputs.
>
> > Most of the time was spent sorting the parameters parameters table
by
> > opset_num even though opset_num is indexed
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Steinar H. Gunderson
> Sent: Tuesday, March 28, 2006 10:29 AM
>
> An index on (A,B,C) can be used for a query on (A,B) or (A), so it
doesn't
> really matter. It isn't usable for a que
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of george young
> Sent: Monday, March 27, 2006 12:48 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] simple join uses indexes, very slow
>
[Snip]
>
> Indexes:
> "pa
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Jim C. Nasby
> Subject: Re: [PERFORM] Problem with query, server totally unresponsive
>
> On Thu, Mar 23, 2006 at 01:12:08PM +0100, Bendik Rognlien Johansen
wrote:
> > Hello, I have a big problem with one of my
[Snip]
> >
> > shared_buffers = 256
>
> Make this higher too. If this is a dedicated machine with 512 MB of
ram,
> set it to something like 125000.
>
> You may need to adjust shared memory settings for your operating
system.
> See the manual for details.
>
Whoa. Maybe I'm wrong, but isn't each
Actually, I think LEFT OUTER JOIN is equivalent to LEFT JOIN. The
Postgres manual says that the word OUTER is optional. Either way you
get "...all rows in the qualified Cartesian product (i.e., all combined
rows that pass its join condition), plus one copy of each row in the
left-hand table for which there was no right-hand row that passed the
join condition."
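In other words, these two are interchangeable (tables made up):

```sql
SELECT a.id, b.val FROM a LEFT JOIN b ON b.a_id = a.id;
SELECT a.id, b.val FROM a LEFT OUTER JOIN b ON b.a_id = a.id;
-- identical plans; OUTER is purely optional noise
```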
d: (patientidentifier = 690193)
Total runtime: 91166.540 ms
Tim Jones
Healthcare Project Manager
Optio Software, Inc.
(770) 576-3555
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, February 10, 2006 5:52 PM
To: Tim Jones
Cc: Scott Marlowe; Dave Dutcher; pgsql-
What version of Postgres are you using? Can you post the output from
EXPLAIN ANALYZE?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tim Jones
Sent: Friday, February 10, 2006 4:07 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM]
First I'm wondering if the tables have been recently analyzed. If an
analyze has been run recently, then it is probably a good idea to look
at the statistics target.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim C.
Nasby
Sent: Thursday, February 09
I don't think pgpool is what you need. If I understand pgpool
correctly, pgpool lets you pool multiple Postgres servers together. You
are just looking for database connection pooling.
A simple connection pool is basically just an application-wide list of
connections. When a client needs a connection, it borrows one from the
list rather than opening a new one.
Hi,
I’ve looked around through the docs, but can’t
seem to find an answer to this. If
I change a column’s statistics with “Alter table alter column set
statistics n”, is there a way I can later go back and see what the number
is for that column? I want to be
able to tell which columns
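One place to look is pg_attribute (a sketch; -1 means the column is using the default target):

```sql
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass  -- hypothetical table name
  AND attnum > 0 AND NOT attisdropped;
```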
Hello,
I have an inner join query that runs fast, but when I
change to a left join the query runs 96 times slower. I wish I could always do an inner join,
but there are rare times when there isn’t data in the right hand
table. I could expect a small
performance hit, but the differen