id in 8.1.
Note that the solution is not to force the plan, but forcing it can give us
more information.
/Dennis
> is at least one query which has degraded in performance quite a bit. Here
> is the plan on 8.4.2:
> http://wood.silentmedia.com/bench/842
>
> Here is the very much less compa
On Thu, Aug 28, 2008 at 8:11 PM, Scott Marlowe <[EMAIL PROTECTED]>wrote:
> > wait a min here, postgres is supposed to be able to survive a complete
> box
> > failure without corrupting the database, if killing a process can corrupt
> > the database it sounds like a major problem.
>
> Yes it is a m
Slightly off-topic, but judging from the fact that you were able to
"fix" the query, it seems you have some way to modify the application
code itself. In that case, I'd try to implement caching (at least for
this statement) on the application side, for example with memcached.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
The "fast" server makes a much more accurate estimate of the number
of rows to expect (4 rows returned, 1 estimated). The "slow"
server estimates 1151 rows. Try running ANALYZE on the slow one.
from raid0 is not worth the downtime
risk, even when you have multiple servers. I'll start pricing things
out and see what options we have.
Thanks again,
Dennis
Greg Smith wrote:
On Tue, 6 May 2008, Dennis Muhlestein wrote:
> Since disks are by far the most likely thing to fail, I think it would
be bad planning to switch to a design that doubles the chance of a disk
failure taking out the server just because you're adding some
serv
Greg Smith wrote:
On Tue, 6 May 2008, Dennis Muhlestein wrote:
RAID0 on two disks makes a disk failure that will wipe out the database
twice as likely. If your goal is better reliability, you want some sort
of RAID1, which you can do with two disks. That should increase read
throughput a
I'm originally
setting out to do?
TIA
-Dennis
at is
taking a long time and focus on optimizing those statements that take
the longest to execute.
That ought to get you a long way down the road.
-Dennis
state that it isn't a very common case anyway.
/Dennis
? For every
commit pg will wait for the data to be written down to the disk platter
before it moves on. So if you do several writes you want to do them in
one transaction so you only need one commit.
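A minimal sketch of batching writes into one commit (table and values are hypothetical, just to illustrate the shape):

```sql
BEGIN;
INSERT INTO log (msg) VALUES ('first');
INSERT INTO log (msg) VALUES ('second');
INSERT INTO log (msg) VALUES ('third');
COMMIT;  -- only one wait for the disk platter, instead of three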
/Dennis
---(end of broadcast)---
TIP 7: You can
low. Pg can for example
log queries for you that run for longer than X seconds.
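The setting alluded to is presumably log_min_duration_statement; a sketch for postgresql.conf:

```
# log every statement that runs longer than 2 seconds
log_min_duration_statement = 2000   # milliseconds; -1 disables
```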
/Dennis
N ANALYZE output we would just be guessing anyway. If you want
to share it then it might be helpful to show the plan both with and
without seqscan enabled.
How often do you run VACUUM ANALYZE; on the database?
/Dennis
generate the same
plan as in 7.4 and we can compare the costs and maybe understand what goes
wrong.
For example, try
set enable_hashjoin to false;
set enable_bitmapscan to false;
but you might need to turn off more things to get it to generate the 7.4
plan.
/Dennis
ck and it will be a lot faster.
COPY is even faster, but for just 9 rows I wouldn't bother.
--
/Dennis Björklund
what else your memory is used for.
I don't know if this setting will affect this very query, but it should
have a positive effect on a lot of queries.
work_mem also seems low, but it's hard to suggest a good value for it
without knowing more about how your database is used.
--
/Denni
vacuum?
A query like this can help find bloat:
SELECT oid::regclass, reltuples, relpages FROM pg_class ORDER BY 3 DESC;
I assume you do updates and deletes as well, and not just inserts?
--
/Dennis Björklund
lved in the query
* the output of SHOW ALL;
* The amount of memory the machine has
The settings that are most important to tune in postgresql.conf for
performance are, in my opinion: shared_buffers, effective_cache_size and
(to a lesser extent) work_mem.
--
/Denni
nough.
The parameters in the config that are most important in my experience are
effective_cache_size and shared_buffers.
This is a text I like (it's for pg 7.4 but still useful):
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
--
/Dennis Björklund
It's hard to say what is wrong without more information.
--
/Dennis Björklund
if it's faster than what you have.
What it does do is mimic the way you insert values in mysql. It only works
on pg 8.0 and later, however, since the exception handling was added in 8.0.
--
/Dennis Björklund
nick.
So should I have vacuum run during the load test? At what level of
updates should it run every ten minutes?
Dennis
load average at the 100% utilization point was about 30! A vacuum
analyze was done before the test was started. I believe there are many
more selects than updates happening at any one time.
Dennis
ther major applications on the machine. It's a dedicated database
server, only for this application.
It doesn't seem to make sense that PostgreSQL would be maxed out at this
point. I think given the size of the box, it could do quite a bit
better. So, what is going on? I don
Hi,
I've got a java based web application that uses PostgreSQL 8.0.2.
PostgreSQL runs on its own machine with RHEL 3, ia32e kernel, dual Xeon
processor, 4 Gb ram.
The web application runs on a separate machine from the database. The
application machine has three tomcat instances configured t
performing hits.
Doesn't it depend on what JDBC driver you are using?
Dennis
e something
slow compared to the upper plan where the startup cost is 0.0.
A stand-alone test case would be nice, but even without one the above
plans are interesting.
--
/Dennis Björklund
rule of thumb you might
want effective_cache_size to be something like 1/2 or 2/3 of your total
memory. I don't know how much you had, but effective_cache_size = 4096 is
only 32M.
shared_buffers and effective_cache_size are normally the two most important
settings in my experience.
--
/Dennis Björklund
w if
> it causes an error, as opposed to aborting the transaction altogether?
You don't need to roll back the whole transaction if you use savepoints or
the exception features in PL/pgSQL.
Take a look at this example:
http://developer.postgresql.org/docs/postgres/plpgsql-control-structures.htm
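A minimal savepoint sketch (table, column and the duplicate-key failure are hypothetical):

```sql
BEGIN;
INSERT INTO t (id) VALUES (1);
SAVEPOINT sp;
INSERT INTO t (id) VALUES (1);   -- suppose this fails on a unique index
ROLLBACK TO SAVEPOINT sp;        -- undoes only the failed insert
INSERT INTO t (id) VALUES (2);
COMMIT;                          -- the first and third inserts survive
```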
> NOTICE: shared_buffers is 256
This looks like it's way too low. Try something like 2048.
--
/Dennis Björklund
elect index scans more often. It
doesn't explain why the databases behave like they do now, but it might
make pg select the same plan nevertheless.
--
/Dennis Björklund
of SHOW ALL; could help explain a lot, together with info on how much
memory your computer has.
The first thing that comes to mind to me is that you probably have not
tuned shared_buffers and effective_cache_size properly (SHOW ALL would
tell).
--
/Dennis Björklund
if other databases support it.
The SQL standard includes this, except that you cannot have the outer ().
So it should be
INSERT INTO table VALUES
(1,2,3),
(4,5,6),
(7,8,9);
Does DB2 demand these extra ()?
--
/Dennis Björklund
l return 62350411 rows. To return 62350411 rows it's faster to
just scan the table and not use the index.
--
/Dennis Björklund
0.005..70.519 rows=41631 loops=2452)
> Filter: (id = 18181::bigint)
It looks like you have not run ANALYZE recently. Most people run VACUUM
ANALYZE every night (or similar) in a cron job.
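One way to set that up (database name hypothetical) is a crontab entry using the vacuumdb client:

```
# run as the postgres user: VACUUM ANALYZE every night at 03:00
0 3 * * * vacuumdb --analyze --quiet mydb
```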
--
/Dennis Björklund
gger shared
mem?
Wouldn't it be the opposite: now we don't invalidate as much of the
cache for vacuums and seq. scans, so we can cache as well as
before but with less shared buffers.
That said, testing and getting some numbers of good sizes for shared mem
is go
way around might be a good idea
because in some cases you are lucky to find an empty scan and can omit
the other.
The above are just observations of the behaviour, I've not seen the source
at all.
--
/Dennis Björklund
y hard to make an estimate that is
correct in all situations. That's why it's called an estimate after all.
--
/Dennis Björklund
ster had there been a couple of result
rows (just a guess).
It would be interesting to compare the plans in 7.4 with and without
hash_join active and see what costs it estimates for a merge join compared
to a hash join.
--
/Dennis Björklund
cation.
We are all interested in how it goes (well, at least me :-), so feel free
to send more mails keeping us posted. Good luck.
--
/Dennis Björklund
d_buffer setting. But in my experience most people have it tuned
(though usually too high). Here is an article that
might help you:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
--
/Dennis Björklund
same.
When you have more data pg will start to use the index since then it will
be faster to use an index compared to a seq. scan.
--
/Dennis Björklund
23'::date))
> >Filter: ((trn_patno = 19) AND (trn_bill_inc = 'B'::bpchar))
> > (713 rows)
>
> These queries are different. The first returns 687 rows and the second
> 713 rows.
The 687 and 713 are the number of rows in the plan, not the number of rows
the qu
t
if possible it's much better to try to do more work in a transaction than
before.
--
/Dennis Björklund
rget for one session like this:
SET default_statistics_target TO 100;
ANALYZE;
and then see if you get a better plan when you run the query afterwards.
If it helps you can either set the default_statistics_target in
postgresql.conf or set it just for some column using ALTER TABLE.
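For the per-column variant, a sketch (table and column names are hypothetical):

```sql
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;  -- re-gather statistics with the new target
```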
--
/Denn
you have both reads and writes. But it should still
be comparable in speed even if you only do reads.
--
/Dennis Björklund
often enough you probably don't need to run vacuum full at
all.
--
/Dennis Björklund
are lucky that can help.
--
/Dennis Björklund
about the inner workings of
mssql.
--
/Dennis Björklund
we wanted.
We got side tracked into talking about what datatype exists in all
platforms, that's not an issue at all.
--
/Dennis Björklund
on
the biggest int on that platform it should work.
--
/Dennis Björklund
would convert all numbers to int8 before making the hash it
would work.
I don't see any portability problems.
--
/Dennis Björklund
ers to 64 bits (or
whatever the biggest integer is) before calculating the hash. It makes the
hash function a little slower for smaller types, but it's mostly an
operation in the cpu and no memory involved, so it's probably not
noticeable.
--
/Dennis Björklund
lete on the table. It will of course make
all inserts and deletes slower, but if you count all rows often maybe it's
worth it. Most people do not need to count all rows in a table anyway; you
usually count the rows that match some condition.
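A sketch of such a trigger-maintained counter (all names are hypothetical; the dollar quoting needs pg 8.0 or later):

```sql
-- single-row side table holding the count of rows in t
CREATE TABLE t_rowcount (n bigint NOT NULL);
INSERT INTO t_rowcount VALUES (0);

CREATE FUNCTION t_rowcount_upd() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE t_rowcount SET n = n + 1;
    ELSE  -- DELETE
        UPDATE t_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_count AFTER INSERT OR DELETE ON t
    FOR EACH ROW EXECUTE PROCEDURE t_rowcount_upd();
```

SELECT n FROM t_rowcount; then replaces the slow SELECT count(*) FROM t;, at the price of slower inserts and deletes.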
--
/Dennis Björk
transaction. If you want
to do 1000 inserts, then do BEGIN; insert ; insert; ... ; COMMIT;
--
/Dennis Björklund
On Mon, 8 Mar 2004, Chris Smith wrote:
> assetid| integer | not null default 0
> assetid| character varying(255) | not null default '0'
The types above do not match, and these are the attributes you use to
join.
--
/
_profile = 6
> and t1.id_job_attribute = t0.id_job_attribute
> and t1.int_value < t0.int_value;
Try to add an index on (id_job_profile, id_job_attribute) or maybe even
(id_job_profile, id_job_attribute, int_value)
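As a sketch, using the column names from the query above (the table name t1 is taken from the alias, which is an assumption):

```sql
CREATE INDEX t1_profile_attr_idx
    ON t1 (id_job_profile, id_job_attribute, int_value);
```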
--
/Dennis Björklund
n itch to
fix in the future.
The error however comes when you try to insert the value. Doing a reindex
will not change the length of the value and will always work.
--
/Dennis Björklund
rent view of the table, some rows are visible and some others
are not. To find row N you have to walk from the start and inspect every
tuple to see if it's visible to this transaction or not.
--
/Dennis Björklund
On Wed, 11 Feb 2004 [EMAIL PROTECTED] wrote:
> somebody please knows to give tips to me to increase the
> performance
Run VACUUM ANALYZE. Find one query that is slow. Run EXPLAIN ANALYZE on
that query. Read the plan and figure out why it is slow. Fix it.
--
/Dennis Bjö
e there was no attachment.
--
/Dennis Björklund
do the original query
> quickly. I find that a bit hard to believe because I don't see any
> relevant optimization techniques.
Getting rid of the group by would not give that kind of speedup? Maybe
mssql manages to rewrite the query like that before executing.
--
/Dennis
On Wed, 21 Jan 2004, Jeroen Baekelandt wrote:
> jms_messages again. It takes 80 seconds!?! While before, with 1000
> records, it took only a fraction of a second.
run: VACUUM FULL ANALYZE;
--
/Dennis Björklund
out the effective cache size, is that set properly?
--
/Dennis Björklund
ead wrong is not
common, but not impossible.
Run VACUUM ANALYZE and see if the estimate is better after that.
--
/Dennis Björklund
). Then it should not need to sort at all to do the
> > grouping and it should all be fast.
>
> Not sure if that would make a difference here, since the whole table is being
> read.
The goal was to avoid the sorting which should not be needed with that
index (I hope). So I still
ords, invheadref and
> invprodref are both char(10) and indexed.
For the above query, shouldn't you have one index for both columns
(invheadref, invprodref)? Then it should not need to sort at all to do the
grouping and it should all be fast.
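A sketch of the suggested index; the table name invtran is a guess, only the two column names come from the post:

```sql
CREATE INDEX invtran_head_prod_idx ON invtran (invheadref, invprodref);
```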
--
/Dennis Björklund
it chose
that when the other would be faster I don't know. Maybe explain analyze
will give some hint.
--
/Dennis
On Thu, 18 Dec 2003, Shridhar Daithankar wrote:
> Well, then the only issue left is file sytem defragmentation.
And the internal fragmentation that can be "fixed" with the CLUSTER
command.
--
/Dennis
in full use.
If this helps you probably need to do normal vacuums more often and maybe
tune the max_fsm_pages to be bigger.
--
/Dennis
plans you showed was way off. I
assume you have run VACUUM ANALYZE recently? If that does not help maybe
you need to increase the statistics gathering on some columns so that pg
makes better estimates. With the wrong statistics it's not strange that pg
chooses bad plans
ing computer is
never fast.
Using some swap space is not bad, but a lot of paging in and out to
swap is not good.
--
/Dennis
back and try to find this previous discussion.
--
/Dennis
;
> GROUP BY company_name,ts.company_id;
What if you do
ts.runtime >= '2003-10-01' AND ts.runtime < '2003-11-01'
and add an index like (runtime, company_name, company_id)?
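The suggested index as a sketch (the table name ts is taken from the alias above, which is an assumption):

```sql
CREATE INDEX ts_runtime_idx ON ts (runtime, company_name, company_id);
```

The point of the rewritten WHERE clause is that a plain range test on runtime can use this index, where an expression on the column could not.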
--
/Dennis
ssor.
So, with only one connection you use only one processor (and the OS might
use another processor). Most databases have many concurrent users, and
then more processors will be used.
--
/Dennis
esql you
can do:
CREATE TABLE foo (
x int,
CONSTRAINT bar CHECK (x > 5)
);
and then
# INSERT INTO foo VALUES (4);
ERROR: ExecInsert: rejected due to CHECK constraint "bar" on "foo"
I don't know MySQL, but I've got the impression from other posts on the
l
ocumentation is good and still needed.
--
/Dennis
then it's probably because you
previously ran the database with the "C" locale but now run it with
something else.
If all you did was to install the extra memory then I don't see how that
can affect it at all (especially so if you have not altered
postgresql.conf to make use
he_size
you almost always want to increase from the default setting, also shared
memory.
--
/Dennis
from them.
What version of pg was used in debian and redhat? For freebsd it's 7.2.4
it says on the page, but I see nothing about the other two. The version
that comes with Redhat 9 (Shrike) is 7.3.2.
--
/Dennis
mpanies in world. I've done it
several times.
http://housecall.antivirus.com/
--
Dennis Gearon
to use the memory for different things in a
good way.
Then that setting could have an auto setting so it uses 40% of all memory
or something like that. Not perfect but okay for most people.
--
/Dennis
all 1000 have been written out on disk.
There is also a configuration variable that can tell postgresql to not
wait until the insert is out on disk, but that is not recommended if you
value your data.
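The configuration variable referred to is presumably fsync; a sketch for postgresql.conf, with the same caveat:

```
# do not force WAL to disk at each commit:
# faster, but a crash can lose or corrupt committed data
fsync = off
```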
And last, why does it help integrity to insert data one row at a time?
--
/Dennis
efore and after the
vacuum? Does it explain why it goes slower?
--
/Dennis
an setting
> random_page_cost to physically nonsensical values.
Hehe, just before this letter there was talk about changing
random_page_cost. I kind of responded that 0.042 is not a good random page
cost. But now of course I can see that it says cpu_tuple_cost :-)
Sorry fo
hing I need to consider when raising it to such "high" values?
You could fill the table with more data and it will probably come to a
point where it will stop using the seq. scan.
You could of course also change pg itself so it calculates a better
estimate.
--
/Dennis
ded and slower response times. Especially when
you have more connections fighting for the available IO.
--
/Dennis