If you hit some other slow query, you might want to report it in
the manner suggested here:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
bottleneck. That's when I learned about hint bits.)
You should also make sure that autovacuum is aggressive enough on
the new cluster. Without that, any performance benefit from the
above will slowly disappear.
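For illustration (these statements are not from the original message,
and the numbers are only a starting point to tune from), cluster-wide
settings can be changed like this on 9.4 or later:

ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;  -- default 0.2
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;    -- default -1
ALTER SYSTEM SET autovacuum_naptime = '10s';             -- default 1min
SELECT pg_reload_conf();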
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
a LAN for storage.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
concurrent clients,
but perhaps not as good at cranking out a single big report or
running dump/restore.
Yes, it is quite possible that the new machine could be faster at
some things and slower at others.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
the results of both of the
above-mentioned tests for the two environments.
Just one observation, based on the limited data -- a higher network
latency between the client and the database might explain what
you've presented. I would check that, too.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ved to a temporary workspace and this is "diffed"
against the existing permanent copy, which is modified to match the
new data through simple DML statements. No explicit index rebuild
is needed; entries are adjusted as part of running the DML.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t options, but I suspect that you
will see better performance by putting each database on a separate
cluster and using cpusets (or the equivalent) so that each cluster
uses a subset of the 160 cores and the RAM directly attached to the
subset.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
You could watch for the start of such an event and capture the full
contents of pg_stat_activity and pg_locks during that 2 minute
window.
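A low-tech sketch of such a capture (the table names are invented):

CREATE TABLE activity_log AS
  SELECT now() AS snap_time, * FROM pg_stat_activity;
CREATE TABLE lock_log AS
  SELECT now() AS snap_time, * FROM pg_locks;
-- then, every few seconds during the suspect window:
INSERT INTO activity_log SELECT now(), * FROM pg_stat_activity;
INSERT INTO lock_log SELECT now(), * FROM pg_locks;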
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
minimum for something greater than the maximum for
that same thing. That should have no bearing on the performance issue
raised on the thread, but you might want to fix it anyway.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
using psql '-f'.
... but I would be surprised if that happened when reading from a file.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
One way of testing this theory would be to restore a copy
and reindex all indexes used by the problem query to see if that
fixes it. If it does, close examination of the corrupted index
might provide clues about how the corruption occurred.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
http://www.postgresql.org/message-id/flat/55783940.8080...@wi3ck.info#55783940.8080...@wi3ck.info
The short version is that in existing production versions you can
easily run in to such symptoms when you get to 8 or more CPU
packages. The problem seems to be solved in the development
versions of 9.5 (with changes not suitable for back-patching).
ly block them for < 3 seconds.
Visibility hinting and/or hot pruning?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
autovacuum and/or background writer settings. Various
OS settings may matter, too.
To get a handle on all this, it might be worth looking for Greg
Smith's book on PostgreSQL high performance.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
write-through).
Reduce the OS vm.dirty_background_bytes setting to less than the
size of the persistent write cache. Make sure that vm.dirty_ratio
is at least 20, possibly higher. Configure the PostgreSQL
background writer to be more aggressive. If those don't do it,
reduce the size of shared_buffers.
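For example (illustrative numbers only; they assume a 512MB
persistent write cache and should be sized to the actual hardware):

# /etc/sysctl.conf
vm.dirty_background_bytes = 268435456   # 256MB, below the cache size
vm.dirty_ratio = 20

# postgresql.conf
bgwriter_lru_maxpages = 400             # default 100
bgwriter_delay = 100ms                  # default 200ms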
caused by failure to disable
transparent huge page support. The larger shared_buffers is
configured, the bigger the problem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
OR (EXTRACT(MONTH FROM dob) = 7
AND EXTRACT(DAY FROM dob) <= 4);
The first query I showed is faster than either of the alternatives,
especially if there is an index on dob.
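(For reference, a sketch of how such a month/day window can be made
indexable on a hypothetical person table -- not the exact query from
the thread -- using a row comparison over an expression index:

CREATE INDEX person_dob_month_day ON person
  ((EXTRACT(MONTH FROM dob)), (EXTRACT(DAY FROM dob)));

SELECT *
  FROM person
  WHERE (EXTRACT(MONTH FROM dob), EXTRACT(DAY FROM dob)) >= (6, 28)
    AND (EXTRACT(MONTH FROM dob), EXTRACT(DAY FROM dob)) <= (7, 4);

The row comparison expands to AND/OR tests like the ones shown above.)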
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
how much time the top level nested loop took to do its
work.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
an absolute difference test that needs to be met in addition to the percentage
difference, as kind of a "safety" on this foot-gun.
I'm not sold on this as being a good idea, and had not been
planning on raising it without further research; but since it plays
into this other scenario it seems worth mentioning.
determine good settings for the new server.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Scanning a large number of index tuples and chasing
them to the heap just to find that the tuples are not visible is
about as expensive as if they were visible *for the index scan step
itself*. I wonder whether there would be any way to allow the
index scan cost to be based on the work it has to do while somehow
and see what might apply, and if you still have a problem
pick a specific slow-running query and use the process described
here:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
If you pull up any
case, and click the "Court Record Events" button, it will search a
table with hundreds of millions of rows. The table is not
partitioned, but has several indexes on it which are useful for
queries such as the one that is used when you click the button.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
4..12.124 rows=24 loops=1)
Index Cond: (word ~~* '%john%'::text)
Planning time: 0.392 ms
Execution time: 12.252 ms
(7 rows)
Note that a trigram index is case-insensitive; doing a
case-sensitive search requires an extra Recheck node to eliminate
the rows that match in the case-insensitive sense only.
When there are two ways to write a query that are logically
equivalent, it is better to put the AND at the higher level than
the OR. On the other hand, why not simply write it as?:
select *
from commons.financial_documents fd
where (fd.creation_time, fd.financial_document_id)
< ('2011-11-07 10
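Filled out with placeholder values (the tail of the literal above is
truncated), the keyset form would look something like:

select *
  from commons.financial_documents fd
  where (fd.creation_time, fd.financial_document_id)
        < ('2011-11-07 10:00:00', 100)
  order by fd.creation_time desc, fd.financial_document_id desc
  limit 50;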
If it is the second, you might want to
make autovacuum more aggressive.
To get more specific advice, you may want to read this page and
follow the advice there:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Björn Wittich wrote:
> I do not want the db server to prepare the whole query result at
> once, my intention is that the asynchronous retrieval starts as
> fast as possible.
Then you probably should be using a cursor.
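A minimal sketch (the names are invented):

BEGIN;
DECLARE big_cur NO SCROLL CURSOR FOR SELECT * FROM big_table;
FETCH FORWARD 1000 FROM big_cur;  -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;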
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
perhaps to 0.03.
You might want to play with the above, and if you still have a
problem, read this page and post with more detail:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
> Is there some column level setting I can set?
The statistics looked pretty accurate, so that shouldn't be necessary.
Alex Goncharov wrote:
> Kevin Grittner wrote:
>> The rows will all be in the table, but not visible to any other
>> transaction.
>
> How much data can I fit there while doing COPY? Not 1 TB?
As has already been said, why not? This is not some special
section of the
feed -- how will this all error out?
The rows will all be in the table, but not visible to any other
transaction. Autovacuum will clean them out in the background, but
if you want to restart your load against an empty table it might be
a good idea to TRUNCATE that table; it will be a lot faster.
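For example (a sketch; the table and file names are invented):

TRUNCATE TABLE my_load_target;  -- reclaims the dead rows immediately
COPY my_load_target FROM '/path/to/data.csv' WITH (FORMAT csv);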
Kevin Grittner wrote:
> Dave Owens wrote:
>
>> I now have 8 hours worth of snapshots from pg_stat_activity and
>> pg_locks (16 snapshots from each table/view). I have turned off
>> collection at this point, but I am still able to query pg_locks
>
> Could you take t
d transactions, they are probably from
the summarization. But you would not normally accumulate much
there unless you have a long-running transaction which is not
flagged as READ ONLY.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Dave Owens wrote:
> On Tue, Aug 19, 2014 at 11:01 AM, Kevin Grittner wrote:
>> CREATE TABLE activity_snap_1 AS SELECT * FROM pg_stat_activity;
> Would the you or the list be interested in snapshots of pg_locks as well?
Most definitely! I'm sorry that I copied/pasted the
number for each subsequent run.
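The corresponding statement for pg_locks would presumably be:

CREATE TABLE locks_snap_1 AS SELECT * FROM pg_locks;

again bumping the number for each subsequent snapshot.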
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
An ugly workaround, but it might get you
into better shape. If that does work, it's good evidence that we
should tweak those heuristics.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Because locks are kept until an overlapping
transaction completes, a single long-running transaction can bloat
the lock count.
Also, could you show us the output from?:
SELECT version();
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Tom Lane wrote:
> Kevin Grittner writes:
>> Dave Owens wrote:
>>> max_connections = 450 ...we have found that we run out of shared
>>> memory when max_pred_locks_per_transaction is less than 30k.
>
>> It gathers the information in memory to return for all tho
so
it can spill to disk when it gets to be more than work_mem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
non-default configuration
settings. In particular, I'm curious whether there is an index on
the message_id column of origo_email_delivery.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
It would be possible to reduce the rate of false
positives, which has largely gone undone so far due to a general
lack of problem reports from people which could not be solved
through tuning. If you have such a case, it would be interesting
to have all relevant details, so that we can target which of the
many e
Mark Kirkwood wrote:
> On 12/07/14 01:19, Kevin Grittner wrote:
>>
>> It might be worth a test using a cpuset to interleave OS cache and
>> the NUMA patch I submitted to the current CF to see whether this is
>> getting into territory where the patch makes a bigger difference.
That should do much better than using numactl --interleave
because work_mem and other process-local memory would be allocated
in "near" memory for each process.
http://www.postgresql.org/message-id/1402267501.4.yahoomail...@web122304.mail.ne1.yahoo.com
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
the outer (first-filled) tracks on a disk drive and the
inner tracks. One of the reasons performance falls as a drive
fills is that the OS is compelled to use slower and slower portions
of the disk. Part of the benefit you are seeing might be due to
freeing "fast" tracks and data being relocated there.
If you have a way to reproduce this from a new
cluster, please share it. That always makes diagnosis much easier.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
er than
for their semantic merit takes some effort.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ons;
select * from pg_prepared_xacts;
Thanks.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
In other words, setting this too high leads to
unstable performance. It looks better than a lower setting until
too many users hit Enter at about the same time, causing
performance to collapse for a while.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
what I
consider to be rote good practice, tried it, and it solved the
problem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
results with a graph here:
http://www.postgresql.org/message-id/4b71358e02250002f...@gw.wicourts.gov
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
both minor and major releases. If a little
sensible tuning of cost factors to better match reality doesn't do
it for you, you might want to consider an upgrade.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ms per
page read on my Linux experience, and I remember benchmarking the
same application hitting PostgreSQL on the same hardware as about
30% faster on Linux than on Windows, so that *almost* makes up for
the difference.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
jacket41142 wrote:
> [ subject issue in detail ]
Please review this thread:
http://www.postgresql.org/message-id/flat/CAPNY-2Utce-c+kNTwsMCbAk58=9myeaevitxt9lo7r1k77j...@mail.gmail.com#CAPNY-2Utce-c+kNTwsMCbAk58=9myeaevitxt9lo7r1k77j...@mail.gmail.com
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
tsv @@ to_tsquery('english', 'provinces & distance');
QUERY PLAN
-----
Bitmap Heap Scan on war_and_peace (cost=40.00..44.02 rows=1 width=115)
(actual time=0.080..0.080 rows=1 loops=1)
Recheck Cond: ((tsv @@ '''ladi
salah jubeh wrote:
> The hardware is pretty good, I have 8 cpus of Intel(R) Core(TM)
> i7, 2.4 GHz, and 16 GiB of RAM. Is there any configuration
> parameter that can lead to this issue?
What OS?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
and extract(year from tanggal) = '2013')
... not:
where (jualid is not null or returjualid is not null)
and extract(year from tanggal) = '2013'
AND has higher precedence than OR; so if you want to limit by year
from tanggal even when jualid is not null, you must use
parentheses.
ok like Robert did either, if you read the whole
message. In fact, he also questioned why index tuples which would
need to be read if we process from that end of the index don't
matter for purposes of cost estimation.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
The plan looks reasonable to me; it looks like
you need more RAM to cache data if you want better speed.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
configured to be aggressive enough. Another possible
cause is a transaction which has been left open for too long. Look
at pg_stat_activity and pg_prepared_xacts for xact_start or
prepared more than an hour old.
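For example (a sketch; on 9.2 or later):

SELECT pid, state, xact_start
  FROM pg_stat_activity
  WHERE xact_start < now() - interval '1 hour';

SELECT gid, prepared, owner
  FROM pg_prepared_xacts
  WHERE prepared < now() - interval '1 hour';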
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
raw connect/disconnect speed. I would
benchmark RAM using STREAM and disk using bonnie++.
You might want to review this page, and post a more detailed report
to the pgsql-performance list:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Posting to multiple lists is generally considered bad form.
Should we check the visibility of the row
related to the most extreme index entry? Should we even go to the
heap during the plan phase?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
One generation of old hardware is kept in replication for running
ad hoc queries and to provide availability in case the new one
crashes.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
You should boost autovacuum_max_workers until that problem is
solved.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
"every time I delete a row, delete it immedialty and
> don't take care of other transactions" ?
You can configure autovacuum to be more aggressive, or you could
run VACUUM statements.
> Do you have any suggestion for me?
8.4 is getting pretty old; there have been a lot of autovacuum
improvements since then.
allow
people to give the most helpful advice:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
You can poke around in this application here, if you like:
http://wcca.wicourts.gov/
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
There are indices on user_id, owner_id, email, and significant.
Have you tried those queries with an index like this?:
CREATE INDEX contacts_owner_null_user
ON contacts (owner_id)
WHERE user_id IS NULL;
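A query shaped like this could then use it (the value is invented):

SELECT * FROM contacts WHERE user_id IS NULL AND owner_id = 24;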
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
my system will only ever have just one
> phone number. Hence the JOIN to "phone_numbers" versus the column
> in "my_users".
In looking it over, nothing jumped out at me as a problem. Are you
having some problem with it, like poor performance or getting
results different from what you expected?
*additional* 900 GB of RAM which would be needed
to avoid problems.
Reducing connections through a pooler is strongly indicated, and
you may still need to reduce work_mem or temp_buffers.
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
less volatile and easier to tune
with cpu_tuple_cost increased. I just always start by bumping
that to 0.03.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
for that connection for as long as the connection lasts*.
So, for 900 connections, that could be 112.5 GB. I would expect to
see performance decrease and eventually completely tank as more
connections reserved memory for this purpose.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
at least without hours digging in the source code).
> On Jun 25, 2013, at 6:20 AM, Kevin Grittner wrote:
>> Ben wrote:
>>
>>> PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)
>>> 4.6.2, 64-bit
>>
>> Consider applying the latest bug-fix release.
index usage in queries like your example.)
Do you get a different plan if you set cpu_tuple_cost = 0.03? How
about 0.05? You can set this just for a single connection and run
explain on the query to do a quick check.
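That quick check might look like this (with a stand-in query;
substitute the real one):

SET cpu_tuple_cost = 0.03;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE id < 1000;
SET cpu_tuple_cost = 0.05;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE id < 1000;
RESET cpu_tuple_cost;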
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>> checkpointer process
>> 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer
>> process
>> 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal
>> writer process
>> 3977 postgres 20 0 70932 3352 716 S 0 0.0
time=0.011..0.047 rows=16 loops=1)
> Output: c1.id, c1.blocked, c1.first_name, c1.last_name,
>c1.owner_id, c1.user_id
> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))
> Total runtime: 0.224 ms
So, it looks like you can get about 3000 to 400
not only
globally adjustable, you can override the setting for individual
columns -- again, we don't go to the trouble of supporting that
without a good reason.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Since you're doing an inner join to data_area and that has a
foreign key to area, there should always be a match anyway, right?
The optimizer doesn't recognize that, so it can't start from the
area and just match to the appropriate points.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
individual columns as you find plans which
benefit. Don't set it right at the edge of the tipping point, but
don't automatically jump to 5000 every time either.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Your current cost settings aren't accurately modeling
actual costs in your environment for your workload. You need to
adjust them.
One of the estimates was off, so increasing the statistics sample
size might help, but I suspect that you need to make adjustments
like the above in any event.
--
You could change the order of the columns in the primary key, add a unique index
with the columns switched, or add an index on just the area ID.
Perhaps you thought that the foreign key constraints would create
indexes? (They don't.)
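For that last option, something like this (the column name is assumed
from the discussion):

CREATE INDEX data_area_area_id ON data_area (area_id);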
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
The planner has no way to adjust estimates based
on such correlations. If an inefficient plan is being chosen due
to this, there are a few tricks to coerce the plan.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
pushing the
data around in RAM. I'm not sure why 100% CPU usage would surprise
you. Are you wondering why the CPU works on the query straight
through until it is done, rather than taking a break periodically
and letting the unfinished work sit there?
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Some environments report better performance assigning over 50% of RAM to
shared_buffers; OLTP loads often need to reduce this to prevent
periodic episodes of high latency.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ctly?
In my experience, it can make a big difference. If you are just
using the pooler for this reason, and don't need any of the other
features of pgpool, I suggest pgbouncer. It is a simpler, more
lightweight tool.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Alexander Staubo wrote:
> On Friday, February 22, 2013 at 21:47 , Kevin Grittner wrote:
>> In my experience these problems come largely from the planner
>> not knowing the cost of dealing with each tuple. I see a lot
>> less of this if I raise cpu_tuple_cost to something in the 0.03 to 0.05 range.
sort them to pick the top 13 after the sort.]
In my experience these problems come largely from the planner not
knowing the cost of dealing with each tuple. I see a lot less of
this if I raise cpu_tuple_cost to something in the 0.03 to 0.05
range.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
<> 0
OR cStatus <> 'NEW'::StatusT
OR bOnSetBlue IS DISTINCT FROM false
OR bOnSetYellow IS DISTINCT FROM false
OR nLastBackupTS <> '0001-01-01 00:00:00');
Another way to accomplish this is with the
suppress_redundant_updates_trigger() function.
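Attaching it looks like this (the table name is invented; the trigger
name starts with "z" so that it fires after other BEFORE triggers):

CREATE TRIGGER z_min_update
  BEFORE UPDATE ON configs
  FOR EACH ROW
  EXECUTE PROCEDURE suppress_redundant_updates_trigger();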
Beyond a certain point, starting the query sooner will cause it to
finish later. Really.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
"Foster, Kristian B (HSC)" wrote:
> Currently the queries are running about 4x slower on postgres than
> sql server. Is there any settings I need to check?
Your best bet would be to pick one query and follow the steps
recommended here:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
That i
Johnny Tan wrote:
> Wouldn't this be controlled by our checkpoint settings, though?
Spread checkpoints made the issue less severe, but on servers with
a lot of RAM I've had to make the above changes (or even go lower
with shared_buffers) to prevent a burst of writes from overwhelming
the RAID controller.
Will Platnick wrote:
> Will Platnick wrote:
>> The only thing that stands out is: on your production server I see
>> "Total runtime: 7.515 ms", but the top node in EXPLAIN ANAYZE shows
>> actual time as 0.179 ms. Not sure where that additional time is being
>> spent though. It could be ExecutorS
"a...@hsk.hk" wrote:
> Johnny Tan wrote:
>> shared_buffers = 48GB    # min 128kB
> From the postgresql.conf, I can see that the shared_buffers is
> set to 48GB which is not small, it would be possible that the
> large buffer cache could be "dirty", when a checkpoint starts, it
> would cause a check
AJ Weber wrote:
> Is it possible that some spikes in IO could be attributable to
> the autovacuum process? Is there a way to check this theory?
Taking a look at the ps aux listing, pg_stat_activity, and pg_locks
should help establish a cause, or at least rule out a number of
possibilities. There
Merlin Moncure wrote:
>> I'm running postgresl 9.0. After partitioning a big table, CPU
>> usage raised from average 5-10% to average 70-80%.
> First thing that jumps to mind is you have some seq-scan heavy
> plans that were not seq-scan before.
Make sure that all indexes are defined for each partition.
Ghislain ROUVIGNAC wrote:
>> I would leave default_statistics_target alone unless you see a lot of
>> estimates which are off by more than an order of magnitude. Even then, it
>> is often better to set a higher value for a few individual columns than for
>> everything.
>
>
> We had an issue with
Huan Ruan wrote:
> Kevin Grittner wrote:
>> Frankly, at 12 microseconds per matched pair of rows, I think
>> you're doing OK.
>
> This plan is the good one, I want the indexscan nested loop join and this
> is only achieved after making all these costing factor changes.
"Huan Ruan" wrote:
> explain (analyze, buffers)
> SELECT
> *
> FROM IM_Match_Table smalltable
> inner join invtran bigtable on bigtable.invtranref = smalltable.invtranref
Well, one table or the other will need to be read in full, and you
would normally want that one to be the small table. When
Ghislain ROUVIGNAC wrote:
> Memory : In use 4 Go, Free 15Go, cache 5 Go.
If the active portion of your database is actually small enough
that it fits in the OS cache, I recommend:
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.05
> I plan to increase various parameters as follows:
AI Rumman wrote:
> The ENTITY table has 2164493 rows with data as follows:
>
> type | count
> ------------------+---------
> Contacts | 327352
> Candidate | 34668
> Emailst | 33604
> Calendar | 493956
> Contacts Image | 7
> PriceBooks | 2
> Notes Attachment | 17
> SalesOrder | 6
> A
Claudio Freire wrote:
> Selectivity is decided based on the number of distinct values on
> both sides, and the table's name "entity" makes me think it's a
> table that is reused for several things. That could be a problem,
> since that inflates distinct values, feeding misinformation to
> the planner.
AI Rumman wrote:
> Claudio Freire wrote:
>> I think it's more likely a missing FK constraint.
> Does FK Constraint help to improve performance? Or it is only
> for maintaining data integrity?
I'm not aware of any situation where adding a foreign key
constraint would improve performance.
-Kevin