m, shared_buffers, max_connections
> etc. Kernel version? If possible avoid 3.2 and 3.8-3.13. Also consider
> upgrading your OS version.
>
> From today I'm on vacation, so others could help :)
>
> Pietro Pugni
> On 12 Aug 2015 03:49, "Rural Hunter" wrote:
article_729 has about 0.8 million rows. The row counts of the child tables
vary from several thousand to dozens of millions. How can it help to
create an index on the partition key?
2015-08-12 1:03 GMT+08:00 Pietro Pugni :
> Hi Rural Hunter,
> Try to create an index on cid attribute.
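A minimal sketch of that suggestion, using a table name from the thread (note that on a child whose CHECK constraint pins cid to a single value, such an index adds little selectivity, which is what the follow-up questions):

```sql
-- Hypothetical: index the partition key on one child table;
-- in practice this would be repeated for each child.
CREATE INDEX article_729_cid_idx ON article_729 (cid);
```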
Yes, I'm very sure. From what I observed, it has something to do with
concurrent query planning. If I disconnect the other connections, the
planning is very quick.
2015-08-11 22:42 GMT+08:00 Maxim Boguk :
>
>
> Check constraints:
>> "article_729_cid_check" CHECK (cid = 729)
>>
>
>
> Used partition (cid = 729)
Foreign-key constraints:
    "article_729_cid_fk" FOREIGN KEY (cid) REFERENCES company(cid) ON
DELETE CASCADE
Triggers:
    trg_article_729_delete AFTER DELETE ON article_729 FOR EACH ROW EXECUTE
PROCEDURE fn_article_delete()
    trg_article_729_insert AFTER INSERT ON article_729 FOR EACH ROW EXECUTE
PROCEDURE fn
table, but with all of
them.
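For context, plan-time partition pruning in 9.2 depends on constraint_exclusion matching CHECK constraints like the one shown above. A quick sanity check, assuming the parent table is named article:

```sql
SHOW constraint_exclusion;                 -- 'partition' is the default
SET constraint_exclusion = partition;      -- prune children via CHECK constraints
EXPLAIN SELECT count(*) FROM article WHERE cid = 729;
-- the plan should touch only article_729, not all ~80 children
```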
2015-08-11 19:43 GMT+08:00 Maxim Boguk :
>
>
> On Tue, Aug 11, 2015 at 6:46 PM, Rural Hunter
> wrote:
>
>> Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.
>> I have a table which is partitioned to about 80 children. There are
>> usually severa
Platform: pg 9.2.9 on Ubuntu 12.04.4 LTS.
I have a table which is partitioned to about 80 children. There are usually
several dozens of connections accessing these tables concurrently. I found
sometimes the query planning time is very long if I query against the parent
table with the partition key. The c
pls check this if it helps:
http://ubuntuforums.org/showthread.php?t=2258734
On 2015/3/15 18:54, Robert Kaye wrote:
Hi!
We at MusicBrainz have been having trouble with our
Postgres install for the past few days. I’ve colle
On 2014/7/30 18:03, Rural Hunter wrote:
I think I understand what happened now. I have another monitor script
that runs periodically and calls pg_cancel_backend and pg_terminate_backend
for those hanging update sqls. However for some unknown reason the
cancel and terminate commands don't work at
Hi Tom,
Could my problem be a victim of this issue?
http://postgresql.1045698.n5.nabble.com/Planner-performance-extremely-affected-by-an-hanging-transaction-20-30-times-td5771686.html
is the patch mentioned in that thread applied in 9.2.9?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
I think I understand what happened now. I have another monitor script
that runs periodically and calls pg_cancel_backend and pg_terminate_backend
for those hanging update sqls. However for some unknown reason the cancel
and terminate commands don't work at the pgsql side for those update sqls.
But I th
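For reference, the two functions named above are standard admin functions; a monitor script of the kind described might issue something like this (the UPDATE filter and 10-minute threshold are illustrative, using 9.2 column names):

```sql
-- First try a soft cancel of the running statement:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'UPDATE%'
  AND now() - query_start > interval '10 minutes';

-- If the query keeps hanging, terminate the whole backend:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE query LIKE 'UPDATE%'
  AND now() - query_start > interval '10 minutes';
```

Both return true when the signal was sent, which does not guarantee the backend actually acted on it, consistent with the behaviour described here.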
There was no error in the log of pgbouncer, but there is a sudden drop in
request count when the problem happened:
2014-07-30 11:36:51.919 25369 LOG Stats: 2394 req/s, in 339478 b/s, out
1422425 b/s,query 3792 us
2014-07-30 11:37:51.919 25369 LOG Stats: 2207 req/s, in 314570 b/s, out
2291440 b/s,
This happened again. This time I got the connection status(between
pgbouncer host to pgsql host) at postgresql side. When the problem
happens, the connection status is this:
ESTABLISHED: 188
CLOSE_WAIT: 116
The count of connections in CLOSE_WAIT is abnormal. Compared with the
normal situation, th
On 2014/7/30 1:27, Jeff Janes wrote:
It sounds like someone is bypassing your pgbouncer and connecting
directly to your database. Maybe they tried to create their own
parallelization and have a master connection going through pgbouncer
and create many auxiliary connections that go directly to t
More information found. After the hung connection appears, I noticed
there were several hundred connections from the same user. Since I use
pgbouncer and I only set the pool size to 50 for each user, this is very
strange. I checked the pgbouncer side, 'show pools' showed the active
server co
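One way to see, on the server side, which users and source hosts hold those connections (9.2 column names; connections bypassing pgbouncer would show a different client_addr):

```sql
SELECT usename, client_addr, state, count(*)
FROM pg_stat_activity
GROUP BY usename, client_addr, state
ORDER BY count(*) DESC;
```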
On 2014/7/29 1:29, Jeff Janes wrote:
If it were waiting on a pg_locks lock, the semop should
be coming from ProcSleep, not from LWLockAcquire,
shouldn't it?
I'm
I am now seeing another phenomenon of hanging connections. They are
showing 'UPDATE' status in the process list.
(gdb) bt
#0 0x7f783f79d4f7 in semop () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x005f97d3 in PGSemaphoreLock ()
#2 0x00638153 in LWLockAcquire ()
#3 0x004a9
This is the vmstat output when the high load peak happens:
# vmstat 3
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs  us sy id wa
54  1 756868 1047128 264572 34157347200 243 2721 2 51
Yes I checked. The connection I inspected is the longest-running one.
There were no other connections blocking it. And I also see all locks are
granted for it. Does the planning phase require some internal locks?
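The lock check described above can be done with a query along these lines:

```sql
-- Any rows returned mean a backend is actually waiting on a lock.
SELECT pid, locktype, relation::regclass, mode
FROM pg_locks
WHERE NOT granted;
```

Note that pg_locks only shows heavyweight locks; the LWLockAcquire seen in the backtraces in this thread is a lightweight lock and never appears there.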
On 2014/7/28 0:28, Tom Lane wrote:
Yeah, that's what the stack trace suggests. Hav
but it didn't help.
On 2014/7/25 22:23, Rural Hunter wrote:
I run dbg on the backend process and got this:
(gdb) bt
#0 0x7fc4a1b6cdb7 in semop () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x005f8703 in PGSemaphoreLock ()
#2 0x00636703 in LWLockAcquire ()
#3 0x0
On 2014/7/25 9:53, Tom Lane wrote:
[ shrug... ] Insufficient data. When I try a simple test case based on
what you've told us, I get planning times of a couple of milliseconds.
I can think of contributing factors that would increase that, but not
by four orders of magnitude. So there's something
On Wed, Jul 23, 2014 at 6:21 AM,
Rural Hunter <ruralhun...@gmail.com>
wrote:
What's
wrong and how can I improve the planning performance?
What is constr
Hi,
I have a table partitioned into about 60 child tables. Now I found
the planning time of a simple query with the partition key is very slow.
# explain analyze select count(*) as cnt from article where pid=88 and
hash_code='2ca3ff8b17b163f0212c2ba01b80a064';
QUERY PLAN
---
Hi Jeff,
Thanks a lot for such a detailed guide!
On 2013/8/21 8:24, Jeff Janes wrote:
OK, that certainly does sound like network problems and not disk
contention. But what I don't see is why it would be listed as
"active" in pg_stat_activity. If it is blocked on a network
connection, I would
On 2013/8/20 12:34, Jeff Janes wrote:
On Monday, August 19, 2013, Rural Hunter wrote:
I think that this should generally not happen at the server
if you are using pgbouncer, as you should configure it so that
pgbouncer has a lower limit than
On 2013/8/20 10:38, Sergey Konoplev wrote:
On Mon, Aug 19, 2013 at 6:44 PM, Rural Hunter wrote:
What do you mean by recycling pgbouncer?
I mean restarting pgbouncer.
Haven't you noticed what was in the state column of the
pg_stat_activity view? In 9.2 the query column in this view show
Hi,
I'm on 9.2.4 with Ubuntu server. There are usually hundreds of
connections doing the same insert with different data from different
networks every minute, through pgbouncer in the same network of the
database server. The database has been running for about one year
without problem. Yeste
Why not just use a simple Statement instead of PreparedStatement and
construct the SQL with a concatenated string or StringBuilder? Like this:
int col1=xxx;
String col2="";
String sql="select * from table where col1="+col1+" and col2='"+col2+"'";
On 2012/7/26 3:59, Vinicius Abrahao wrote:
On Wed, Jul 25
On 2012/6/11 20:07, Kevin Grittner wrote:
Rural Hunter wrote:
On 2012/6/9 22:39, Kevin Grittner wrote:
You neglected to mention the LIMIT clause in your earlier
presentation of the problem. A LIMIT can have a big impact on plan
choice. Is the LIMIT 10 part of the actual query you want to
optimize
On 2012/6/9 22:39, Kevin Grittner wrote:
Rural Hunter wrote:
On 2012/6/9 0:39, Kevin Grittner wrote:
name | current_setting
full_page_writes | off
There may be exceptions on some file systems, but generally turning
this off leaves you vulnerable to possible database corruption if your
OS or
On 2012/6/9 0:39, Kevin Grittner wrote:
Rural Hunter wrote:
How can I adjust the statistics target?
default_statistics_target
http://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER
or ALTER TABLE x ALTER COLUMN y SET STATISTICS n
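Spelled out, the two options look roughly like this (500 is an illustrative value; the default in that era was 100):

```sql
-- Per column, followed by a re-ANALYZE so the new stats take effect:
ALTER TABLE x ALTER COLUMN y SET STATISTICS 500;
ANALYZE x;

-- Or globally via default_statistics_target (usually set in postgresql.conf):
SET default_statistics_target = 500;
```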
Hi Kevin,
Thanks for your detailed explanation.
On 2012/6/8 22:37, Kevin Grittner wrote:
Rural Hunter wrote:
On 2012-06-08 22:10:58, Tom Lane wrote:
Rural Hunter writes:
I have a query like this:
select a.* from a inner join b on a.aid=b.aid where a.col1=33 and
a.col2=44 and b.bid=8
postgresql selected
No, it's not the analyze problem. For some other values of b.bid such
as 9 or 10, the plan is fine since there are a lot of rows in table b for
them. But for some specific values such as 8 I want the plan changed.
On 2012-06-08 22:10:58, Tom Lane wrote:
Rural Hunter writes:
I have a query like
I have a query like this:
select a.* from a inner join b on a.aid=b.aid where a.col1=33 and a.col2=44
and b.bid=8
postgresql selected the index on a.col1 then selected the index on
b.bid. But in my situation, I know that the query will be faster if it
chose the index on b.bid first since there are
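One technique (not from the thread itself) that could steer planners of that era: a CTE acted as an optimization fence before PostgreSQL 12, so evaluating the selective b.bid lookup first can be forced like this, using the thread's table names:

```sql
WITH b8 AS (
    SELECT aid FROM b WHERE bid = 8   -- evaluated first, via the b.bid index
)
SELECT a.*
FROM a
JOIN b8 ON a.aid = b8.aid
WHERE a.col1 = 33 AND a.col2 = 44;
```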
This is very similar to my problem:
http://postgresql.1045698.n5.nabble.com/index-choosing-problem-td5567320.html
I have a table with several million records. They are divided into
about one hundred categories (column cid). I created an index with cid
as the first column. I had a problem with some cids: they only have few
records compared with other cids. Some of them only have several
thousand rows. S
I disabled fastupdate on the gin index. It looks like it solved my problem,
at least for now. Thanks a lot for your help Jesper!
--
Thanks for the reply. Your index is much larger than mine..so I see some
light. :)
On 2012/3/13 14:29, Jesper Krogh wrote:
Our solution is to turn "fastupdate" off for our gin-indices.
http://www.postgresql.org/docs/9.0/static/sql-createindex.html
Can also be set with ALTER TABLE ALTER INDEX
I
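Spelled out, the setting Jesper refers to (the usual form is ALTER INDEX; the index and table names here are hypothetical):

```sql
-- At creation time:
CREATE INDEX idx_docs_fts ON docs USING gin (tsv) WITH (fastupdate = off);

-- Or on an existing GIN index:
ALTER INDEX idx_docs_fts SET (fastupdate = off);
```

With fastupdate off, inserts update the index directly instead of accumulating in the pending list, trading some insert throughput for predictable latency.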
I'm using a gin index for my full text search engine in production. These
days the gin index size has grown to 20-30G and the system started to suffer
periodical insert hanging. This is the same as described in these 2 posts:
http://postgresql.1045698.n5.nabble.com/Random-penalties-on-GIN-index-updates
What's the size of the index? Is it too big to fit in shared_buffers?
Maybe the first 15 rows in asc order are in the buffer but the ones in desc
order are not, while your disk IO is very slow?
Btw, your mem configuration of work_mem is very strange.
On 2012/2/8 0:49, Kevin Traster wrote:
PostgreSQL 9
Actually I stored the pattern in col1. I want to get the rows whose col1
pattern matches the string 'aaa'.
On 2011-12-15 4:43:37, Marti Raudsepp wrote:
2011/12/14 Rural Hunter:
for example, the where condition is: where '' ~ col1. I created a normal
index on col1 but it seems it is not used.