Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Ilya Kosmodemiansky
On Wed, Feb 12, 2014 at 8:57 PM, Бородин Владимир wrote: > > Yes, this is legacy, I will fix it. We had lots of inactive connections, but > right now we use pgbouncer for this. When the workload is normal we have around > 80-120 backends. Less than 10 of them are in active state. Having >

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Бородин Владимир
Yes, this is legacy, I will fix it. We had lots of inactive connections, but right now we use pgbouncer for this. When the workload is normal we have around 80-120 backends. Less than 10 of them are in active state. When we have a problem with locks we get lots of sessions (sometimes more than 1000

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Ilya Kosmodemiansky
Another thing which is arguable is the concurrency degree: how many of your max_connections = 4000 are actually running? 4000 definitely looks like overkill, and they could be a serious source of contention, especially when you have barriers enabled and software RAID. Plus for 32Gb of shared buf
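A quick way to check how many of those connections are actually doing work is to group pg_stat_activity by state (a sketch; the database name is a placeholder, not taken from the thread, and the `state` column assumes PostgreSQL 9.2+):

```shell
# Count backends per state; "active" rows are the ones actually running queries.
# Database name below is a placeholder.
psql -d mydb -c "SELECT state, count(*)
                 FROM pg_stat_activity
                 GROUP BY state
                 ORDER BY count(*) DESC;"
```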

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Бородин Владимир
Oh, I haven't thought about barriers, sorry. Although I use software RAID without a battery-backed cache, I have turned barriers off on one cluster shard to try it. root@rpopdb01e ~ # mount | fgrep data /dev/md2 on /var/lib/pgsql/9.3/data type ext4 (rw,noatime,nodiratime) root@rpopdb01e ~ # mount -o remount,nobarrier

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Ilya Kosmodemiansky
My question was actually about the barrier option; by default it is enabled on RHEL6/ext4 and can cause a serious I/O bottleneck before the disks are actually involved. What does mount without arguments say? > On Feb 12, 2014, at 18:43, Бородин Владимир wrote: > > root@rpopdb01e ~ # fgrep data /etc/fst
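For reference, the effective barrier setting on RHEL6/ext4 can be checked and toggled roughly like this (a sketch using the data directory mentioned in the thread; note that disabling barriers without a battery-backed write cache risks corruption on power loss):

```shell
# /proc/mounts shows the effective mount options, including barrier=0/1,
# even when plain "mount" output omits defaults.
grep /var/lib/pgsql/9.3/data /proc/mounts

# Temporary remount without barriers (lost on reboot unless added to fstab):
mount -o remount,nobarrier /var/lib/pgsql/9.3/data
```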

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Бородин Владимир
root@rpopdb01e ~ # fgrep data /etc/fstab UUID=f815fd3f-e4e4-43a6-a6a1-bce1203db3e0 /var/lib/pgsql/9.3/data ext4 noatime,nodiratime 0 1 root@rpopdb01e ~ # According to iostat the disks are not the bottleneck. 12.02.2014, at 21:30, Ilya Kosmodemiansky wrote: > Hi Vladimir, > > Just in case:

Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Ilya Kosmodemiansky
Hi Vladimir, Just in case: how is your ext4 mounted? Best regards, Ilya > On Feb 12, 2014, at 17:59, Бородин Владимир wrote: > > Hi all. > > Today I have started getting errors like the ones below in the logs (it seems that I have not > changed anything in the last week). When it happens the db gets lots of >

[PERFORM] Problem with ExclusiveLock on inserts

2014-02-12 Thread Бородин Владимир
Hi all. Today I have started getting errors like the ones below in the logs (it seems that I have not changed anything in the last week). When it happens the db gets lots of connections in active state, eats 100% CPU and clients get errors (due to timeout). 2014-02-12 15:44:24.562 MSK,"rpop","rpopdb_p6",30061,
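When backends pile up waiting on locks like this, joining pg_locks against itself shows who is waiting on whom (a 9.3-era sketch, since pg_blocking_pids() only appeared in 9.6; the database name is a placeholder):

```shell
# List ungranted lock requests alongside the backend holding the same lock.
psql -d mydb -c "
  SELECT w.pid AS waiting_pid, h.pid AS holding_pid,
         w.locktype, w.relation::regclass AS relation
  FROM pg_locks w
  JOIN pg_locks h
    ON h.granted
   AND h.locktype = w.locktype
   AND h.relation IS NOT DISTINCT FROM w.relation
   AND h.transactionid IS NOT DISTINCT FROM w.transactionid
  WHERE NOT w.granted;"
```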

Re: [PERFORM] list number of entries to be delete in cascading deletes

2014-02-12 Thread Eildert Groeneveld
On Tue, 2014-02-11 at 18:58 -0200, Claudio Freire wrote: > On Tue, Feb 11, 2014 at 5:54 PM, Eildert Groeneveld > wrote: > > Dear All > > > > this is probably not the best list to post this question to: > > > > I use cascading deletes but would like to first inform the user what she > > is about to do. >

Re: [PERFORM] increasing query time after analyze

2014-02-12 Thread Pavel Stehule
2014-02-12 9:58 GMT+01:00 Katharina Koobs : > explain.depesz.com/s/HuZ The fast query is fast due to intensive use of hash joins, but you can see Hash Left Join (cost=9343.05..41162.99 rows=6889 width=1350) (actual time=211.767..23519.296 rows=639137 loops=1), so the estimation is off. It is strange that after AN

[PERFORM] increasing query time after analyze

2014-02-12 Thread Katharina Koobs
Hi, We still have problems with our query time. After restoring the database the query time is about one minute. After an ANALYZE the query time is about 70 minutes. We could identify one table which causes the problem. After analyzing this table the query time increases. We have made an explain p
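When ANALYZE makes the row estimates worse, a common first step is raising the statistics target for the suspect column and re-analyzing (a sketch; the table and column names below are hypothetical, since the real table is not named in the thread):

```shell
# Collect a larger histogram/MCV sample for the misestimated column,
# then refresh statistics for just that table.
psql -d mydb -c "ALTER TABLE problem_table
                   ALTER COLUMN join_col SET STATISTICS 1000;"
psql -d mydb -c "ANALYZE problem_table;"
```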