There was no error in the pgbouncer log, but there was a sudden drop in
request count when the problem happened:
2014-07-30 11:36:51.919 25369 LOG Stats: 2394 req/s, in 339478 b/s, out 1422425 b/s,query 3792 us
2014-07-30 11:37:51.919 25369 LOG Stats: 2207 req/s, in 314570 b/s, out 2291440 b/s,
This happened again. This time I captured the connection status (between
the pgbouncer host and the pgsql host) on the postgresql side. When the
problem happens, the connection status is this:
ESTABLISHED: 188
CLOSE_WAIT: 116
The count of connections in CLOSE_WAIT is abnormal. Compared with the
normal situation, th
Explained here:
https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf
13 out of 15 tested SSDs had various kinds of corruption on a power-out.
(thanks, Neil!)
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
On 17/07/14 11:58, Mark Kirkwood wrote:
Trying out with numa_balancing=0 seemed to give essentially the same
performance, as did wrapping the postgres startup with --interleave.
All this made me want to try with numa *really* disabled, so I rebooted
the box with "numa=off" appended to the kernel c
On 2014/7/30 1:27, Jeff Janes wrote:
It sounds like someone is bypassing your pgbouncer and connecting
directly to your database. Maybe they tried to create their own
parallelization and have a master connection going through pgbouncer
and create many auxiliary connections that go directly to t
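One way to check for that on the PostgreSQL side is to group the server's
connections by user and client address in pg_stat_activity (a sketch only -
connections routed through pgbouncer should all report the pgbouncer host as
client_addr, so anything else would be a direct connection):

  SELECT usename, client_addr, count(*) AS connections
  FROM pg_stat_activity
  GROUP BY usename, client_addr
  ORDER BY connections DESC;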
On Sat, Jul 26, 2014 at 3:46 AM, Jiří Nádvorník wrote:
> The reason I am tackling the performance issues here is that the table of
> observations currently has circa 3e8 rows after 1.5 years of gathering the
> data. The growth is linear.
>
So about 500,000 new records a day (3e8 rows over roughly 550 days).
Hello Sergey,
Oh dear, I should have written to you several months ago :). I actually did
exactly what you suggest - it was actually Markus Demleitner's (GAVO org.) idea.
The GROUP BY on ipixcenter indeed runs in less than an hour on the whole table
and is linear by definition, so no problem with mem
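For reference, the grouping is roughly of this shape (a minimal sketch - the
observations table and ra/dec column names are assumptions here, and
q3c_ang2ipix is the Q3C pixelization function):

  SELECT q3c_ang2ipix(ra, dec) AS ipix_center, count(*) AS n_obs
  FROM observations
  GROUP BY 1;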
On Tue, Jul 29, 2014 at 1:21 AM, Rural Hunter wrote:
> More information found. After the hanging connections appeared, I noticed
> there were several hundred connections from the same user. Since I use
> pgbouncer and I set the pool size to only 50 for each user, this is very
> strange. I checked t
Hi Jiri,
> I’m really interested in those [clustering] algorithms and would like to study
> them. But I would need somebody to point me directly at a specific algorithm to
> look at. The main problem with choosing the right one (which even my university
> teacher couldn’t get past) is that you don’t know the numb
Hi Oleg, Sergey,
The problem would be crossmatching if I had a catalog to crossmatch with.
But I am actually trying to build this catalog.
Crossmatching can actually be used to solve that problem, by crossmatching the
observations with themselves via q3c_join with a 1 arcsec radius. But as I sai
Hi Craig,
I’m really interested in those algorithms and would like to study them. But I
would need somebody to point me directly at a specific algorithm to look at. The
main problem with choosing the right one (which even my university teacher
couldn’t get past) is that you don’t know the number of cluste
More information found. After the hanging connections appeared, I noticed
there were several hundred connections from the same user. Since I use
pgbouncer and I set the pool size to only 50 for each user, this is very
strange. I checked the pgbouncer side; 'show pools' showed the active
server co
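To compare against the pool limit from the PostgreSQL side, a per-user count
from pg_stat_activity is one sanity check (a sketch; the state column assumes
PostgreSQL 9.2 or later):

  SELECT usename, state, count(*) AS connections
  FROM pg_stat_activity
  GROUP BY usename, state
  ORDER BY connections DESC;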