[PERFORM] Handling small inserts from many connections.

2017-09-04 Thread 우성민
Hi team,

I'm trying to configure postgres and pgbouncer to handle many inserts from
many connections.

Here are some details about what I want to achieve:

  We have more than 3000 client connections, and my server program forks a
backend process for each client connection.
  When a backend process sends a request to its connected client, the client
sends some text data (about 3000 bytes) back to the backend process and waits
  for the next request.
  The backend process inserts the text data using PQexec from the libpq
library; once PQexec returns, the backend process sends the next request to
  the client.

  All the inserts go into one and the same table.
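For illustration, each backend's insert step looks roughly like the sketch
below. The real code builds a literal query for PQexec; the sketch uses
PQexecParams only to keep it short, and the table/column names are
placeholders, not our actual schema:

    /* Simplified sketch: insert one ~3000-byte text payload into a
       placeholder table "messages(payload text)". Error handling trimmed. */
    #include <libpq-fe.h>

    static int insert_payload(PGconn *conn, const char *payload)
    {
        const char *params[1] = { payload };
        PGresult *res = PQexecParams(conn,
            "INSERT INTO messages (payload) VALUES ($1)",
            1,      /* one parameter */
            NULL,   /* let the server infer the parameter type */
            params,
            NULL,   /* text-format params, lengths not needed */
            NULL,   /* all params in text format */
            0);     /* text-format result */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
        PQclear(res);
        return ok;  /* on success we request the next payload from the client */
    }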

The problem is that clients wait too long because the inserts are too slow.
It seems to work fine at first, but it slows down after a couple of hours:
each insert query takes 3000+ ms, and the time keeps growing.

I need some help figuring out the actual cause of this problem.

System information :
  PGBouncer 1.7.2.
  PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).
  Kernel version 2.6.32-696.10.1.el6.x86_64
  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.
  32GB ECC/REG-Buffered RAM.
  128GB Samsung 840 evo SSD.
pool_mode = transaction


max_client_conn = 10

default_pool_size = 200
max_connections = 9000  # (change requires restart)
superuser_reserved_connections = 1  # (change requires restart)

shared_buffers = 8GB

work_mem = 2MB

maintenance_work_mem = 64MB # min 1MB

dynamic_shared_memory_type = posix  # the default is the first option
                                    # supported by the operating system:
                                    #   posix, sysv, windows, mmap
                                    # use none to disable dynamic shared memory
synchronous_commit = off

wal_buffers = 256MB

checkpoint_timeout = 10min  # range 30s-1d

max_wal_size = 4096MB

min_wal_size = 3072MB

checkpoint_completion_target = 1.0  # checkpoint target duration, 0.0 - 1.0

random_page_cost = 4.0  # same scale as above

default_statistics_target = 500 # range 1-10000





Re: [PERFORM] Handling small inserts from many connections.

2017-09-04 Thread 우성민
I'm already using pgbouncer as a connection pooler, and default_pool_size = 96.
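
For reference, the relevant part of my pgbouncer.ini looks roughly like this
(the [databases] entry, listen_addr and listen_port below are placeholders,
not my exact values):

    [databases]
    ; placeholder connection string for the target database
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    pool_mode = transaction        ; server connection released at transaction end
    default_pool_size = 96         ; server connections per database/user pair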

I checked "SHOW POOLS": max_wait was as high as 70 or more, while the INSERT
statement duration in the postgres log is about 3000 ms.
These numbers increase over time.
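
(I read max_wait from pgbouncer's admin console, e.g. with the command below;
the port and admin user are from my setup and assume that user is listed in
admin_users. The maxwait column is reported in seconds.)

    psql -p 6432 -U pgbouncer -d pgbouncer -c "SHOW POOLS;"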

I’ll try RAID with more SSDs.

Thank you for your response.

On Tue, Sep 5, 2017 at 3:15 AM, Scott Marlowe wrote:

> On Mon, Sep 4, 2017 at 2:14 AM, 우성민  wrote:
> > Hi team,
> >
> > I'm trying to configure postgres and pgbouncer to handle many inserts
> from
> > many connections.
> >
> > Here's some details about what i want to achieve :
> >
> >   We have more than 3000 client connections, and my server program forks
> > backend process for each client connections.
>
> This is a terrible configuration for any kind of performance. Under
> load all 3,000 connections can quickly swamp your server resulting in
> it slowing to a crawl.
>
> Get a connection pooler involved. I suggest pgbouncer unless you have
> very odd pooling needs. It's easy, small, and fast. Funnel those 3,000
> connections down to <100 if you can. It will make a huge difference in
> performance and reliability.
>
> > System information :
> >   PGBouncer 1.7.2.
> >   PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> > 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).
> >   Kernel version 2.6.32-696.10.1.el6.x86_64
> >   Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.
> >   32GB ECC/REG-Buffered RAM.
> >   128GB Samsung 840 evo SSD.
>
> If it's still slow after connection pooling is set up, then look at
> throwing more SSDs at the problem. If you're using a HW RAID
> controller, turn off caching with SSDs unless you can prove it's
> faster with it. It almost never is.
>