Re: [PERFORM] Using pgiosim realistically

2011-05-14 Thread k...@rice.edu
On Fri, May 13, 2011 at 09:09:41PM +, John Rouillard wrote:
> Hi all:
> 
> I am adding pgiosim to our testing for new database hardware and I am
> seeing something I don't quite get and I think it's because I am using
> pgiosim incorrectly.
> 
> Specs:
> 
>   OS: centos 5.5 kernel: 2.6.18-194.32.1.el5
>   memory: 96GB
>   cpu: 2x Intel(R) Xeon(R) X5690  @ 3.47GHz (6 core, ht enabled)
>   disks: WD2003FYYS RE4
>   raid: lsi - 9260-4i with 8 disks in raid 10 configuration
>   1MB stripe size
>   raid cache enabled w/ bbu
>   disk caches disabled
>   filesystem: ext3 created with -E stride=256
> 
> I am seeing really poor (70) iops with pgiosim.  According to:
> http://www.tomshardware.com/reviews/2tb-hdd-7200,2430-8.html in the
> database benchmark they are seeing ~170 iops on a single disk for
> these drives. I would expect an 8 disk raid 10 should get better than
> 3x the single disk rate (assuming the data is randomly distributed).
> 
> To test I am using 5 100GB files with
> 
> sudo ~/pgiosim -c -b 100G -v file?
> 
> I am using 100G sizes to make sure that the data read and file sizes
> exceed the memory size of the system.
> 
> However if I use 5 1GB files (and still 100GB read data) I see 200+ to
> 400+ iops at 50% of the 100GB of data read, which I assume means that
> the data is cached in the OS cache and I am not really getting hard
> drive/raid I/O measurement of iops.
> 
> However, IIUC postgres will never have an index file greater than 1GB
> in size
> (http://www.postgresql.org/docs/8.4/static/storage-file-layout.html)
> and will just add 1GB segments, so the 1GB size files seem to be more
> realistic.
> 
> So do I want 100 (or probably 2 or 3 times more say 300) 1GB files to
> feed pgiosim? That way I will have enough data that not all of it can
> be cached in memory and the file sizes (and file operations:
> open/close) more closely match what postgres is doing with index
> files?
> 
> Also in the output of pgiosim I see:
> 
>   25.17%,   2881 read,  0 written, 2304.56kB/sec  288.07 iops
> 
> which I interpret (left to right) as the % of the 100GB that has been
> read, the number of read operations over some time period, number of
> bytes read/written and the io operations/sec. Iops always seems to be
> 1/10th of the read number (rounded up to an integer). Is this
> expected and, if so, does anybody know why?
> 
> While this is running if I also run "iostat -p /dev/sdc 5" I see:
> 
>   Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>   sdc 166.40  2652.80 4.80  13264 24
>   sdc1   2818.80 1.20   999.20  6   4996
> 
> which I am interpreting as 2818 read/io operations (corresponding more
> or less to read in the pgiosim output) to the partition and of those
> only 166 are actually going to the drive??? with the rest handled from
> OS cache.
> 
> However the tps isn't increasing when I see pgiosim reporting:
> 
>48.47%,   4610 read,  0 written, 3687.62kB/sec  460.95 iops
> 
> an iostat 5 output near the same time is reporting:
> 
>   Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>   sdc 165.87  2647.50 4.79  13264 24
>   sdc1   2812.97 0.60   995.41  3   4987
> 
> so I am not sure if there is a correlation between the read and tps
> settings.
> 
> Also I am assuming blks written is filesystem metadata although that
> seems like a lot of data 
> 
> If I stop the pgiosim, the iostat drops to 0 write and reads as
> expected.
> 
> So does anybody have any comments on how to test with pgiosim and how
> to correlate the iostat and pgiosim outputs?
> 
> Thanks for your feedback.
> -- 
>   -- rouilj
> 
> John Rouillard   System Administrator
> Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111
> 

Hi John,

Those are 7200 rpm drives, which would give you a maximum random I/O
rate of about 120/sec at best with the cache disabled. I actually think
your 70/sec is closer to reality and what you should anticipate in real
use. I do not see how they could reach 170/sec. Did they strap a jet
engine to the drive? :)
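
A rough back-of-the-envelope, assuming typical 7200 rpm figures
(about 4.2 ms of rotational latency -- half a revolution -- plus
roughly 4-9 ms of average seek time per random I/O):

  1 / (0.0042 s + 0.004 s to 0.009 s)  =  roughly 75-120 random IOPS per spindle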

Regards,
Ken



Re: [PERFORM] UPDATEDs slowing SELECTs in a fully cached database

2011-07-11 Thread k...@rice.edu
Hi Lars,

I do not know if this makes sense in PostgreSQL, given that readers
do not block writers and writers do not block readers. Are your
UPDATEs to individual rows, each in a separate transaction, or
do you UPDATE multiple rows in the same transaction? If you
perform multiple updates in a single transaction, you are
synchronizing the changes to that set of rows, and that constraint
is causing other readers that need the correct post-transaction
values to wait until the COMMIT completes. This means that
the WAL write must be completed.

Have you tried disabling synchronous_commit? If this scenario
holds, you should be able to reduce the slowdown by un-batching
your UPDATEs, as counter-intuitive as that is. This seems to
be similar to a problem that I have been looking at with using
PostgreSQL as the backend to a Bayesian engine. I am following
this thread with interest.
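
A minimal sketch of what I mean, with hypothetical table and column
names (synchronous_commit can be changed per session; with it off you
can lose the last few commits on a crash, but the data stays consistent):

  -- batched: one transaction covering many rows, one big COMMIT
  BEGIN;
  UPDATE counters SET n = n + 1 WHERE id BETWEEN 1 AND 1000;
  COMMIT;

  -- un-batched, with asynchronous commit: each UPDATE commits on its own
  SET synchronous_commit = off;
  UPDATE counters SET n = n + 1 WHERE id = 1;
  UPDATE counters SET n = n + 1 WHERE id = 2;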

Regards,
Ken

On Thu, Jul 07, 2011 at 04:56:13PM -0700, lars wrote:
> I am doing some research that will hopefully lead to replacing a big
> Oracle installation with a set PostgreSQL servers.
> 
> The current Oracle installation consists of multiple RAC
> clusters with 8 RAC nodes each. Each RAC node has 256gb of
> memory (to be doubled soon).
> The nature of our service is such that over a reasonable time (a day
> or so) the database *is* the working set.
> 
> So I am looking at Postgres in a context where (almost) all of the
> data is cached and disk IO is only required for persistence.
> 
> Setup:
> PostgreSQL 9.1beta2 on a high memory (~68gb, 12 cores) EC2 Linux
> instance (kernel 2.6.35) with the database and
> WAL residing on the same EBS volume with EXT4 (data=ordered,
> barriers=1) - yes that is not an ideal setup
> (WAL should be on a separate drive, EBS is slow to begin with, etc), but I
> am mostly interested in read performance for a fully cached
> database.
> 
> shared_buffers: varied between 1gb and 20gb
> checkpoint_segments/timeout: varied accordingly between 16-256 and
> 5-10m, resp.
> bgwriter tweaked to get a good distribution of checkpoints,
> bg-writes, and backend writes.
> wal_sync_method: tried fdatasync and open_datasync.
> 
> I read "PostgreSQL 9.0 high performance", and have spent some
> significant amount of time on this already.
> 
> PostgreSQL holds up extremely well, once things like "storing
> hint-bits", checkpoints vs bgwriter vs backend_writes, etc
> are understood. I installed pg_buffercache and pgfincore to monitor
> how and where the database is stored.
> 
> There is one observation that I wasn't able to explain:
> A SELECT only client is severely slowed down by a concurrent client
> performing UPDATES on the same table the other
> client selects from, even when the database resides 100% in the
> cache (I tried with shared_buffers large enough to hold
> the database, and also with a smaller setting relying on the OS
> cache, the behavior is the same).
> 
> As long as only the reader is running I get great performance
> (20-30ms, query reading a random set of about 1 rows
> out of 100m row table in a single SELECT). The backend is close to
> 100% cpu, which is what I want in a cached database.
> 
> Once the writer starts the read performance drops almost immediately
> to >200ms.
> The reading backend's cpu drops to <10%, and it is mostly waiting
> (D state in top).
> The UPDATE touches a random set of also about 1 rows (in one
> update statement, one of the columns touched is
> indexed - and that is the same index used for the SELECTs).
> 
> What I would have expected is that the SELECTs would just continue
> to read from the cached buffers (whether dirtied
> or not) and not be affected by concurrent updates. I could not find
> anything explaining this.
> 
> The most interesting part:
> this does not happen with an exact clone of that relation that is
> UNLOGGED. The same amount of buffers get dirtied,
> the same amount of checkpointing, bgwriting, vacuuming. The only
> difference is WAL maintenance as far as I can tell.
> 
> Is there some (intentional or not) synchronization between backends
> when the WAL is maintained? Are there times when a
> read-only query needs to compete for disk IO even when everything is cached?
> Or are there any other explanations?
> 
> I am happy to provide more information, although I am mainly looking
> for a qualitative answer that could explain this behavior.
> 
> Thanks.
> 
> -- Lars
> 
> 
> 



Re: [PERFORM] UPDATEDs slowing SELECTs in a fully cached database

2011-07-11 Thread k...@rice.edu
On Mon, Jul 11, 2011 at 05:26:49PM +0200, Robert Klemme wrote:
> On Mon, Jul 11, 2011 at 3:13 PM, k...@rice.edu  wrote:
> > I do not know if this makes sense in PostgreSQL and that readers
> > do not block writers and writes do not block readers. Are your
> > UPDATEs to individual rows, each in a separate transaction, or
> > do you UPDATE multiple rows in the same transaction? If you
> > perform multiple updates in a single transaction, you are
> > synchronizing the changes to that set of rows and that constraint
> > is causing other readers that need to get the correct values post-
> > transaction to wait until the COMMIT completes. This means that
> > the WAL write must be completed.
> 
> Which readers would that be?  Docs explicitly state that readers are
> never blocked by writers:
> http://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html
> http://www.postgresql.org/docs/9.0/interactive/mvcc.html
> 
> From what I understand about this issue the observed effect must be
> caused by the implementation and not by a conceptual issue with
> transactions.
> 
> > Have you tried disabling synchronous_commit? If this scenario
> > holds, you should be able to reduce the slowdown by un-batching
> > your UPDATEs, as counter-intuitive as that is. This seems to
> > be similar to a problem that I have been looking at with using
> > PostgreSQL as the backend to a Bayesian engine. I am following
> > this thread with interest.
> 
> I don't think this will help (see above).  Also, I would be very
> cautious about doing this because although the client might get a faster
> acknowledgement, the DB still has to do the same work as with
> synchronous_commit enabled (i.e. WAL, checkpointing etc.), and it still has to
> do significantly more transactions than in the batched version.
> 
> Typically there is an optimum batch size: if the batch size is too small
> (say, one row) the ratio of TX overhead to "work" is too high.  If the
> batch size is too large (say, millions of rows) you hit resource
> limitations (memory) which inevitably force the RDBMS to do additional
> disk IO.
> 
> Kind regards
> 
> robert
> 
Okay,

If we assume that the current implementation of MVCC prevents
readers from blocking writers and writers from blocking readers, then
the application may have some statements that implicitly take locks
which conflict with the UPDATEs. Maybe the slowdown is caused by
contention between the index updates from the write activity and the
read-only queries. Just throwing out some ideas.

Regards,
Ken
> -- 
> remember.guy do |as, often| as.you_can - without end
> http://blog.rubybestpractices.com/
> 



Re: [PERFORM] cpu comparison

2011-07-18 Thread k...@rice.edu
On Mon, Jul 18, 2011 at 01:48:20PM -0600, M. D. wrote:
> I have 2 small servers, one a fairly new server with a x3450 (4-core
> with HT) cpu running at 2.67GHz and an older E5335 (4-core) cpu
> running at 2GHz.
> 
> I have been quite surprised how the E5335 compares very closely to
> the x3450, but maybe I have tested it wrongly.
> 
> here's the CPUINFO:
> processor   : 3
> vendor_id   : GenuineIntel
> cpu family  : 6
> model   : 15
> model name  : Intel(R) Xeon(R) CPU   E5335  @ 2.00GHz
> stepping: 7
> cpu MHz : 1995.036
> cache size  : 4096 KB
> physical id : 3
> siblings: 1
> core id : 0
> cpu cores   : 1
> fpu : yes
> fpu_exception   : yes
> cpuid level : 10
> wp  : yes
> flags   : fpu tsc msr pae cx8 apic mtrr cmov pat clflush
> acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni vmx
> ssse3 cx16 lahf_lm
> bogomips: 4989.65
> clflush size: 64
> cache_alignment : 64
> address sizes   : 36 bits physical, 48 bits virtual
> power management:
> OS: CentOS 64bit
> Postgres: 9.0.4 compiled
> 
> processor   : 7
> vendor_id   : GenuineIntel
> cpu family  : 6
> model   : 30
> model name  : Intel(R) Xeon(R) CPU   X3450  @ 2.67GHz
> stepping: 5
> cpu MHz : 2660.099
> cache size  : 8192 KB
> physical id : 0
> siblings: 8
> core id : 3
> cpu cores   : 4
> apicid  : 7
> fdiv_bug: no
> hlt_bug : no
> f00f_bug: no
> coma_bug: no
> fpu : yes
> fpu_exception   : yes
> cpuid level : 11
> wp  : yes
> flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
> pbe nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx
> smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm [8]
> bogomips: 5319.92
> OS: CentOS 32bit
> Postgres: 9.0.4 compiled
> 
> 
> In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS
> on the E5335.  Can this make such a big difference, or should they
> perform fairly close to the same speed?  Both servers have 8GB of
> RAM, and the database I tested with is only 3.7GB.
> 
> I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,
> and of course because of the cycle speed difference alone I would
> think the X3450 should beat the E5335.
> 

Yes, you have basically shown that running two different tests gives
different results -- or that an apple is not an orange. You need to
vary only one variable at a time for it to mean anything.

Regards,
Ken



Re: [PERFORM] cpu comparison

2011-07-18 Thread k...@rice.edu
On Mon, Jul 18, 2011 at 11:56:40PM +0200, Tomas Vondra wrote:
> On 18.7.2011 22:11, k...@rice.edu wrote:
> >> > In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS
> >> > on the E5335.  Can this make such a bit difference or should the
> >> > perform fairly close to the same speed?  Both servers have 8GB of
> >> > RAM, and the database I tested with is only 3.7GB.
> >> > 
> >> > I'm a bit surprised as the x3450 has DDR3, while the E5335 has DDR2,
> >> > and of course because of the cycle speed difference alone I would
> >> > think the X3450 should beat the E5335.
> >> > 
> > Yes, you have basically shown that  running two different tests give
> > different results -- or that an apple is not an orange. You need to
> > only vary 1 variable at a time for it to mean anything.
> 
> He just ran the same test on two different machines - I'm not sure
> what's wrong with that. Sure, it would be nice to compare 32bit to 32bit,
> but the OP probably can't do that and wonders if this is the cause. Why
> is that comparing apples and oranges?
> 
It is only that 32- vs. 64-bit, the compiler, and other things can easily make
a factor of 2 change in the results. So it is not necessarily telling you much
about the processor differences.

Regards,
Ken



Re: [PERFORM] Recommended optimisations slows down PostgreSQL 8.4

2011-08-11 Thread k...@rice.edu
On Thu, Aug 11, 2011 at 04:35:34PM -0700, Waldo Nell wrote:
> I have PostgreSQL 8.4.8 on Ubuntu Linux x64.  Server is a Core i7 950 with 
> 6GB of RAM.  2GB of RAM is used by Java, some small amount by the kernel / 
> services and the rest is available to PostgreSQL.  Hard drive is a single 
> 7200 RPM SATA 1TB Caviar Black HDD.  No other applications / processes are 
> running when I perform my tests.
> 
> I have an application that performs about 80% reads and 20% writes for a 
> specific billrun.   It takes about 60 minutes to complete, and I can have it 
> perform precisely the same queries repeatedly.  I have consistently showed 
> that when shared_buffers = 24MB (the default), and wal_buffers = 64kB, the 
> system completes the process in 50 minutes.  When I bump shared_buffers to 
> 1500MB, the system slows down and takes 60 minutes to complete the same 
> process.  Changing that back to 24MB, but then changing wal_buffers to 16MB 
> has the same impact - performance drops from 50 minutes to about 61 minutes.  
> Changing those two parameters back to the defaults returns the time to 50 
> minutes.
> 
> fsync = off for these tests - not sure if it is relevant.  All other settings 
> are at their defaults.
> 
> Please explain why the system is slower with the recommended values for these 
> two settings?  The DB is about 74GB, the largest table has 180 million rows.

One guess is that you are using the defaults for the other costing parameters
and they do not accurately reflect your system. This means that it will be a
crap shoot as to whether a given plan is faster or slower and what will affect
the timing.
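
For example, you can override them in a session and compare plans (the
numbers below are purely illustrative; the right values depend on your
disks and how much of the data is cached):

  SET effective_cache_size = '3GB';
  SET random_page_cost = 2.0;
  SET cpu_tuple_cost = 0.03;
  EXPLAIN ANALYZE SELECT ...;   -- one of the queries from the billrun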

Regards,
Ken



Re: [PERFORM] DBT-5 & Postgres 9.0.3

2011-08-17 Thread k...@rice.edu
On Wed, Aug 17, 2011 at 10:59:12AM -0700, bobbyw wrote:
> Awesome.. that did it!  It was actually not set at all in postgresql.conf,
> although it was commented out as:
> 
> # unix_socket_directory = '' 
> 
> Presumably it was using the default of '/tmp'?
> 
> Anyway, after making that change dbt5 runs fine, but now when I try to
> connect via "psql" I get:
> 
> psql.bin: could not connect to server: No such file or directory
> Is the server running locally and accepting
> connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
> 
> Why is psql looking in /tmp?
> 

Because that is the default location. If you want it to look somewhere else,
you need to use the -h command-line option to point at the socket directory.

Regards,
Ken



Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-08-17 Thread k...@rice.edu
On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:
> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with 
> the new one we have, which I have configured with RAID 10. The drives are the 
> same (SAS 15K). I tried the new system with ext3 and then XFS but the results 
> seem really outrageous as compared to the current system, or am I reading 
> things wrong?
> 
> The benchmark results are here:
> 
> http://malekkoheavyindustry.com/benchmark.html
> 
> 
> Thank you
> 
> Ogden

That looks pretty normal to me.

Ken



Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-08-17 Thread k...@rice.edu
On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:
> 
> On Aug 17, 2011, at 1:31 PM, k...@rice.edu wrote:
> 
> > On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:
> >> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) 
> >> with the new one we have, which I have configured with RAID 10. The drives 
> >> are the same (SAS 15K). I tried the new system with ext3 and then XFS but 
> >> the results seem really outrageous as compared to the current system, or 
> >> am I reading things wrong?
> >> 
> >> The benchmark results are here:
> >> 
> >> http://malekkoheavyindustry.com/benchmark.html
> >> 
> >> 
> >> Thank you
> >> 
> >> Ogden
> > 
> > That looks pretty normal to me.
> > 
> > Ken
> 
> But such a jump from the current db01 system to this? Over 20 times 
> difference from the current system to the new one with XFS. Is that much of a 
> jump normal?
> 
> Ogden

Yes, RAID5 is bad in many ways. XFS is much better than EXT3. You would get
similar results with EXT4 as well, I suspect, although you did not test that.

Regards,
Ken



Re: [PERFORM] Raid 5 vs Raid 10 Benchmarks Using bonnie++

2011-08-17 Thread k...@rice.edu
On Wed, Aug 17, 2011 at 03:40:03PM -0500, Ogden wrote:
> 
> On Aug 17, 2011, at 1:35 PM, k...@rice.edu wrote:
> 
> > On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:
> >> 
> >> On Aug 17, 2011, at 1:31 PM, k...@rice.edu wrote:
> >> 
> >>> On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:
> >>>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) 
> >>>> with the new one we have, which I have configured with RAID 10. The 
> >>>> drives are the same (SAS 15K). I tried the new system with ext3 and then 
> >>>> XFS but the results seem really outrageous as compared to the current 
> >>>> system, or am I reading things wrong?
> >>>> 
> >>>> The benchmark results are here:
> >>>> 
> >>>> http://malekkoheavyindustry.com/benchmark.html
> >>>> 
> >>>> 
> >>>> Thank you
> >>>> 
> >>>> Ogden
> >>> 
> >>> That looks pretty normal to me.
> >>> 
> >>> Ken
> >> 
> >> But such a jump from the current db01 system to this? Over 20 times 
> >> difference from the current system to the new one with XFS. Is that much 
> >> of a jump normal?
> >> 
> >> Ogden
> > 
> > Yes, RAID5 is bad for in many ways. XFS is much better than EXT3. You would 
> > get similar
> > results with EXT4 as well, I suspect, although you did not test that.
> 
> 
> i tested ext4 and the results did not seem to be that close to XFS. 
> Especially when looking at the Block K/sec for the Sequential Output. 
> 
> http://malekkoheavyindustry.com/benchmark.html
> 
> So XFS would be best in this case?
> 
> Thank you
> 
> Ogden

It appears so, at least for the Bonnie++ benchmark. I would really try to
benchmark your actual DB on both EXT4 and XFS, because some of the comparative
benchmarks between the two give the win to EXT4 for INSERT/UPDATE database
usage with PostgreSQL. Only your application will know for sure. :)

Ken



Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-14 Thread k...@rice.edu
On Wed, Sep 14, 2011 at 03:40:07PM -0400, Igor Neyman wrote:
> 
> 
> From: Carlo Stonebanks [mailto:stonec.regis...@sympatico.ca] 
> Sent: Tuesday, September 13, 2011 9:27 PM
> To: Performance support Postgresql
> Subject: Re: Migrated from 8.3 to 9.0 - need to update config (re-post)
> 
> 
>  
> 
> Ok, connection pooler it is. As I understand it, even if there are no idle 
> connections available we'll get the benefit of putting a turnstile on 
> the butcher's door.
> I also ordered the book as soon as you mentioned - the title alone was enough 
> to sell me on it! The book won't be for the errant sys admin who increased 
> the connections, it's for me - I'll use it to whack the sys admin on the 
> head. Thanks for the tip, the author owes you a beer - as do I.
>  
> Will the book recommend any particular connection pooler product, or is it 
> inappropriate to ask for a recommendation on the forum?
>  
> Carlo
>  
> 
> I'd start with the pg_bouncer: very simple to set up, reliable, no "extra" 
> functionality, which it seems from your message you don't need.
> 
> Igor Neyman

+1 for pg_bouncer being easy to set up and use and being robust. We also use
pgpool here, but it is a much bigger beast and I suspect that you do not need
its bells and whistles.

Regards,
Ken



Re: [PERFORM] Query optimization using order by and limit

2011-09-22 Thread k...@rice.edu
On Wed, Sep 21, 2011 at 11:22:53PM -0400, Tom Lane wrote:
> Michael Viscuso  writes:
> > Greg/Tom, you are correct, these columns should be modified to whatever
> > is easiest for Postgres to recognize 64-bit unsigned integers.  Would
> > you still recommend bigint for unsigned integers?  I likely read the
> > wrong documentation that suggested bigint for signed 64-bit integers and
> > numeric(20) for unsigned 64-bit integers.
> 
> Unsigned?  Oh, hm, that's a bit of a problem because we don't have any
> unsigned types.  If you really need to go to 2^64 and not 2^63 then
> you're stuck with numeric ... but that last bit is costing ya a lot.
> 
>   regards, tom lane
> 

Hi Michael,

If you have access to the application, you can map the unsigned 64-bit
values to the PostgreSQL signed 64-bit type with a simple subtraction. That
will allow you to drop all the numeric use. Also, if the guid is a 64-bit
value stuffed into a numeric(20), you can do the same there. I achieved
a hefty performance boost by making those application-level changes in a
similar situation.
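
For illustration, the mapping is just "subtract 2^63" on the way in and
"add 2^63" on the way back out, done in the application (the literal
values below are only examples):

  -- 2^64 - 1, the largest unsigned 64-bit value, maps to the largest bigint
  SELECT (18446744073709551615::numeric - 9223372036854775808)::bigint;
  -- => 9223372036854775807

  -- and the inverse recovers the original unsigned value
  SELECT 9223372036854775807::numeric + 9223372036854775808;
  -- => 18446744073709551615

The mapping is monotonic, so sort order, indexes and range comparisons
keep behaving the same.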

Regards,
Ken



Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-01 Thread k...@rice.edu
On Tue, Nov 01, 2011 at 08:33:51AM +0530, Mohamed Hashim wrote:
> Any ideas or suggestions on how to improve my database
> performance?
> 
> Regards
> Hashim
> 
Hi Hashim,

Ignoring the description of your tables, you should probably try
updating to the latest release 9.0.5. You are two point releases
back and they really, really, really fix bugs in each release or
they do not bother releasing.

Regards,
Ken



Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-03 Thread k...@rice.edu
On Thu, Nov 03, 2011 at 03:48:11PM +0100, Albe Laurenz wrote:
> 
> I experimented some more on a recent system (RHEL6, OpenSSL 1.0.0-fips),
> and it is as you say. Disabling OpenSSL compression in the source (which
> is possible since OpenSSL 1.0.0) does not give me any performance
> improvement.
> 
> Seems you pretty much have to live with at most 1/4 of the performance
> if you want to SELECT large images using SSL.
> 
> Yours,
> Laurenz Albe
> 

Have you tried different ciphers? RC4 is much lighter weight CPU-wise
than the typically negotiated cipher. AES128 is also not bad if you
have the newer Intel chips with the hardware encryption support. Just
another thing to check.

Regards,
Ken



Re: [PERFORM] PostgreSQL perform poorly on VMware ESXi

2011-11-07 Thread k...@rice.edu
On Mon, Nov 07, 2011 at 08:36:10AM -0200, Lucas Mocellin wrote:
> Hi everybody,
> 
> I'm having some issues with PostgreSQL 9.03 running on FreeBSD 8.2 on top
> of VMware ESXi 4.1 U1.
> 
> The problem is query are taking too long, and some times one query "blocks"
> everybody else to use the DB as well.
> 
> I'm a network administrator, not a DBA, so many things here can be "newbie"
> for you guys, so please be patient. :)
> 
> Clonning this database to another machine not virtualized, any "crap"
> machine, it runs a way fastar, I've measured one specific "SELECT" and at
> the virtualized system (4GB RAM, 4 processors, SATA disk, virtualized), it
> took 15minutes!, in the crap machine (2GB RAM, 1 processor, SATA disk, NOT
> virtualized), it took only 2!!!
> 
> I always think the bottleneck is disk I/O as I can see from the vSphere
> performance view, but the virtual machine is using exclusively the SATA
> disk with no concurrency with other machines.
> 
> how do you guys deal with virtualization? any tips/recommendations? does
> that make sense the disk I/O? any other sugestion?
> 
> thanks in advance!
> 
> Lucas.

Hi Lucas,

Virtualization is not a magic bullet. It has many advantages but also
has disadvantages. The resources of the virtual machine are always a
subset of the host machine resources. In addition, the second layer of
disk I/O indirection through the virtual disk can effectively turn
a sequential I/O pattern into a random I/O pattern with the accompanying
10:1 decrease in I/O throughput.

I would recommend testing your I/O on your virtual machine.

Regards,
Ken



Re: [PERFORM] Dramatic change in memory usage with version 9.1

2011-12-19 Thread k...@rice.edu
Wow, upgrading 3 major releases at a go. :) It would probably be
useful to follow the helpful guide at:

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

to get the needed information to the right people.

Regards,
Ken

On Mon, Dec 19, 2011 at 04:04:54PM +0100, Rafael Martinez wrote:
> 
> Hello
> 
> I am sending this email to ask if anyone has noticed a change in how
> a server running postgreSQL 9.1 uses and allocates memory compared to
> older versions.
> 
> We upgraded all our systems from 8.3 to 9.1 a couple of weeks ago, and
> we have experienced a radical change in how our servers make use of
> memory. How memory is allocated has become more unstable and the swap
> usage has increased dramatically.



Re: [PERFORM] Very long deletion time on a 200 GB database

2012-02-23 Thread k...@rice.edu
On Thu, Feb 23, 2012 at 05:25:46PM +0200, Reuven M. Lerner wrote:
> >
> >What is the distribution of end_dates? It might be worth running this in
> >several steps, deleting records older than, say, 90 days, 60 days, 30 days.
> 
> I've suggested something similar, but was told that we have limited
> time to execute the DELETE, and that doing it in stages might not be
> possible.
> 
> Reuven
> 

In cases like this, I have often found that doing the delete in smaller
pieces goes faster, sometimes much faster, than the bigger delete.
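
For instance, something along these lines (hypothetical table and column
names), so each DELETE only touches a slice of the rows and you can stop
between steps if the maintenance window runs out:

  DELETE FROM events WHERE end_date < now() - interval '90 days';
  DELETE FROM events WHERE end_date < now() - interval '60 days';
  DELETE FROM events WHERE end_date < now() - interval '30 days';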

Regards,
Ken



Re: [PERFORM] DBD-Pg prepared statement versus plain execution

2012-03-21 Thread k...@rice.edu
Hi Rafael,

Try disabling the prepared statement processing in DBD::Pg and
re-run the timings.

Regards,
Ken

On Wed, Mar 21, 2012 at 12:21:23PM +0100, Rafael Martinez wrote:
> 
> Hello
> 
> We are having some performance problems with an application that uses
> prepared statement heavily.
> 
> We have found out that it creates-executes-destroys a prepared statement
> *per* statement it sends to the database (pg-9.1) via DBD-Pg.
> 
> A normal log entry for a sql-statement looks e.g. like this:
> - --
> [2012-03-15 14:49:12.484 CET]  LOG:  duration: 8.440 ms  parse
> dbdpg_p32048_3:
> 
> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers
> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =
> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId  =
> ACL.PrincipalId AND CachedGroupMembers.GroupId  = Principals.id AND
> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0
>  AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR
> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))
> 
> [2012-03-15 14:49:12.485 CET]  LOG:  duration: 0.087 ms  bind
> dbdpg_p32048_3:
> 
> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers
> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =
> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId  =
> ACL.PrincipalId AND CachedGroupMembers.GroupId  = Principals.id AND
> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0
>  AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR
> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))
> 
> 
> [2012-03-15 14:49:12.487 CET] LOG:  duration: 1.692 ms  execute
> dbdpg_p32048_3:
> 
> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers
> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =
> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId  =
> ACL.PrincipalId AND CachedGroupMembers.GroupId  = Principals.id AND
> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0
>  AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR
> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))
> 
> 
> [2012-03-15 14:49:12.488 CET] LOG:  duration: 0.029 ms  statement:
> DEALLOCATE dbdpg_p32048_3
> - --
> 
> As you can see, the parse+bind+deallocate part uses much more time than
> the execution part. This is the same for many of the statements send to
> the database.
> 
> My question is:
> 
> Is the reported parse+bind time something the planner would spend anyway
> (just not reported) when running a sql-statement in the normal way, or is
> the parse+bind+deallocate time *extra* time needed by the prepared statement?
> 
> Can we assume that running this application without using prepared
> statements will make it faster by the time used for
> parse+bind+deallocate?
> 
> Thanks in advance.
> 
> regards,
> - -- 
>  Rafael Martinez Guerrero
>  Center for Information Technology
>  University of Oslo, Norway
> 
>  PGP Public Key: http://folk.uio.no/rafael/
> 
> 



Re: [PERFORM] Tablespaces on a raid configuration

2012-03-30 Thread k...@rice.edu
On Fri, Mar 30, 2012 at 02:45:36PM +, Campbell, Lance wrote:
> PostgreSQL 9.0.x
> When PostgreSQL storage is using a relatively large raid 5 or 6 array, is 
> there any value in having your tables distributed across multiple tablespaces 
> if those tablespaces will exist on the same raid array?  I understand the 
> value if you were to have the tablespaces on different raid arrays.  But what 
> about on the same one?
> 
> 
> Thanks,
> 
> Lance Campbell
> Software Architect
> Web Services at Public Affairs
> 217-333-0382
> 

I have seen previous discussions about using different filesystems versus
a single filesystem and one advantage that multiple tablespaces have is
that an fsync on one table/tablespace would not block or be blocked by
an fsync on a different table/tablespace at the OS level.
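
For example (hypothetical paths and table names), even on a single array
you could split the busiest tables like this:

  CREATE TABLESPACE ts_a LOCATION '/raid/pgdata/ts_a';
  CREATE TABLESPACE ts_b LOCATION '/raid/pgdata/ts_b';
  -- note: SET TABLESPACE rewrites and locks each table while it moves
  ALTER TABLE busy_table_a SET TABLESPACE ts_a;
  ALTER TABLE busy_table_b SET TABLESPACE ts_b;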

Regards,
Ken



Re: [PERFORM] DELETE vs TRUNCATE explanation

2012-07-11 Thread k...@rice.edu
On Wed, Jul 11, 2012 at 10:05:48AM -0400, Tom Lane wrote:
> Daniel Farina  writes:
> > TRUNCATE should simply be very nearly the fastest way to remove data
> > from a table while retaining its type information, and if that means
> > doing DELETE without triggers when the table is small, then it should.
> >  The only person who could thwart me is someone who badly wants their
> > 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB
> > of catalog anyway.
> 
> > Does that sound reasonable?  As in, would anyone object if TRUNCATE
> > learned this behavior?
> 
> Yes, I will push back on that.
> 
> (1) We don't need the extra complexity.
> 
> (2) I don't believe that you know where the performance crossover point
> would be (according to what metric, anyway?).
> 
> (3) The performance of the truncation itself should not be viewed in
> isolation; subsequent behavior also needs to be considered.  An example
> of possible degradation is that index bloat would no longer be
> guaranteed to be cleaned up over a series of repeated truncations.
> (You might argue that if the table is small then the indexes couldn't
> be very bloated, but I don't think that holds up over a long series.)
> 
> IOW, I think it's fine as-is.  I'd certainly wish to see many more
> than one complainant before we expend effort in this area.
> 
>   regards, tom lane
> 

+1. TRUNCATE needs to keep the same properties independent of the size
of the table. Smearing it into a DELETE would not be good at all. If
there are optimizations that can be made while keeping its current
behavior, those might be possible, but the complexity may not be
worthwhile for a relatively rare corner case.

Regards,
Ken



Re: [PERFORM] cluster on conditional index?

2012-08-14 Thread k...@rice.edu
On Tue, Aug 14, 2012 at 10:10:47AM -0700, Jeff Janes wrote:
> On Tue, Aug 14, 2012 at 8:27 AM, Doug Hunley  wrote:
> > According to the docs on cluster:
> > if you tend to access some data more than others, and there is an
> > index that groups them together, you will benefit from using CLUSTER
> >
> > however, this doesn't address the situation where you have a
> > conditional index.
> 
> It seems like it is not allowed.
> 
> jjanes=# create index on pgbench_accounts (aid) where bid=33;
> jjanes=# cluster pgbench_accounts USING pgbench_accounts_aid_idx ;
> ERROR:  cannot cluster on partial index "pgbench_accounts_aid_idx"
> 
> But I don't see a fundamental reason it can't be allowed, maybe
> implementing that should be on the to-do list.
> 
> Cheers,
> 
> Jeff
> 

It probably has to do with the fact that a conditional (partial) index
does not include every possible row in the table. Although a "cluster"
of the matching rows, leaving the rest in place, should work. How
is that for hand-waving? :)
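
One hand-rolled approximation of that idea, using Jeff's pgbench example
(just a sketch: the copy starts with no indexes or constraints, and you
would have to swap it in for the original table yourself):

  CREATE TABLE pgbench_accounts_clustered AS
    SELECT * FROM pgbench_accounts
    ORDER BY (bid = 33) DESC, aid;   -- the bid=33 rows end up physically together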

Regards,
Ken




Re: [PERFORM] hardware advice

2012-09-28 Thread k...@rice.edu
On Thu, Sep 27, 2012 at 03:50:33PM -0500, Shaun Thomas wrote:
> On 09/27/2012 03:44 PM, Scott Marlowe wrote:
> 
> >This 100x this.  We used to buy our boxes from aberdeeninc.com and got
> >a 5 year replacement parts warranty included.  We spent ~$10k on a
> >server that was right around $18k from dell for the same numbers and a
> >3 year warranty.
> 
> Whatever you do, go for the Intel ethernet adaptor option. We've had
> so many headaches with integrated broadcom NICs. :(
> 
+++1 Sigh.

Ken




Re: [PERFORM] Support Create package

2012-10-16 Thread k...@rice.edu
On Tue, Oct 16, 2012 at 01:26:37PM +0100, Alejandro Carrillo wrote:
> Hi,
> 
> Why does EnterpriseDB's PostgreSQL support create/alter/drop package while 
> the open source version doesn't?
> Is it a planned project, or will it never be supported?
> 
Hi Alejandro,

Isn't that part of their Oracle compatibility secret sauce? For the open source
version, it has never been important enough for anyone to invest in the
development effort.

Cheers,
Ken




Re: [PERFORM] Query-Planer from 6seconds TO DAYS

2012-10-26 Thread k...@rice.edu
On Fri, Oct 26, 2012 at 04:37:33PM +0200, Böckler Andreas wrote:
> Hi,
> 
> 
> Am 25.10.2012 um 20:22 schrieb Kevin Grittner:
> 
> > 
> > The idea is to model actual costs on your system.  You don't show
> > your configuration or describe your hardware, but you show an
> > estimate of retrieving over 4000 rows through an index and describe a
> > response time of 4 seconds, so you must have some significant part of
> > the data cached.
> Sure, my effective_cache_size is 10 GB.
> But my right table is 1.2 TB (yeah, tera) in size at the moment 
> (partitioned in 40GB slices) and has 3 * 10^9 records.
> 
> My left table is only 227MB with 1 million records. Peanuts.
> > I would see how the workload behaves with the following settings:
> > 
> > effective_cache_size = <shared_buffers plus what the OS shows as cached pages>
> > seq_page_cost = 1
> > random_page_cost = 2
> > cpu_tuple_cost = 0.05
> > 
> > You can set these in a session and check the plan with EXPLAIN. Try
> > various other important important queries with these settings and
> > variations on them. Once you hit the right factors to model your
> > actual costs, the optimizaer will make better choices without needing
> > to tinker with it each time.
> 
>  i've played with that already ….
> 
> NESTED LOOP -> GOOD
> SEQSCAN -> VERY BAD
> 
> SET random_page_cost = 4;
> 2012-08-14' AND '2012-08-30' -> NESTED LOOP
> 2012-08-13' AND '2012-08-30' -> SEQSCAN
> SET random_page_cost = 2;
> 2012-08-14' AND '2012-08-30' -> NESTED LOOP
> 2012-08-07' AND '2012-08-30' -> NESTED LOOP
> 2012-08-06' AND '2012-08-30' -> SEQSCAN
> SET random_page_cost = 1;
> 2012-08-14' AND '2012-08-30' -> NESTED LOOP
> 2012-08-07' AND '2012-08-30' -> NESTED LOOP
> 2012-07-07' AND '2012-08-30' -> NESTED LOOP
> 2012-07-06' AND '2012-08-30' -> SEQSCAN
> 
> The thing is ..
> - You can alter what you want. The planner will switch at a certain time 
> range.
> - There is not one case, where the SEQSCAN-Method will be better .. It's not 
> possible.
> 
> So the only way to tell the planner that he's doomed is 
> SET enable_seqscan=0
> which is not very elegant. (Query Hints would be BTW jehovah!)
> 
> You would be forced to write something like this:
> var lastValueEnable_seqscan = "SHOw enable_seqscan"
> SET enable_seqscan=0;
> SELECT ...
> SET enable_seqscan=lastValueEnable_seqscan;
> 
> Kind regards
> 
> Andy
> 

Hi Andy,

You have seq_page_cost = 1, which is cheaper than or equal to
the random_page_cost in all of your examples. It sounds like you need
a seq_page_cost of 5, 10, 20 or more.

Regards,
Ken




Re: [PERFORM] Query-Planer from 6seconds TO DAYS

2012-10-26 Thread k...@rice.edu
On Fri, Oct 26, 2012 at 05:15:05PM +0200, Böckler Andreas wrote:
> Hi Ken,
> 
> Am 26.10.2012 um 16:55 schrieb k...@rice.edu:
> 
> > Hi Andy,
> > 
> > You have the sequential_page_cost = 1 which is better than or equal to
> > the random_page_cost in all of your examples.
> > It sounds like you need
> > a sequential_page_cost of 5, 10, 20 or more.
> 
> You're right, it was seq_page_cost = 1, because it's really irrelevant 
> what I do here:
> set random_page_cost=2;
> set seq_page_cost=5;
> '2012-05-01' AND '2012-08-30' -> NESTEDLOOP
> '2012-04-01' AND '2012-08-30' -> SEQSCAN
> 
> a) there will be a point where things go bad; 
>  this is like patching up a roof till you find the next hole instead of 
> making it right at the beginning of the construction process
> b) the high seq costs might be true for that table (partitions at 40GB), but 
> not for the rest of the database; 
>  seqscan costs per table would be great.
> 
> Regards,
> 
> Andy
> 

Hi Andy,

You can set them per tablespace. Maybe you could put the appropriate tables
that need the higher costing on the same one.
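
For example (hypothetical tablespace name; the values are only
illustrative):

  ALTER TABLESPACE big_partitions
    SET (seq_page_cost = 10, random_page_cost = 4);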

Regards,
Ken




Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-29 Thread k...@rice.edu
On Mon, Oct 29, 2012 at 02:05:24PM +0100, Albe Laurenz wrote:
> I am configuring streaming replication with hot standby
> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
> PostgreSQL was compiled from source.
> 
> It works fine, except that starting the standby took for ever:
> it took the system more than 80 minutes to replay 48 WAL files
> and connect to the primary.
> 
> Can anybody think of an explanation why it takes that long?
> 
> This is decent hardware: 24 cores of AMD Opteron 6174, 128 GB RAM,
> NetApp SAN attached with 8 GBit Fibrechannel (ext4 file system).
> An identical system performed fine in performance tests.
> 
> Here is the log; I have edited it for readability:
> 
> 2012-10-29 09:22:22.945  database system was interrupted; last known up
> at 2012-10-26 01:11:59 CEST
> 2012-10-29 09:22:22.945  creating missing WAL directory
> "pg_xlog/archive_status"
> 2012-10-29 09:22:22.947  entering standby mode
> 2012-10-29 09:22:23.434  restored log file "000101D100C4"
> from archive
> 2012-10-29 09:22:23.453  redo starts at 1D1/C420
> 2012-10-29 09:22:25.847  restored log file "000101D100C5"
> from archive
> 2012-10-29 09:22:27.457  restored log file "000101D100C6"
> from archive
> 2012-10-29 09:22:28.946  restored log file "000101D100C7"
> from archive
> 2012-10-29 09:22:30.421  restored log file "000101D100C8"
> from archive
> 2012-10-29 09:22:31.243  restored log file "000101D100C9"
> from archive
> 2012-10-29 09:22:32.194  restored log file "000101D100CA"
> from archive
> 2012-10-29 09:22:33.169  restored log file "000101D100CB"
> from archive
> 2012-10-29 09:22:33.565  restored log file "000101D100CC"
> from archive
> 2012-10-29 09:23:35.451  restored log file "000101D100CD"
> from archive
> 
> Everything is nice until here.
> Replaying this WAL file suddenly takes 1.5 minutes instead
> of mere seconds as before.
> 
> 2012-10-29 09:24:54.761  restored log file "000101D100CE"
> from archive
> 2012-10-29 09:27:23.013  restartpoint starting: time
> 2012-10-29 09:28:12.200  restartpoint complete: wrote 242 buffers
> (0.0%);
>  0 transaction log file(s) added, 0 removed, 0
> recycled;
>  write=48.987 s, sync=0.185 s, total=49.184 s;
>  sync files=1096, longest=0.016 s, average=0.000
> s
> 2012-10-29 09:28:12.206  recovery restart point at 1D1/CC618278
> 2012-10-29 09:28:31.226  restored log file "000101D100CF"
> from archive
> 
> Again there is a difference of 2.5 minutes
> between these WAL files, only 50 seconds of
> which were spent in the restartpoint.
> 
> From here on it continues in quite the same vein.
> Some WAL files are restored in seconds, but some take
> more than 4 minutes.
> 
> I'll skip to the end of the log:
> 
> 2012-10-29 10:37:53.809  restored log file "000101D100EF"
> from archive
> 2012-10-29 10:38:53.194  restartpoint starting: time
> 2012-10-29 10:39:25.929  restartpoint complete: wrote 161 buffers
> (0.0%);
>  0 transaction log file(s) added, 0 removed, 0
> recycled;
>  write=32.661 s, sync=0.066 s, total=32.734 s;
>  sync files=251, longest=0.003 s, average=0.000
> s
> 2012-10-29 10:39:25.929  recovery restart point at 1D1/ED95C728
> 2012-10-29 10:42:56.153  restored log file "000101D100F0"
> from archive
> 2012-10-29 10:43:53.062  restartpoint starting: time
> 2012-10-29 10:45:36.871  restored log file "000101D100F1"
> from archive
> 2012-10-29 10:45:39.832  restartpoint complete: wrote 594 buffers
> (0.0%);
>  0 transaction log file(s) added, 0 removed, 0
> recycled;
>  write=106.666 s, sync=0.093 s, total=106.769 s;
>  sync files=729, longest=0.004 s, average=0.000
> s
> 2012-10-29 10:45:39.832  recovery restart point at 1D1/EF5D4340
> 2012-10-29 10:46:13.602  restored log file "000101D100F2"
> from archive
> 2012-10-29 10:47:38.396  restored log file "000101D100F3"
> from archive
> 2012-10-29 10:47:38.962  streaming replication successfully connected to
> primary
> 
> I'd be happy if somebody could shed light on this.
> 
> Yours,
> Laurenz Albe
> 
> PS: Here is the configuration:
> 
>  name | current_setting 
> --+---
>  version  | PostgreSQL 9.1.3 on
> x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat
> 4.4.6-3), 64-bit
>  archive_command  | gzip -1 <"%p" | tee
> /POSTGRES/data/exchange/"%f".gz >/POSTGRES/data/backups/ELAK/"%f".gz
>  archive_mode | on
>  checkpoint_completion_target | 0.9
>  checkpoint_segments  | 30
>  client_encoding  | UTF8
>  constraint_exclusion  

Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread k...@rice.edu
On Tue, Oct 30, 2012 at 09:50:44AM +0100, Albe Laurenz wrote:
> >> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz
>  wrote:
> >>> I am configuring streaming replication with hot standby
> >>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
> >>> PostgreSQL was compiled from source.
> >>>
> >>> It works fine, except that starting the standby took for ever:
> >>> it took the system more than 80 minutes to replay 48 WAL files
> >>> and connect to the primary.
> >>>
> >>> Can anybody think of an explanation why it takes that long?
> 
> Jeff Janes wrote:
> >> Could the slow log files be replaying into randomly scattered pages
> >> which are not yet in RAM?
> >>
> >> Do you have sar or vmstat reports?
> 
> The sar reports from the time in question tell me that I read
> about 350 MB/s and wrote less than 0.2 MB/s.  The disks were
> fairly busy (around 90%).
> 
> Jeff Trout wrote:
> > If you do not have good random io performance log replay is nearly
> unbearable.
> > 
> > also, what io scheduler are you using? if it is cfq change that to
> deadline or noop.
> > that can make a huge difference.
> 
> We use the noop scheduler.
> As I said, an identical system performed well in load tests.
> 
> The sar reports give credit to Jeff Janes' theory.
> Why does WAL replay read much more than it writes?
> I thought that pretty much every block read during WAL
> replay would also get dirtied and hence written out.
> 
> I wonder why the performance is good in the first few seconds.
> Why should exactly the pages that I need in the beginning
> happen to be in cache?
> 
> And finally: are the numbers I observe (replay 48 files in 80
> minutes) ok or is this terribly slow as it seems to me?
> 
> Yours,
> Laurenz Albe
> 

Hi,

The load tests probably had the "important" data already cached. Processing
a WAL file would involve bringing all the data back into memory using a
random I/O pattern. Perhaps priming the file cache using some sequential
reads would allow the random I/O to hit memory instead of disk. I may be
misremembering, but wasn't there an associated project/program that would
parse the WAL files and generate cache priming reads?

Regards,
Ken




Re: [PERFORM] Replaying 48 WAL files takes 80 minutes

2012-10-30 Thread k...@rice.edu
On Tue, Oct 30, 2012 at 02:16:57PM +0100, Albe Laurenz wrote:
> k...@rice.edu wrote:
> >>> If you do not have good random io performance log replay is nearly
> >>> unbearable.
> >>>
> >>> also, what io scheduler are you using? if it is cfq change that to
> >>> deadline or noop.
> >>> that can make a huge difference.
> >>
> >> We use the noop scheduler.
> >> As I said, an identical system performed well in load tests.
> 
> > The load tests probably had the "important" data already cached.
> Processing
> > a WAL file would involve bringing all the data back into memory using
> a
> > random I/O pattern.
> 
> The database is way too big (1 TB) to fit into cache.
> 
> What are "all the data" that have to be brought back?
> Surely only the database blocks that are modified by the WAL,
> right?
> 
> Yours,
> Laurenz Albe
> 

Right, it would only read the blocks that are modified.

Regards,
Ken




Re: [PERFORM] Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2

2012-11-06 Thread k...@rice.edu
Hi Rodrigo,

It looks like a lot of joins and 9.2 does some optimizations that
internally add additional joins. Did you try raising the
join_collapse_limit and maybe the from_collapse_limit from the
default values of 8?
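
For example, in the same session before running the query (16 is just an
illustrative value; the default for both is 8):

  SET join_collapse_limit = 16;
  SET from_collapse_limit = 16;
  EXPLAIN (ANALYZE, BUFFERS) SELECT ...;   -- re-run slow.sql / fast.sql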

Regards,
Ken

On Tue, Nov 06, 2012 at 03:11:58PM -0200, Rodrigo Rosenfeld Rosas wrote:
> Hello, this is my first message to this list, so sorry if this is
> not the right place to discuss this or if some data is missing from
> this message.
> 
> I'll gladly send any data you request that would help us to
> understand this issue. I don't believe I'm allowed to share the
> actual database dump, but other than that I can provide much more
> details you might ask for.
> 
> I can't understand why PG 9.2 performs so differently from PG 9.1.
> 
> I tested these queries in my Debian unstable amd64 box after
> restoring the same database dump this morning in both PG 9.1 (Debian
> unstable repository) and PG9.2 (Debian experimental repository) with
> same settings:
> 
> https://gist.github.com/3f1f3aad3847155e1e35
> 
> Ignore all lines like the line below because it doesn't make any
> difference on my tests if I just remove them or any other column
> from the SELECT clause:
> 
> "  exists(select id from condition_document_excerpt where
> condition_id=c1686.id) as v1686_has_reference,"
> 
> The results below are pretty much the same if you assume "SELECT 1
> FROM ...".
> 
> I have proper indices created for all tables and the query  is fast
> in both PG versions when I don't use many conditions in the WHERE
> clause.
> 
> fast.sql returns the same data as slow.sql but it returns much
> faster in my tests with PG 9.1.
> 
> So here are the completion times for each query on each PG version:
> 
> Query   | PG 9.1 | PG 9.2 |
> ---
> fast.sql| 650 ms (0.65s) | 690s   |
> slow.sql| 419s   | 111s   |
> 
> 
> For the curious, the results would be very similar to slow.sql if I
> use inner joins with the conditions inside the WHERE moved to the
> "ON" clause of the inner join instead of the left outer join +
> global WHERE approach. But I don't have this option anyway because
> this query is generated dynamically and not all my queries are
> "ALL"-like queries.
> 
> Here are the relevant indices (id is SERIAL primary key in all tables):
> 
> CREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx
>   ON transaction_condition
>   USING btree
>   (transaction_id, type_id);
> CREATE INDEX index_transaction_condition_on_transaction_id
>   ON transaction_condition
>   USING btree
>   (transaction_id);
> CREATE INDEX index_transaction_condition_on_type_id
>   ON transaction_condition
>   USING btree
>   (type_id);
> 
> CREATE INDEX acquirer_target_names
>   ON company_transaction
>   USING btree
>   (acquiror_company_name COLLATE pg_catalog."default",
> target_company_name COLLATE pg_catalog."default");
> CREATE INDEX index_company_transaction_on_target_company_name
>   ON company_transaction
>   USING btree
>   (target_company_name COLLATE pg_catalog."default");
> CREATE INDEX index_company_transaction_on_date
>   ON company_transaction
>   USING btree
>   (date);
> CREATE INDEX index_company_transaction_on_edit_status
>   ON company_transaction
>   USING btree
>   (edit_status COLLATE pg_catalog."default");
> 
> CREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id
>   ON condition_boolean_value
>   USING btree
>   (condition_id);
> CREATE INDEX index_condition_boolean_value_on_value_and_condition_id
>   ON condition_boolean_value
>   USING btree
>   (value COLLATE pg_catalog."default", condition_id);
> 
> CREATE UNIQUE INDEX index_condition_option_value_on_condition_id
>   ON condition_option_value
>   USING btree
>   (condition_id);
> CREATE INDEX index_condition_option_value_on_value_id_and_condition_id
>   ON condition_option_value
>   USING btree
>   (value_id, condition_id);
> 
> 
> CREATE INDEX index_condition_option_label_on_type_id_and_position
>   ON condition_option_label
>   USING btree
>   (type_id, "position");
> CREATE INDEX index_condition_option_label_on_type_id_and_value
>   ON condition_option_label
>   USING btree
>   (type_id, value COLLATE pg_catalog."default");
> 
> 
> CREATE UNIQUE INDEX index_condition_string_value_on_condition_id
>   ON condition_string_value
>   USING btree
>   (condition_id);
> CREATE INDEX index_condition_string_value_on_value_and_condition_id
>   ON condition_string_value
>   USING btree
>   (value COLLATE pg_catalog."default", condition_id);
> 
> 
> Please let me know of any suggestions on how to try to get similar
> results in PG 9.2 as well as to understand why fast.sql performs so
> much better than slow.sql on PG 9.1.
> 
> Best,
> Rodrigo.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS

2013-01-07 Thread k...@rice.edu
Hi Patrick,

You really need a flash ZIL with ZFS to handle syncs effectively. Setting
synchronous_commit to off is the best you can do without it. Not that that
is bad; we do that here as well.
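
If only some of your transactions can tolerate a small loss window,
synchronous_commit can also be turned off per transaction instead of
globally; a minimal sketch:

BEGIN;
SET LOCAL synchronous_commit = off;
-- ... the statements whose commit may be lost on a power failure ...
COMMIT;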

Regards,
Ken

On Tue, Jan 08, 2013 at 01:18:02AM +0800, Patrick Dung wrote:
> Hi,
> 
> Updated information in this post.
> 
> I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.
> The pgsql base directory is in a ZFS dataset.
> 
> I have noticed the performance is sub-optimal, but I know the default
> settings should be the safest ones to use (without possible data
> corruption/loss).
> 
> a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.
> The user interactive response is not slow (switching web pages or creating
> a change).
> 
> b) There is a benchmark in the support module of OTRS.
> It tests insert, update, select and delete performance.
> The response time is slow (>10 sec) for everything except select.
> 
> I have done some research on the web; with either of the settings below
> (just one change, not both), the performance returned to normal:
> 
> 1) Disabled sync in the pgsql dataset in ZFS
> zfs set sync=disabled mydata/pgsql
> or 
> 2) In postgresql.conf, set synchronous_commit from on to off
> 
> I know the above settings could lead to data loss (e.g. if power goes off);
> any comments?
> 
> PS:
> 1) I have tried primarycache/secondarycache=metadata/none; it does not
> seem to help.
> 
> 2)
> I have tried the default setting on Linux too:
> RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.
> The web site is responsive and the benchmark result is more or less the same 
> as FreeBSD with the 'sync' turned off.
> 
> 3)
> For FreeBSD, same setting with Postgresql on UFS:
> The performance is between ZFS (default, sync enabled) and ZFS (sync 
> disabled).
> 
> Thanks,
> Patrick
> 


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL over internet

2013-01-26 Thread k...@rice.edu
On Sun, Jan 27, 2013 at 03:15:45AM +0300, belal hamed wrote:
> 
> I connect to my server through ADSL connection 4Mbps
> 

Here is your "problem". You need to understand the performance
characteristics of your communication channel. ADSL is a VERY
asymmetric communications channel. Down is usually much faster
than up.

> I try this query
> 
> select "MatID", "MatName", "MatCode"
> from "Materials"
> where "MatCode" ~* '^1101'
> order by "MatCode"
> limit 2
> 
> Using wireshark I monitored the TCP packets and found the total data
> transmitted/received was 400B. It took about 2.5s to fetch the results. Why??
> 
> After trying every solution mentioned in previous messages (DNS, tcp-ip,
> postgresql.conf, ...) I have not found any improvement.
> 
> I tried this one:
> 
> using Zebedee(http://www.winton.org.uk/zebedee/)
> I built an IP tunnel between me and my data server (I used compression
> level 9)
> 
> surprisingly, the same query now took about 600 ms, "very impressive"
> 
> same thing with this query
> select "MatID", "MatName", "MatCode", "MatParent" from "Materials"
> from 48s down to  17s
> 
> All these tests were done on the same connection with the same devices,
> so the same DNS, tcp-ip, etc.
> 
> Now I am sure there is something wrong with libpq.

When you wrap the communication channel in an IP tunnel, you are
collapsing much of the syn-ack of the libpq protocol. You can see
the same effect trying to run any sort of X windows application.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL over internet

2013-01-27 Thread k...@rice.edu
On Sun, Jan 27, 2013 at 03:09:55PM +0300, belal hamed wrote:
> >Here is your "problem". You need to understand the performance
> >characteristics of your communication channel. ADSL is a VERY
> >asymmetric communications channel. Down is usually much faster
> >than up.
> 
> How could it be an ADSL problem when it's the same in the two tests?
> Besides, the data transferred when using the tunnel is much bigger (about 10KB)
> than with the direct connection (400B),
> so it should be slower when using the tunnel, but the result shows it was faster
> 
> 

Due to the asymmetric communication, a bigger data output in a single
packet (the result of using compression on the tunnel) will get sent
without waiting. A smaller packet will delay a bit waiting for some
additional data, which in your case does not come. You may want to 
check out this document describing some of what I believe is causing
your observed behavior:

http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html#BACKGROUND

> >When you wrap the communication channel in an IP tunnel, you are
> >collapsing much of the syn-ack of the libpq protocol. You can see
> >the same effect trying to run any sort of X windows application.
> 
> If that is so, why is there no such option in PostgreSQL? Is it necessary to
> use an IP tunnel to do that and get fast fetches?
> 
> 
> >Try creating a simple SSH tunnel
> my server is windows 7
> 
> >It should be possible to distinguish between:
> >  - slowness caused by the database query itself
> >  - slowness caused by the network fundamentally.
> >  - slowness caused by the postgresql/libpq.
> 
> I ran the same query on the same network connection, so I can eliminate
> slowness caused by the database query and by the network fundamentally;
> nothing is left but postgresql/libpq
> 
> Has anyone considered that there may be a bug in libpq when connecting to a
> remote server over the internet?
> The only difference when I used the tunnel is that I connect to localhost
> instead of the server IP or domain name (I tried both)

You would find that if you log in to your DB server and use libpq
over a localhost connection, the performance is good, which
points to your network as the problem.
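
One rough way to separate server execution time from network/protocol
overhead, using the query from your earlier message (\timing is a psql
meta-command):

\timing on
EXPLAIN ANALYZE
  select "MatID", "MatName", "MatCode"
  from "Materials"
  where "MatCode" ~* '^1101'
  order by "MatCode"
  limit 2;
-- the runtime reported inside the plan is server-only; the extra time shown
-- by \timing (or by your client) is spent in the network and the driver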

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Question about postmaster's CPU usage

2013-03-28 Thread k...@rice.edu
On Thu, Mar 28, 2013 at 02:03:42PM -0700, Kevin Grittner wrote:
> kelphet xiong  wrote:
> 
> > When I use postgres and issue a simple sequential scan for a
> > table inventory using query "select * from inventory;", I can see
> > from "top" that postmaster is using 100% CPU, which limits the
> > query execution time. My question is that, why CPU is the
> > bottleneck here and what is postmaster doing? Is there any way to
> > improve the performance? Thanks!
> 
> > explain analyze select * from inventory;
> > 
> > Seq Scan on inventory  (cost=0.00..180937.00 rows=11745000 width=16) 
> > (actual time=0.005..1030.403 rows=11745000 loops=1)
> >  Total runtime: 1750.889 ms
> 
> So it is reading and returning 11.7 million rows in about 1 second,
> or about 88 nanoseconds (billionths of a second) per row.  You
> can't be waiting for a hard drive for many of those reads, or it
> would take a lot longer, so the bottleneck is the CPU pushing the
> data around in RAM.  I'm not sure why 100% CPU usage would surprise
> you.  Are you wondering why the CPU works on the query straight
> through until it is done, rather than taking a break periodically
> and letting the unfinished work sit there?
> 
> --
> Kevin Grittner
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> 

Alternatively, purchase a faster CPU, since CPU is the bottleneck in
this case, or partition the work into parallel queries that can
each use a processor.
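
A sketch of the partitioning idea, assuming the table has a sequential
integer key (the column name id and the split points below are purely
illustrative); each statement runs in its own connection so each backend
gets its own core:

-- connection 1
SELECT * FROM inventory WHERE id <  4000000;
-- connection 2
SELECT * FROM inventory WHERE id >= 4000000 AND id < 8000000;
-- connection 3
SELECT * FROM inventory WHERE id >= 8000000;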

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] slow bitmap heap scans on pg 9.2

2013-04-10 Thread k...@rice.edu
On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:
> I'm encountering an issue where PG 9.2.4 (we also see this with
> 9.2.3) is picking a plan involving a bitmap heap scan that turns out
> to be much slower than a nested-loop plan using indexes.
> 
> The planner picks the hashjoin plan by default (see attached files)
> 
> Bitmap Heap Scan on public.table_b_2 b  (cost=172635.99..9800225.75 rows=8435754 width=10)
>   (actual time=9132.194..1785196.352 rows=9749680 loops=1)
>   Recheck Cond: ((b.organization_id = 3) AND (b.year = 2013) AND (b.month = 3))
>   Rows Removed by Index Recheck: 313195667
>   Filter: (b.product_id = 2)
> 
> This is the part that seems to be causing the problem (or at least taking
> most of the time, other than the final aggregation).
> 
> If I set enable_hashjoin=false and enable_mergejoin=false I get the
> nestedloop join plan.
> 
> table_b is 137 GB plus indexes each on is around 43 GB
> table_a is 20 GB
> 
> random_page_cost = 2.0
> effective_cache_size = 3500MB
> cpu_tuple_cost = 0.01
> cpu_index_tuple_cost = 0.005
> cpu_operator_cost = 0.0025
> work_mem = 64MB
> shared_buffers = 300MB  (for this output, I've also had it at 2GB)
> 
> If I bump cpu_tuple_cost to the 10-20 range it will pick the nested
> loop join for some date ranges but not all. A cpu_tuple_cost of 20
> doesn't sound like a sane value.
> 
> This database used to run 8.3 where it picked the nested-loop join.
> We used pg_upgrade to migrate to 9.2
> 
> Any ideas why the bitmap heap scan is much slower than the planner expects?
> 
> Steve

Hi Steve,

The one thing that stands out to me is that you are working with 200GB of
data on a machine with 4-8GB of ram and you have the random_page_cost set
to 2.0. That is almost completely uncached and I would expect a value of
10 or more to be closer to reality.
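
That is easy to test per session before touching postgresql.conf (the
value here is only illustrative):

SET random_page_cost = 10;
-- re-run EXPLAIN (ANALYZE, BUFFERS) on the query and see whether the
-- nested-loop plan is chosen and how it performs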

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] slow bitmap heap scans on pg 9.2

2013-04-10 Thread k...@rice.edu
On Wed, Apr 10, 2013 at 11:56:32AM -0400, Steve Singer wrote:
> On 13-04-10 09:56 AM, k...@rice.edu wrote:
> >On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:
> 
> >
> >Hi Steve,
> >
> >The one thing that stands out to me is that you are working with 200GB of
> >data on a machine with 4-8GB of ram and you have the random_page_cost set
> >to 2.0. That is almost completely uncached and I would expect a value of
> >10 or more to be closer to reality.
> 
> Setting random_page_cost to 15 makes the planner choose the
> nested-loop plan (at least the date range I tried).
> 
> I thought that the point of effective cache size was to tell the
> planner how likely it is for a random page to be in cache.  With
> 200GB of data for this query and an effective cache size of 3.5 GB I
> would have expected that to be accounted for.
> 
For random_page_cost to be that low, the database would need to be
mostly cached. 3.5GB is roughly 60X too small to do that unless your
query exhibits a large amount of locality of reference. Values for
random_page_cost between 10 and 20 are very reasonable for disk-bound
I/O scenarios.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] High CPU usage but low I/O wait

2013-04-18 Thread k...@rice.edu
On Thu, Apr 11, 2013 at 04:30:54PM -0700, bing1221 wrote:
> Our server is running postgresql 8.4.15. During the day the cpu usage is
> always around 80%, but it's not IO bound. The swap space looks OK as well.
> We also set up pgbadger and enabled all logs to monitor for slow queries,
> but they all finish quickly. Usually there are 60 incoming connections, and
> we have pgbouncer to control the pool. Any suggestions on what I can look
> into? Thank you.

Why do you think you have a problem? It sounds like normal system usage.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread k...@rice.edu
On Wed, May 01, 2013 at 02:05:06PM +, Anne Rosset wrote:
> Hi all,
> We are running a stress test that executes one select query with multiple 
> threads.
> The query executes very fast (10ms). It returns 100 rows.  I see 
> deterioration in the performance when we have multiple threads executing the 
> query. With 100 threads, the query takes between 3s and 8s.
> 
> I suppose there is a way to tune our database. What are the parameters I 
> should look into? (shared_buffer?, wal_buffer?)
> 
> Thanks for your help,
> Anne

Try a connection pooler like pgbouncer to keep the number of simultaneous 
queries
bounded to a reasonable number. You will actually get better performance.
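
A minimal pgbouncer.ini sketch (the database name, paths and sizes are
illustrative, not taken from your setup):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 24   ; roughly 2x the core count; tune by testing
max_client_conn = 400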

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Deterioration in performance when query executed in multi threads

2013-05-01 Thread k...@rice.edu
On Wed, May 01, 2013 at 04:07:55PM +, Anne Rosset wrote:
> Hi Ken,
> Thanks for your answer. My test is actually running with jboss 7/jdbc and the 
> connection pool is defined  with min-pool-size =10 and max-pool-size=400.
> 
> Why would you think it is an issue with the connection pool?
> 
> Thanks,
> Anne
> 

Hi Anne,

You want to be able to run as many jobs productively at once as your hardware is
capable of supporting. Usually something starting at 2x the number of CPUs is best.
If you make several runs increasing the size of the pool each time, you will
see a maximum throughput somewhere near there and then the performance will
decrease as you add more and more connections. You can then use that sweet spot.
Your test harness should make that pretty easy to find.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1

2013-05-15 Thread k...@rice.edu
On Tue, May 14, 2013 at 11:52:29PM -0700, Christoph Berg wrote:
> Re: Mark Felder 2013-05-13 
> > What version of DBIx-SearchBuilder do you have on that server? The
> > RT guys usually recommend you have the latest possible so RT is
> > performing the most sane/optimized queries possible for your
> > database. I honestly don't know if it will make a difference for
> > you, but it's worth a shot.
> 
> That's a "never touch a running system" kind of machine there, we are
> happy that they let us finally upgrade at least the PostgreSQL part of
> the setup, so changing any perl libs there is out of the question.
> 
> The version is libdbix-searchbuilder-perl 1.26-1 from Debian Sarge/3.1
> *cough*.
> 
> Christoph
> -- 

Hi Christoph,

I understand the sentiment but you really should consider upgrading. I
think the current release is 1.63 and since it is the DB interface it
could have a positive effect on your problem not to mention that they
do fix bugs. :)

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] performance database for backup/restore

2013-05-21 Thread k...@rice.edu
On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:
> 
> On May 21, 2013, at 5:18 PM, Jeison Bedoya  wrote:
> 
> > Hi people, I have a 400GB database running on a server with 128GB RAM,
> > 32 cores, and storage over SAN with fibre channel. The problem is that when
> > I do a backup with pg_dumpall it takes about 5 hours, and a subsequent
> > restore takes about 17 hours. Is that a normal time for that process on that
> > machine, or can I do something to optimize the backup/restore process?
> > 
> 
> I'd recommend you to dump with 
> 
> pg_dump --format=c
> 
> It will compress the output and later you can restore it in parallel with
> 
> pg_restore -j 32 (for example)
> 
> Right now you can not dump in parallel; wait for the 9.3 release. Or maybe
> someone will backport it to the 9.2 pg_dump.
> 
> Also during restore you can speed up a little more by disabling fsync and 
> synchronous_commit. 
> 

If you have the space and I/O capacity, avoiding the compress option will be
much faster. The current compression scheme using zlib type compression is
very CPU intensive and limits your dump rate. On a system that we have, a
dump without compression takes 20m and with compression 2h20m. The parallel
restore makes a big difference as well.
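
For example (the database name and job count are illustrative), a
custom-format dump with compression disabled, followed by a parallel
restore, looks like:

pg_dump --format=c --compress=0 -f mydb.dump mydb
pg_restore -j 32 -d mydb mydb.dump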

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] seqscan for 100 out of 3M rows, index present

2013-06-26 Thread k...@rice.edu
On Wed, Jun 26, 2013 at 10:36:10PM +0200, Willy-Bas Loos wrote:
> On Wed, Jun 26, 2013 at 10:31 PM, Jeff Janes  wrote:
> 
> >
> > Why is it retrieving 3.1 million, when it only needs 17?
> >
> >
> that's because of the sequential scan, it reads all the data.
> 
> cheers,
> 
> willy-bas

Well, the two plans' timings were pretty close together. Maybe your
cost model is off. Try adjusting the various cost parameters to
favor random I/O more.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Intermittent hangs with 9.2

2013-09-10 Thread k...@rice.edu
On Tue, Sep 10, 2013 at 11:04:21AM -0400, David Whittaker wrote:
> Hi All,
> 
> I've been seeing a strange issue with our Postgres install for about a year
> now, and I was hoping someone might be able to help point me at the cause.
> At what seem like fairly random intervals Postgres will become unresponsive
> to the 3 application nodes it services. These periods tend to last for 10 -
> 15 minutes before everything rights itself and the system goes back to
> normal.
> 
> During these periods the server will report a spike in the outbound
> bandwidth (from about 1mbs to about 5mbs most recently), a huge spike in
> context switches / interrupts (normal peaks are around 2k/8k respectively,
> and during these periods they've gone to 15k/22k), and a load average of
> 100+. CPU usage stays relatively low, but it's all system time reported,
> user time goes to zero. It doesn't seem to be disk related since we're
> running with a shared_buffers setting of 24G, which will fit just about our
> entire database into memory, and the IO transactions reported by the
> server, as well as the disk reads reported by Postgres stay consistently
> low.
> 
> We've recently started tracking how long statements take to execute, and
> we're seeing some really odd numbers. A simple delete by primary key, for
> example, from a table that contains about 280,000 rows, reportedly took
> 18h59m46.900s. An update by primary key in that same table was reported as
> 7d 17h 58m 30.415s. That table is frequently accessed, but obviously those
> numbers don't seem reasonable at all.
> 
> Some other changes we've made to postgresql.conf:
> 
> synchronous_commit = off
> 
> maintenance_work_mem = 1GB
> wal_level = hot_standby
> wal_buffers = 16MB
> 
> max_wal_senders = 10
> 
> wal_keep_segments = 5000
> 
> checkpoint_segments = 128
> 
> checkpoint_timeout = 30min
> 
> checkpoint_completion_target = 0.9
> 
> max_connections = 500
> 
> The server is a Dell Poweredge R900 with 4 Xeon E7430 processors, 48GB of
> RAM, running Cent OS 6.3.
> 
> So far we've tried disabling Transparent Huge Pages after I found a number
> of resources online that indicated similar interrupt/context switch issues,
> but it hasn't resolved the problem. I managed to catch it happening once and
> run a perf which showed:
> 
> 
> +  41.40%   48154  postmaster  0x347ba9 f 0x347ba9
> +   9.55%   10956  postmaster  0x2dc820 f
> set_config_option
> +   8.64%9946  postmaster  0x5a3d4  f writeListPage
> +   5.75%6609  postmaster  0x5a2b0  f
> ginHeapTupleFastCollect
> +   2.68%3084  postmaster  0x192483 f
> build_implied_join_equality
> +   2.61%2990  postmaster  0x187a55 f
> build_paths_for_OR
> +   1.86%2131  postmaster  0x794aa  f
> get_collation_oid
> +   1.56%1822  postmaster  0x5a67e  f
> ginHeapTupleFastInsert
> +   1.53%1766  postmaster  0x1929bc f
> distribute_qual_to_rels
> +   1.33%1558  postmaster  0x249671 f cmp_numerics
> 
> I'm not sure what 0x347ba9 represents, or why it's an address rather than a
> method name.
> 
> That's about the sum of it. Any help would be greatly appreciated and if
> you want any more information about our setup, please feel free to ask.
> 
> Thanks,
> Dave

Hi Dave,

A load average of 100+ means that you have that many processes waiting to
run, yet you only have 16 CPUs. You really need to consider using a connection
pooler like pgbouncer to keep your connection count in the 16-32 range.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 57 minute SELECT

2013-10-03 Thread k...@rice.edu
On Thu, Oct 03, 2013 at 04:19:29AM +, Samuel Stearns wrote:
> Thanks, Claudio.
> 
> I'll have a look at the clustering.
> 
> We have also noticed that the same query with a datetime range of 3 hours 
> (rather than 4 months) runs in just 30 seconds:
> 
> AND datetime <= '2013-10-03 10:03:49'
> AND datetime >= '2013-10-03 07:03:49'
> 

Hi Samuel,

That is even worse performance, relatively: 30s for a 3-hour range scales to
28800s for a 4-month (2880-hour) range, or 8 hours. I would definitely
consider clustering.
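
A sketch of the clustering step (the table and index names are
illustrative); note that CLUSTER takes an exclusive lock while it
rewrites the table in index order:

CLUSTER my_log_table USING my_log_table_datetime_idx;
ANALYZE my_log_table;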

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] postgresql recommendation memory

2013-11-11 Thread k...@rice.edu
On Mon, Nov 11, 2013 at 09:14:43AM -0700, Scott Marlowe wrote:
> On Mon, Nov 11, 2013 at 1:09 AM, Евгений Селявка  
> wrote:
> > Hi Scott, I have calculated all of my jdbc pool sizes. The maximum is 300
> > connections from the components which use jdbc. I don't think using pgbouncer
> > is a good idea, because our application uses the Spring framework, which uses
> > jdbc and prepared statements. I tried to talk with our developers about
> > disabling prepared statements in this framework; they don't want to do this.
> > That's why I will try to upgrade the HW and buy a CPU with more cores, as you
> > say, based on the 3-4x core formula. But most of these connections are idle.
> > This is a web-based app, not a data warehouse, which is why all these
> > connections are lightweight.
> >
> > About my db freeze i set this kernel parameter:
> > echo 1048576 > /proc/sys/vm/min_free_kbytes
> > echo 80 > /proc/sys/vm/vfs_cache_pressure
> >
> > And my freeze intervals are still smaller. I will try to dig deeper.
> 
> well you can hopefully reduce connections from jdbc pooling then. The
> fact that the connections are idle is good.
> 
> The problem you run into is what happens when things go into
> "overload", i.e. when the db server starts to slow down, more of those
> idle connections become not idle. If all 300 are then waiting on the
> db server, it will slow to a crawl and eventually fall over.
> 
+1 I would definitely encourage the use of pgbouncer to map the 300 connections
to a saner number that your DB can actually handle. We had a similar problem
and very, very occasionally the server would "lock up". Once we put the
resource-managing pooler in place, performance has been the same best-case
and much, much better worst-case, with NO lockups.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Query in cache

2013-11-18 Thread k...@rice.edu
On Mon, Nov 18, 2013 at 02:38:09PM -0200, Rogerio Pereira wrote:
> Hi,
> 
> I need help regarding the subject "Query cache in Postgres".
> How is it possible to cache SQL statements in Postgres?
> I did some tests and I cannot get it to work, even with improved tuning
> parameters in postgresql.conf.
> 
> Regards,
> 

Hi Rogerio,

PostgreSQL does not have a query cache. I think you would need to
roll your own.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Query in cache

2013-11-19 Thread k...@rice.edu
On Mon, Nov 18, 2013 at 04:36:20PM -0800, salah jubeh wrote:
> Hello,
> 
> pgpool supports memcache.
> Regards
> 
Very cool. I had missed that in the release announcement.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PGSQL 9.3 - Materialized View - multithreading

2014-04-04 Thread k...@rice.edu
On Fri, Apr 04, 2014 at 10:26:22PM +0200, PARIS Nicolas wrote:
> This postgres documentation:
> http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
> says it is actually possible to manage connections in a C stored procedure.
> 
> I may be wrong...
> 
> 
> Le 04/04/2014 22:14, Thom Brown a écrit :
> > lear on how triggers come into this.  You can't have triggers
> > on materialized views, and they don't fire triggers on tables or views
> > that they are based o
> 

Hi,

I do not know if it can be used in this fashion, but could pl/proxy be
used by defining a cluster to be the same server and using a partitioned
remote call? Someone with pl/proxy experience may have more information.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Planning for Scalability

2014-10-03 Thread k...@rice.edu
On Fri, Oct 03, 2014 at 10:55:04AM +0200, Roberto Grandi wrote:
> Dear Pg people,
> 
> I would like to ask for your help with this scaling issue. We are planning to
> move from a 3 million events/day instance of postgres (8 CPU, 65 GB RAM) to
> 5 million items/day.
> What do you suggest in order to plan this switch? Add a separate server?
> Increase RAM? Use SSD?
> 
> Any real help will be really precious and appreciated.
> Roberto
> 

Hi Roberto,

This change is within a factor of 2 of your existing load. I would start with
analyzing the load on your existing system to determine where your bottlenecks
are. 5M/day is 57/sec evenly distributed or 174/sec in an 8 hour period. This
does not seem like a lot, but you have given us no details on your actual
workload.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Query doesn't use index on hstore column

2014-12-04 Thread k...@rice.edu
On Fri, Dec 05, 2014 at 09:42:20AM +1300, Michael Barker wrote:
> 1)  Created table with hstore column and btree index.
> 
> barkerm=# \d audit
>Table "public.audit"
> Column |Type |
> Modifiers
> ---+-+
>  id| integer | not null default
> nextval('audit_id_seq'::regclass)
>  principal_id  | integer |
>  created_at| timestamp without time zone |
>  root  | character varying(255)  |
>  template_code | character(3)|
>  attributes| hstore  |
>  args  | character varying(255)[]|
> Indexes:
> "audit_pkey" PRIMARY KEY, btree (id)
> "audit_attributes_idx" btree (attributes)
> 
> ...
> 5) Explain query using the attributes column in the where clause (uses Seq
> Scan).
> 
> barkerm=# explain analyse select * from audit where attributes->'accountId'
> = '1879355460';
>  QUERY PLAN
> 
> 
>  Seq Scan on audit  (cost=0.00..35409.00 rows=5000 width=133) (actual
> time=114.314..218.821 rows=1 loops=1)
>Filter: ((attributes -> 'accountId'::text) = '1879355460'::text)
>Rows Removed by Filter: 99
>  Planning time: 0.074 ms
>  Execution time: 218.843 ms
> (5 rows)
> 
Hi Michael,

I think your index definition needs to be on the particular attribute
extracted from attributes, not on attributes itself. The btree on the whole
hstore column builds fine, but it does not apply to the query you show above.
I think that the binary json type in 9.4 will also do what you want. I have
not worked with it myself, just looked at the docs.
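
For the query above, an expression index on the specific key should be
usable (a sketch; it only covers the 'accountId' key):

CREATE INDEX audit_attributes_account_id_idx
    ON audit ((attributes -> 'accountId'));
-- the predicate  attributes->'accountId' = '1879355460'  can then use this index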

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Configuration tips for very large database

2015-02-13 Thread k...@rice.edu
On Thu, Feb 12, 2015 at 11:25:54PM +0100, Nico Sabbi wrote:
> Hello,
> I've been away from  postgres for several years, so please forgive
> me if I forgot nearly everything:-)
> 
> I've just inherited a database collecting environmental data.
> There's a background process continually inserting records (not so
> often, to say the truth) and a web interface to query data.
> At the moment the record count of the db is 250M and growing all the
> time. The 3 main tables have just 3 columns.
> 
> Queries get executed very very slowly, say 20 minutes. The most
> evident problem I see is that io wait load is almost always 90+%
> while querying data, 30-40% when "idle" (so to say).
> Obviously disk access is to blame, but I'm a bit surprised because
> the cluster where this db is running is not at all old iron: it's a
> vmware VM with 16GB ram, 4cpu 2.2Ghz, 128GB disk (half of which
> used). The disk system underlying vmware is quite powerful, this
> postgres is the only system that runs slowly in this cluster.
> I can increase resources if necessary, but..
> 
> Even before analyzing queries (that I did) I'd like to know if
> someone has already succeeded in running postgres with 200-300M
> records with queries running much faster than this. I'd like to
> compare the current configuration with a super-optimized one to
> identify the parameters that need to be changed.
> Any link to a working configuration would be very appreciated.
> 
> Thanks for any help,
>   Nico
> 

Hi Nico,

No one has mentioned the elephant in the room, but a database can
be very I/O intensive and you may not be getting the performance
you need from your virtual disk running on your VMware disk subsystem.
What do IOmeter or other disk performance evaluation software report?

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL disk fragmentation causes performance problems on Windows

2015-04-29 Thread k...@rice.edu
On Wed, Apr 29, 2015 at 07:07:04AM -0700, Joshua D. Drake wrote:
> 
> On 04/29/2015 01:08 AM, Andres Freund wrote:
> 
> >>Which OS and filesystem is this done on? Because many halfway modern
> >>systems, like e.g ext4 and xfs, implement this in the background as
> >>'delayed allocation'.
> >
> >Oh, it's in the subject. Stupid me, sorry for that. I'd consider testing
> >how much better this behaves under a different operating system, as a
> >shorter term relief.
> 
> This is a known issue on the Windows platform. It is part of the
> limitations of that environment. Linux/Solaris/FreeBSD do not suffer
> from this issue in nearly the same manner.
> 
> jD
> 

You might consider a CLUSTER or VACUUM FULL to re-write the table with
less fragmentation.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread k...@rice.edu
On Fri, Jun 12, 2015 at 09:37:36PM +, Sheena, Prabhjot wrote:
> Here is some more information
> 
> pool_mode | transaction
> 
> We have transaction pooling and our application is set up in such a way
> that we have one query per transaction. We have set the default pool size to 100.
> 
> This is the output. As you can see, active connections are 100 and 224 are
> waiting. We are planning to move the default pool size to 250. Please let us
> know if you think otherwise.
> 
> pgbouncer=# show pools;
>  database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait
> -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------
>  pgbouncer | pgbouncer |         2 |          0 |         0 |       0 |       0 |         0 |        0 |       0
>  site      | feature   |       418 |          0 |        20 |      17 |       0 |         0 |        0 |       0
>  site      | service   |       621 |        224 |       100 |       0 |       0 |         0 |        0 |       0
>  site      | zabbix    |         0 |          0 |         0 |       0 |       0 |         0 |        0 |       0
> 
> Prabhjot Singh
> Database Administrator
> 
> CLASSMATES
> 1501 4th Ave., Suite 400
> Seattle, WA 98101
> 206.301.4937 o
> 206.301.5701 f
> 
> From: Sheena, Prabhjot
> Sent: Friday, June 12, 2015 10:57 AM
> To: 'pgsql-gene...@postgresql.org'; 'pgsql-performance@postgresql.org'
> Subject: pg bouncer issue what does sv_used column means
> 
> Guys, we see a spike in pgbouncer during the peak hours and that was slowing
> down the application. We did bump up the connection limit and it has helped,
> but now we again notice a small spike in connections. And one thing that I
> notice is different is a jump in the sv_used value when I run the command
> "show pools" during problem times.
> 
> 
> Can anyone please explain what the sv_used value means when I run "show pools;"?
> 
> 
> 
> Regards
> Prabhjot
> 

Hi Prabhjot,

The spike in pgbouncer during peak hours just indicates that you are busier
then. How many sv_active do you have in non-peak hours? What kind of system
is this on? I suspect that your hardware cannot actually handle 100
simultaneous processes at once, and if you increase that to 250 processes
there is a good likelihood that your system response will get even worse.
A pool size between 1x and 2x the number of CPUs is typical for peak
throughput. Are you using a 50-core system? What do the I/O stats look like?
You may be I/O limited.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 05:09:10PM +, Sheena, Prabhjot wrote:
> Guys
>  I have an issue going on with PGBOUNCER which is slowing down the 
> site
> 
> PGBOUNCER VERSION:  pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 
> cpu) 98GB RAM
> DATABASE VERION: postgresql 9.3
> 
> When the total client connections to pgbouncer are close to 1000, the site
> application works fine, but when the total client connections cross 1150 the
> site application starts showing slowness.
> 
> Here is an example of output
> 
> postgres@symds-pg:~ $ netstat -atnp | grep 5432  | wc
> (Not all processes could be identified, non-owned process info
> will not be shown, you would have to be root to see it all.)
>     960    6720  104640
> 
> 
> As you can see, total connections are around 960 right now and my site
> application is working fine. When connections cross 1150 the application gets
> slow, even though I see a lot of available connections because my
> default_pool_size is set high, to 250. Database performance on the other end
> is great, with no slow-running queries or anything. So the only place I can
> think the issue is, is at the PGBOUNCER end.
> 

Hi Prabhjot,

This is classic behavior when you have a 1024 file limit. When you are below
that number, it works fine. Above that number, you must wait for a connection
to close and exit before you can connect, which causes a delay. See what
ulimit has to say.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
> Here is the output of OS limits
> 
> postgres@symds-pg:~ $ ulimit -a
> 
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 790527
> max locked memory   (kbytes, -l) 32
> max memory size (kbytes, -m) unlimited
> open files  (-n) 4096
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 10240
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 16384
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
> 
> 
> Thanks
> Prabhjot
> 

I would bump your open files as was suggested in your pgbouncer start
script.
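
If the limit does turn out to be the problem, something like this in
/etc/security/limits.d/ raises it for the pgbouncer user (the file name,
user name and value are illustrative), in addition to any ulimit call in
the init script:

# /etc/security/limits.d/90-pgbouncer.conf
pgbouncer   soft   nofile   50000
pgbouncer   hard   nofile   50000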

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 07:19:13PM +, Sheena, Prabhjot wrote:
> Hi Ken/ Will
> 
>   I have checked the ulimit value and we are nowhere near hitting the max 4096
> that we have currently set. Is there any other explanation for why we should be
> thinking of bumping it, e.g. ulimit -n 5 ("Add ulimit -n 5 to the
> start of whatever you use to start pgbouncer (init script, etc..)"), even
> though we are not reaching the 4096 max value?
> 
> Regards
> Prabhjot Singh
> 

Hi,

Try attaching to the pgbouncer with strace and see if you are getting any 
particular
errors. Do you have a /etc/security/limits.d directory? And if so, what is in 
it?
We found a nice default ulimit of 1024 for all non-root users. :(

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Occasional Really Slow Running Updates/Inserts

2015-09-17 Thread k...@rice.edu
On Thu, Sep 17, 2015 at 03:14:43PM -0400, Dave Stibrany wrote:
> Hi all. I am occasionally seeing really slow update/inserts into a fairly
> large table. By really slow I mean around 10-40 seconds,
> while the majority of queries take milliseconds. I cannot reproduce this
> problem myself, but it is occurring multiple times a day
> (maybe 30 times).
> 
> System Info
> ---
> Model: Dell PowerEdge R420
> CPU: 12 core Intel(R) Xeon(R) @ 2.20GHz
> Memory: 16GB
> Disk: PERC H310 Mini Raid Controller using Raid 1
> OS: Ubuntu 14.04.3 LTS
> 
> DB Settings
> 
> ... a lot of information deleted...

Hi Dave,

This search index is almost certainly the cause of your slowdowns:

> Indexes:
> "document_search_ix" gin (contents_search)

We observed similar odd slowdowns with a GIN text search index. We
had to disable the 'fastupdate' option for the index to stop the large
pauses caused by the pending-entry clean-up processing. There have been some
recent patches to address the penalty problem caused by the fastupdate
processing.

> What I haven't tried
> 
> - more aggressive auto-vacuum
> - trying gist table for full text search index instead of gin
> - removing full text search altogether (are users don't use it very much)

Nope, keep using GIN. GIST is too slow for this usage. Just disable the
'fastupdate' on the index:

ALTER INDEX document_search_ix SET (fastupdate = off);

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Performance problem with gin index

2015-09-29 Thread k...@rice.edu
On Tue, Sep 29, 2015 at 05:45:41PM +0200, Bertrand Paquet wrote:
> Hi,
> 
> We have a big slowdown on our production platform (PG 9.4.4).
> After analyzing the WALs with pg_xlogdump, we see a lot of writing to GIN indexes.
> We suspect the slowdown is related to the writing of pending updates to the
> index.
> 
> So, is there any method to see
> - what the current setting of gin_pending_list_limit is for a given index?
> - the current size of the pending list for a given index?
> 
> Regards,
> Bertrand

Hi Bertrand,

You might try disabling fastupdate for the index. 9.5 has some work in
this area, but prior to that disabling it is the best fix. It certainly
helped our system with the same issue.
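
For the pending-list size, the pgstattuple extension provides
pgstatginindex() (a sketch; the index and table names are illustrative and
it assumes the extension is available). Note that on 9.4 the pending list
is bounded by work_mem; the gin_pending_list_limit setting only appears
in 9.5:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatginindex('my_gin_index');  -- reports pending_pages and pending_tuples
ALTER INDEX my_gin_index SET (fastupdate = off);
VACUUM my_table;  -- a vacuum of the table flushes the entries still in the pending list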

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] One long transaction or multiple short transactions?

2015-10-08 Thread k...@rice.edu
On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> >> Sounds like a locking problem
> 
> This is what I am trying to get at. The reason that I am not addressing
> hardware or OS configuration concerns is that this is not my environment,
> but my client's. The client is running my import software and has a choice
> of how long the transactions can be. They are going for long transactions,
> and I am trying to determine whether there is a penalty for single long
> transactions over a configuration which would allow for more successive
> short transactions. (keep in mind all reads and writes are single-row). 
> 
> There are other people working on hardware and OS configuration, and that's
> why I don't want to get into a general optimization discussion, because the
> client is concerned with just this question.
> 

Hi Carlo,

Since the reads/writes are basically independent, which is what I take your
"single-row" comment to mean, by batching them you are balancing two
opposing factors. First, larger batches allow you to consolidate I/O and
other resource requests to make them more efficient per row. Second, larger
batches require more locking as the number of rows updated grows. It may
very well be the case that by halving your batch size the system can
process the rows more quickly than in a single batch that is twice the size.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] One long transaction or multiple short transactions?

2015-10-08 Thread k...@rice.edu
On Thu, Oct 08, 2015 at 05:43:11PM -0400, Carlo wrote:
> -Original Message-
> From: k...@rice.edu [mailto:k...@rice.edu] 
> Sent: October 8, 2015 1:00 PM
> To: Carlo
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One long transaction or multiple short transactions?
> 
> On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> > >> Sounds like a locking problem
> > 
> > This is what I am trying to get at. The reason that I am not 
> > addressing hardware or OS configuration concerns is that this is not 
> > my environment, but my client's. The client is running my import 
> > software and has a choice of how long the transactions can be. They 
> > are going for long transactions, and I am trying to determine whether 
> > there is a penalty for single long transactions over a configuration 
> > which would allow for more successive short transactions. (keep in mind
> all reads and writes are single-row).
> > 
> > There are other people working on hardware and OS configuration, and 
> > that's why I can't want to get into a general optimization discussion 
> > because the client is concerned with just this question.
> > 
> 
> On October 8, 2015 1:00 PM Ken wrote:
> > Hi Carlo,
> 
> > Since the read/writes are basically independent, which is what I take your
> "single-row" comment to mean, by batching them you are balancing two 
> > opposing factors. First, larger batches allow you to consolodate I/O and
> other resource requests to make them more efficient per row. Second, larger 
> > batches  require more locking as the number of rows updated grows. It may
> very well be the case that by halving your batch size that the system can 
> > process them more quickly than a single batch that is twice the size.
> 
> Just to clarify, one transaction of this type may take longer to commit than
> two successive transactions of half the size?
> 

Yes, but where the optimum batch size lies should be determined by testing.
Just vary the batch size and note where the performance is at a maximum.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Adding a ROLLUP switches to GroupAggregate unexpectedly

2016-03-31 Thread k...@rice.edu
On Thu, Mar 31, 2016 at 02:56:48PM -0400, Tom Lane wrote:
> Chris Cogdon  writes:
> > Hi folks! I’ve a query where adding a rollup to the group by switches to
> > GroupAggregate unexpectedly, where the standard GROUP BY uses
> > HashAggregate.
> 
> The current implementation of rollup doesn't support using hashed
> aggregation.  I don't know if that's for lack of round tuits or because
> it's actually hard, but it's not the planner's fault.
> 
>   regards, tom lane
> 

Hi,

Cribbed from the mailing list:

http://www.postgresql.org/message-id/e1ytrd5-0005q7...@gemulon.postgresql.org

The current implementation of grouping sets only supports using sorting
for input. Individual sets that share a sort order are computed in one
pass. If there are sets that don't share a sort order, additional sort &
aggregation steps are performed. These additional passes are sourced by
the previous sort step; thus avoiding repeated scans of the source data.

The code is structured in a way that adding support for purely using
hash aggregation or a mix of hashing and sorting is possible. Sorting
was chosen to be supported first, as it is the most generic method of
implementation.

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Indexes for hashes

2016-06-15 Thread k...@rice.edu
On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:
> Hi,
> 
> I have an application which stores a large amounts of hex-encoded hash
> strings (nearly 100 GB of them), which means:
> 
>- The number of distinct characters (alphabet) is limited to 16
>- Each string is of the same length, 64 characters
>- The strings are essentially random
> 
> Creating a B-Tree index on this results in the index size being larger than
> the table itself, and there are disk space constraints.
> 
> I've found the SP-GIST radix tree index, and thought it could be a good
> match for the data because of the above constraints. An attempt to create
> it (as in CREATE INDEX ON t USING spgist(field_name)) apparently takes more
> than 12 hours (while a similar B-tree index takes a few hours at most), so
> I've interrupted it because "it probably is not going to finish in a
> reasonable time". Some slides I found on the spgist index allude that both
> build time and size are not really suitable for this purpose.
> 
> My question is: what would be the most size-efficient index for this
> situation?

Hi Ivan,

If the strings are really random, then maybe a function index on the first
4, 8, or 16 characters could be used to narrow the search space and not need
to index all 64. If they are not "good" random numbers, you could use a hash
index on the strings. It will be much smaller since it currently uses a 32-bit
hash. It has a number of caveats and is not currently crash-safe, but it seems
like it might work in your environment. You can also use a functional index on
a hash-function applied to your values with a btree to give you crash safety.
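
A sketch of both options (the table and column names are assumed from your
description, not known):

-- 1) prefix expression index; queries must then repeat the expression:
CREATE INDEX t_prefix16_idx ON t (left(field_name, 16));
-- SELECT ... FROM t WHERE left(field_name, 16) = left('<hash>', 16) AND field_name = '<hash>';

-- 2) hash index (smaller, but with the caveats noted above):
CREATE INDEX t_hash_idx ON t USING hash (field_name);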

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Indexes for hashes

2016-06-15 Thread k...@rice.edu
On Wed, Jun 15, 2016 at 03:09:04PM +0200, Ivan Voras wrote:
> On 15 June 2016 at 15:03, k...@rice.edu  wrote:
> 
> > On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:
> > > Hi,
> > >
> > > I have an application which stores a large amounts of hex-encoded hash
> > > strings (nearly 100 GB of them), which means:
> > >
> > >- The number of distinct characters (alphabet) is limited to 16
> > >- Each string is of the same length, 64 characters
> > >- The strings are essentially random
> > >
> > > Creating a B-Tree index on this results in the index size being larger
> > than
> > > the table itself, and there are disk space constraints.
> > >
> > > I've found the SP-GIST radix tree index, and thought it could be a good
> > > match for the data because of the above constraints. An attempt to create
> > > it (as in CREATE INDEX ON t USING spgist(field_name)) apparently takes
> > more
> > > than 12 hours (while a similar B-tree index takes a few hours at most),
> > so
> > > I've interrupted it because "it probably is not going to finish in a
> > > reasonable time". Some slides I found on the spgist index allude that
> > both
> > > build time and size are not really suitable for this purpose.
> > >
> > > My question is: what would be the most size-efficient index for this
> > > situation?
> >
> > Hi Ivan,
> >
> > If the strings are really random, then maybe a function index on the first
> > 4, 8, or 16 characters could be used to narrow the search space and not
> > need
> > to index all 64. If they are not "good" random numbers, you could use a
> > hash
> > index on the strings. It will be much smaller since it currently uses a
> > 32-bit
> > hash. It has a number of caveats and is not currently crash-safe, but it
> > seems
> > like it might work in your environment. You can also use a functional
> > index on
> > a hash-function applied to your values with a btree to give you crash
> > safety.
> >
> >
> Hi,
> 
> I figured the hash index might be helpful and I've tried it in the
> meantime: on one of the smaller tables (which is 51 GB in size), a btree
> index is 32 GB, while the hash index is 22 GB (so btree is around 45%
> larger).
> 
> I don't suppose there's an effort in progress to make hash indexes use WAL?
> :D

Hi Ivan,

Several people have looked at it but it has not made it to the top of anyone's
to-do list. So if you need WAL and crash-safety, a functional index on a hash
of your values is currently your best bet.
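
A sketch of that approach using the built-in 32-bit hashtext() function
(with the caveat that hashtext()'s output is not guaranteed to stay
identical across major versions, so such an index may need rebuilding
after an upgrade):

CREATE INDEX t_hashtext_idx ON t (hashtext(field_name));
-- query on both the hash (so the index is used) and the raw value (to discard collisions):
SELECT * FROM t
 WHERE hashtext(field_name) = hashtext('<64-char hash>')
   AND field_name = '<64-char hash>';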

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Indexes for hashes

2016-06-15 Thread k...@rice.edu
On Wed, Jun 15, 2016 at 04:20:46PM +0200, Ivan Voras wrote:
> Hi,
> 
> Just for testing... is there a fast (i.e. written in C) crc32 or a similar
> small hash function for PostgreSQL?
> 

Hi Ivan,

Here is an extension that provides a number of different hash
functions, including a version of the one used internally:

https://github.com/markokr/pghashlib

Regards,
Ken


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance