ru...@gmail.com; a...@paperlesspost.com;
> pgsql-performance@postgresql.org
>
> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes wrote:
> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe
> > wrote:
> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes
> >> wrote:
I've benchmarked shared_buffers with high and low settings; on a server
dedicated to Postgres with 48GB of RAM my settings are:
shared_buffers = 37GB
effective_cache_size = 38GB
Having a small value and depending on OS caching is unpredictable; if the
server is dedicated to Postgres you want to make sure Postgres manages that
memory itself.
> Date: Thu, 17 Jan 2013 15:38:14 +0100
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: alipou...@gmail.com
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
>
> 2012/12/27 Charles
> From: matioli.math...@gmail.com
> Date: Thu, 10 Jan 2013 16:45:43 -0200
> Subject: Partition insert trigger using C language
> To: pgsql-performance@postgresql.org
> CC: charle...@outlook.com
>
> Hi All,
>
> Inspired by Charles' thread and the work of Emmanuel
> From: devn...@mail.ua
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Re[2]: [PERFORM] SMP on a heavy loaded database
> Date: Fri, 4 Jan 2013 18:41:25 +0400
>
>
>
>
> Friday, January 4, 2013, 0:42 -07:00 from Scott Marlowe:
> On Thu, Jan
Pavel,
I've been trying to port the work of Emmanuel
http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php
His implementation is pretty straightforward: a simple trigger doing constraint
checks, with caching for bulk inserts.
So far this is what I've got: http://www.widesol.com/~charles/p
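To give an idea of the overall shape, here is a minimal sketch (this is not
Emmanuel's code and not mine; the master table quotes(id bigint, val float8),
its children quotes_0 .. quotes_14 and the id-modulo-15 routing rule are all
made up for illustration):

/* partition_trigger.c -- sketch of a C partition-routing trigger. */
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"
#include "commands/trigger.h"
#include "catalog/pg_type.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(quotes_insert_trigger);

Datum
quotes_insert_trigger(PG_FUNCTION_ARGS)
{
    /* one prepared plan per child table, cached across calls */
    static SPIPlanPtr plans[15] = {NULL};

    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    TupleDesc    tupdesc;
    HeapTuple    tuple;
    bool         isnull;
    Datum        values[2];
    char         nulls[2] = {' ', ' '};
    int          part;
    int          ret;

    if (!CALLED_AS_TRIGGER(fcinfo) ||
        !TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||
        !TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        elog(ERROR, "quotes_insert_trigger: must be a BEFORE INSERT trigger");

    tuple   = trigdata->tg_trigtuple;
    tupdesc = trigdata->tg_relation->rd_att;

    if ((ret = SPI_connect()) < 0)
        elog(ERROR, "SPI_connect failed: %d", ret);

    /* made-up routing rule: id modulo 15 picks the child table
     * (null/error handling omitted for brevity; assumes non-negative ids) */
    values[0] = SPI_getbinval(tuple, tupdesc, SPI_fnumber(tupdesc, "id"), &isnull);
    part = (int) (DatumGetInt64(values[0]) % 15);
    values[1] = SPI_getbinval(tuple, tupdesc, SPI_fnumber(tupdesc, "val"), &isnull);

    if (plans[part] == NULL)
    {
        /* first hit for this partition: prepare once, keep the plan */
        char  query[96];
        Oid   argtypes[2] = {INT8OID, FLOAT8OID};

        snprintf(query, sizeof(query),
                 "INSERT INTO quotes_%d (id, val) VALUES ($1, $2)", part);
        plans[part] = SPI_prepare(query, 2, argtypes);
        if (plans[part] == NULL)
            elog(ERROR, "SPI_prepare failed for partition %d", part);
        SPI_keepplan(plans[part]);   /* 9.2+; older releases use SPI_saveplan() */
    }

    if (SPI_execute_plan(plans[part], values, nulls, false, 0) != SPI_OK_INSERT)
        elog(ERROR, "SPI_execute_plan failed for partition %d", part);

    SPI_finish();

    /* returning NULL suppresses the insert into the master table */
    return PointerGetDatum(NULL);
}

It would be attached with CREATE TRIGGER ... BEFORE INSERT ON quotes FOR EACH
ROW EXECUTE PROCEDURE quotes_insert_trigger(). Preparing once per child and
keeping the plan is the part that matters for bulk loads; rebuilding and
SPI_execute'ing a fresh INSERT string per row would re-parse and re-plan every
time, which is the same dynamic-SQL cost Jeff pointed out for the PL/pgSQL
version.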
On Monday, December 24, 2012, Charles Gomes wrote:
>
>
> >
> > I think your performance bottleneck is almost certainly the dynamic
> > SQL. Using C to generate that dynamic SQL isn't going to help much,
> > because
Markus,
Have you looked over here:
http://www.postgresql.org/docs/9.2/static/populate.html
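Among other things, that page recommends COPY for bulk loads. A rough idea of
driving it from libpq (sketch only; the table, columns and row count below are
made up):

/* copyload.c -- sketch of the "use COPY" advice from the populate docs. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;
    long      i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* switch the connection into COPY IN mode */
    res = PQexec(conn, "COPY measurements (id, val) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(res);

    /* stream rows in text COPY format: tab-separated, newline-terminated */
    for (i = 0; i < 1000000; i++)
    {
        char line[64];
        int  len = snprintf(line, sizeof(line), "%ld\t%.3f\n", i, i * 0.5);

        if (PQputCopyData(conn, line, len) != 1)
            fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));
    }

    /* finish the COPY and fetch its final status */
    if (PQputCopyEnd(conn, NULL) == 1)
    {
        res = PQgetResult(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY did not complete: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

The same page also suggests dropping indexes during the load and raising
maintenance_work_mem for rebuilding them afterwards.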
> From: markus.innereb...@inf.unibz.it
> Subject: [PERFORM] Improve performance for writing
> Date: Thu, 27 Dec 2012 14:10:40 +0100
> To: pgsql-performance@postgres
code related to these improvements must still be accessible in
> the archive. If you can't find something, let me know, I'll try to find
> it in my backups!
>
> Happy holidays
> Emmanuel
>
>
> On 12/24/2012 13:36, Charles Gomes wrote:
> > I've just found
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: itparan...@gmail.com
> Date: Mon, 24 Dec 2012 21:11:07 +0400
> CC: jeff.ja...@gmail.com; ondrej.iva...@gmail.com;
> pgsql-performance@postgresql.org
> To: charle...@outlook.com
>
>
> On Dec 24, 2012, at 9:07 PM, Charles Gomes wrote:
> > To: charle...@outlook.com
> > CC: ondrej.iva...@gmail.com; pgsql-performance@postgresql.org
> >
> > On Thursday, December 20, 2012, Charles Gomes wrote:
> > True, that's how I feel too. I will be looking to translate the
> > trigger to C if I can find good examples; that should accelerate things.
> On Thursday, December 20, 2012, Charles Gomes wrote:
> True, that's how I feel too. I will be looking to translate the
> trigger to C if I can find good examples; that should accelerate things.
>
> I think your performance bottleneck is almost certainly the dynamic
> SQL. Using C to generate that dynamic SQL isn't going to help much, because
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
>
>
> On Thursday, December 20, 2012, Charles Gomes wrote:
> Jeff,
>
> The 8288 writes are fine; as the array has a BBU, that's only about 4%
> utilization.
>
> BBU is gr
rows.
> From: t...@sss.pgh.pa.us
> To: charle...@outlook.com
> CC: ondrej.iva...@gmail.com; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> Date: Thu, 20 Dec 2012 18:39:07 -0500
>
> Charles Gomes writes:
fun to maintain.
> Date: Fri, 21 Dec 2012 09:50:49 +1100
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: ondrej.iva...@gmail.com
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
> Hi,
>
> On 21 December 2012 04:29, Charles Gomes wrote:
012 14:31:44 -0800
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: jeff.ja...@gmail.com
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
> On Thu, Dec 20, 2012 at 9:29 AM, Charles Gomes wrote:
> > Hello guys
> >
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
>
> Charles,
>
> * Charles Gomes (charle...@outlook.com) wrote:
> > I’m doing 1.2 billion inserts into a table partitioned into 15.
>
> Do you end up having multiple threads writing to the same, underlying,
>
trigger in C?
>
>
> > Date: Thu, 20 Dec 2012 10:39:25 -0700
> > Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> > From: scott.marl...@gmail.com
> > To: charle...@outlook.com
> > CC: pgsql-performance@postg
trigger in C?
> Date: Thu, 20 Dec 2012 10:39:25 -0700
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: scott.marl...@gmail.com
> To: charle...@outlook.com
> CC: pgsql-performance@postgresql.org
>
> On Thu, Dec 20, 2012 at
Hello guys
I’m doing 1.2 billion inserts into a table partitioned into 15.
When I target the MASTER table for all the inserts and let
the trigger decide which partition to choose, it takes 4 hours.
If I target the partitions directly during the
insert I can get 4 times better performance.
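By targeting directly I mean the loading program computes the child table name
itself instead of going through the master. Roughly like this (sketch only;
the table names and the id-modulo-15 routing rule are made up, and batching /
COPY are omitted):

/* loader.c -- sketch of routing inserts to child tables in the client,
 * so the routing trigger on the master is never involved.
 * Hypothetical schema: quotes_0 .. quotes_14 (id bigint, val float8). */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    long    i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (i = 0; i < 1000000; i++)
    {
        char      sql[256];
        PGresult *res;

        /* same routing rule the trigger would apply: id modulo 15 */
        snprintf(sql, sizeof(sql),
                 "INSERT INTO quotes_%ld (id, val) VALUES (%ld, %f)",
                 i % 15, i, (double) i * 0.5);

        res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

The point is just that the INSERT statement names the child table, so the
routing trigger on the master never fires.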