On Thu, Dec 5, 2013 at 9:55 PM, Skarsol wrote:
> The rule is being used to return the id of the insert...
>
Take a look at the RETURNING clause of the INSERT statement. That should
meet your needs here without having to bother with rules.
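For example, a minimal sketch (the table and column names are made up):

    -- "id" is assumed to be a serial/sequence-backed column
    INSERT INTO parent_table (payload)
    VALUES ('example row')
    RETURNING id;

The generated id comes straight back with the INSERT, so no rule or follow-up
query is needed.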
rls
--
:wq
2013/12/6 Josh Berkus
> On 12/05/2013 02:42 AM, Max wrote:
> > Hello,
> >
> We are starting a new project to deploy a solution in the cloud with the
> possibility of being used by 2,000+ clients. Each of these clients will use
> several tables to store their information (our model has about 500+ tables
On Thu, Dec 5, 2013 at 1:19 PM, bricklen wrote:
>
> On Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe wrote:
>
>> Rules have a lot of overhead. Is there a reason you're not using
>> defaults or triggers?
>>
>
> Or for even less overhead, load the partitions directly, and preferably
> use "DEFAULT ne
On 12/05/2013 02:42 AM, Max wrote:
> Hello,
>
> We are starting a new project to deploy a solution in the cloud with the
> possibility of being used by 2,000+ clients. Each of these clients will use
> several tables to store their information (our model has about 500+ tables
> but there are fewer than 10
On 06/12/13 05:13, Skarsol wrote:
On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe wrote:
On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
psql (PostgreSQL) 9.2.5
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64
x86_64 x86_64 GNU/Linux
One of the many questions we have is about the performance of the db if we
work with only one (using a ClientID to separate each client's info) or thousands
of separate dbs. The management of the dbs is not a huge concern, as we
have an automated tool.
If you are planning on using persisted connectio
> -----Original Message-----
> From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-
> performance-ow...@postgresql.org] On Behalf Of Max
> Sent: Thursday, December 05, 2013 5:42 AM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] One huge db vs many small dbs
>
> Hello,
>
>
> W
salah jubeh wrote:
> The hardware is pretty good: I have 8 CPUs (Intel(R) Core(TM)
> i7, 2.4 GHz) and 16 GiB of RAM. Is there any configuration
> parameter that could lead to this issue?
What OS?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe wrote:
> Rules have a lot of overhead. Is there a reason you're not using
> defaults or triggers?
>
Or for even less overhead, load the partitions directly, and preferably use
"DEFAULT nextval('some_sequence')" as Scott mentioned.
On Thu, Dec 5, 2013 at 1:03 PM, Metin Doslu wrote:
>> From what I've seen so far, a bigger problem than contention on the
>> lwlocks themselves is the spinlock protecting the lwlocks...
>
> Postgres 9.3.1 also reports spindelay; it seems that there is no contention
> on the spinlocks.
Did you check hu
On Thu, Dec 5, 2013 at 9:13 AM, Skarsol wrote:
> On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe
> wrote:
>>
>> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
>> > psql (PostgreSQL) 9.2.5
>> > Red Hat Enterprise Linux Server release 6.4 (Santiago)
>> > Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Ma
On Thu, Dec 5, 2013 at 7:01 AM, Skarsol wrote:
> I'm trying to increase the speed of inserts in a database that sits on a
> storage system that is not particularly fast. I have installed a pair of SSDs
> and placed pg_xlog on them, but I am still getting inserts that take up to a
> second to complete, with 0.3 second
maxabbr wrote
> Hello,
>
> We are starting a new project to deploy a solution in the cloud with the
> possibility of being used by 2,000+ clients. Each of these clients will use
> several tables to store their information (our model has about 500+ tables,
> but there are fewer than 100 core tables with heavy
On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe wrote:
> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
> > psql (PostgreSQL) 9.2.5
> > Red Hat Enterprise Linux Server release 6.4 (Santiago)
> > Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64
> > x86_64 x86_64 GNU/Linux
> From what I've seen so far, a bigger problem than contention on the
> lwlocks themselves is the spinlock protecting the lwlocks...
Postgres 9.3.1 also reports spindelay; it seems that there is no contention
on the spinlocks.
PID 21121 lwlock 0: shacq 0 exacq 33 blk 1 spindelay 0
PID 21121 lwlock 33:
On 12/05/2013 02:42 AM, Max wrote:
> Hello,
>
> We are starting a new project to deploy a solution in the cloud with
> the possibility of being used by 2,000+ clients. Each of these clients
> will use several tables to store their information (our model has
> You tested the correct branch, right? Which commit does "git rev-parse
> HEAD" show?
I applied the last two patches manually on PostgreSQL 9.2 stable.
On 2013-12-05 17:46:44 +0200, Metin Doslu wrote:
> I tried your patches from the link below. As you suspected, I didn't see any
> improvements. I tested it on PostgreSQL 9.2 stable.
You tested the correct branch, right? Which commit does "git rev-parse
HEAD" show?
But generally, as long as your profile hid
On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
> psql (PostgreSQL) 9.2.5
> Red Hat Enterprise Linux Server release 6.4 (Santiago)
> Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64
> x86_64 x86_64 GNU/Linux
> All relevant filesystems are ext4
>
> Changes from defaults:
>
> You could try my lwlock-scalability improvement patches - for some
> workloads here, the improvements have been rather noticeable. Which
> version are you testing?
I tried your patches from the link below. As you suspected, I didn't see any
improvements. I tested it on PostgreSQL 9.2 stable.
http://git.p
I'm managing a database that is adding about 10-20M records per day to a
table, and time is a core part of most queries, so I've been looking into
whether I need to start partitioning on the time column. I've found these
general guidelines:
Don't use more than about 50 partitions (
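For reference, a minimal sketch of the 9.2-era inheritance-style time
partitioning being asked about (the table names, columns, and monthly range
are all illustrative):

    CREATE TABLE events (
        id         bigserial,
        created_at timestamp NOT NULL,
        payload    text
    );
    -- one child table per month; the CHECK constraint lets the planner
    -- (with constraint_exclusion = partition) skip children that cannot
    -- match a time-bounded query
    CREATE TABLE events_2013_12 (
        CHECK (created_at >= '2013-12-01' AND created_at < '2014-01-01')
    ) INHERITS (events);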
psql (PostgreSQL) 9.2.5
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64
x86_64 x86_64 GNU/Linux
All relevant filesystems are ext4
Changes from defaults:
max_connections = 500
shared_buffers = 32000MB
temp_buffers = 2
Hello,
could you please post the PostgreSQL version, the
postgresql.conf, the operating system used, the kernel version, and the
filesystem used?
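For the PostgreSQL side, something like this gathers most of it (a sketch;
the OS, kernel, and filesystem details still come from the shell):

    SELECT version();
    -- settings that differ from their built-in defaults
    SELECT name, setting FROM pg_settings WHERE source <> 'default';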
Thank you
2013/12/5 Skarsol
> I'm trying to increase the speed of inserts in a database that sits on a
> storage system that is not particularly fast. I
I'm trying to increase the speed of inserts in a database that sits on a
storage system that is not particularly fast. I have installed a pair of SSDs
and placed pg_xlog on them, but I am still getting inserts that take up to a
second to complete, with 0.3 seconds being about average. Iostat doesn't show
the SSDs stres
Hello Tom,
The hardware is pretty good: I have 8 CPUs (Intel(R) Core(TM) i7, 2.4 GHz)
and 16 GiB of RAM. Is there any configuration parameter that could lead to this
issue?
Regards
On Thursday, December 5, 2013 3:23 PM, vincent elschot wrote:
On 05-12-13 15:09, salah jubeh wrote:
>
On Thu, Dec 5, 2013 at 2:42 AM, Max wrote:
> We are starting a new project to deploy a solution in the cloud with the
> possibility of being used by 2,000+ clients. Each of these clients will use
> several tables to store their information (our model has about 500+ tables,
> but there are fewer than 100 cor
salah jubeh writes:
> When I execute a query, the execution time is about 1 minute; however, when I
> execute the query with explain analyze, the execution time jumps to 10 minutes.
This isn't exactly unheard of, although it sounds like you have a
particularly bad case. Cheap commodity PCs tend t
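If the slowdown really is per-row timing overhead, one thing worth trying on
9.2 and later (the table name below is made up) is keeping the row counts but
dropping the per-node timing calls:

    EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM some_table;

Total runtime is still reported, so the gap between 1 minute and 10 minutes
should shrink if timer calls are the culprit.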
On 05-12-13 15:09, salah jubeh wrote:
Hello guys,
When I execute a query, the execution time is about 1 minute; however,
when I execute the query with explain analyze the execution time jumps
to 10 minutes.
I have tried this for several queries where I need to optimize, and
using explain an
Hello guys,
When I execute a query, the execution time is about 1 minute; however, when I
execute the query with explain analyze, the execution time jumps to 10 minutes.
I have tried this for several queries where I need to optimize, and using
explain analyze always leads to a huge time overhead
Hello,
We are starting a new project to deploy a solution in the cloud with the
possibility of being used by 2,000+ clients. Each of these clients will use
several tables to store their information (our model has about 500+ tables, but
there are fewer than 100 core tables with heavy use). Also the projecte
On 2013-12-05 11:33:29 +0200, Metin Doslu wrote:
> > Is your workload bigger than RAM?
>
> RAM is bigger than the workload (by more than a couple of times).
> > I think a good bit of the contention
> > you're seeing in that listing is populating shared_buffers - and might
> > actually vanish once you're
> Is your workload bigger than RAM?
RAM is bigger than the workload (by more than a couple of times).
> I think a good bit of the contention
> you're seeing in that listing is populating shared_buffers - and might
> actually vanish once you're halfway cached.
> From what I've seen so far the bigger prob
On 2013-12-05 11:15:20 +0200, Metin Doslu wrote:
> > - When we increased NUM_BUFFER_PARTITIONS to 1024, the problem
> > disappeared on 8-core machines but came back on 16-core machines on
> > Amazon EC2. Could it be related to the PostgreSQL locking mechanism?
>
> If we build with -DLWLOCK_ST
> - When we increased NUM_BUFFER_PARTITIONS to 1024, the problem
> disappeared on 8-core machines but came back on 16-core machines on
> Amazon EC2. Could it be related to the PostgreSQL locking mechanism?
If we build with -DLWLOCK_STATS to print locking stats from PostgreSQL, we
see tons of