[PERFORM] pgsql_tmp (temporary tablespace)

2012-11-27 Thread suhas.basavaraj12
Hi, This folder (the temporary tablespace, pgsql_tmp) fills up and its size increases during the day when there are lots of sorting operations, but after some time the data in it is deleted automatically. Can anyone explain what is going on? Regards, Suhas
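The behavior described is normal: sorts and hashes that exceed work_mem spill to temporary files under pgsql_tmp, and those files are deleted when the query finishes. A minimal sketch of how to observe and mitigate this (the 64MB value is illustrative, not a recommendation):

```sql
-- How much temporary-file space queries have written, per database.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database;

-- Raise work_mem for the current session so large sorts stay in memory.
SET work_mem = '64MB';

-- Setting log_temp_files = 0 in postgresql.conf logs every temp file
-- created, which identifies the queries responsible for filling pgsql_tmp.
```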

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Willem Leenen
Savepoints are not intended for performance. If you have one very long-running transaction that fails at the end, it will all be rolled back. So be pretty sure about your data quality, or use savepoints.

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Franklin, Dan
On Tue, Nov 27, 2012 at 6:26 PM, Steve Atkins wrote: > > On Nov 27, 2012, at 2:04 PM, Mike Blackwell > wrote: > > > I need to delete about 1.5 million records from a table and reload it in > one transaction. The usual advice when loading with inserts seems to be > group them into transactions o

Re: [PERFORM] Hints (was Poor performance using CTE)

2012-11-27 Thread Scott Marlowe
On Tue, Nov 27, 2012 at 7:17 PM, Craig Ringer wrote: > On 27/11/2012 3:42 PM, Scott Marlowe wrote: > >> Hear, hear! PostgreSQL is well known for its extensibility and this is >> the perfect place for hints. > > I agree with the sentiment and your concerns. However, this doesn't solve > the CTE pro

Re: [PERFORM] Hints (was Poor performance using CTE)

2012-11-27 Thread Craig Ringer
On 27/11/2012 3:42 PM, Scott Marlowe wrote: Hear, hear! PostgreSQL is well known for its extensibility and this is the perfect place for hints. I agree with the sentiment and your concerns. However, this doesn't solve the CTE problem. Some people are relying on the planner's inability to push
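The "CTE problem" referred to in this thread is that PostgreSQL (as of 9.2) treats a WITH clause as an optimization fence: predicates from the outer query are not pushed down into the CTE. A hypothetical illustration, assuming a table big_table with an indexed column id:

```sql
-- The planner materializes the full CTE before applying the filter ...
WITH t AS (SELECT * FROM big_table)
SELECT * FROM t WHERE id = 42;

-- ... whereas the equivalent plain subquery lets the planner push the
-- filter down and use the index on id.
SELECT * FROM (SELECT * FROM big_table) s WHERE id = 42;
```

This fence is exactly what some users rely on as an implicit hint, which is why changing it is contentious.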

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Claudio Freire
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell wrote: > > > Postgresql isn't going to run out of resources doing a big transaction, in > > the way some other databases will. > > I thought I had read something at one point about keeping the transaction > size on the order of a couple thousand b

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Mike Blackwell
Steve Atkins wrote: > Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will. I thought I had read something at one point about keeping the transaction size on the order of a couple thousand because there were issues when it got larger. As th

Re: [PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Dave Crooke
Asif: 1. 6GB is pretty small once you work through the issues, adding RAM will probably be a good investment, depending on your time-working set curve. A quick rule of thumb is this: - if your cache hit ratio is significantly larger than (cache size / db size) then there is locality of refe
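The cache hit ratio in Dave's rule of thumb can be read straight from the statistics collector; a minimal sketch for the current database:

```sql
-- Buffer-cache hit ratio: hits / (hits + reads from disk).
-- Compare this against (shared_buffers size / database size).
SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```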

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Bob Lunney
Mike, Is there anything that the 1.5 million rows have in common that would allow you to use partitions? If so, you could load the new data into a partition at your leisure, start a transaction, alter the partition table with the old data to no longer inherit from the parent, alter the new pa
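The partition swap Bob describes can be sketched as follows, assuming hypothetical tables parent, old_part, and new_part, with new_part already loaded outside the transaction:

```sql
BEGIN;
-- Detach the partition holding the old 1.5 million rows ...
ALTER TABLE old_part NO INHERIT parent;
-- ... and attach the freshly loaded replacement in the same transaction,
-- so queries against parent never see an inconsistent state.
ALTER TABLE new_part INHERIT parent;
COMMIT;

-- The detached table can then be dropped at leisure.
DROP TABLE old_part;
```

The swap itself is metadata-only, so the transaction holding the locks is very short regardless of row count.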

Re: [PERFORM] Hints (was Poor performance using CTE)

2012-11-27 Thread Scott Marlowe
On Fri, Nov 23, 2012 at 3:05 AM, Cédric Villemain wrote: > On Wednesday, 21 November 2012 at 17:34:02, Craig James wrote: >> On Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner wrote: >> > It's a tough problem. Disguising and not documenting the available >> > optimizer hints leads to more reports on w

Re: [PERFORM] Query that uses lots of memory in PostgreSQL 9.2.1 in Windows 7

2012-11-27 Thread Merlin Moncure
On Tue, Nov 20, 2012 at 1:27 AM, Pavel Stehule wrote: > Hello > > HashSetOp is a memory-expensive operation, and can be problematic > when the statistics estimation is bad. > > Try to rewrite this query as a JOIN or 'WHERE NOT EXISTS'. if 41 seconds seems like it's too long, go ahead and post that
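Pavel's suggested rewrite, sketched with hypothetical tables a and b: a set operation such as EXCEPT (planned as a HashSetOp) becomes an anti-join, which the planner can execute as a hash or merge anti-join with better memory behavior:

```sql
-- Memory-hungry set-operation form:
SELECT id FROM a
EXCEPT
SELECT id FROM b;

-- Equivalent anti-join form:
SELECT a.id FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id);
```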

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Steve Atkins
On Nov 27, 2012, at 2:04 PM, Mike Blackwell wrote: > I need to delete about 1.5 million records from a table and reload it in one > transaction. The usual advice when loading with inserts seems to be group > them into transactions of around 1k records. Committing at that point would > leave

Re: [PERFORM] How to keep queries low latency as concurrency increases

2012-11-27 Thread Scott Marlowe
On Mon, Nov 26, 2012 at 12:46 AM, Heikki Linnakangas wrote: > On 25.11.2012 18:30, Catalin Iacob wrote: >> >> So it seems we're just doing too many connections and too many >> queries. Each page view from a user translates to multiple requests to >> the application server and each of those transla

Re: [PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Richard Huxton
On 27/11/12 22:04, Mike Blackwell wrote: I need to delete about 1.5 million records from a table and reload it in one transaction. The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here. No

[PERFORM] Savepoints in transactions for speed?

2012-11-27 Thread Mike Blackwell
I need to delete about 1.5 million records from a table and reload it in one transaction. The usual advice when loading with inserts seems to be group them into transactions of around 1k records. Committing at that point would leave the table in an inconsistent state. Would issuing a savepoint e
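The pattern being asked about can be sketched like this, assuming a hypothetical single-column table t; savepoints are set and released per batch inside the one enclosing transaction:

```sql
BEGIN;
DELETE FROM t;

SAVEPOINT batch;            -- set before each batch of ~1000 inserts
INSERT INTO t VALUES (1);   -- ... batch of inserts ...
RELEASE SAVEPOINT batch;    -- discard the savepoint once the batch succeeds

-- On an error within a batch:
--   ROLLBACK TO SAVEPOINT batch;
-- undoes only that batch, leaving the transaction usable.
COMMIT;
```

Note that, per the replies in this thread, this buys error recovery rather than speed; each open savepoint also carries bookkeeping overhead.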

Re: [PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Scott Marlowe
On Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer wrote: > Hi, > > > > I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size > is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still > facing performance issues. The query performance is too low despite table

Re: [PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Andrew Dunstan
On 11/27/2012 02:47 AM, Syed Asif Tanveer wrote: Hi, I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still facing performance issues. The query performance is too low despite tables being properly

Re: [PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Heikki Linnakangas
On 27.11.2012 09:47, Syed Asif Tanveer wrote: I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still facing performance issues. The query performance is too low despite tables being properly indexed and a

[PERFORM] Postgres configuration for 8 CPUs, 6 GB RAM

2012-11-27 Thread Syed Asif Tanveer
Hi, I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still facing performance issues. The query performance is too low despite tables being properly indexed and vacuumed and analyzed on a regular basi
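For a machine of this size, a starting-point sketch of the resource-related postgresql.conf settings (9.1 parameter names; the values are illustrative rules of thumb only and must be validated against the actual workload, not a definitive configuration):

```
# postgresql.conf sketch for 8 CPUs / 6 GB RAM, PostgreSQL 9.1
shared_buffers = 1536MB        # ~25% of RAM is a common starting point
effective_cache_size = 4GB     # what the OS page cache is expected to hold
work_mem = 32MB                # per sort/hash node; keep modest if many sessions
maintenance_work_mem = 256MB   # for VACUUM and CREATE INDEX
checkpoint_segments = 32       # spread checkpoint I/O during bulk loads
```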