Hi,
This folder (the temporary tablespace) is getting filled and its size increases
during the day when there are lots of sorting operations. But after some time
the data in it is deleted automatically. Can anyone explain what is going on?
Regards,
Suhas
Savepoints are not created for performance. If you have one very long-running
transaction that fails at the end, it will all be rolled back. So be pretty
sure about your data quality, or use savepoints.
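As a minimal sketch of that rollback-to-savepoint behaviour, assuming a hypothetical table items(id integer primary key, name text):

  BEGIN;
  INSERT INTO items (id, name) VALUES (1, 'first');
  SAVEPOINT batch_1;
  INSERT INTO items (id, name) VALUES (1, 'duplicate');   -- fails: duplicate key
  ROLLBACK TO SAVEPOINT batch_1;   -- undoes only the work since the savepoint
  COMMIT;                          -- the first insert is kept

Without the savepoint, the failed insert would leave the whole transaction aborted and everything would be rolled back.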
On Tue, Nov 27, 2012 at 6:26 PM, Steve Atkins wrote:
>
> On Nov 27, 2012, at 2:04 PM, Mike Blackwell
> wrote:
>
> > I need to delete about 1.5 million records from a table and reload it in
> one transaction. The usual advice when loading with inserts seems to be to
> group them into transactions of around 1k records.
On Tue, Nov 27, 2012 at 7:17 PM, Craig Ringer wrote:
> On 27/11/2012 3:42 PM, Scott Marlowe wrote:
>
>> Hear, hear! PostgreSQL is well known for its extensibility and this is
>> the perfect place for hints.
>
> I agree with the sentiment and your concerns. However, this doesn't solve
> the CTE problem.
On 27/11/2012 3:42 PM, Scott Marlowe wrote:
Hear, hear! PostgreSQL is well known for its extensibility and this is
the perfect place for hints.
I agree with the sentiment and your concerns. However, this doesn't
solve the CTE problem.
Some people are relying on the planner's inability to push
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell wrote:
>
> > Postgresql isn't going to run out of resources doing a big transaction, in
> > the way some other databases will.
>
> I thought I had read something at one point about keeping the transaction
> size on the order of a couple thousand because there were issues when it
> got larger.
Steve Atkins wrote:
> Postgresql isn't going to run out of resources doing a big transaction,
> in the way some other databases will.
I thought I had read something at one point about keeping the transaction
size on the order of a couple thousand because there were issues when it
got larger. As th
Asif:
1. 6GB is pretty small. Once you work through the issues, adding RAM
will probably be a good investment, depending on your time vs. working-set
curve.
A quick rule of thumb is this:
- if your cache hit ratio is significantly larger than (cache size / db
size), then there is locality of reference
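To see where you stand on that rule of thumb, here is a sketch that reads the buffer cache hit ratio from PostgreSQL's pg_stat_database view; the database name 'mydb' is a placeholder.

  -- Approximate cache hit ratio for one database.
  SELECT datname,
         blks_hit,
         blks_read,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
  FROM pg_stat_database
  WHERE datname = 'mydb';   -- placeholder database name

Compare the result against (cache size / database size) as described above.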
Mike,
Is there anything that the 1.5 million rows have in common that would allow you
to use partitions? If so, you could load the new data into a partition at
your leisure, start a transaction, alter the partition table with the old data
to no longer inherit from the parent, and alter the new partition to inherit
from the parent.
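A rough sketch of that swap using inheritance-based partitioning; the names parent_table, old_partition and new_partition are made up for illustration.

  -- new_partition has already been loaded at leisure, outside the transaction.
  BEGIN;
  ALTER TABLE old_partition NO INHERIT parent_table;  -- old rows disappear from the parent
  ALTER TABLE new_partition INHERIT parent_table;     -- new rows appear in their place
  COMMIT;
  -- old_partition can then be dropped or archived separately.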
On Fri, Nov 23, 2012 at 3:05 AM, Cédric Villemain
wrote:
> On Wednesday, 21 November 2012 at 17:34:02, Craig James wrote:
>> On Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner wrote:
>> > It's a tough problem. Disguising and not documenting the available
>> > optimizer hints leads to more reports on w
On Tue, Nov 20, 2012 at 1:27 AM, Pavel Stehule wrote:
> Hello
>
> HashSetOp is a memory-expensive operation, and it can be problematic
> when the statistics estimates are bad.
>
> Try to rewrite this query to a JOIN
or 'WHERE NOT EXISTS'. If 41 seconds seems like it's too long, go
ahead and post that
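As an illustration of the kind of rewrite being suggested, here is a hypothetical EXCEPT query (the form that typically produces a HashSetOp) and an equivalent NOT EXISTS version; the tables orders and refunds are made-up examples.

  -- Set-difference form, typically executed with HashSetOp:
  SELECT customer_id FROM orders
  EXCEPT
  SELECT customer_id FROM refunds;

  -- Rewritten with NOT EXISTS, which usually plans as an anti-join:
  SELECT DISTINCT o.customer_id
  FROM orders o
  WHERE NOT EXISTS (
      SELECT 1 FROM refunds r WHERE r.customer_id = o.customer_id
  );

The DISTINCT keeps the duplicate elimination that EXCEPT does implicitly; note that NULL handling can differ slightly between the two forms.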
On Nov 27, 2012, at 2:04 PM, Mike Blackwell wrote:
> I need to delete about 1.5 million records from a table and reload it in one
> transaction. The usual advice when loading with inserts seems to be to group
> them into transactions of around 1k records. Committing at that point would
> leave the table in an inconsistent state.
On Mon, Nov 26, 2012 at 12:46 AM, Heikki Linnakangas
wrote:
> On 25.11.2012 18:30, Catalin Iacob wrote:
>>
>> So it seems we're just doing too many connections and too many
>> queries. Each page view from a user translates to multiple requests to
>> the application server and each of those transla
On 27/11/12 22:04, Mike Blackwell wrote:
I need to delete about 1.5 million records from a table and reload it
in one transaction.
The data to reload the table is coming from a Perl DBI connection to a
different database (not PostgreSQL) so I'm not sure the COPY
alternative applies here.
No
I need to delete about 1.5 million records from a table and reload it in
one transaction. The usual advice when loading with inserts seems to be to
group them into transactions of around 1k records. Committing at that
point would leave the table in an inconsistent state. Would issuing a
savepoint e
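A rough skeleton of the pattern being asked about, with one savepoint per batch inside a single transaction; target_table and the row values are placeholders, and in practice the inserts would be generated by the loading script.

  BEGIN;
  DELETE FROM target_table;              -- drop the ~1.5 million old rows
  SAVEPOINT batch_1;
  INSERT INTO target_table (id, payload) VALUES (1, 'row 1');  -- ...about 1k rows per batch
  -- on a bad batch: ROLLBACK TO SAVEPOINT batch_1, fix it, and retry
  RELEASE SAVEPOINT batch_1;
  SAVEPOINT batch_2;
  -- ... next batch of rows ...
  RELEASE SAVEPOINT batch_2;
  COMMIT;                                -- nothing is visible to other sessions until here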
On Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer
wrote:
> Hi,
>
>
>
> I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size
> is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still
> facing performance issues. The query performance is too low despite table
On 11/27/2012 02:47 AM, Syed Asif Tanveer wrote:
Hi,
I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes.
Data size is around 100 GB and I have tuned my PostgreSQL accordingly,
but I am still facing performance issues. The query performance is too low
despite tables being properly
On 27.11.2012 09:47, Syed Asif Tanveer wrote:
I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size
is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still
facing performance issues. The query performance is too low despite tables
being properly indexed and a
Hi,
I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size
is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still
facing performance issues. The query performance is too low despite tables
being properly indexed and vacuumed and analyzed on a regular basis.