On Monday, July 9, 2012, Misa Simic wrote:
>
>
> 2012/7/9 Gregg Jaskiewicz
>>
>> Use cursors.
>> By far the most flexible; offset/limit have their downsides.
>
>
> Well, I am not aware of any downsides to LIMIT OFFSET that do not exist
> in other solutions for paged queries
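As a rough sketch of the trade-off being debated, the snippet below contrasts LIMIT/OFFSET paging with a server-side (named) cursor in psycopg2. The DSN and the items table are hypothetical and not from the thread: OFFSET makes the server read and discard every skipped row on each request, while a named cursor fetches batches from one consistent result set but only lives as long as its transaction.

```python
# Hypothetical illustration (not from the thread): paging with LIMIT/OFFSET
# versus a server-side cursor in psycopg2. Table and DSN are made up.
import psycopg2

conn = psycopg2.connect("dbname=test")


def page_with_offset(page, page_size=50):
    """LIMIT/OFFSET: simple and stateless, but the server still reads and
    throws away page * page_size rows, so deep pages get progressively
    slower, and rows can shift between pages if the data changes."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, title FROM items ORDER BY id LIMIT %s OFFSET %s",
            (page_size, page * page_size),
        )
        return cur.fetchall()


def iterate_with_named_cursor(batch_size=50):
    """Server-side cursor: one consistent result set fetched in batches,
    but it has to stay inside a single open transaction."""
    with conn.cursor(name="paging_cursor") as cur:
        cur.execute("SELECT id, title FROM items ORDER BY id")
        while True:
            rows = cur.fetchmany(batch_size)
            if not rows:
                break
            yield rows
```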
Yan Chunlu wrote:
> I have logged one day of data and found that checkpoints happen rather
> frequently (details: https://gist.github.com/3088338). Not sure if that is
> normal, but the average checkpoint time is about 100-200 seconds; it seems
> related to my settings:
>
> checkpoint_segments = 64
Could that be because my system is really busy?
1. PostgreSQL always has 400+ connections (dozens of Python processes using a
client pool).
2. The query peak is 50+/s.
3. I do have some bad-performance SQL executing periodically that needs 100+
seconds to complete. Could that bad-performance SQL influence other queries?
On Wed, Jul 11, 2012 at 9:24 AM, Yan Chunlu wrote:
> I have logged one day of data and found that checkpoints happen rather
> frequently (details: https://gist.github.com/3088338). Not sure if that is
> normal, but the average checkpoint time is about 100-200 seconds; it seems
> related to my settings:
>
>
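For context, one way to gauge whether checkpoints are being forced by WAL volume rather than by checkpoint_timeout on a 9.x-era server is to compare the counters in pg_stat_bgwriter; the DSN below is made up, and enabling log_checkpoints in postgresql.conf surfaces the same information in the server log.

```python
# Rough diagnostic sketch: if checkpoints_req dominates checkpoints_timed,
# checkpoint_segments is probably too small for the write load.
# DSN is hypothetical; the pg_stat_bgwriter columns are as of the 9.x era.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn.cursor() as cur:
    cur.execute("SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter")
    timed, requested = cur.fetchone()
    total = timed + requested
    if total:
        print("timed: %d, requested: %d (%.0f%% forced by WAL volume)"
              % (timed, requested, 100.0 * requested / total))
```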
Daniel Farina writes:
> TRUNCATE should simply be very nearly the fastest way to remove data
> from a table while retaining its type information, and if that means
> doing DELETE without triggers when the table is small, then it should.
> The only person who could thwart me is someone who badly w
On Wed, Jul 11, 2012 at 10:05:48AM -0400, Tom Lane wrote:
> Daniel Farina writes:
> > TRUNCATE should simply be very nearly the fastest way to remove data
> > from a table while retaining its type information, and if that means
> > doing DELETE without triggers when the table is small, then it should.
Huge thanks for the patient explanations; I think you are right, it is
really related to the I/O.
I monitored the I/O using iostat -x and found that utilization frequently
reaches 100%. PostgreSQL is the only service running on that machine, so I
think either checkpoints or queries caused the problem
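One way to tell whether the 100% utilization lines up with checkpoints or with long-running queries is to watch pg_stat_activity while iostat shows the spike. The sketch below is an assumption-laden illustration: the DSN is made up and the column names (pid, state, query) are those of PostgreSQL 9.2+.

```python
# Hedged sketch: list the longest-running non-idle backends, to correlate
# with iostat spikes. Column names assume PostgreSQL 9.2+; DSN is made up.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn.cursor() as cur:
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, query
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY runtime DESC
        LIMIT 10
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(pid, runtime, state, (query or "")[:60])
```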
Last night I created directories and moved files as outlined in Josh's very
helpful reply to my original request. All seemed okay until we unmounted the
drives from the first volume. I got the following error (with oid differences)
whenever I tried to access any of the tables that were not orig
Tom Lane wrote:
> (3) The performance of the truncation itself should not be viewed in
> isolation; subsequent behavior also needs to be considered. An example
> of possible degradation is that index bloat would no longer be
> guaranteed to be cleaned up over a series of repeated truncations.
> (Y
On Wed, Jul 11, 2012 at 7:05 AM, Tom Lane wrote:
> Daniel Farina writes:
> > TRUNCATE should simply be very nearly the fastest way to remove data
> > from a table while retaining its type information, and if that means
> > doing DELETE without triggers when the table is small, then it should.
>
On 07/11/2012 03:18 PM, Craig James wrote:
> It strikes me as a contrived case rather than a use case. What sort of
> app repeatedly fills and truncates a small table thousands of times ...
> other than a test app to see whether you can do it or not?
Test systems. Any company with even a medium-siz
On 07/11/2012 04:47 PM, Shaun Thomas wrote:
> On 07/11/2012 03:18 PM, Craig James wrote:
>> It strikes me as a contrived case rather than a use case. What sort of
>> app repeatedly fills and truncates a small table thousands of times ...
>> other than a test app to see whether you can do it or not?
> Test systems.
On 11/07/12 21:18, Craig James wrote:
> It strikes me as a contrived case rather than a use case. What sort
> of app repeatedly fills and truncates a small table thousands of times
> ... other than a test app to see whether you can do it or not?
If I have a lot of data which updates/inserts an exis
On Wed, Jul 11, 2012 at 2:32 PM, Mark Thornton wrote:
> On 11/07/12 21:18, Craig James wrote:
>
>>
>> It strikes me as a contrived case rather than a use case. What sort of
>> app repeatedly fills and truncates a small table thousands of times ...
>> other than a test app to see whether you can
On Wed, Jul 11, 2012 at 7:05 AM, Tom Lane wrote:
> Daniel Farina writes:
>> TRUNCATE should simply be very nearly the fastest way to remove data
>> from a table while retaining its type information, and if that means
>> doing DELETE without triggers when the table is small, then it should.
>> Th
On 07/11/2012 07:40 PM, Yan Chunlu wrote:
> Could that be because my system is really busy?
> 1. PostgreSQL always has 400+ connections (dozens of Python processes
> using a client pool).
> 2. The query peak is 50+/s.
> 3. I do have some bad-performance SQL executing periodically that needs
> 100+ seconds to complete.
Really sorry for the lack of information, but I did ask whether the slow
queries could affect the simple ones:
'so I wonder, could this simple select be innocent and affected badly by
other queries?'
I didn't mention the connection count because I don't think my app is
that busy, and the large number of connections was caused by slow queries.
On Wed, Jul 11, 2012 at 5:47 PM, Yan Chunlu wrote:
> I learnt a lot during the back and forth!
Great to hear.
>> 1. PostgreSQL always has 400+ connections (dozens of Python processes
>> using a client pool).
Note that Postgres does not deal well with a large number of
connections[1]: consider shrin
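Purely as an illustration of that advice (the pool sizes and DSN below are assumptions, not figures from the thread), a client-side pool capped far below 400 connections might look like this with psycopg2; a dedicated pooler such as pgbouncer is another common way to get the same effect.

```python
# Illustrative only: cap the client-side pool well below 400 backends.
# minconn/maxconn and the DSN are made-up values, not a recommendation
# from the thread; a few connections per CPU core is a common rule of thumb.
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(2, 10, "dbname=test")


def run_query(sql, params=None):
    conn = db_pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        conn.rollback()          # end the implicit transaction before reuse
        db_pool.putconn(conn)
```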
On 07/12/2012 02:10 AM, Matthew Woodcraft wrote:
> I think a documentation change would be worthwhile. At the moment the
> TRUNCATE page says, with no caveats, that it is faster than
> unqualified DELETE.
+1 to updating the docs to reflect the fact that TRUNCATE may have a
higher fixed cost than DELETE
On 07/11/2012 01:22 PM, Daniel Farina wrote:
> On Tue, Jul 10, 2012 at 5:37 PM, Craig Ringer wrote:
>> Hi
>> After seeing a few discussions here and on Stack Overflow I've put together
>> a quick explanation of why "DELETE FROM table;" may be faster than "TRUNCATE
>> table" for people doing unit testing on
On 07/12/2012 06:51 AM, Daniel Farina wrote:
> 15x slower. This is a Macbook Air with full disk encryption and SSD
> disk with fsync off, e.g. a very typical developer configuration.
Don't use full disk encryption for throwaway test data if you care about
how long those tests take. It's a lot like
On 07/12/2012 08:47 AM, Yan Chunlu wrote:
> Really sorry for the lack of information
I shouldn't have grumped like that either, sorry about that.
> I didn't mention the connection count because I don't think my app is
> that busy, and the large number of connections was caused by slow queries.
Yep
After checking out the wiki page Maciek mentioned, it turns out that a heavy
connection count also burdens the disk heavily.
Looks like I am in a vicious circle:
1. Slow queries cause connections to block, so the client requests more
connections.
2. More connections cause high disk I/O and make even the simplest query slow
On Wed, Jul 11, 2012 at 6:41 PM, Craig Ringer wrote:
> On 07/12/2012 06:51 AM, Daniel Farina wrote:
>>
>> 15x slower. This is a Macbook Air with full disk encryption and SSD
>> disk with fsync off, e.g. a very typical developer configuration.
>
> Don't use full disk encryption for throwaway test
On 07/12/2012 01:10 PM, Yan Chunlu wrote:
> After checking out the wiki page Maciek mentioned, it turns out that a heavy
> connection count also burdens the disk heavily.
> Looks like I am in a vicious circle:
> 1. Slow queries cause connections to block, so the client requests more
> connections.
> 2. More connections cause high disk I/O