On 01/18/2013 03:19 AM, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>> Hi,
>>
>> attached is a patch that improves performance when dropping multiple
>> tables within a transaction. Instead of scanning the shared buffers for
>> each table separately, the patch removes this and evicts all the tables
>> in a single pass through shared buffers.
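A rough sketch of the single-pass idea (not the actual patch; the names and locking are simplified, and InvalidateBuffer is assumed to behave as in bufmgr.c):

    #include "postgres.h"
    #include "storage/buf_internals.h"

    /*
     * Sketch only: scan shared buffers once and test each buffer against
     * the whole set of relations being dropped, instead of scanning
     * NBuffers once per relation.
     */
    void
    DropRelFileNodesAllBuffers(RelFileNode *nodes, int nnodes)
    {
        int         i;

        for (i = 0; i < NBuffers; i++)
        {
            BufferDesc *bufHdr = &BufferDescriptors[i];
            int         j;

            for (j = 0; j < nnodes; j++)
            {
                if (RelFileNodeEquals(bufHdr->tag.rnode, nodes[j]))
                {
                    InvalidateBuffer(bufHdr);   /* real code locks the header first */
                    break;
                }
            }
        }
    }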
On 17.1.2013 20:19, Alvaro Herrera wrote:
>
> I'm curious -- why would you drop tables in groups of 100 instead of
> just doing the 100,000 in a single transaction? Maybe that's faster
> now, because you'd do a single scan of the buffer pool instead of 1000?
> (I'm assuming that "in groups of" me
Alvaro Herrera writes:
> Made some tweaks and pushed (added comments to new functions, ensure
> that we never try to palloc(0), renamed DropRelFileNodeAllBuffers to
> plural, made the "use bsearch" logic a bit simpler).
FWIW, there's nothing particularly wrong with palloc(0) ...
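For context, the "never try to palloc(0)" tweak above presumably amounts to clamping the request, along these lines (Max() is PostgreSQL's macro; the variable names are assumed):

    /* never request a zero-sized array, even when nnodes == 0 */
    nodes = (RelFileNode *) palloc(sizeof(RelFileNode) * Max(nnodes, 1));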
Tomas Vondra wrote:
> Hi,
>
> attached is a patch that improves performance when dropping multiple
> tables within a transaction. Instead of scanning the shared buffers for
> each table separately, the patch removes this and evicts all the tables
> in a single pass through shared buffers.
Made some tweaks and pushed (added comments to new functions, ensure
that we never try to palloc(0), renamed DropRelFileNodeAllBuffers to
plural, made the "use bsearch" logic a bit simpler).
Hi Tomas,
On Wed, Jan 9, 2013 at 6:38 AM, Tomas Vondra wrote:
>> Well, you need to ensure that the initial palloc is an array of that
>> size.
>
> Oh, right - I forgot to modify the palloc() call. Thanks for spotting
> this. Attached is a patch with increased threshold and fixed palloc call.
OK,
On 8.1.2013 22:30, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>> On 8.1.2013 03:47, Shigeru Hanada wrote:
>>>>> * +1 for Alvaro's suggestion about avoiding palloc traffic by starting
>>>>> with 8 elements or so.
>>>> Done.
>>> Not yet. Initial size of srels array is still 1 element.
Tomas Vondra wrote:
> On 8.1.2013 03:47, Shigeru Hanada wrote:
> >>> * +1 for Alvaro's suggestion about avoiding palloc traffic by starting
> >>> with 8 elements or so.
> >>
> >> Done.
> >
> > Not yet. Initial size of srels array is still 1 element.
>
> I've checked the patch and I see there 'm
On 8.1.2013 03:47, Shigeru Hanada wrote:
>>> * naming of DROP_RELATIONS_BSEARCH_LIMIT (or off-by-one bug?)
>>> IIUC bsearch is used when # of relations to be dropped is *more* than
>>> the value of DROP_RELATIONS_BSEARCH_LIMIT (10). IMO this behavior is
>>> not what the macro name implies; I thought
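Concretely, the behavior being described appears to be a strict comparison (sketch):

    /*
     * bsearch is taken only for strictly more relations than the limit,
     * i.e. 11 and up when DROP_RELATIONS_BSEARCH_LIMIT is 10 -- hence
     * the question about what the macro name implies.
     */
    use_bsearch = (nnodes > DROP_RELATIONS_BSEARCH_LIMIT);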
Hi Tomas,
On Tue, Jan 8, 2013 at 7:08 AM, Tomas Vondra wrote:
>> * I found another extra space after asterisk.
>>
>> + RelFileNode * nodes;
>
> Thanks, fixed.
check
>> * Curly bracket which starts code block should be at the head of next line.
>>
>> + /* extend t
On 7.1.2013 10:07, Shigeru Hanada wrote:
> Hi Tomas,
>
> I reviewed v6 patch, and found that several places need fix.
> Sorry for extra nitpickings.
>
> * I found another extra space after asterisk.
>
> + RelFileNode * nodes;
Thanks, fixed.
>
> * Curly bracket which starts code block should be at the head of next line.
Hi Tomas,
I reviewed v6 patch, and found that several places need fix.
Sorry for extra nitpickings.
* I found another extra space after asterisk.
+ RelFileNode * nodes;
* Curly bracket which starts code block should be at the head of next line.
+ /* extend t
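For reference, the requested convention is the usual PostgreSQL brace placement:

    /* curly bracket at the head of the next line: */
    if (nrels >= maxrels)
    {
        /* extend the array */
    }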
On 4.1.2013 17:42, Robert Haas wrote:
> On Mon, Dec 31, 2012 at 11:51 AM, Tomas Vondra wrote:
>> I thought I followed the coding style - which guidelines have I broken?
>
> + /* If there are no non-local relations, then we're done. Release the memory
> + * and return. */
>
> Multi-line comments should start with a line containing only /*.
On Mon, Dec 24, 2012 at 02:41:37AM +0100, Tomas Vondra wrote:
> + SMgrRelation *srels = palloc(sizeof(SMgrRelation));
> + int nrels = 0,
> + i = 0,
> + maxrels = 1;
maxrels=1 is not good -- too much palloc traffic.
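A common remedy is to start larger and grow geometrically; a sketch of that pattern (the initial size of 8 matches Alvaro's later suggestion; srel is an assumed loop variable):

    /* start with room for several entries instead of one */
    int          maxrels = 8;
    int          nrels = 0;
    SMgrRelation *srels = palloc(maxrels * sizeof(SMgrRelation));

    /* ... then, when appending, double the array whenever it fills up */
    if (nrels >= maxrels)
    {
        maxrels *= 2;
        srels = repalloc(srels, maxrels * sizeof(SMgrRelation));
    }
    srels[nrels++] = srel;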
On Mon, Dec 31, 2012 at 11:51 AM, Tomas Vondra wrote:
> I thought I followed the coding style - which guidelines have I broken?
+ /* If there are no non-local relations, then we're done. Release the memory
+ * and return. */
Multi-line comments should start with a line containing only /*.
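That is, the quoted hunk would be expected to follow the project convention:

    /*
     * If there are no non-local relations, then we're done.  Release the
     * memory and return.
     */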
On 1.1.2013 17:35, Alvaro Herrera wrote:
> There was an earlier suggestion by Andres Freund to use memcmp()
> instead, but I don't see that in the latest posted version of the patch;
> was there a specific rationale for taking it out or it was just lost in
> the shuffle?
No, I've tried that approach
There was an earlier suggestion by Andres Freund to use memcmp()
instead, but I don't see that in the latest posted version of the patch;
was there a specific rationale for taking it out or it was just lost in
the shuffle?
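For reference, the memcmp() suggestion presumably amounts to comparing RelFileNode values as raw bytes instead of field by field; a sketch (sound only because RelFileNode is three Oids with no padding):

    #include <string.h>

    /* qsort()/bsearch() comparator over RelFileNode using memcmp() */
    static int
    rnode_comparator(const void *p1, const void *p2)
    {
        return memcmp(p1, p2, sizeof(RelFileNode));
    }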
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 30.12.2012 04:03, Robert Haas wrote:
> On Sun, Dec 23, 2012 at 8:41 PM, Tomas Vondra wrote:
>> Attached is a patch with fixed handling of temporary relations. I've
>> chosen to keep the logic in DropRelFileNodeAllBuffers and rather do a
>> local copy without the local relations.
>
This looks pretty good, although it needs some cleanup for
On Sun, Dec 23, 2012 at 8:41 PM, Tomas Vondra wrote:
> Attached is a patch with fixed handling of temporary relations. I've
> chosen to keep the logic in DropRelFileNodeAllBuffers and rather do a
> local copy without the local relations.
This looks pretty good, although it needs some cleanup for
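The "local copy without the local relations" idea might look roughly like this (a sketch with assumed names, not the committed code):

    /*
     * Sketch: route temp relations to the backend-local buffer pool and
     * collect everything else for the single shared-buffer scan.
     */
    RelFileNode *nodes = palloc(sizeof(RelFileNode) * Max(nnodes, 1));
    int          n = 0,
                 i;

    for (i = 0; i < nnodes; i++)
    {
        if (RelFileNodeBackendIsTemp(rnodes[i]))
        {
            if (rnodes[i].backend == MyBackendId)
                DropRelFileNodeAllLocalBuffers(rnodes[i].node);
        }
        else
            nodes[n++] = rnodes[i].node;
    }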
On 20.12.2012 12:33, Andres Freund wrote:
> On 2012-12-20 02:54:48 +0100, Tomas Vondra wrote:
>> On 19.12.2012 02:18, Andres Freund wrote:
>>> On 2012-12-17 00:31:00 +0100, Tomas Vondra wrote:
>>>
>>> I think except for the temp buffer issue mentioned below it's ready.
>>>
-DropRelFileNodeAllBuffers(RelFileNodeBackend rnode)
On 2012-12-20 02:54:48 +0100, Tomas Vondra wrote:
> On 19.12.2012 02:18, Andres Freund wrote:
> > On 2012-12-17 00:31:00 +0100, Tomas Vondra wrote:
> >
> > I think except for the temp buffer issue mentioned below it's ready.
> >
> >> -DropRelFileNodeAllBuffers(RelFileNodeBackend rnode)
> >> +DropRelFileNodeAllBuffers(RelFileNodeBackend * rnodes, int nnodes)
On 19.12.2012 02:18, Andres Freund wrote:
> On 2012-12-17 00:31:00 +0100, Tomas Vondra wrote:
>
> I think except for the temp buffer issue mentioned below it's ready.
>
>> -DropRelFileNodeAllBuffers(RelFileNodeBackend rnode)
>> +DropRelFileNodeAllBuffers(RelFileNodeBackend * rnodes, int nnodes)
>>
On 2012-12-17 00:31:00 +0100, Tomas Vondra wrote:
> I've updated the patch to include the optimization described in the
> previous post, i.e. if the number of relations is below a certain
> threshold, use a simple for loop, for large numbers of relations use
> bsearch calls.
>
> This is done by a new constant BSEARCH_LIMIT, which is set to 10 in the
> patch.
Hi,
I've updated the patch to include the optimization described in the
previous post, i.e. if the number of relations is below a certain
threshold, use a simple for loop, for large numbers of relations use
bsearch calls.
This is done by a new constant BSEARCH_LIMIT, which is set to 10 in the
patch.
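In outline, the described approach sorts the array once and then probes it per buffer when past the threshold; a sketch with assumed names (rnode_comparator as above, locking elided):

    /* sort once up front so each buffer can be probed in O(log n) */
    if (nnodes > BSEARCH_LIMIT)
        pg_qsort(nodes, nnodes, sizeof(RelFileNode), rnode_comparator);

    for (i = 0; i < NBuffers; i++)
    {
        BufferDesc *bufHdr = &BufferDescriptors[i];
        RelFileNode *rnode = NULL;

        if (nnodes > BSEARCH_LIMIT)
            rnode = bsearch(&bufHdr->tag.rnode, nodes, nnodes,
                            sizeof(RelFileNode), rnode_comparator);
        else
        {
            int         j;

            for (j = 0; j < nnodes; j++)
            {
                if (RelFileNodeEquals(bufHdr->tag.rnode, nodes[j]))
                {
                    rnode = &nodes[j];
                    break;
                }
            }
        }

        if (rnode != NULL)
            InvalidateBuffer(bufHdr);   /* locking elided */
    }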
On 10.12.2012 16:38, Andres Freund wrote:
On 2012-12-08 17:07:38 +0100, Tomas Vondra wrote:
I've done some test and yes - once there are other objects the
optimization falls short. For example for tables with one index, it
looks like this:
1) unpatched
one by one: 28.9 s
100 batches:
On 2012-12-08 17:07:38 +0100, Tomas Vondra wrote:
> On 8.12.2012 15:49, Tomas Vondra wrote:
> > On 8.12.2012 15:26, Andres Freund wrote:
> >> On 2012-12-06 23:38:59 +0100, Tomas Vondra wrote:
> >>> I've re-run the tests with the current patch on my home workstation, and
> >>> the results are these (again 10k tables, dropped either one-by-one or in
> >>> batches of 100).
On Sun, Dec 9, 2012 at 1:07 AM, Tomas Vondra wrote:
> * If you're dropping a single table, it really does not matter - the
> difference will be like 100 ms vs. 200 ms or something like that.
Did you try dropping 10,000 tables in 100 batches, the same as in the previous post?
If so, the overhead introduced
On 8.12.2012 15:49, Tomas Vondra wrote:
> On 8.12.2012 15:26, Andres Freund wrote:
>> On 2012-12-06 23:38:59 +0100, Tomas Vondra wrote:
>>> I've re-run the tests with the current patch on my home workstation, and
>>> the results are these (again 10k tables, dropped either one-by-one or in
>>> batches of 100).
On 8.12.2012 15:26, Andres Freund wrote:
> On 2012-12-06 23:38:59 +0100, Tomas Vondra wrote:
>> I've re-run the tests with the current patch on my home workstation, and
>> the results are these (again 10k tables, dropped either one-by-one or in
>> batches of 100).
>>
>> 1) unpatched
>>
>> dropping
On 2012-12-06 23:38:59 +0100, Tomas Vondra wrote:
> On 6.12.2012 05:47, Shigeru Hanada wrote:
> >> I've done a simple benchmark on my laptop with 2GB shared buffers, it's
> >> attached in the drop-test.py (it's a bit messy, but it works).
> > [snip]
> >> With those parameters, I got these numbers o
On 6.12.2012 05:47, Shigeru Hanada wrote:
> On Mon, Nov 12, 2012 at 4:36 AM, Tomas Vondra wrote:
>> Hi,
>>
>> I've prepared a slightly updated patch, based on the previous review.
>> See it attached.
>
> All changes in v3 patch seem good, however I found some places which require
> cosmetic changes.
On Mon, Nov 12, 2012 at 4:36 AM, Tomas Vondra wrote:
> Hi,
>
> I've prepared a slightly updated patch, based on the previous review.
> See it attached.
All changes in v3 patch seem good, however I found some places which require
cosmetic changes. Please see attached v3.1 patch for those changes.
Hi,
I've prepared a slightly updated patch, based on the previous review.
See it attached.
On 18.10.2012 04:28, 花田 茂 (Shigeru Hanada) wrote:
> Hi Tomas,
>
> On 2012/10/17, at 20:45, Tomas Vondra wrote:
>>
>> On 17.10.2012 12:34, Shigeru HANADA wrote:
>>> Performance test
>>>
>>> I tested 1000 tables case (each is a copy of pgbench_branches with 10
>>> rows) on 1GB shared_buffers server.
Tomas Vondra wrote:
> Hi,
>
> thanks for the review. I'll look into that in ~2 weeks, once pgconf.eu
> is over.
Excellent. Please submit the updated version to the upcoming commitfest
when you have it. I'm marking this patch Returned with Feedback.
Many thanks to Shigeru Hanada for the review.
Hi Tomas,
On 2012/10/17, at 20:45, Tomas Vondra wrote:
>
> Dne 17.10.2012 12:34, Shigeru HANADA napsal:
>> Performance test
>>
>> I tested 1000 tables case (each is a copy of pgbench_branches with 10
>> rows) on 1GB shared_buffers server. Please note that I tested on
>> MacBook
Hi,
thanks for the review. I'll look into that in ~2 weeks, once pgconf.eu
is over. A few comments in the text below.
On 17.10.2012 12:34, Shigeru HANADA wrote:
Performance test
I tested 1000 tables case (each is a copy of pgbench_branches with 10
rows) on 1GB shared_buffers server.
Hi Tomas,
Sorry to be late.
On Sat, Aug 25, 2012 at 7:36 AM, Tomas Vondra wrote:
> attached is a patch that improves performance when dropping multiple
> tables within a transaction. Instead of scanning the shared buffers for
> each table separately, the patch removes this and evicts all the tables
> in a single pass through shared buffers.
On Thu, Aug 30, 2012 at 3:17 PM, Tomas Vondra wrote:
> On 30 Srpen 2012, 17:53, Robert Haas wrote:
>> On Fri, Aug 24, 2012 at 6:36 PM, Tomas Vondra wrote:
>>> attached is a patch that improves performance when dropping multiple
>>> tables within a transaction. Instead of scanning the shared buffers for
>>> each table separately, the patch removes this and evicts all the tables
>>> in a single pass through shared buffers.
On 30 Srpen 2012, 17:53, Robert Haas wrote:
> On Fri, Aug 24, 2012 at 6:36 PM, Tomas Vondra wrote:
>> attached is a patch that improves performance when dropping multiple
>> tables within a transaction. Instead of scanning the shared buffers for
>> each table separately, the patch removes this and evicts all the tables
>> in a single pass through shared buffers.
On Fri, Aug 24, 2012 at 6:36 PM, Tomas Vondra wrote:
> attached is a patch that improves performance when dropping multiple
> tables within a transaction. Instead of scanning the shared buffers for
> each table separately, the patch removes this and evicts all the tables
> in a single pass through shared buffers.
Hi,
attached is a patch that improves performance when dropping multiple
tables within a transaction. Instead of scanning the shared buffers for
each table separately, the patch removes this and evicts all the tables
in a single pass through shared buffers.
Our system creates a lot of "working tables"