mins run. When I get time, I will run for a longer time and confirm again.
Shared Buffer= 8GB
Scale Factor=300
./pgbench -j$ -c$ -T300 -M prepared -S postgres
Client  Base    Patch
1       7057    5230
2       10043   9573
4       20140   18188
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
could impact performance? Is it possible for you to get perf data with and
> without patch and share with others?
>
I only reverted commit ac1d794 in my test; in my next run I will revert
6150a1b0 as well and retest.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Feb 10, 2016 at 7:06 PM, Dilip Kumar wrote:
>
I have tested the relation extension patch from various aspects; the performance
results and other statistical data are explained in this mail.
Test 1: Identify whether the heavyweight lock is the problem or the actual
context switching.
1. I converted
sion.
>
OK, I will test it, sometime in this week.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Mar 1, 2016 at 10:19 AM, Dilip Kumar wrote:
>
> OK, I will test it, sometime in this week.
>
I have tested this patch on my laptop, and there I did not see any
regression at 1 client.
Shared buffers 10GB, 5-minute run with pgbench, read-only test
base    patch
run1
.
If (success - failure > Threshold)
{
    /*
     * Cannot reduce it by a big number, because more requests may be getting
     * satisfied precisely because this is the correct amount; so gradually
     * decrease the pace and re-analyse the statistics next time.
     */
    ExtendByBlock--;
    failure = success = 0;
}
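A minimal compilable C sketch of the full heuristic this pseudocode implies (the increase branch and the threshold value are my assumptions, not the actual proposal):

static int ExtendByBlock = 1;   /* current batch size for relation extension */
static int success = 0;         /* requests satisfied from the FSM */
static int failure = 0;         /* requests that missed and had to wait */

#define THRESHOLD 16            /* assumed tuning constant */

static void
AdaptExtendByBlock(void)
{
    if (success - failure > THRESHOLD)
    {
        /*
         * Mostly satisfied: this may already be the right amount, so
         * decrease the pace gradually and re-analyse next time.
         */
        if (ExtendByBlock > 1)
            ExtendByBlock--;
        success = failure = 0;
    }
    else if (failure - success > THRESHOLD)
    {
        /* Too many misses: extend by more blocks next time. */
        ExtendByBlock++;
        success = failure = 0;
    }
}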
Any suggestions are welcome.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
>./pgbench -j$ -c$ -T300 -M prepared -S postgres
>> > > > >> Client  Base    Patch
>> > > > >> 1       7057    5230
>> > > > >> 2       10043   9573
>> > > > >> 4       20140   18188
And this latest result (no regression) is on x86, but on my local machine.
I have not looked closely at what this new version of the patch does differently,
so I will test this version on other machines as well and check the results.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Mar 2, 2016 at 10:31 AM, Dilip Kumar wrote:
> 1. One option can be, as you suggested, something like ProcArrayGroupClearXid, with
> some modification, because when we wait for the requests and extend based on
> them, we may again face the context-switch problem. So maybe we can
> ex
On Wed, Mar 2, 2016 at 11:05 AM, Dilip Kumar wrote:
> And this latest result (no regression) is on x86, but on my local machine.
>
> I have not looked closely at what this new version of the patch does differently,
> so I will test this version on other machines as well and check the results.
>
264
* (waiters*20) -> The first process to get the lock finds the lock waiters and
adds waiters*20 extra blocks.
In the next run I will go beyond 32 clients as well; as we can see, even at 32
clients it is still increasing, so it's clear that when it sees more contention
it adapts to the contention and adds
;
> Also, consider moving the code that adds multiple blocks at a time to
> its own function instead of including it all in line.
>
Done
Attaching the latest patch.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/
130
Client  Base  Patch
8       43    147
16      40    209
32      ---   254
64      ---   205
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 8140418..b73535c 100644
--- a/src/backend/access/
A possible case is that as soon as we extend the blocks, a new requester finds
them directly in the FSM and doesn't come for the lock, while an old waiter,
after getting the lock, finds nothing in the FSM. But IMHO even in such cases
it is good that the other waiters also extend more blocks (because this can
happen when the request flow is ve
        38894   62139
7       19857   39081   62983
8       19910   39923   75358
9       20169   39937   77481
10      20181   40003   78462
------
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
client loads of 1..64, with record sizes from 4-byte
records to 1KB records, using COPY/INSERT, and found 20 works best.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
4-byte COPY; I did not test other data
sizes with 50.)
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
2 clients, which means we are extending 32*20 = 640 pages at a time. So now
with the 4MB limit (max 512 pages) the results will look similar. So we need
to decide whether 4MB is a good limit; should I change it?
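(For reference, with the default 8kB block size the cap works out to 4MB / 8kB = 512 pages, while 32 waiters * 20 = 640 pages would be requested, so the cap binds at that load.)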
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Mar 17, 2016 at 1:31 PM, Petr Jelinek wrote:
> Great.
>
> Just small notational thing, maybe this would be simpler?:
> extraBlocks = Min(512, lockWaiters * 20);
>
Done, new patch attached.
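For context, a hedged sketch of how that expression could sit in the extension path (RelationExtensionLockWaiterCount is per this thread; the rest of the body is simplified and assumed, not the actual patch):

static void
add_extra_blocks_sketch(Relation relation)
{
    int     lockWaiters = RelationExtensionLockWaiterCount(relation);
    int     extraBlocks = Min(512, lockWaiters * 20);   /* 512 pages = 4MB at 8kB */

    while (extraBlocks-- > 0)
    {
        /* P_NEW extends the relation by one block */
        Buffer  buffer = ReadBuffer(relation, P_NEW);

        LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
        PageInit(BufferGetPage(buffer), BufferGetPageSize(buffer), 0);
        MarkBufferDirty(buffer);
        UnlockReleaseBuffer(buffer);

        /* advertise the new page in the FSM so waiters can find it */
        RecordPageWithFreeSpace(relation, BufferGetBlockNumber(buffer),
                                BLCKSZ - SizeOfPageHeaderData);
    }
}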
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --gi
32  19849  19909  39211  38837
33  19854  19932  39230  38876
34  19867  19949  39249  39088
35  19891  19990  39259  39148
36  20038  20085  39286  39453
37  20083  20128  39435  39563
38  20143  20166  39448  39959
39  20191  20198  39475  40495
40  20437  20455  40375  40664
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
limit on extra pages;
512 pages (4MB) is the maximum.
I have measured the performance as well, and it looks equally good.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 8140418..fcd42b9 100644
--- a/
dr, true));
}
}
--- 826,834
*/
do
{
! uint32 state = LockBufHdr(bufHdr);
! state &= ~(BM_VALID | BM_LOCKED);
! pg_atomic_write_u32(&bufHdr->state, state);
} while (!StartBufferIO(bufHdr, true));
It would be better to write a comment here about clearing BM_LOCKED from the
state directly, so we need not call UnlockBufHdr explicitly.
On Tue, Mar 22, 2016 at 12:31 PM, Dilip Kumar wrote:
> ! pg_atomic_write_u32(&bufHdr->state, state);
> } while (!StartBufferIO(bufHdr, true));
>
> It would be better to write a comment here about clearing BM_LOCKED from the state
> directly, so we need not call UnlockBufHdr explicitly.
We don't need an atomic
operation here; we are not yet added to the list.
+ if (nextidx != INVALID_PGPROCNO &&
+     ProcGlobal->allProcs[nextidx].clogGroupMemberPage != proc->clogGroupMemberPage)
+     return false;
+
+ pg_atomic_write_u32(&proc->clogGroupNext, nextidx);
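For context, this write sits inside a lock-free push onto the group-update list, following the same pattern as ProcArrayGroupClearXid; a hedged sketch of the surrounding loop (shape assumed from that precedent, including the clogGroupFirst name):

	/* Add ourselves to the head of the list of group-update requesters. */
	nextidx = pg_atomic_read_u32(&ProcGlobal->clogGroupFirst);
	while (true)
	{
		/* ...the same-page check from the snippet above goes here... */

		/*
		 * A plain write is enough: we are not yet on the list, so no other
		 * backend can be walking to our clogGroupNext concurrently.
		 */
		pg_atomic_write_u32(&proc->clogGroupNext, nextidx);

		if (pg_atomic_compare_exchange_u32(&ProcGlobal->clogGroupFirst,
										   &nextidx,
										   (uint32) proc->pgprocno))
			break;				/* we are now the list head */
		/* CAS failed: nextidx now holds the new head; retry. */
	}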
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
who wants to add one block, or who has got one block added along with the
extra blocks. I think this way the code is simple: everybody who comes down
here adds one block for their own use, and all the other functionality and
logic is above, i.e. whether to take the lock or not, whether to add extra
blocks or not.
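A hedged C sketch of that structure (function names follow hio.c where they exist; add_extra_blocks_sketch is the sketch shown earlier; the exact shape is assumed):

static Buffer
get_buffer_sketch(Relation relation, bool needLock)
{
    Buffer  buffer;

    if (needLock)
    {
        /* The "take the lock?" / "add extra blocks?" decisions live up here. */
        if (!ConditionalLockRelationForExtension(relation, ExclusiveLock))
        {
            LockRelationForExtension(relation, ExclusiveLock);
            /* We had to wait, so there were waiters: extend on their behalf. */
            add_extra_blocks_sketch(relation);
        }
    }

    /* ...and everybody who comes down here adds one block for self use. */
    buffer = ReadBuffer(relation, P_NEW);

    if (needLock)
        UnlockRelationForExtension(relation, ExclusiveLock);

    return buffer;
}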
slot.
I have done a performance test just to confirm the result, and performance is
the same as before, with both COPY and INSERT.
3. I have also run pgbench read-write, as Amit suggested upthread; no
regression or improvement with the pgbench workload.
Client  Base    Patch
1 899 914
8 5397 5413
32 18170
> >
> > GetNearestPageWithFreeSpace? (although not sure that's accurate
> description, maybe Nearby would be better)
> >
>
> Better than what is used in patch.
>
> Yet another possibility could be to call it as
> GetPageWithFreeSpaceExtende
263115
64 248109
Note: I think the one odd thread number can just be run-to-run variance.
Does anyone see a problem with updating the FSM tree? I have debugged and saw
that we are able to get the pages properly from the tree, and the same is
visible in the performance numbe
k3ktcmhi...@alap3.anarazel.de
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
The extra call to RelationGetNumberOfBlocks seems
> cheap enough here because the alternative is to wait for a contended
> heavyweight lock.
>
I will try the test with this also and post the results.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
copy_script
De
On Sat, Mar 26, 2016 at 3:18 PM, Dilip Kumar wrote:
> search the last, say, two pages of the FSM in all cases. But that
>> might be expensive. The extra call to RelationGetNumberOfBlocks seems
>> cheap enough here because the alternative is to wait for a contended
>> heavy
On Sat, Mar 26, 2016 at 3:18 PM, Dilip Kumar wrote:
> We could go further still and have GetPageWithFreeSpace() always
>> search the last, say, two pages of the FSM in all cases. But that
>> might be expensive. The extra call to RelationGetNumberOfBlocks seems
>> cheap en
t(s):24
NUMA node(s): 4
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
wanted to explain the same above.
> Another idea is:
>
> If ConditionalLockRelationForExtension fails to get the lock
> immediately, search the last *two* pages of the FSM for a free page.
>
> Just brainstorming here.
I think this is the better option, since we will search the last two
age where the bulk extend
> rolls over?
>
This is actually a multi-level tree, so each FSM page contains one slot tree.
fsm_search_avail() searches only the slot tree inside one FSM page,
but we want to go to the next FSM page.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
>
> What's sizeof(BufferDesc) after applying these patches? It should better
> be <= 64...
>
It is 72.
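For context, the 64-byte target is one cacheline: at 72 bytes each descriptor straddles two lines. The padded-union approach PostgreSQL uses for this looks roughly like the following (a sketch from memory; it only helps once the struct itself fits in 64 bytes):

/*
 * Pad each BufferDesc out to a full cacheline so that two descriptors
 * never share one and cause false sharing.
 */
#define BUFFERDESC_PAD_TO_SIZE  64

typedef union BufferDescPadded
{
    BufferDesc  bufferdesc;
    char        pad[BUFFERDESC_PAD_TO_SIZE];
} BufferDescPadded;

#define GetBufferDescriptor(id) (&BufferDescriptors[(id)].bufferdesc)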
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Mon, Mar 28, 2016 at 7:21 AM, Dilip Kumar wrote:
> I agree with that conclusion. I'm not quite sure where that leaves
>> us, though. We can go back to v13, but why isn't that producing extra
>> pages? It seems like it should: whenever a bulk extend rolls
On Mon, Mar 28, 2016 at 3:02 PM, Dilip Kumar wrote:
> 1. Relation size: no change in size; it's the same as base and v13.
>
> 2. INSERT 1028-byte 1000-tuple performance
> ---
> Client base v13 v15
> 1 117 124 122
> 2 111 1
't get, then search from the top.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
multi_extend_v17.patch
Description: Binary data
On Tue, Mar 29, 2016 at 2:09 PM, Dilip Kumar wrote:
>
> Attaching new version v18
- Some cleanup work on v17.
- Improved the UpdateFreeSpaceMap function.
- Performance and space utilization are the same as v17.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
multi_exte
defined(__ppc64__) || defined(__powerpc64__)
#define HAS_TEST_AND_SET
typedef unsigned int slock_t;   /* --> changed like this */
#define TAS(lock) tas(lock)
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
backend and now the freespace is not the same, it will not harm
+ * anything, because the actual freespace will be calculated by the user
+ * after getting the page.
+ */
+ UpdateFreeSpaceMap(relation, firstBlock, blockNum, freespace);
Does this look good?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprise
On Wed, Mar 30, 2016 at 7:51 AM, Dilip Kumar wrote:
> + if (lockWaiters)
> + /*
> + * Here we are using same freespace for all the Blocks, but that
> + * is Ok, because all are newly added blocks and have same freespace
> + * And even some block which we just added to Freespa
ging the multiplier and the max limit.
But I think we are OK with the max size of 4MB (512 blocks), right?
Does this test make sense?
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
80
128     431346  415121
256     380926  379176
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c
index b423aa7..04862d7 100644
---
20791
32 372633 355356
64 532052 552148
128 412755 478826
256 346701 372057
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ith_hash_value is reduced from 3.86% (master) to
1.72% (patch).
I plan to investigate further in different dynahash scenarios.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ebuild dependent .c files:
>
Yes, actually I always compile using "make clean; make -j20; make install".
If you want, I will run it again, maybe today or tomorrow, and post the
result.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
to ensure that extending the relation in multiple chunks should not
> regress such cases.
>
Ok
>
> Currently I have kept extend_num_page as a session-level parameter, but I
>> think later we can make this a table property.
>> Any suggestion on this?
>>
>>
> I think w
easing
> the pin on buffer which will be released by
> UnlockReleaseBuffer(). Get the block number before unlocking
> the buffer.
>
Good catch; I will fix this too in the next version.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
in this case HASHHDR.mutex access will be quite frequent,
and in this case I do see very good improvement on a POWER8 server.
Test result:
Scale factor: 300
Shared buffers: 512MB
pgbench -c$ -j$ -S -M prepared postgres
Client  Base    Patch
64 222173 318318
12
On Mon, Jan 25, 2016 at 11:59 AM, Dilip Kumar wrote:
1.
>> Patch is not getting compiled.
>>
>> 1>src/backend/access/heap/hio.c(480): error C2065: 'buf' : undeclared
>> identifier
>>
> Oh, my mistake; my preprocessor is ignoring this error and rela
On Thu, Jan 28, 2016 at 4:53 PM, Dilip Kumar wrote:
> I did not find any regression in the normal case.
> Note: I tested it with the previous patch, extend_num_pages=10 (GUC parameter),
> so that we can see any impact on the overall system.
>
I just forgot to mention that I have run pgbench rea
don't see any
reason why it would regress with scale factor 300.
So I will run the test again with scale factor 300, and this time I am
planning to run two cases:
1. when the data fits in shared buffers
2. when the data doesn't fit in shared buffers
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Sun, Jan 31, 2016 at 11:44 AM, Dilip Kumar wrote:
> By looking at the results with scale factors 1000 and 100, I don't see any
> reason why it would regress with scale factor 300.
>
> So I will run the test again with scale factor 300, and this time I am
> planning to run 2 c
f it can get a page with
> space from the FSM. It seems to me that we should re-check the
> availability of a page because while one backend is waiting on the extension
> lock, another backend might have added pages. To re-check the
> availability we might want to use something similar to
> LWLockAcquireOrWait() semantics as used during WAL writing.
>
I will work on this in the next version.
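A minimal sketch of that re-check idea (shape assumed, by analogy with the LWLockAcquireOrWait semantics mentioned above):

static BlockNumber
get_page_or_extend_sketch(Relation relation, Size spaceNeeded)
{
    BlockNumber targetBlock;

    LockRelationForExtension(relation, ExclusiveLock);

    /*
     * While we were waiting for the extension lock, another backend may
     * already have extended the relation; re-check the FSM before
     * extending ourselves.
     */
    targetBlock = GetPageWithFreeSpace(relation, spaceNeeded);
    if (targetBlock != InvalidBlockNumber)
    {
        UnlockRelationForExtension(relation, ExclusiveLock);
        return targetBlock;     /* reuse a page another backend added */
    }

    /* ...otherwise extend the relation while still holding the lock... */
    return InvalidBlockNumber;  /* placeholder for the extension path */
}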
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
laces which are not exposed functions,
but I don't think this will have any impact; will it?
2. lookup_type_cache: This is being called from record_in
(record_in->lookup_rowtype_tupdesc->
lookup_rowtype_tupdesc_internal->lookup_type_cache).
3. CheckFunctionValidatorAccess: This is bein
On Wed, Sep 7, 2016 at 8:52 AM, Haribabu Kommi wrote:
> I reviewed and tested the patch. The changes are fine.
> This patch provides better error message compared to earlier.
>
> Marked the patch as "Ready for committer" in commit-fest.
Thanks for the review!
--
alidatorAccess: This is being called from all
>> language validator functions.
>
> This part seems reasonable, since the validator functions are documented
> as something users might call, and CheckFunctionValidatorAccess seems
> like an apropos place to handle it.
--
Regards,
search.
> You could imagine buying back those cycles by teaching the typcache
> to be able to cache the result of getBaseTypeAndTypmod, but I'm doubtful
> that we really care. This whole setup sequence only happens once per
> query anyway.
Agreed.
--
Regards,
Dilip Kumar
En
On Fri, Sep 9, 2016 at 6:51 PM, Tom Lane wrote:
> Pushed with cosmetic adjustments --- mostly, I thought we needed some
> comments about the topic.
Okay, Thanks.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
will test with the 3rd approach also, whenever I get time.
3. Summary:
1. I can see that on head we gain almost ~30% performance at higher
client counts (128 and beyond).
2. Group lock is ~5% better compared to granular lock.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Sep 14, 2016 at 10:25 AM, Dilip Kumar wrote:
> I have tested performance with approach 1 and approach 2.
>
> 1. Transaction (script.sql): I have used the below transaction to run my
> benchmark. We can argue that this may not be an ideal workload, but I
> tested this to p
t happens there. Those cases are a lot more likely than
> these stratospheric client counts.
I tested with 64 clients as well.
1. On head we gain ~15% with both patches.
2. But group lock vs. granular lock is almost the same.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprised
> SAVEPOINT s1;
> SELECT tbalance FROM pgbench_tellers WHERE tid = :tid for UPDATE;
> SAVEPOINT s2;
> SELECT abalance FROM pgbench_accounts WHERE aid = :aid for UPDATE;
> END;
> ---
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
dSetOldestMember
+ 0.66% LockRefindAndRelease
Next I will test "update with 2 savepoints" and "select for update with
no savepoints".
I will also test the granular lock and atomic lock patches in the next run.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
mage to the SLRU abstraction layer.
>
> I agree with you; unless it shows benefit in somewhat more usual
> scenarios, we should not accept it. So shouldn't we wait for the results
> of other workloads like simple-update or tpc-b on bigger machines
> before reaching to co
On Tue, Sep 20, 2016 at 9:15 AM, Dilip Kumar wrote:
> +1
>
> My tests are running; I will post them soon.
I have some more results now.
8-socket machine
10-minute runs (median of 3 runs)
synchronous_commit = off
scale factor = 300
shared buffers = 8GB
test1: simple update (pgbench)
Clients
On Wed, Sep 21, 2016 at 8:47 AM, Dilip Kumar wrote:
> Summary:
> --
> At 32 clients no gain; I think at this workload the CLOG lock is not a problem.
> At 64 clients we can see ~10% gain with simple update and ~5% with TPC-B.
> At 128 clients we can see > 50% gain.
On Thu, Oct 5, 2017 at 8:15 PM, Robert Haas wrote:
> On Sun, Sep 17, 2017 at 7:04 AM, Dilip Kumar wrote:
>> I used lossy_pages = max(0, total_pages - maxentries / 2), as
>> suggested by Alexander.
>
> Does that formula accurately estimate the number of lossy pages
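To make the formula concrete with made-up numbers: with total_pages = 10000 and maxentries = 4096, lossy_pages = max(0, 10000 - 4096/2) = 7952, and the estimate only drops to zero once total_pages <= maxentries/2.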
On Fri, Oct 6, 2017 at 6:36 PM, Robert Haas wrote:
> On Fri, Oct 6, 2017 at 2:12 AM, Dilip Kumar wrote:
>>> The performance results look good, but that's a slightly different
>>> thing from whether the estimate is accurate.
>>>
>>> +nbuckets = tbm_
227 229
> 8   197  114  179  172
> 10  269  227  190  192
> 14  110  108  106  105
>
Thanks for the testing; the numbers look good to me.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 6, 2017 at 7:04 PM, Dilip Kumar wrote:
> On Fri, Oct 6, 2017 at 6:36 PM, Robert Haas wrote:
>> On Fri, Oct 6, 2017 at 2:12 AM, Dilip Kumar wrote:
>>>> The performance results look good, but that's a slightly different
>>>> thing
On Fri, Oct 6, 2017 at 7:24 PM, Dilip Kumar wrote:
> On Fri, Oct 6, 2017 at 6:08 PM, Alexander Kuzmenkov
> wrote:
>>
>>> Analysis: The estimated value of lossy_pages is way higher than
>>> its actual value, and the reason is that the total_pages calculated by the
&
On Fri, Oct 6, 2017 at 9:21 PM, Dilip Kumar wrote:
> On Fri, Oct 6, 2017 at 7:24 PM, Dilip Kumar wrote:
>> On Fri, Oct 6, 2017 at 6:08 PM, Alexander Kuzmenkov
>> wrote:
>>>
>>>> Analysis: The estimated value of lossy_pages is way higher than
>>
m by
itself and isshared is set for BitmapOr.
The attached patch fixes the issue for me. I will test this patch thoroughly
with other scenarios as well. Thanks for reporting.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/optimizer/plan/createplan.c b/src/
On Thu, Oct 12, 2017 at 6:37 PM, Tomas Vondra
wrote:
>
>
> On 10/12/2017 02:40 PM, Dilip Kumar wrote:
>> On Thu, Oct 12, 2017 at 4:31 PM, Tomas Vondra
>> wrote:
>>> Hi,
>>>
>>> It seems that Q19 from TPC-H is consistently failing with segfaul
on t2 (cost=0.00..1885.00
rows=10 width=8) (actual time=1.407..11.689 rows=10 loops=1)
Planning time: 0.146 ms
Execution time: 263.678 ms
(11 rows)
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
30%.
I understand that this is the worst case for PWA, where
FinalizeAggregate is getting all the tuples.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
omething cleaner,
because if we want to access the ExprContext inside
partkey_datum_from_expr then we may need to pass it to
"get_partitions_from_clauses", which is a common function for the optimizer
and the executor.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ment as well:
>
> I made an adjustment that I hope will address your concern here, made
> a few other adjustments, and committed this.
>
Thanks, Robert.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
what CPU model is Dilip using - I know it's x86, but not which
> generation it is. I'm using E5-4620 v1 Xeon, perhaps Dilip is using a newer
> model and it makes a difference (although that seems unlikely).
I am using "Intel(R) Xeon(R) CPU E7- 8830 @ 2.13
On Thu, Sep 29, 2016 at 8:05 PM, Robert Haas wrote:
> OK, another theory: Dilip is, I believe, reinitializing for each run,
> and you are not.
Yes, I am reinitializing for each run.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
_SIZE
$9 = 1048576
In dsa-v1 the problem did not exist because DSA_MAX_SEGMENTS was 1024,
but in dsa-v2 I think it is calculated wrongly.
(gdb) p DSA_MAX_SEGMENTS
$10 = 16777216
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ket_head < iterator->last_item_pointer) &&
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
same can be
that HeapKeyTest is much simpler than ExecQual. It's
possible that in the future, when we try to support a wider variety of keys,
the gain at high selectivity may come down.
WIP patch attached.
Thoughts?
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
heap_scan
iance (because earlier I never saw this regression; I can
confirm again with multiple runs).
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
[@power2 ~]$ uname -mrs
Linux 3.10.0-229.14.1.ael7b.ppc64le ppc64le
[@power2 ~]$ lscpu
Architecture: ppc64le
Byte Order:
cide whether the operators are safe or not based on their
datatype?
What I mean is that instead of checking the safety of each operator, like
texteq(), text_le(), ...,
we can directly discard any operator involving such data types.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
://git.postgresql.org/pg/commitdiff/75ae538bc3168bf44475240d4e0487ee2f3bb376
On Fri, Oct 7, 2016 at 11:46 AM, Dilip Kumar wrote:
> Hi Hackers,
>
> I would like to propose a parallel bitmap heap scan feature. After
> running the TPC-H benchmark, it was observed that many of the TPC-H queries are
> us
n so no impact.
For Q14 and Q15, the time spent in the BitmapIndex node is < 5% of the time
spent in the BitmapHeap node. For Q6 it's 20%, but I did not see much impact
on my local machine. However, I will take complete performance readings and
post the data from my actual performance machine.
--
Regards,
Di
in shared memory (my current approach),
or we need to copy each hash element to a shared location (I think this
is going to be expensive).
Let me know if I am missing something.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Execution time: 6669.195 ms
(13 rows)
Summary:
-> With the patch, overall execution is 2x faster compared to head.
-> Bitmap creation with the patch is a bit slower compared to head, and that's
because of DHT vs. an efficient hash table.
I found one defect in the v2 patch that I introduced during the last reb
| wal_insert
30 BufferPin | BufferPin
10 LWLockTranche | proc
6 LWLockTranche | buffer_mapping
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
of individual runs etc.
I saw your report; I think presenting it this way gives a very clear idea.
>
> If you want to cooperate on this, I'm available - i.e. I can help you get
> the tooling running, customize it etc.
That would be really helpful; then next time I can also present m
ontention on ClogControlLock becomes much worse?
I will run this test and post the results.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 21, 2016 at 7:57 AM, Dilip Kumar wrote:
> On Thu, Oct 20, 2016 at 9:03 PM, Tomas Vondra
> wrote:
>
>> In the results you've posted on 10/12, you've mentioned a regression with 32
>> clients, where you got 52k tps on master but only 48k tps with the p
til we find the first non-pushable qual (this way
we can maintain the same qual execution order as in the
existing system).
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
re very low and fixed "1".
Do we really need to take care of a user-defined function that is
declared with a very low cost?
Because while building index conditions we also don't take care of
such things. Index conditions will always be evaluated first, and only then
will the filter be ap
That would be an
>> interesting thing to mention in the summary, I think.
>>
>
> One thing is clear that all results are on either
> synchronous_commit=off or on unlogged tables. I think Dilip can
> answer better which of those are on unlogged and which on
> synchr
ut with that I did not see any
regression in v1).
2. (v1 + use_slot_in_HeapKeyTest) is always the winner, even at very high selectivity.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
-11-01 14:31:52.235 IST [72343] ERROR: cannot copy to view "ttt_v"
2016-11-01 14:31:52.235 IST [72343] STATEMENT: COPY ttt_v FROM stdin;
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com