>From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>> I measured the memory context accounting overhead using Tomas's tool
>> palloc_bench, which he made a while ago in the simil
>From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>> [Size=800, iter=1,000,000]
>> Master  | 15.763
>> Patched | 16.262 (+3%)
>>
>> [Size=32768, iter=1,000,000]
>> Master  | 61.3076
>> Patched | 62.9566 (+2%)
>
>What's the unit, second or millisecond?
Millisecond.
>Why is the number of d
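For reference, the accounting being measured here is conceptually just one
counter update per block the allocator obtains or releases. A minimal sketch,
assuming a mem_allocated field added to MemoryContextData (the names are
illustrative, not necessarily the patch's):

    /*
     * Hypothetical helper: track the bytes a context holds in malloc'd
     * blocks.  Called wherever the allocator grabs or releases a block,
     * so the overhead is roughly one addition per block operation.
     */
    static inline void
    MemoryContextTrack(MemoryContext context, Size delta, bool grow)
    {
        if (grow)
            context->mem_allocated += delta;
        else
            context->mem_allocated -= delta;
    }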
>From: Robert Haas [mailto:robertmh...@gmail.com]
>On Thu, Mar 7, 2019 at 9:49 AM Tomas Vondra
>wrote:
>> I don't think this shows any regression, but perhaps we should do a
>> microbenchmark isolating the syscache entirely?
>
>Well, if we need the LRU list, then yeah I think a microbenchmark woul
>From: Vladimir Sitnikov [mailto:sitnikov.vladi...@gmail.com]
>
>Robert> This email thread is really short on clear demonstrations that X
>Robert> or Y is useful.
>
>It is useful when the whole database does **not** crash, isn't it?
>
>Case A (==current PostgreSQL mode): syscache grows, then OOMkil
>From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
>
>> Personally, I don't find this hint particularly necessary. The
>> session was terminated because nothing was happening, so the real fix
>> on the application side is probably more involved than just retrying.
>> This is different from some of th
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Does this result show that the hard-limit size option with memory accounting
>does no harm
>to ordinary users who disable the hard-limit size option?
Hi,
I've implemented relation cache size limitation with LRU lis
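A rough sketch of the shape such an LRU could take, using the existing dlist
API from lib/ilist.h. The entry type, list, size bookkeeping, and eviction
routine below are illustrative assumptions, not the patch itself:

    #include "lib/ilist.h"

    static dlist_head rel_lru_list;             /* oldest entries at the head */

    /* on every access, move the entry to the most-recently-used end */
    static void
    relcache_lru_touch(RelCacheEntry *entry)    /* hypothetical entry type */
    {
        dlist_delete(&entry->lru_node);
        dlist_push_tail(&rel_lru_list, &entry->lru_node);
    }

    /* evict from the cold end until we are back under the limit */
    static void
    relcache_lru_enforce(Size *total_size, Size limit)
    {
        while (*total_size > limit && !dlist_is_empty(&rel_lru_list))
        {
            RelCacheEntry *victim =
                dlist_container(RelCacheEntry, lru_node,
                                dlist_head_node(&rel_lru_list));

            *total_size -= victim->size;        /* assumed per-entry size */
            relcache_evict(victim);             /* hypothetical eviction */
        }
    }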
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Wednesday, December 5, 2018 2:42 PM
>Subject: RE: Copy data to DSA area
Hi
It's been a long while since we discussed this topic.
Let me recap first and I'll give some thoughts.
It seems things we got conse
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Do you have any thoughts?
>
Hi, I updated my idea, hoping to get some feedback.
[TL; DR]
The basic idea is following 4 points:
A. User can choose which database to put a cache (relation and catalog) on
shared memory and
Hi, I've updated Thomas's quick PoC.
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Wednesday, April 17, 2019 2:07 PM
>>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>Sent: Wednesday, December 5, 2018 2:42 PM
>>Subject: R
Hi,
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Friday, April 26, 2019 11:50 PM
>Well, after developing the PoC, I realized that this PoC doesn't handle the case
>where the local process
>crashes before the context becomes shared, because the local process keeps track o
Hi, Thomas
>-----Original Message-----
>From: Thomas Munro [mailto:thomas.mu...@gmail.com]
>Subject: Re: Copy data to DSA area
>
>On Wed, May 8, 2019 at 5:29 PM Ideriha, Takeshi
>
>wrote:
>> >From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>
Hi,
>>From: Thomas Munro [mailto:thomas.mu...@gmail.com]
>>Subject: Re: Copy data to DSA area
>>
>>On Wed, May 8, 2019 at 5:29 PM Ideriha, Takeshi
>>
>>wrote:
>>> >From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>> >Sent
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>[TL; DR]
>The basic idea is following 4 points:
>A. User can choose which database to put a cache (relation and catalog) on
>shared
>memory and how much memory is used
>B. Caches of committed data are on the
Hi, let me clarify my understanding about the $title.
It seems that the number of hash partitions is fixed at 128 in dshash, and
right now we cannot change it unless the dshash.c code itself is changed, right?
According to the comment in dshash.c, DSHASH_NUM_PARTITIONS could become a runtime
parameter in f
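For reference, the relevant definitions in dshash.c are compile-time constants
along these lines (the partition-selection macro is paraphrased, not quoted):

    /* dshash.c: partition count is fixed when the server is built;
     * each partition is guarded by its own LWLock */
    #define DSHASH_NUM_PARTITIONS_LOG2  7
    #define DSHASH_NUM_PARTITIONS       (1 << DSHASH_NUM_PARTITIONS_LOG2)  /* 128 */

    /* a hash value maps to a fixed partition (paraphrased) */
    #define PARTITION_FOR_HASH(hash)    ((hash) % DSHASH_NUM_PARTITIONS)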
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>> This would cause some waste of memory on DSA because some partitions
>> (buckets)
>> are allocated but not used.
>>
>> So I'm thinking that current dshash design is still ok but flexible
>> size of partition is appropriate for some
Hi,
Related to my development (putting relcache and catcache onto shared memory)[1],
I have some questions about how to copy variables into shared memory,
especially into a DSA area, and how to implement it.
Under the current architecture when initializing some data, we sometimes copy
certain data
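For the simplest case (self-contained data with no internal pointers), a flat
copy with the existing DSA API is enough. A minimal sketch; copy_to_dsa() is an
illustrative helper, not an existing function:

    #include "postgres.h"
    #include "utils/dsa.h"

    /*
     * Flat-copy a local chunk into a DSA area.  Only safe for data holding
     * no native pointers; pointer-bearing structures must be rewritten to
     * store dsa_pointer offsets instead.
     */
    static dsa_pointer
    copy_to_dsa(dsa_area *area, const void *src, size_t size)
    {
        dsa_pointer dp = dsa_allocate(area, size);  /* errors out on OOM */

        memcpy(dsa_get_address(area, dp), src, size);
        return dp;
    }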
Hi, thank you for all the comments.
They're really helpful.
>From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
>Sent: Wednesday, November 7, 2018 1:35 PM
>
>On Wed, Nov 7, 2018 at 3:34 PM Ideriha, Takeshi
>
>wrote:
>> Related to my development (putting relc
Thank you for the comment.
From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
>> I'm thinking of going with plan 1. Not needing to think about address
>> translation seems tempting. Plan 2 (as well as plan 3) looks like a big project.
>
>The existing function dsa_create_in_place() interface was intende
From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
>
>On Tue, Nov 13, 2018 at 9:45 AM Robert Haas wrote:
>> On Thu, Nov 8, 2018 at 9:05 PM Thomas Munro
>> wrote:
>> > * I had some ideas about some kind of "allocation rollback" interface:
>> > you begin an "allocation transaction", allocat
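To make the quoted "allocation rollback" idea concrete, it might look roughly
like the sketch below. This is an entirely hypothetical interface, not anything
that exists in dsa.c:

    #include "postgres.h"
    #include "utils/dsa.h"

    #define MAX_TXN_CHUNKS 64           /* arbitrary bound for this sketch */

    typedef struct dsa_alloc_txn
    {
        dsa_area   *area;
        dsa_pointer chunks[MAX_TXN_CHUNKS];
        int         nchunks;
    } dsa_alloc_txn;

    /* allocate within the "allocation transaction", remembering the chunk */
    static dsa_pointer
    dsa_txn_allocate(dsa_alloc_txn *txn, size_t size)
    {
        dsa_pointer dp = dsa_allocate(txn->area, size);

        Assert(txn->nchunks < MAX_TXN_CHUNKS);
        txn->chunks[txn->nchunks++] = dp;
        return dp;
    }

    /* if the copy is abandoned partway, free everything allocated so far */
    static void
    dsa_txn_rollback(dsa_alloc_txn *txn)
    {
        for (int i = txn->nchunks - 1; i >= 0; i--)
            dsa_free(txn->area, txn->chunks[i]);
        txn->nchunks = 0;
    }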
Hello, thank you for updating the patch.
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>At Thu, 4 Oct 2018 04:27:04 +0000, "Ideriha, Takeshi"
> wrote in
><4E72940DA2BF16479384A86D54D0988A6F1BCB6F@G01JPEXMBKW04>
>> >As a *PoC*, in the att
Hi,
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Wednesday, October 3, 2018 3:18 PM
>At this moment this patch only allocates the catalog cache header and CatCache
>data on
>the shared memory area.
Regarding this allocation stuff, I'm trying to handle it in ano
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>I haven't looked into the code but I'm going to do it later.
Hi, I've taken a look at the 0001 patch. I will review the rest of the patch later.
Hi
>From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
>Sent: Wednesday, November 14, 2018 9:50 AM
>To: Ideriha, Takeshi/出利葉 健
>
>On Tue, Nov 13, 2018 at 10:59 PM Ideriha, Takeshi
>
>wrote:
>> Can I check my understanding?
>> The situation you are talkin
>From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
>Sent: Wednesday, November 28, 2018 12:18 PM
>To: pgsql-hackers@lists.postgresql.org
>Subject: idle-in-transaction timeout error does not give a hint
>
>The idle-in-transaction timeout error closed the session. I think in this case
>the error
>message sh
>> Hi, it makes sense to me. One can submit the transaction again, the same as in
>> the other cases you mentioned.
>>
>> I didn't attach the patch but according to my simple experiment in
>> psql the output would become the following:
>>
>> FATAL: terminating connection due to idle-in-transaction timeout
>> HINT
Hi.
I found a minor typo in dsa.c.
s/set_dsa_size_limit/dsa_set_size_limit/
regards,
==
Takeshi Ideriha
Fujitsu Limited
0001-Minor-typo-in-dsa.c.patch
Hi
>> > I suggest you go with just syscache_prune_min_age, get that into PG
>> > 12, and we can then reevaluate what we need. If you want to
>> > hard-code a minimum cache size where no pruning will happen, maybe
>> > based on the system catalogs or typical load, that is fine.
>>
>> Please forgiv
>From: br...@momjian.us [mailto:br...@momjian.us]
>On Mon, Feb 4, 2019 at 08:23:39AM +0000, Tsunakawa, Takayuki wrote:
>> Horiguchi-san, Bruce, all, So, why don't we make
>> syscache_memory_target the upper limit on the total size of all
>> catcaches, and rethink the past LRU management?
>
>I was g
>From: Jamison, Kirk [mailto:k.jami...@jp.fujitsu.com]
>On the other hand, the simplest method I thought of that could also work is to
>cache only
>the file size (nblock) in shared memory, not in the backend process, since
>both nblock
>and relsize_change_counter are uint32 data type anyway. If
>r
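A sketch of how that could look; everything below (the helper, the shared
counter, and its assumed atomicity) is illustrative, not an existing API:

    #include "postgres.h"
    #include "storage/smgr.h"

    static BlockNumber cached_nblocks = InvalidBlockNumber;
    static uint32      cached_counter = 0;

    /* trust the locally cached relation size only while the shared change
     * counter has not moved */
    static BlockNumber
    get_nblocks_cached(SMgrRelation reln, volatile uint32 *shared_counter)
    {
        uint32      cur = *shared_counter;      /* assumed atomic to read */

        if (cached_nblocks == InvalidBlockNumber || cur != cached_counter)
        {
            cached_nblocks = smgrnblocks(reln, MAIN_FORKNUM);
            cached_counter = cur;
        }
        return cached_nblocks;
    }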
Hi, thanks for the recent rapid work.
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>At Tue, 5 Feb 2019 19:05:26 -0300, Alvaro Herrera
>wrote in <20190205220526.GA1442@alvherre.pgsql>
>> On 2019-Feb-05, Tomas Vondra wrote:
>>
>> > I don't think we need to remove the expired entri
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>I made a rerun of benchmark using "-S -T 30" on the server build with no
>assertion and
>-O2. The numbers are the best of three successive attempts. The patched
>version is
>running with cache_target_memory = 0, cache_prune_min_a
>From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
>But it's a bit funnier, because there's also DropRelationFiles() which does
>smgrclose on
>a batch of relations too, and it says this
>
>/*
> * Call smgrclose() in reverse order as when smgropen() is called.
> * This trick enab
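The quoted pattern boils down to a reverse-order loop, roughly (paraphrasing
DropRelationFiles()):

    /* Close in the reverse order of the smgropen() calls.  Paraphrased:
     * because smgrclose() unlinks the entry from the unowned SMgrRelation
     * list, reverse order keeps that list manipulation cheap. */
    for (i = ndelrels - 1; i >= 0; i--)
        smgrclose(srels[i]);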
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>
>
>(2) Another new patch v15-0005 on top of previous design of
> limit-by-number-of-a-cache feature converts it to
> limit-by-size-on-all-caches feature, which I think is
> Tsunakawa-san wanted.
Yeah, size looks better to me.
>
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>About the new global-size-based eviction (2), cache entry creation
>>becomes slow after the total size reaches the limit, since every
>>new entry evicts one or more old (=
>>not-recently-used) entr
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>But at the same time, I did some benchmarking with only the hard-limit option
>enabled and
>the time-related option disabled, because figures for this case are not
>provided in this
>thread.
>So let me share them.
I
>From: Tsunakawa, Takayuki
>>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>> number of tables | 100   | 1000  | 1
>> -----------------+-------+-------+------
>> TPS (master)     | 10966 | 10654 | 9099
>> TPS (patch)
>From: Tsunakawa, Takayuki
>Ideriha-san,
>Could you try simplifying the v15 patch set to see how simple the code would
>look? That is:
>
>* 0001: add dlist_push_tail() ... as is
>* 0002: memory accounting, with correction based on feedback
>* 0003: merge the original 0003 and 0005, with c
>>From: Tsunakawa, Takayuki
>>Ideriha-san,
>>Could you try simplifying the v15 patch set to see how simple the code
>>would look? That is:
>>
>>* 0001: add dlist_push_tail() ... as is
>>* 0002: memory accounting, with correction based on feedback
>>* 0003: merge the original 0003 and 0005,
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>>* 0001: add dlist_push_tail() ... as is
>>>* 0002: memory accounting, with correction based on feedback
>>>* 0003: merge the original 0003 and 0005, with correction based on
>>>feedback
>
>From: Robert Haas [mailto:robertmh...@gmail.com]
>
>On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
> wrote:
>> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
>
>As Tom has said before and will probably say again, I don't think you actually
>want that.
>We know t
>From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
>
>Hi Ideriha-san,
>
> Hi, it makes sense to me. One can submit the transaction again, the same as in
> the other cases you mentioned.
>
> I didn't attach the patch but according to my simple experiment in
> psql the output would become the follow
>From: Tatsuo Ishii [mailto:is...@sraoss.co.jp]
>Subject: Re: idle-in-transaction timeout error does not give a hint
>
>>Alternative HINT message would be something like:
>>
>>HINT: In a moment you should be able to reconnect to the
>> database and restart your transaction.
>>>
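In code terms the proposal amounts to attaching an errhint to the existing
FATAL report, along these lines (a sketch; the exact wording was still under
discussion in this thread):

    ereport(FATAL,
            (errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
             errmsg("terminating connection due to idle-in-transaction timeout"),
             /* proposed addition */
             errhint("In a moment you should be able to reconnect to the"
                     " database and restart your transaction.")));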
Hi, thank you for the comment.
>-----Original Message-----
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>Sent: Wednesday, November 28, 2018 5:09 PM
>
>Hello.
>
>At Wed, 28 Nov 2018 05:13:26 +0000, "Ideriha, Takeshi"
> wrote in
><
Hello,
Sorry for the delay.
The detailed comments for the source code will be provided later.
>> I just thought that the pair of ageclass and nentries could be
>> represented as JSON or a multi-dimensional array, but essentially they are
>> all the same and can be converted to each other using some functions. So
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>The detailed comments for the source code will be provided later.
Hi,
I'm adding some comments to the 0001 and 0002 patches.
Hi
cc’ed pgsql-translators
I personally use ‘poedit’ when working with po files.
Takeshi Ideriha
Fujitsu Limited
From: Дмитрий Воронин [mailto:carriingfat...@yandex.ru]
Sent: Thursday, December 20, 2018 12:54 AM
To: pgsql-hack...@postgresql.org
Subject: Localization tools
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Friday, April 26, 2019 11:50 PM
>To: 'Kyotaro HORIGUCHI' ;
>thomas.mu...@enterprisedb.com; robertmh...@gmail.com
>Cc: pgsql-hack...@postgresql.org
>Subject: RE: Copy data to DSA area
>
>Hi, I
Hi, everyone.
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>My current thoughts:
>- Each catcache has (maybe partial) HeapTupleHeader
>- put every catcache on shared memory and no local catcache
>- but catcache for aborted tuple is not put on shared memory
>-
Hi, thank you for the previous two emails.
Thomas Munro wrote:
>What do you think about the following? Even though I know you want to start
>with
>much simpler kinds of cache, I'm looking ahead to the lofty end-goal of having
>a shared
>plan cache. No doubt, that involves solving many other
Hi,
When I tried to use libpq, I found that the libpq example[1] does not work.
That's because SELECT pg_catalog.set_config() never returns PGRES_COMMAND_OK,
which the example expects,
but actually returns PGRES_TUPLES_OK.
Patch attached.
I changed both src/test/examples and the documentation.
[1] https://www.postgr
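The corrected check as a small standalone program (a sketch of the fix;
connection parameters are taken from the environment):

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    static void
    exit_nicely(PGconn *conn)
    {
        PQfinish(conn);
        exit(1);
    }

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
            exit_nicely(conn);

        /* SELECT produces a one-row result even for set_config(), so the
         * status to test is PGRES_TUPLES_OK, not PGRES_COMMAND_OK */
        res = PQexec(conn,
                     "SELECT pg_catalog.set_config('search_path', '', false)");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "SET failed: %s", PQerrorMessage(conn));
            PQclear(res);
            exit_nicely(conn);
        }
        PQclear(res);

        PQfinish(conn);
        return 0;
    }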
>-----Original Message-----
>From: Peter Eisentraut [mailto:peter.eisentr...@2ndquadrant.com]
>Sent: Sunday, July 1, 2018 9:17 PM
>To: Ideriha, Takeshi/出利葉 健 ; pgsql-hackers
>
>Subject: Re: libpq example doesn't work
>
>On 27.06.18 08:29, Ideriha, Takeshi wrote:
>-----Original Message-----
>From: AJG [mailto:ay...@gera.co.nz]
>Sent: Wednesday, June 27, 2018 3:21 AM
>To: pgsql-hack...@postgresql.org
>Subject: Re: Global shared meta cache
>
>Ideriha, Takeshi wrote
>> 2) benchmarked 3 times for each condition and got t
>-----Original Message-----
>From: se...@rielau.com [mailto:se...@rielau.com]
>Sent: Wednesday, June 27, 2018 2:04 AM
>To: Ideriha, Takeshi/出利葉 健 ; pgsql-hackers
>
>Subject: RE: Global shared meta cache
>
>Takeshi-san,
>
>
>>My customer created hundreds of tho
Hi,
> The new structure member appears out of place; can you move it up along
> with the other "command-line long options"?
>
>
>
>Done
>
I did regression tests (make check-world) and
manually checked that pg_dump --on-conflict-do-nothing works properly.
Also it seems to me the code has no prob
Hi, thanks for the revision.
>
>+Add ON CONFLICT DO NOTHING clause in the INSERT commands.
>
>I think this would be better as: Add ON CONFLICT DO NOTHING
>to
>INSERT commands.
Agreed.
>+printf(_(" --on-conflict-do-nothing dump data as INSERT
>commands with ON CONFLICT DO NOTHIN
>I noticed one more thing: pg_dumpall.c doesn't really need to prohibit
>--on-conflict-do-nothing without --insert. Its existing validation rejects
>illegal
>combinations of the settings that are *not* passed on to pg_dump. It seems OK
>to
>just pass those on and let pg_dump complain. For exam
Hi, Konstantin
>Hi,
>I really think that we need to move to global caches (and especially catalog
>caches) in
>Postgres.
>Modern NUMA servers may have hundreds of cores, and to be able to utilize all
>of them,
>we may need to start a large number (hundreds) of backends.
>Memory overhead of local ca
Hi,
I noticed that if log_min_messages is set to ‘debug2’, it shows ‘debug’ instead
of debug2.
Other debug options like debug1 are shown normally.
The same happens for client_min_messages.
According to a033daf56 and baaad2330, debug is an alias for debug2, kept for
backward compatibility.
And also
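The machinery involved is the enum-option tables in guc.c: whether SHOW prints
"debug" or "debug2" comes down to how the value-to-name lookup treats the alias
entry, i.e. its position in the table and its hidden flag. A paraphrased
sketch, not the exact guc.c contents:

    /* from guc.h */
    struct config_enum_entry
    {
        const char *name;
        int         val;
        bool        hidden;     /* e.g. excluded from lists of valid values */
    };

    /* paraphrased: "debug" is an alias carrying the same value as "debug2",
     * so a value-to-name lookup sees two candidate entries for DEBUG2 and
     * the table layout decides which name wins */
    static const struct config_enum_entry server_message_level_options[] = {
        {"debug", DEBUG2, true},
        {"debug5", DEBUG5, false},
        {"debug4", DEBUG4, false},
        {"debug3", DEBUG3, false},
        {"debug2", DEBUG2, false},
        {"debug1", DEBUG1, false},
        {"info", INFO, false},
        {"notice", NOTICE, false},
        {"warning", WARNING, false},
        {"error", ERROR, false},
        {"log", LOG, false},
        {"fatal", FATAL, false},
        {"panic", PANIC, false},
        {NULL, 0, false}
    };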
>-----Original Message-----
>From: Robert Haas [mailto:robertmh...@gmail.com]
>OK, I'm happy enough to commit it then, barring other objections. I was just
>going to
>do that, but then I realized we're in feature freeze right now, so I
>suppose this
>should go into the next CommitFest.
Than
on" should be "These functions".
(In the next sentence "these functions" is used.)
Regards,
Ideriha, Takeshi
release_note_typo.patch
Hi,
When I looked into dshash.c, I noticed that dshash.c, bipartite_match.c and
knapsack.c are not mentioned in the README.
The other files in src/backend/lib are mentioned. I'm not sure whether this is
intentional or just an oversight.
Does anyone have opinions?
Patch attached.
- add summary of
>Seems reasonable. Pushed, thanks!
>
>- Heikki
>
Thanks for the quick work!
Ideriha, Takeshi
>From: Surafel Temesgen [mailto:surafel3...@gmail.com]
>Subject: ON CONFLICT DO NOTHING on pg_dump
>Sometimes I have to maintain two similar databases and I have to update one
>from the other, and I noticed that having the option to add an ON CONFLICT DO
>NOTHING clause to INSERT commands in the dump data w
>-----Original Message-----
>From: Nico Williams [mailto:n...@cryptonector.com]
>On Tue, Jun 12, 2018 at 09:05:23AM +, Ideriha, Takeshi wrote:
>> >From: Surafel Temesgen [mailto:surafel3...@gmail.com]
>> >Subject: ON CONFLICT DO NOTHING on pg_dump
>>
>
Hi,
>-----Original Message-----
>From: Surafel Temesgen [mailto:surafel3...@gmail.com]
>thank you for the review
>
> Do you have any plan to support on-conflict-do-update? Supporting this
> seems complicated to me and would take much time, so I don't mind not
> implementing it.
>
>
>i agree its co
Hi, hackers!
My customer created hundreds of thousands of partition tables and tried to
select data from hundreds of applications,
which resulted in enormous memory consumption: the number of
backends multiplied by the per-backend local cache size (e.g., 100 backends x 1GB = 100GB).
Relation caches a
>> I agree with you, though supporting MERGE or ON-CONFLICT-DO-UPDATE seems
>> like hard work.
>> Only the ON-CONFLICT-DO-NOTHING use case may be narrow.
>
>Is it narrow, or is it just easy enough to add quickly?
Sorry for the late reply.
I read your comment and rethought about it.
What I meant by "narrow" is th
e fixed.
>As we are at the end of the commitfest, it would be better to move it
>to the next commitfest and provide an updated patch to solve the
>above problem.
I totally agree.
I moved it to the next CF with status Waiting on Author.
Regards,
Ideriha Takeshi
Hi,
>Subject: Re: Protect syscache from bloating with negative cache entries
>
>Hello. The previous v4 patchset was just broken.
>Somehow 0004 was merged into 0003, and applying 0004 results in
>failure. I
>removed the 0004 part from 0003, rebased, and reposted it.
I have some questions
Hi,
I'm trying to use the DSA API and am a little confused about $subject.
Some return values and arguments are defined as size_t,
but others as Size.
Example:
- dsa_area *dsa_create_in_place(void *place, size_t size, int tranche_id, dsm_segment *segment)
- Size dsa_
>> As a non-expert developer's opinion, I think the mixing of Size and size_t
>> makes the source code difficult to understand.
>
>Agreed. Let's change them all to size_t and back-patch that to keep future
>back-patching easy. Patch attached.
Thank you for the quick action. I'm happy now.
I confirmed
Hi,
Thank you for the discussion a while ago.
I'm afraid I haven't replied to everything.
To move this development forward I attached a PoC patch.
I introduced a GUC called shared_catacache_mem to specify
how much memory is supposed to be allocated on the shared memory area.
It defaults to zero,
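In guc.c such a knob would be declared roughly like the ConfigureNamesInt entry
below (a sketch based on the description in this mail, not a committed patch;
the GUC name follows the mail):

    /* variable backing the GUC (sketch) */
    static int  shared_catacache_mem = 0;

    {
        {"shared_catacache_mem", PGC_POSTMASTER, RESOURCES_MEM,
            gettext_noop("Sets the amount of shared memory used for catalog caches."),
            NULL,
            GUC_UNIT_KB
        },
        &shared_catacache_mem,
        0, 0, INT_MAX,          /* boot value 0 = feature disabled */
        NULL, NULL, NULL
    },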
Hi, thank you for the explanation.
>From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>>
>> Can I confirm my understanding about catcache pruning?
>> syscache_memory_target is the max figure per CatCache.
>> (Every CatCache has the same max value.) So the total max size of
>> catalog caches is estimat