At Mon, 22 Mar 2021 13:12:10 -0400, Bruce Momjian wrote in
> On Thu, Jan 28, 2021 at 05:16:52PM +0900, Kyotaro Horiguchi wrote:
> > At Thu, 28 Jan 2021 16:50:44 +0900 (JST), Kyotaro Horiguchi
> > wrote in
> > > I was going to write in the doc something like "you can inspect memory
> > > consum
On Thu, Jan 28, 2021 at 05:16:52PM +0900, Kyotaro Horiguchi wrote:
> At Thu, 28 Jan 2021 16:50:44 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > I was going to write in the doc something like "you can inspect memory
> > consumption by catalog caches using pg_backend_memory_contexts", but
> > all
At Thu, 28 Jan 2021 16:50:44 +0900 (JST), Kyotaro Horiguchi
wrote in
> I was going to write in the doc something like "you can inspect memory
> consumption by catalog caches using pg_backend_memory_contexts", but
> all the memory used by catalog cache is in CacheMemoryContext. Is it
> sensible
At Wed, 27 Jan 2021 13:11:55 +0200, Heikki Linnakangas wrote
in
> On 27/01/2021 03:13, Kyotaro Horiguchi wrote:
> > At Thu, 14 Jan 2021 17:32:27 +0900 (JST), Kyotaro Horiguchi
> > wrote in
> >> The commit 4656e3d668 (debug_invalidate_system_caches_always)
> >> conflicted with this patch. Rebase
On 27/01/2021 03:13, Kyotaro Horiguchi wrote:
At Thu, 14 Jan 2021 17:32:27 +0900 (JST), Kyotaro Horiguchi
wrote in
The commit 4656e3d668 (debug_invalidate_system_caches_always)
conflicted with this patch. Rebased.
At Wed, 27 Jan 2021 10:07:47 +0900 (JST), Kyotaro Horiguchi
wrote in
(I fou
At Thu, 14 Jan 2021 17:32:27 +0900 (JST), Kyotaro Horiguchi
wrote in
> The commit 4656e3d668 (debug_invalidate_system_caches_always)
> conflicted with this patch. Rebased.
At Wed, 27 Jan 2021 10:07:47 +0900 (JST), Kyotaro Horiguchi
wrote in
> (I found a bug in a benchmark-aid function
> (Cat
At Tue, 26 Jan 2021 11:43:21 +0200, Heikki Linnakangas wrote
in
> Hi,
>
> On 19/11/2020 07:25, Kyotaro Horiguchi wrote:
> > Performance measurement on the attached showed better result about
> > searching but maybe worse for cache entry creation. Each time number
> > is the mean of 10 runs.
>
Hi,
On 19/11/2020 07:25, Kyotaro Horiguchi wrote:
Performance measurement on the attached showed better result about
searching but maybe worse for cache entry creation. Each time number
is the mean of 10 runs.
# Catcache (negative) entry creation
: time(ms) (% to master)
master
Hello.
The commit 4656e3d668 (debug_invalidate_system_caches_always)
conflicted with this patch. Rebased.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
From ec069488fd2675369530f3f967f02a7b683f0a7f Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi
Date: Wed, 18 Nov 2020 16:54:3
Ah. It was obvious from the first.
Sorry for the sloppy diagnosis.
At Fri, 20 Nov 2020 16:08:40 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Thu, 19 Nov 2020 15:23:05 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > At Wed, 18 Nov 2020 21:42:02 -0800, Andres Freund
> > wrote in
> > > Hi,
>
At Thu, 19 Nov 2020 15:23:05 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Wed, 18 Nov 2020 21:42:02 -0800, Andres Freund wrote
> in
> > Hi,
> >
> > On 2020-11-19 14:25:36 +0900, Kyotaro Horiguchi wrote:
> > > # Creation, searching and expiration
> > > master : 6393.23(100.0)
> > > p
At Wed, 18 Nov 2020 21:42:02 -0800, Andres Freund wrote in
> Hi,
>
> On 2020-11-19 14:25:36 +0900, Kyotaro Horiguchi wrote:
> > # Creation, searching and expiration
> > master : 6393.23(100.0)
> > patched-off: 6527.94(102.1)
> > patched-on : 15880.01(248.4)
>
> What's the deal
Hi,
On 2020-11-19 14:25:36 +0900, Kyotaro Horiguchi wrote:
> # Creation, searching and expiration
> master : 6393.23(100.0)
> patched-off: 6527.94(102.1)
> patched-on : 15880.01(248.4)
What's the deal with this massive increase here?
Greetings,
Andres Freund
Thank you for the comments.
At Tue, 17 Nov 2020 16:22:54 -0500, Robert Haas wrote
in
> On Tue, Nov 17, 2020 at 10:46 AM Heikki Linnakangas wrote:
> > 0.7% degradation is probably acceptable.
>
> I haven't looked at this patch in a while and I'm pleased with the way
> it seems to have been red
At Tue, 17 Nov 2020 17:46:25 +0200, Heikki Linnakangas wrote
in
> On 09/11/2020 11:34, Kyotaro Horiguchi wrote:
> > At Fri, 6 Nov 2020 10:42:15 +0200, Heikki Linnakangas
> > wrote in
> >> Do you need the "ntaccess == 2" test? You could always increment the
> >> counter, and in the code that use
On Tue, Nov 17, 2020 at 10:46 AM Heikki Linnakangas wrote:
> 0.7% degradation is probably acceptable.
I haven't looked at this patch in a while and I'm pleased with the way
it seems to have been redesigned. It seems relatively simple and
unlikely to cause big headaches. I would say that 0.7% is p
On 09/11/2020 11:34, Kyotaro Horiguchi wrote:
At Fri, 6 Nov 2020 10:42:15 +0200, Heikki Linnakangas wrote in
Do you need the "ntaccess == 2" test? You could always increment the
counter, and in the code that uses ntaccess to decide what to evict,
treat all values >= 2 the same.
Need to handle
At Fri, 6 Nov 2020 10:42:15 +0200, Heikki Linnakangas wrote
in
> Do you need the "ntaccess == 2" test? You could always increment the
> counter, and in the code that uses ntaccess to decide what to evict,
> treat all values >= 2 the same.
>
> Need to handle integer overflow somehow. Or maybe no
At Mon, 09 Nov 2020 11:13:31 +0900 (JST), Kyotaro Horiguchi
wrote in
> Now the branch for counter-increment is removed. For similar
> branches for counter-decrement side in CatCacheCleanupOldEntries(),
> Min() is compiled into cmovbe and a branch was removed.
Mmm. Sorry, I sent this by a mista
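The counter scheme sketched in these two mails can be illustrated with a small,
self-contained fragment; the type, field, and macro names below are mine, not
the patch's, and in the real code PostgreSQL's own Min()/Max() macros play the
clamping role:

/* Hit path bumps the counter unconditionally but saturates at 2, so the
 * "ntaccess == 2" test disappears and overflow cannot happen. */
#define CLAMP_MAX(x, hi)   ((x) > (hi) ? (hi) : (x))
#define CLAMP_MIN(x, lo)   ((x) < (lo) ? (lo) : (x))

typedef struct ToyCatCTup
{
    int     naccess;        /* crude "recently used" score */
} ToyCatCTup;

static inline void
toy_touch(ToyCatCTup *ct)
{
    ct->naccess = CLAMP_MAX(ct->naccess + 1, 2);
}

/* Aging pass in the cleanup loop decays the score toward zero; written as a
 * clamp rather than an "if", it typically compiles to a conditional move
 * (cmov) instead of a branch, which is the effect described above. */
static inline void
toy_age(ToyCatCTup *ct)
{
    ct->naccess = CLAMP_MIN(ct->naccess - 1, 0);
}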
At Fri, 6 Nov 2020 10:42:15 +0200, Heikki Linnakangas wrote
in
> On 06/11/2020 10:24, Kyotaro Horiguchi wrote:
> > Thank you for the comment!
> > First off, I thought that I managed to eliminate the degradation
> > observed on the previous versions, but significant degradation (1.1%
> > slower)
On 06/11/2020 10:24, Kyotaro Horiguchi wrote:
Thank you for the comment!
First off, I thought that I managed to eliminate the degradation
observed on the previous versions, but significant degradation (1.1%
slower) is still seen in one case.
One thing to keep in mind with micro-benchmarks like
me> First off, I thought that I managed to eliminate the degradation
me> observed on the previous versions, but significant degradation (1.1%
me> slower) is still seen in one case.
While benchmarking with many patterns, I noticed that it slows
down catcache search significantly to call CatCa
Thank you for the comment!
First off, I thought that I managed to eliminate the degradation
observed on the previous versions, but significant degradation (1.1%
slower) is still seen in one case.
Anyway, before sending the new patch, let me just answer the
comments.
At Thu, 5 Nov 2020 11:09:
On 19/11/2019 12:48, Kyotaro Horiguchi wrote:
1. Inserting a branch in SearchCatCacheInternal. (CatCache_Pattern_1.patch)
This is the most straightforward way to add an alternative feature.
pattern 1 | 8459.73 | 28.15 # 9% (>> 1%) slower than 7757.58
pattern 1 | 8504.83 | 55.61
pattern 1 |
Hello.
The attached is the version that is compactified from the previous
version.
At Thu, 01 Oct 2020 16:47:18 +0900 (JST), Kyotaro Horiguchi
wrote in
> This is the rebased version.
It occurred to me suddenly that static (constant) parameters to inline functions
enable optimization. I split SearchCatC
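A toy, self-contained illustration of that splitting technique (not the patch's
actual code): when an always-inlined helper takes a boolean that is a
compile-time constant at each call site, the compiler folds the branch away,
effectively producing two specialized copies of the search routine from one
source body.

#include <stdbool.h>

static long hits_plain;         /* stand-ins for real bookkeeping */
static long hits_with_pruning;

static inline long
search_internal(long key, bool do_pruning)
{
    if (do_pruning)             /* dead code when the caller passes false */
        hits_with_pruning++;
    else
        hits_plain++;
    return key * 2;             /* pretend lookup result */
}

long
search_plain(long key)
{
    return search_internal(key, false);     /* branch folded away */
}

long
search_with_pruning(long key)
{
    return search_internal(key, true);      /* the other specialization */
}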
At Thu, 1 Oct 2020 13:37:29 +0900, Michael Paquier wrote
in
> On Wed, Jan 22, 2020 at 02:38:19PM +0900, Kyotaro Horiguchi wrote:
> > I changed my mind to attach the benchmark patch as .txt file,
> > expecting the checkers not picking up it as a part of the patchset.
> >
> > I have in the precis
On Wed, Jan 22, 2020 at 02:38:19PM +0900, Kyotaro Horiguchi wrote:
> I changed my mind to attach the benchmark patch as .txt file,
> expecting the checkers not picking up it as a part of the patchset.
>
> I have in the precise performance measurement mode for a long time,
> but I think it's settle
At Tue, 21 Jan 2020 17:29:47 +0100, Tomas Vondra
wrote in
> I see this patch is stuck in WoA since 2019/12/01, although there's a
> new patch version from 2020/01/14. But the patch seems to no longer
> apply, at least according to https://commitfest.cputube.org :-( So at
> this point the status
On Tue, Jan 21, 2020 at 02:17:53PM -0500, Tom Lane wrote:
> Alvaro Herrera writes:
>> Hmm ... travis is running -Werror? That seems overly strict. I think
>> we shouldn't punt a patch because of that.
>
> Why not? We're not going to allow pushing a patch that throws warnings
> on common compil
Hello.
At Tue, 21 Jan 2020 14:17:53 -0500, Tom Lane wrote in
> Alvaro Herrera writes:
> > On 2020-Jan-21, Tomas Vondra wrote:
> >> Not sure about the appveyor build (it seems to be about jsonb_set_lax),
>
> FWIW, I think I fixed jsonb_set_lax yesterday, so that problem should
> be gone the nex
Alvaro Herrera writes:
> On 2020-Jan-21, Tomas Vondra wrote:
>> Not sure about the appveyor build (it seems to be about jsonb_set_lax),
FWIW, I think I fixed jsonb_set_lax yesterday, so that problem should
be gone the next time the cfbot tries this.
>> but on travis it fails like this:
>> catcac
On 2020-Jan-21, Tomas Vondra wrote:
> Not sure about the appveyor build (it seems to be about jsonb_set_lax),
> but on travis it fails like this:
>
> catcache.c:820:1: error: no previous prototype for
> ‘CatalogCacheFlushCatalog2’ [-Werror=missing-prototypes]
Hmm ... travis is running -Werror
Hello Kyotaro-san,
I see this patch is stuck in WoA since 2019/12/01, although there's a
new patch version from 2020/01/14. But the patch seems to no longer
apply, at least according to https://commitfest.cputube.org :-( So at
this point the status is actually correct.
Not sure about the appveyo
This is a new complete workable patch after a long time of struggling
with benchmarking.
At Tue, 19 Nov 2019 19:48:10 +0900 (JST), Kyotaro Horiguchi
wrote in
> I ran the run2.sh script attached, which runs catcachebench2(), which
> asks SearchSysCache3() for cached entries (almost) 24 times
On Tue, Nov 19, 2019 at 07:48:10PM +0900, Kyotaro Horiguchi wrote:
> I'd like to throw in food for discussion on how much SearchSysCacheN
> suffers degradation from some choices on how we can insert a code into
> the SearchSysCacheN code path.
Please note that the patch has a warning, causing cfbo
I'd like to throw in food for discussion on how much SearchSysCacheN
suffers degradation from some choices on how we can insert a code into
the SearchSysCacheN code path.
I ran the run2.sh script attached, which runs catcachebench2(), which
asks SearchSysCache3() for cached entries (almost) 24
Hello,
my_gripe> But, it still fluctuates by around 5%..
my_gripe>
my_gripe> If this level of the degradation is still not acceptable, that
my_gripe> means that nothing can be inserted in the code path and the new
my_gripe> code path should be isolated from existing code by using indirect
my_gripe
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Does this result show that the hard-limit size option with memory accounting
>doesn't harm
>usual users who disable the hard-limit size option?
Hi,
I've implemented relation cache size limitation with LRU list and built-in
memory cont
At Fri, 05 Apr 2019 09:44:07 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20190405.094407.151644324.horiguchi.kyot...@lab.ntt.co.jp>
> By the way, I found the reason of the wrong result of the
> previous benchmark. The test 3_0/1 needs to update catcacheclock
> midst of the loop. I'm
Thank you for the comment.
At Thu, 4 Apr 2019 15:44:35 -0400, Robert Haas wrote in
> On Thu, Apr 4, 2019 at 8:53 AM Kyotaro HORIGUCHI
> wrote:
> > So it seems to me that the simplest "Full" version wins. The
> > attached is the rebased version. dlist_move_head(entry) is removed as
> > mentioned ab
On Thu, Apr 4, 2019 at 8:53 AM Kyotaro HORIGUCHI
wrote:
> So it seems to me that the simplest "Full" version wins. The
> attached is the rebased version. dlist_move_head(entry) is removed as
> mentioned above in that patch.
1. I really don't think this patch has any business changing the
existing log
At Mon, 01 Apr 2019 11:05:32 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20190401.110532.102998353.horiguchi.kyot...@lab.ntt.co.jp>
> At Fri, 29 Mar 2019 17:24:40 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
> wrote in
> <20190329.172440.199616830.horiguchi.kyot...@lab.ntt.co.j
At Fri, 29 Mar 2019 17:24:40 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20190329.172440.199616830.horiguchi.kyot...@lab.ntt.co.jp>
> I ran three artificial test cases. The database is created by
> gen_tbl.pl. Numbers are the average of the fastest five runs in
> successive 15 runs.
Hello. Sorry for being a bit late.
At Wed, 27 Mar 2019 17:30:37 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20190327.173037.40342566.horiguchi.kyot...@lab.ntt.co.jp>
> > I don't see much point in continuing to review this patch at this
> > point. There's been no new version of the
At Mon, 25 Mar 2019 09:28:57 -0400, Robert Haas wrote
in
> On Thu, Mar 7, 2019 at 11:40 PM Ideriha, Takeshi
> wrote:
> > Just to be sure, we introduced the LRU list in this thread to find the
> > entries older than the threshold time
> > without scanning the whole hash table. If hash table becomes
On Thu, Mar 7, 2019 at 11:40 PM Ideriha, Takeshi
wrote:
> Just to be sure, we introduced the LRU list in this thread to find the
> entries older than the threshold time
> without scanning the whole hash table. If hash table becomes large without
> LRU list, scanning time becomes slow.
Hmm. So, it's
From: Vladimir Sitnikov [mailto:sitnikov.vladi...@gmail.com]
>
>Robert> This email thread is really short on clear demonstrations that X
>Robert> or Y is useful.
>
>It is useful when the whole database does **not** crash, isn't it?
>
>Case A (==current PostgreSQL mode): syscache grows, then OOMkil
From: Robert Haas [mailto:robertmh...@gmail.com]
>On Thu, Mar 7, 2019 at 9:49 AM Tomas Vondra
>wrote:
>> I don't think this shows any regression, but perhaps we should do a
>> microbenchmark isolating the syscache entirely?
>
>Well, if we need the LRU list, then yeah I think a microbenchmark woul
On 3/7/19 4:01 PM, Robert Haas wrote:
> On Thu, Mar 7, 2019 at 9:49 AM Tomas Vondra
> wrote:
>> I don't think this shows any regression, but perhaps we should do a
>> microbenchmark isolating the syscache entirely?
>
> Well, if we need the LRU list, then yeah I think a microbenchmark
> would be a
On Thu, Mar 7, 2019 at 9:49 AM Tomas Vondra
wrote:
> I don't think this shows any regression, but perhaps we should do a
> microbenchmark isolating the syscache entirely?
Well, if we need the LRU list, then yeah I think a microbenchmark
would be a good idea to make sure we really understand what
Robert Haas writes:
> On Wed, Mar 6, 2019 at 6:18 PM Tomas Vondra
> wrote:
>> Which part of the LRU approach is supposedly expensive? Updating the
>> lastaccess field or moving the entries to tail? I'd guess it's the
>> latter, so perhaps we can keep some sort of age field, update it less
>> freq
On 3/7/19 3:34 PM, Robert Haas wrote:
> On Wed, Mar 6, 2019 at 6:18 PM Tomas Vondra
> wrote:
>> I agree clock sweep might be sufficient, although the benchmarks done in
>> this thread so far do not suggest the LRU approach is very expensive.
>
> I'm not sure how thoroughly it's been tested --
On Wed, Mar 6, 2019 at 6:18 PM Tomas Vondra
wrote:
> I agree clock sweep might be sufficient, although the benchmarks done in
> this thread so far do not suggest the LRU approach is very expensive.
I'm not sure how thoroughly it's been tested -- has someone
constructed a benchmark that does a lot
On 3/6/19 9:17 PM, Tom Lane wrote:
> Robert Haas writes:
>> OK, so this is getting simpler, but I'm wondering why we need
>> dlist_move_tail() at all. It is a well-known fact that maintaining
>> LRU ordering is expensive and it seems to be unnecessary for our
>> purposes here.
>
> Yeah ... LRU m
Robert Haas writes:
> OK, so this is getting simpler, but I'm wondering why we need
> dlist_move_tail() at all. It is a well-known fact that maintaining
> LRU ordering is expensive and it seems to be unnecessary for our
> purposes here.
Yeah ... LRU maintenance was another thing that used to be
On Fri, Mar 1, 2019 at 3:33 AM Kyotaro HORIGUCHI
wrote:
> > > It is artificial (or actually won't be repeatedly executed in a
> > > session) but anyway what can get benefit from
> > > catalog_cache_memory_target would be a kind of extreme.
> >
> > I agree. So then let's not have it.
>
> Ah... Y
Hello.
At Mon, 4 Mar 2019 03:03:51 +, "Ideriha, Takeshi"
wrote in
<4E72940DA2BF16479384A86D54D0988A6F44564E@G01JPEXMBKW04>
> Does this result show that the hard-limit size option with memory accounting
> doesn't harm usual users who disable the hard-limit size option?
Not sure, but 4% seems be
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>> [Size=800, iter=1,000,000]
>> Master |15.763
>> Patched|16.262 (+3%)
>>
>> [Size=32768, iter=1,000,000]
>> Master |61.3076
>> Patched|62.9566 (+2%)
>
>What's the unit, second or millisecond?
Millisecond.
>Why is the number of d
At Tue, 26 Feb 2019 10:55:18 -0500, Robert Haas wrote
in
> On Mon, Feb 25, 2019 at 1:27 AM Kyotaro HORIGUCHI
> wrote:
> > > I'd like to see some evidence that catalog_cache_memory_target has any
> > > value, vs. just always setting it to zero.
> >
> > It is artificial (or actually won't be re
Robert> This email thread is really short on clear demonstrations that X or Y
Robert> is useful.
It is useful when the whole database does **not** crash, isn't it?
Case A (==current PostgreSQL mode): syscache grows, then OOMkiller
chimes in, kills the database process, and it leads to the complete
From: Ideriha, Takeshi/出利葉 健
> [Size=800, iter=1,000,000]
> Master |15.763
> Patched|16.262 (+3%)
>
> [Size=32768, iter=1,000,000]
> Master |61.3076
> Patched|62.9566 (+2%)
What's the unit, second or millisecond?
Why does the number of digits to the right of the decimal point differ?
Is the measurement c
On Wed, Feb 27, 2019 at 3:16 AM Ideriha, Takeshi
wrote:
> I'm afraid I may be quibbling about it.
> What about users who understand performance drops but don't want to
> add memory or decrease concurrency?
> I think that PostgreSQL has a parameter
> which most of users don't mind and use is as def
From: Tsunakawa, Takayuki [mailto:tsunakawa.ta...@jp.fujitsu.com]
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>> I measured the memory context accounting overhead using Tomas's tool
>> palloc_bench, which he made it a while ago in the similar discussion.
>> https://www.postgres
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> I measured the memory context accounting overhead using Tomas's tool
> palloc_bench,
> which he made it a while ago in the similar discussion.
> https://www.postgresql.org/message-id/53f7e83c.3020...@fuzzy.cz
>
> This tool is a littl
From: Robert Haas [mailto:robertmh...@gmail.com]
>
>On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
> wrote:
>> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
>
>As Tom has said before and will probably say again, I don't think you actually
>want that.
>We know t
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>>* 0001: add dlist_push_tail() ... as is
>>>* 0002: memory accounting, with correction based on feedback
>>>* 0003: merge the original 0003 and 0005, with correction based on
>>>feedback
>>
>>Attached are simpler version based on Hor
On Mon, Feb 25, 2019 at 1:27 AM Kyotaro HORIGUCHI
wrote:
> > I'd like to see some evidence that catalog_cache_memory_target has any
> > value, vs. just always setting it to zero.
>
> > It is artificial (or actually won't be repeatedly executed in a
> session) but anyway what can get benefit from
>
On Mon, Feb 25, 2019 at 3:50 AM Tsunakawa, Takayuki
wrote:
> How can I make sure that this context won't exceed, say, 10 MB to avoid OOM?
As Tom has said before and will probably say again, I don't think you
actually want that. We know that PostgreSQL gets roughly 100x slower
with the system cac
>>From: Tsunakawa, Takayuki
>>Ideriha-san,
>>Could you try simplifying the v15 patch set to see how simple the code
>>would look or not? That is:
>>
>>* 0001: add dlist_push_tail() ... as is
>>* 0002: memory accounting, with correction based on feedback
>>* 0003: merge the original 0003 and 0005,
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> - If you find the process "bloat"s too much and you (intuitively)
> suspect the cause is the system cache, set it to a certain shorter
> value, say 1 minute, and set catalog_cache_memory_target
> to the allowable amount of memory f
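Taken literally, the recipe quoted above might look like the following
postgresql.conf fragment. catalog_cache_memory_target is the name used in this
thread; catalog_cache_prune_min_age is my assumed name for the age ("shorter
value") setting and may not match the patch, and both values are merely
illustrative.

# Hypothetical settings following the tuning advice quoted above.
catalog_cache_prune_min_age = '1min'    # assumed GUC name: expire entries idle longer than this
catalog_cache_memory_target = '100MB'   # keep time-based pruning off while the cache stays below this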
From: Robert Haas [mailto:robertmh...@gmail.com]
> I don't understand the idea that we would add something to PostgreSQL
> without proving that it has value. Sure, other systems have somewhat
> similar systems, and they have knobs to tune them. But, first, we
> don't know that those other systems
At Mon, 25 Feb 2019 15:23:22 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20190225.152322.104148315.horiguchi.kyot...@lab.ntt.co.jp>
> I think the two parameters are to be tuned in the following
> steps.
>
> - If the default setting satisfies you, leave it alone. (as a
> general s
At Wed, 20 Feb 2019 13:09:08 -0500, Robert Haas wrote
in
> On Tue, Feb 19, 2019 at 11:15 PM Kyotaro HORIGUCHI
> wrote:
> > Difference from v15:
> >
> > Removed AllocSet accounting stuff. We use approximate memory
> > size for catcache.
> >
> > Removed prune-by-number(or size) stuff.
> >
>
On Thu, Feb 21, 2019 at 1:38 AM Tsunakawa, Takayuki
wrote:
> Why don't we consider this just like the database cache and other DBMS's
> dictionary caches? That is,
>
> * If you want to avoid infinite memory bloat, set the upper limit on size.
>
> * To find a better limit, check the hit ratio wit
From: Tsunakawa, Takayuki
>Ideriha-san,
>Could you try simplifying the v15 patch set to see how simple the code would
>look or
>not? That is:
>
>* 0001: add dlist_push_tail() ... as is
>* 0002: memory accounting, with correction based on feedback
>* 0003: merge the original 0003 and 0005, with c
On Tue, Feb 19, 2019 at 07:08:14AM +, Tsunakawa, Takayuki wrote:
> We all have to manage things within resource constraints. The DBA
> wants to make sure the server doesn't overuse memory to avoid crash
> or slowdown due to swapping. Oracle does it, and another open source
> database, MySQL,
From: Robert Haas [mailto:robertmh...@gmail.com]
> That might be enough to justify having the parameter. But I'm not
> quite sure how high the value would need to be set to actually get the
> benefit in a case like that, or what happens if you set it to a value
> that's not quite high enough.
From: Ideriha, Takeshi/出利葉 健
> I checked it with perf record -avg and perf report.
> The following shows top 20 symbols during benchmark including kernel space.
> The main difference between master (unpatched) and patched one seems that
> patched one consumes cpu catcache-evict-and-refill functions
From: Tsunakawa, Takayuki
>>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>> number of tables | 100 |1000|1
>> ---
>> TPS (master) |10966 |10654 |9099
>> TPS (patch)| 11137 (+1%) |10710 (+0%) |772 (-
On Tue, Feb 19, 2019 at 11:15 PM Kyotaro HORIGUCHI
wrote:
> Difference from v15:
>
> Removed AllocSet accounting stuff. We use approximate memory
> size for catcache.
>
> Removed prune-by-number(or size) stuff.
>
> Adressing comments from Tsunakawa-san and Ideriha-san .
>
> Separated cat
At Thu, 14 Feb 2019 00:40:10 -0800, Andres Freund wrote in
<20190214084010.bdn6tmba2j7sz...@alap3.anarazel.de>
> Hi,
>
> On 2019-02-13 15:31:14 +0900, Kyotaro HORIGUCHI wrote:
> > Instead, I added an accounting(?) interface function.
> >
> > | MemoryContextGettConsumption(MemoryContext cxt);
>
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> number of tables | 100 |1000|1
> ---
> TPS (master) |10966 |10654 |9099
> TPS (patch)| 11137 (+1%) |10710 (+0%) |772 (-91%)
>
> It seems that before ca
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> 0.7% may easily be just a noise, possibly due to differences in layout
> of the binary. How many runs? What was the variability of the results
> between runs? What hardware was this tested on?
3 runs, with the variability of about +-2%. L
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>But at the same time, I did some benchmark with only hard limit option enabled
>and
>time-related option disabled, because the figures of this case are not
>provided in this
>thread.
>So let me share it.
I'm sorry but I'm taking ba
On 2/19/19 12:43 AM, Tsunakawa, Takayuki wrote:
> Hi Horiguchi-san,
>
> I've looked through your patches. This is the first part of my review
> results. Let me post the rest after another work today.
>
> BTW, how about merging 0003 and 0005, and separating and deferring 0004 in
> another th
From: 'Bruce Momjian' [mailto:br...@momjian.us]
> I think, in general, smaller is better, as long as making something
> smaller doesn't remove data that is frequently accessed. Having a timer
> to expire only old entries seems like it accomplished this goal.
>
> Having a minimum size and not taki
Hi Horiguchi-san,
This is the rest of my review comments.
(5) patch 0003
CatcacheClockTimeoutPending = 0;
+
+ /* Update timetamp then set up the next timeout */
+
false is better than 0, to follow other **Pending variables.
timetamp -> timestamp
(6) patch 0003
Hi Horiguchi-san,
I've looked through your patches. This is the first part of my review results.
Let me post the rest after another work today.
BTW, how about merging 0003 and 0005, and separating and deferring 0004 in
another thread? That may help to relieve other community members by makin
On 2019-Feb-15, Tomas Vondra wrote:
> ISTM there's a couple of ways to deal with that:
>
> 1) Ignore the memory amounts entirely, and do just time-base eviction.
>
> 2) If we want some size thresholds (e.g. to disable eviction for
> backends with small caches etc.) use the number of entries inst
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> I think "catalog_cache_..." is fine. If we end up with a similar
> patch for relcache, we can probably call it "relation_cache_".
Agreed, those are not too long or too short, and they are sufficiently
descriptive.
Regards
Takayuki Tsunak
On 2/14/19 4:49 PM, 'Bruce Momjian' wrote:
> On Thu, Feb 14, 2019 at 01:31:49AM +, Tsunakawa, Takayuki wrote:
>> From: Bruce Momjian [mailto:br...@momjian.us]
That being said, having a "minimal size" threshold before starting
with the time-based eviction may be a good idea.
>>>
>>>
On 2/14/19 3:46 PM, Bruce Momjian wrote:
> On Thu, Feb 14, 2019 at 12:40:10AM -0800, Andres Freund wrote:
>> Hi,
>>
>> On 2019-02-13 15:31:14 +0900, Kyotaro HORIGUCHI wrote:
>>> Instead, I added an accounting(?) interface function.
>>>
>>> | MemoryContextGettConsumption(MemoryContext cxt);
>>>
>
On 2/13/19 1:23 AM, Tsunakawa, Takayuki wrote:
> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>> I'm at a loss how call syscache for users. I think it is "catalog
>> cache". The most basic component is called catcache, which is
>> covered by the syscache layer, both of then
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>>About the new global-size based evicition(2), cache entry creation
>>becomes slow after the total size reached to the limit since every one
>>new entry evicts one or more old (=
>>not-recently-used) entries. Because of not needing k
On Thu, Feb 14, 2019 at 01:31:49AM +, Tsunakawa, Takayuki wrote:
> From: Bruce Momjian [mailto:br...@momjian.us]
> > > That being said, having a "minimal size" threshold before starting
> > > with the time-based eviction may be a good idea.
> >
> > Agreed. I see the minimal size as a way to ke
On Thu, Feb 14, 2019 at 12:40:10AM -0800, Andres Freund wrote:
> Hi,
>
> On 2019-02-13 15:31:14 +0900, Kyotaro HORIGUCHI wrote:
> > Instead, I added an accounting(?) interface function.
> >
> > | MemoryContextGettConsumption(MemoryContext cxt);
> >
> > The API returns the current consumption in
Hi,
On 2019-02-13 15:31:14 +0900, Kyotaro HORIGUCHI wrote:
> Instead, I added an accounting(?) interface function.
>
> | MemoryContextGettConsumption(MemoryContext cxt);
>
> The API returns the current consumption in this memory
> context. This allows "real" memory accounting almost without
> ov
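A hedged sketch of how an accounting call like the one quoted here might be
consumed by pruning logic. It assumes a PostgreSQL backend context; the
function name (including its doubled "t") is copied verbatim from the mail,
while the Size return type, the kB unit of catalog_cache_memory_target, and
the argument-less CatCacheCleanupOldEntries() call are my assumptions rather
than the patch's actual interface.

static void
MaybePruneCatCache(void)
{
    /* assumed return type and units; see caveats above */
    Size    used = MemoryContextGettConsumption(CacheMemoryContext);

    if (catalog_cache_memory_target > 0 &&
        used > (Size) catalog_cache_memory_target * 1024L)
        CatCacheCleanupOldEntries();    /* evict entries not touched recently */
}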
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
>
>
>(2) Another new patch v15-0005 on top of previous design of
> limit-by-number-of-a-cache feature converts it to
> limit-by-size-on-all-caches feature, which I think is
> Tsunakawa-san wanted.
Yeah, size looks better to me.
>
From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> It is too complex as I was afraid. The indirect calls cause significant
> degradation. (Anyway the previous code was bogus in that it passes a
> CACHELINEALIGN'ed pointer to get_chunk_size..)
>
> Instead, I added an accounting(?) int
From: Bruce Momjian [mailto:br...@momjian.us]
> > That being said, having a "minimal size" threshold before starting with
> > the time-based eviction may be a good idea.
>
> Agreed. I see the minimal size as a way to keep the systems tables in
> cache, which we know we will need for the next quer