On Wed, Sep 07, 2022 at 12:53:07PM +0500, Ibrar Ahmed wrote:
> Hunk #1 FAILED at 231.
> Hunk #2 succeeded at 409 (offset 82 lines).
>
> 1 out of 2 hunks FAILED -- saving rejects to file
> src/include/storage/buf_internals.h.rej
With no rebase done since this notice, I have marked this entry as
Rw
On Tue, Jun 28, 2022 at 4:50 PM Yura Sokolov
wrote:
> On Tue, 28/06/2022 at 14:26 +0300, Yura Sokolov wrote:
> > On Tue, 28/06/2022 at 14:13 +0300, Yura Sokolov wrote:
> >
> > > Tests:
> > > - tests done on 2 socket Xeon 5220 2.20GHz with turbo boost disabled
> > > (i.e. max frequency is 2.20GHz)
> >
>
On Tue, 28/06/2022 at 14:26 +0300, Yura Sokolov wrote:
> On Tue, 28/06/2022 at 14:13 +0300, Yura Sokolov wrote:
>
> > Tests:
> > - tests done on 2 socket Xeon 5220 2.20GHz with turbo boost disabled
> > (i.e. max frequency is 2.20GHz)
>
> Forgot to mention:
> - this time it was CentOS 7.9.2009 (Core) with
On Tue, 28/06/2022 at 14:13 +0300, Yura Sokolov wrote:
> Tests:
> - tests done on 2 socket Xeon 5220 2.20GHz with turbo boost disabled
> (i.e. max frequency is 2.20GHz)
Forgot to mention:
- this time it was CentOS 7.9.2009 (Core) with Linux mn10 3.10.0-1160.el7.x86_64
Perhaps older kernel describes p
On Fri, 06/05/2022 at 10:26 -0400, Robert Haas wrote:
> On Thu, Apr 21, 2022 at 6:58 PM Yura Sokolov wrote:
> > At the master state:
> > - SharedBufHash is not declared as HASH_FIXED_SIZE
> > - get_hash_entry falls back to element_alloc too fast (just if it doesn't
> > find a free entry in current f
On Thu, Apr 21, 2022 at 6:58 PM Yura Sokolov wrote:
> At the master state:
> - SharedBufHash is not declared as HASH_FIXED_SIZE
> - get_hash_entry falls back to element_alloc too fast (just if it doesn't
> find a free entry in the current freelist partition).
> - get_hash_entry has races.
> - if ther
Btw, I've run tests on an EPYC (80 cores).
1 key per select
conns | master | patch-v11 | master 1G | patch-v11 1G
------+--------+-----------+-----------+--------------
1 | 29053 | 28959 | 26715 | 25631
2 | 53714 | 53002 | 55211 |
On Thu, 21/04/2022 at 16:24 -0400, Robert Haas wrote:
> On Thu, Apr 21, 2022 at 5:04 AM Yura Sokolov wrote:
> > $ pid=`ps x | awk '/checkpointer/ && !/awk/ { print $1 }'`
> > $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'
> >
> > $1 = 16512
> >
> > $ install/bin/pgbench -
On Thu, Apr 21, 2022 at 5:04 AM Yura Sokolov wrote:
> $ pid=`ps x | awk '/checkpointer/ && !/awk/ { print $1 }'`
> $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'
>
> $1 = 16512
>
> $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres
> ...
> $ gdb -
Good day, hackers.
There are some sentences.
Sentence one
> With the existing system, there is a hard cap on the number of hash
> table entries that we can ever need: one per buffer, plus one per
> partition to cover the "extra" entries that are needed while changing
> buffer tags.
At Mon, 18 Apr 2022 09:53:42 -0400, Robert Haas wrote
in
> On Fri, Apr 15, 2022 at 4:29 AM Kyotaro Horiguchi
> wrote:
> > The patch removes the buftable entry first; then it is either inserted again or
> > returned to the freelist. I don't understand how it can be in both
> > buftable and freelist.. What ki
On Fri, Apr 15, 2022 at 4:29 AM Kyotaro Horiguchi
wrote:
> The patch removes the buftable entry first; then it is either inserted again or
> returned to the freelist. I don't understand how it can be in both
> buftable and freelist.. What kind of trouble do you have in mind for
> example?
I'm not sure. I'm ju
At Thu, 14 Apr 2022 11:02:33 -0400, Robert Haas wrote
in
> It seems to me that whatever hazards exist must come from the fact
> that the operation is no longer fully atomic. The existing code
> acquires every relevant lock, then does the work, then releases locks.
> Ergo, we don't have to worry
On Thu, Apr 14, 2022 at 11:27 AM Tom Lane wrote:
> If it's not atomic, then you have to worry about what happens if you
> fail partway through, or somebody else changes relevant state while
> you aren't holding the lock. Maybe all those cases can be dealt with,
> but it will be significantly more
Robert Haas writes:
> On Thu, Apr 14, 2022 at 10:04 AM Tom Lane wrote:
>> FWIW, I have extremely strong doubts about whether this patch
>> is safe at all. This particular problem seems resolvable though.
> Can you be any more specific?
> This existing comment is surely in the running for terri
On Thu, Apr 14, 2022 at 10:04 AM Tom Lane wrote:
> I agree that "just hope it doesn't overflow" is unacceptable.
> But couldn't you bound the number of extra entries as MaxBackends?
Yeah, possibly ... as long as it can't happen that an operation still
counts against the limit after it's failed du
Robert Haas writes:
> With the existing system, there is a hard cap on the number of hash
> table entries that we can ever need: one per buffer, plus one per
> partition to cover the "extra" entries that are needed while changing
> buffer tags. With the patch, the number of concurrent buffer tag
>
On Wed, Apr 6, 2022 at 9:17 AM Yura Sokolov wrote:
> I skipped v10 since I used it internally for variant
> "insert entry with dummy index then search victim".
Hi,
I think there's a big problem with this patch:
--- a/src/backend/storage/buffer/freelist.c
+++ b/src/backend/storage/buffer/freelis
On Fri, 08/04/2022 at 16:46 +0900, Kyotaro Horiguchi wrote:
> At Thu, 07 Apr 2022 14:14:59 +0300, Yura Sokolov
> wrote in
> > On Thu, 07/04/2022 at 16:55 +0900, Kyotaro Horiguchi wrote:
> > > Hi, Yura.
> > >
> > > At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov
> > > wrote in
> > > > Ok, I
At Thu, 07 Apr 2022 14:14:59 +0300, Yura Sokolov
wrote in
> On Thu, 07/04/2022 at 16:55 +0900, Kyotaro Horiguchi wrote:
> > Hi, Yura.
> >
> > At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov
> > wrote in
> > > Ok, I got access to stronger server, did the benchmark, found weird
> > > things
On Thu, 07/04/2022 at 16:55 +0900, Kyotaro Horiguchi wrote:
> Hi, Yura.
>
> At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov
> wrote in
> > Ok, I got access to stronger server, did the benchmark, found weird
> > things, and so here is new version :-)
>
> Thanks for the new version and benchmar
Hi, Yura.
At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov wrote in
> Ok, I got access to stronger server, did the benchmark, found weird
> things, and so here is new version :-)
Thanks for the new version and benchmarking.
> First I found if table size is strictly limited to NBuffers and FIX
Good day, Kyotaro-san.
Good day, hackers.
On Sun, 20/03/2022 at 12:38 +0300, Yura Sokolov wrote:
> On Thu, 17/03/2022 at 12:02 +0900, Kyotaro Horiguchi wrote:
> > At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov
> > wrote in
> > > On Wed, 16/03/2022 at 12:07 +0900, Kyotaro Horiguchi wrote:
> > > > At T
On Thu, 17/03/2022 at 12:02 +0900, Kyotaro Horiguchi wrote:
> At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov
> wrote in
> > On Wed, 16/03/2022 at 12:07 +0900, Kyotaro Horiguchi wrote:
> > > At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov
> > > wrote in
> > > In v7, HASH_ENTER returns the element
At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov
wrote in
> On Wed, 16/03/2022 at 12:07 +0900, Kyotaro Horiguchi wrote:
> > At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov
> > wrote in
> > In v7, HASH_ENTER returns the element stored in DynaHashReuse using
> > the freelist_idx of the new key.
On Wed, 16/03/2022 at 12:07 +0900, Kyotaro Horiguchi wrote:
> At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov
> wrote in
> > On Tue, 15/03/2022 at 16:25 +0900, Kyotaro Horiguchi wrote:
> > > Hmm. v8 returns stashed element with original partition index when the
> > > element is *not* reused. But what
At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov
wrote in
> On Tue, 15/03/2022 at 16:25 +0900, Kyotaro Horiguchi wrote:
> > Hmm. v8 returns stashed element with original partition index when the
> > element is *not* reused. But what I saw in the previous test runs is
> > the REUSE->ENTER(reuse)(->R
On Tue, 15/03/2022 at 13:47 +0300, Yura Sokolov wrote:
> On Tue, 15/03/2022 at 16:25 +0900, Kyotaro Horiguchi wrote:
> > Thanks for the new version.
> >
> > At Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov
> > wrote in
> > > On Mon, 14/03/2022 at 14:57 +0300, Yura Sokolov wrote:
> > > > On Mon, 14/03/2022
On Tue, 15/03/2022 at 13:47 +0300, Yura Sokolov wrote:
> On Tue, 15/03/2022 at 16:25 +0900, Kyotaro Horiguchi wrote:
> > > I lost access to Xeon 8354H, so returned to old Xeon X5675.
> > ...
> > > Strange thing: both master and patched version has higher
> > > peak tps at X5676 at medium connections (17
On Tue, 15/03/2022 at 16:25 +0900, Kyotaro Horiguchi wrote:
> Thanks for the new version.
>
> At Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov
> wrote in
> > On Mon, 14/03/2022 at 14:57 +0300, Yura Sokolov wrote:
> > > On Mon, 14/03/2022 at 17:12 +0900, Kyotaro Horiguchi wrote:
> > > > At Mon, 14 Mar 20
Thanks for the new version.
At Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov
wrote in
> On Mon, 14/03/2022 at 14:57 +0300, Yura Sokolov wrote:
> > On Mon, 14/03/2022 at 17:12 +0900, Kyotaro Horiguchi wrote:
> > > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov
> > > wrote in
> > > > On Mon, 14/03/2
On Mon, 14/03/2022 at 14:57 +0300, Yura Sokolov wrote:
> On Mon, 14/03/2022 at 17:12 +0900, Kyotaro Horiguchi wrote:
> > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov
> > wrote in
> > > On Mon, 14/03/2022 at 14:31 +0900, Kyotaro Horiguchi wrote:
> > > > I'd like to ask you to remove nalloced from part
On Mon, 14/03/2022 at 17:12 +0900, Kyotaro Horiguchi wrote:
> At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov
> wrote in
> > On Mon, 14/03/2022 at 14:31 +0900, Kyotaro Horiguchi wrote:
> > > I'd like to ask you to remove nalloced from partitions then add a
> > > global atomic for the same use?
> >
>
At Mon, 14 Mar 2022 17:12:48 +0900 (JST), Kyotaro Horiguchi
wrote in
> Then, I tried the same with the patch, and I am surprized to see that
> the rise of the number of newly allocated elements didn't stop and
> went up to 511 elements after the 100 seconds run. So I found that my
> concern was
At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov
wrote in
> On Mon, 14/03/2022 at 14:31 +0900, Kyotaro Horiguchi wrote:
> > I'd like to ask you to remove nalloced from partitions then add a
> > global atomic for the same use?
>
> I really believe it should be global. I made it per-partition to
> n
On Mon, 14/03/2022 at 14:31 +0900, Kyotaro Horiguchi wrote:
> At Mon, 14 Mar 2022 09:39:48 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > I'll examine the possibility to resolve this...
>
> The existence of nfree and nalloc made me confused and I found the
> reason.
>
> In the case where a paritti
At Mon, 14 Mar 2022 09:39:48 +0900 (JST), Kyotaro Horiguchi
wrote in
> I'll examine the possibility to resolve this...
The existence of nfree and nalloc made me confused and I found the
reason.
In the case where a partition collects many REUSE-ASSIGN-REMOVEd
elements from other partitions, nf
At Fri, 11 Mar 2022 12:34:32 +0300, Yura Sokolov
wrote in
> On Fri, 11/03/2022 at 15:49 +0900, Kyotaro Horiguchi wrote:
> > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi
> > > BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)
> >
> > BufTableDelete considers both
At Fri, 11 Mar 2022 11:30:27 +0300, Yura Sokolov
wrote in
> On Fri, 11/03/2022 at 15:30 +0900, Kyotaro Horiguchi wrote:
> > At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov
> > wrote in
> > > On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:
> > > > Ok, here is v4.
> > >
> > > And here is v5.
On Sun, Mar 13, 2022 at 3:27 PM Yura Sokolov
wrote:
> On Sun, 13/03/2022 at 07:05 -0700, Zhihong Yu wrote:
> >
> > Hi,
> > In the description:
> >
> > There is no need to hold both lock simultaneously.
> >
> > both lock -> both locks
>
> Thanks.
>
> > +* We also reset the usage_count since any r
On Sun, 13/03/2022 at 07:05 -0700, Zhihong Yu wrote:
>
> Hi,
> In the description:
>
> There is no need to hold both lock simultaneously.
>
> both lock -> both locks
Thanks.
> +* We also reset the usage_count since any recency of use of the old
>
> recency of use -> recent use
Thanks.
> +
On Sun, Mar 13, 2022 at 3:25 AM Yura Sokolov
wrote:
> On Fri, 11/03/2022 at 17:21 +0900, Kyotaro Horiguchi wrote:
> > At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi <horikyota@gmail.com> wrote in
> > > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota@
On Fri, 11/03/2022 at 17:21 +0900, Kyotaro Horiguchi wrote:
> At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi
> > wrote in
> > > Thanks! I looked into dynahash part.
> > >
> > > struct HASHHDR
> > > {
> >
On Fri, 11/03/2022 at 15:49 +0900, Kyotaro Horiguchi wrote:
> At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > Thanks! I looked into dynahash part.
> >
> > struct HASHHDR
> > {
> > - /*
> > - * The freelist can become a point of contention in high-concurrency
On Fri, 11/03/2022 at 15:30 +0900, Kyotaro Horiguchi wrote:
> At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov
> wrote in
> > On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:
> > > Ok, here is v4.
> >
> > And here is v5.
> >
> > First, there was a compilation error in an Assert in dynahash.c.
> >
At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > Thanks! I looked into dynahash part.
Then I looked into bufmgr part. It looks fine to me but I have some
comments on code comments.
>
At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi
wrote in
> Thanks! I looked into dynahash part.
>
> struct HASHHDR
> {
> - /*
> - * The freelist can become a point of contention in high-concurrency
> hash
>
> Why did you move around the freeList?
>
>
> - long
At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov
wrote in
> On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:
> > Ok, here is v4.
>
> And here is v5.
>
> First, there was a compilation error in an Assert in dynahash.c.
> Excuse me for not checking before sending the previous version.
>
> Second, I
On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:
> Ok, here is v4.
And here is v5.
First, there was a compilation error in an Assert in dynahash.c.
Excuse me for not checking before sending the previous version.
Second, I added a third commit that reduces the HASHHDR allocation
size for non-partitioned dynah
On Fri, 25/02/2022 at 09:38 +, Simon Riggs wrote:
> On Fri, 25 Feb 2022 at 09:24, Yura Sokolov wrote:
>
> > > This approach is cleaner than v1, but should also perform better
> > > because there will be a 1:1 relationship between a buffer and its
> > > dynahash entry, most of the time.
> >
> >
On Fri, 25/02/2022 at 09:01 -0800, Andres Freund wrote:
> Hi,
>
> On 2022-02-25 12:51:22 +0300, Yura Sokolov wrote:
> > > > +* The usage_count starts out at 1 so that the buffer can
> > > > survive one
> > > > +* clock-sweep pass.
> > > > +*
> > > > +* We use direct a
Hi,
On 2022-02-25 12:51:22 +0300, Yura Sokolov wrote:
> > > + * The usage_count starts out at 1 so that the buffer can survive one
> > > + * clock-sweep pass.
> > > + *
> > > + * We use direct atomic OR instead of Lock+Unlock since no other backend
> > > + * could be interested in the buffer.
Hello, Andres
On Fri, 25/02/2022 at 00:04 -0800, Andres Freund wrote:
> Hi,
>
> On 2022-02-21 11:06:49 +0300, Yura Sokolov wrote:
> > From 04b07d0627ec65ba3327dc8338d59dbd15c405d8 Mon Sep 17 00:00:00 2001
> > From: Yura Sokolov
> > Date: Mon, 21 Feb 2022 08:49:03 +0300
> > Subject: [PATCH v3] [PGPR
On Fri, 25 Feb 2022 at 09:24, Yura Sokolov wrote:
> > This approach is cleaner than v1, but should also perform better
> > because there will be a 1:1 relationship between a buffer and its
> > dynahash entry, most of the time.
>
> Thank you for suggestion. Yes, it is much clearer than my initial
Hello, Simon.
On Fri, 25/02/2022 at 04:35 +, Simon Riggs wrote:
> On Mon, 21 Feb 2022 at 08:06, Yura Sokolov wrote:
> > Good day, Kyotaro Horiguchi and hackers.
> >
> > On Thu, 17/02/2022 at 14:16 +0900, Kyotaro Horiguchi wrote:
> > > At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov
> > > wrote
At Fri, 25 Feb 2022 00:04:55 -0800, Andres Freund wrote in
> Why don't you just use LockBufHdr/UnlockBufHdr?
FWIW, v2 looked fine to me in regards to this point.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Hi,
On 2022-02-21 11:06:49 +0300, Yura Sokolov wrote:
> From 04b07d0627ec65ba3327dc8338d59dbd15c405d8 Mon Sep 17 00:00:00 2001
> From: Yura Sokolov
> Date: Mon, 21 Feb 2022 08:49:03 +0300
> Subject: [PATCH v3] [PGPRO-5616] bufmgr: do not acquire two partition locks.
>
> Acquiring two partition l
On Mon, 21 Feb 2022 at 08:06, Yura Sokolov wrote:
>
> Good day, Kyotaro Horiguchi and hackers.
>
> On Thu, 17/02/2022 at 14:16 +0900, Kyotaro Horiguchi wrote:
> > At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov
> > wrote in
> > > Hello, all.
> > >
> > > I thought about patch simplification, and te
Good day, Kyotaro Horiguchi and hackers.
On Thu, 17/02/2022 at 14:16 +0900, Kyotaro Horiguchi wrote:
> At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov
> wrote in
> > Hello, all.
> >
> > I thought about patch simplification, and tested version
> > without BufTable and dynahash api change at all.
At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov
wrote in
> Hello, all.
>
> I thought about patch simplification, and tested version
> without BufTable and dynahash api change at all.
>
> It performs surprisingly well. It is just a bit worse
> than v1 since there is more contention around dynah
Hello, all.
I thought about patch simplification, and tested version
without BufTable and dynahash api change at all.
It performs surprisingly well. It is just a bit worse
than v1 since there is more contention around dynahash's
freelist, but most of improvement remains.
I'll finish benchmarking
On Sun, 06/02/2022 at 19:34 +0300, Michail Nikolaev wrote:
> Hello, Yura.
>
> One additional moment:
>
> > 1332: Assert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);
> > 1333: CLEAR_BUFFERTAG(buf->tag);
> > 1334: buf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);
> > 1335: Unlock
Hello, Yura.
One additional moment:
> 1332: Assert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);
> 1333: CLEAR_BUFFERTAG(buf->tag);
> 1334: buf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);
> 1335: UnlockBufHdr(buf, buf_state);
I think there is no sense in unlocking the buffer here
Hello, Yura.
Test results look promising. But it seems like the naming and dynahash
API change is a little confusing.
1) I think it is better to split the main part and atomic nentries
optimization into separate commits.
2) Also, it would be nice to also fix hash_update_hash_key bug :)
3) Do we r
At Sat, 22 Jan 2022 12:56:14 +0500, Andrey Borodin wrote
in
> I've taken a look into the patch. The idea seems reasonable to me:
> clearing/evicting the old buffer and placing the new one seem to be
> different units of work, there is no need to couple both partition
> locks together. And the claimed per
> On Dec 21, 2021, at 10:23, Yura Sokolov wrote:
>
>
Hi Yura!
I've taken a look into the patch. The idea seems reasonable to me:
clearing/evicting the old buffer and placing the new one seem to be different units of
work; there is no need to couple both partition locks together. And the claime
On Sat, 02/10/2021 at 01:25 +0300, Yura Sokolov wrote:
> Good day.
>
> I found some opportunity in Buffer Manager code in BufferAlloc
> function:
> - When a valid buffer is evicted, BufferAlloc acquires two partition
> lwlocks: one for the partition the evicted block is in and one for the new
> block placeme
On Fri, 01/10/2021 at 15:46 -0700, Zhihong Yu wrote:
>
>
> On Fri, Oct 1, 2021 at 3:26 PM Yura Sokolov
> wrote:
> > Good day.
> >
> > I found some opportunity in Buffer Manager code in BufferAlloc
> > function:
> > - When a valid buffer is evicted, BufferAlloc acquires two partition
> > lwlocks: for
On Fri, Oct 1, 2021 at 3:26 PM Yura Sokolov
wrote:
> Good day.
>
> I found some opportunity in Buffer Manager code in BufferAlloc
> function:
> - When a valid buffer is evicted, BufferAlloc acquires two partition
> lwlocks: one for the partition the evicted block is in and one for the new
> block placeme
Good day.
I found some opportunity in Buffer Manager code in BufferAlloc
function:
- When a valid buffer is evicted, BufferAlloc acquires two partition
lwlocks: one for the partition the evicted block is in and one for the
partition of the new block's placement.
It doesn't matter if there is a small number of concurrent repl