Rebased to make patchset v5.
I also found that my past replies have split the thread in the pgsql-hackers
archive. I will try to reconnect this mail to the original thread [1] and have
it point to the separated portions [2][3][4]. Note that patchset v3 is in [3]
and v4 is in [4].
Regards,
[1]
Dear hackers,
I rebased my old patchset. It would be good to compare this v4 patchset to
the non-volatile WAL buffer one [1].
[1]
https://www.postgresql.org/message-id/002101d649fb$1f5966e0$5e0c34a0$@hco.ntt.co.jp_1
Regards,
Takashi
--
Takashi Menjo
v4-0001-Add-configure-option-for-PMDK.patch
Peter Eisentraut wrote:
> I'm concerned with how this would affect the future maintenance of this
> code. You are introducing a whole separate code path for PMDK beside
> the normal file path (and it doesn't seem very well separated either).
> Now everyone who wants to do some surgery in the WAL code …
On 30/01/2019 07:16, Takashi Menjo wrote:
> Sorry but I found that the patchset v2 had a bug in managing WAL segment
> file offset. I fixed it and updated a patchset as v3 (attached).
I'm concerned with how this would affect the future maintenance of this
code. You are introducing a whole separate code path for PMDK beside the
normal file path (and it doesn't seem very well separated either). …
Hi,
Sorry but I found that the patchset v2 had a bug in managing WAL segment
file offset. I fixed it and updated a patchset as v3 (attached).
Regards,
Takashi
--
Takashi Menjo - NTT Software Innovation Center
0001-Add-configure-option-for-PMDK-v3.patch
0002-Read-
Hi,
Peter Eisentraut wrote:
> When you manage the WAL (or perhaps in the future relation files)
> through PMDK, is there still a file system view of it somewhere, for
> browsing, debugging, and for monitoring tools?
First, I assume that our patchset is used with a filesystem that supports
direct access (DAX) …
On 25/01/2019 09:52, Takashi Menjo wrote:
> Heikki Linnakangas wrote:
>> To re-iterate what I said earlier in this thread, I think the next step
>> here is to write a patch that modifies xlog.c to use plain old
>> mmap()/msync() to memory-map the WAL files, to replace the WAL buffers.
> Sorry but I found that the patchset v2 had a bug in managing WAL segment
> file offset. …
Hello,
On behalf of Yoshimi, I rebased the patchset onto the latest master
(e3565fd6). Please see the attachment. It also includes an additional bug fix
(in patch 0002) for a temporary-filename issue.
Note that PMDK 1.4.2+ supports the MAP_SYNC and MAP_SHARED_VALIDATE flags,
so please use a new version of PMDK. …
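For reference, a minimal C sketch (mine, not from the patchset; the path and
segment size are illustrative) of mapping a WAL segment with these flags,
assuming a DAX-mounted filesystem and a kernel/glibc that define MAP_SYNC:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SEG_SIZE (16 * 1024 * 1024)   /* default WAL segment size */

    int main(void)
    {
        int fd = open("pg_wal/000000010000000000000001", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* MAP_SHARED_VALIDATE makes the kernel reject unknown flags, so a
         * missing MAP_SYNC fails loudly instead of being silently dropped. */
        void *p = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) { perror("mmap(MAP_SYNC)"); return 1; }

        /* With MAP_SYNC, flushing CPU caches for the stored range is enough
         * for durability; no msync()/fsync() call is required. */
        munmap(p, SEG_SIZE);
        close(fd);
        return 0;
    }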
Hi,
On 2019-01-23 18:45:42 +0200, Heikki Linnakangas wrote:
> To re-iterate what I said earlier in this thread, I think the next step here
> is to write a patch that modifies xlog.c to use plain old mmap()/msync() to
> memory-map the WAL files, to replace the WAL buffers. Let's see what the
> performance …
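For anyone following along, a minimal sketch of that approach, with an
illustrative path and a made-up record; this is my reading of the suggestion,
not a patch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SEG_SIZE (16 * 1024 * 1024)

    int main(void)
    {
        int fd = open("pg_wal/000000010000000000000001", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the segment; stores into the mapping stand in for WAL buffers. */
        char *wal = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (wal == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(wal, "dummy WAL record", 16);

        /* Commit-time durability: flush the dirty page back to the file. */
        if (msync(wal, 4096, MS_SYNC) != 0) { perror("msync"); return 1; }

        munmap(wal, SEG_SIZE);
        close(fd);
        return 0;
    }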
On 10/12/2018 23:37, Dmitry Dolgov wrote:
> On Thu, Nov 29, 2018 at 6:48 PM Dmitry Dolgov <9erthali...@gmail.com> wrote:
> > On Tue, Oct 2, 2018 at 4:53 AM Michael Paquier wrote:
> > > On Mon, Aug 06, 2018 at 06:00:54PM +0900, Yoshimi Ichiyanagi wrote:
> > > > The libpmem's pmem_map_file() supported 2M/1G(the size of huge page) …
> On Thu, Nov 29, 2018 at 6:48 PM Dmitry Dolgov <9erthali...@gmail.com> wrote:
>
> > On Tue, Oct 2, 2018 at 4:53 AM Michael Paquier wrote:
> >
> > On Mon, Aug 06, 2018 at 06:00:54PM +0900, Yoshimi Ichiyanagi wrote:
> > > The libpmem's pmem_map_file() supported 2M/1G(the size of huge page)
> > > alignment, since it could reduce the number of page faults. …
> On Tue, Oct 2, 2018 at 4:53 AM Michael Paquier wrote:
>
> On Mon, Aug 06, 2018 at 06:00:54PM +0900, Yoshimi Ichiyanagi wrote:
> > The libpmem's pmem_map_file() supported 2M/1G(the size of huge page)
> > alignment, since it could reduce the number of page faults.
> > In addition, libpmem's pmem_memcpy_nodrain() …
On Mon, Aug 06, 2018 at 06:00:54PM +0900, Yoshimi Ichiyanagi wrote:
> The libpmem's pmem_map_file() supported 2M/1G(the size of huge page)
> alignment, since it could reduce the number of page faults.
> In addition, libpmem's pmem_memcpy_nodrain() is the function
> > to copy data using single instruction, multiple data (SIMD) instructions …
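A minimal libpmem sketch of those two calls (mine, not the patch; the mount
point and sizes are made up; build with cc -lpmem):

    #include <libpmem.h>
    #include <stdio.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Create/map a 16 MB file on a DAX filesystem. */
        char *dst = pmem_map_file("/mnt/pmem/walseg", 16 * 1024 * 1024,
                                  PMEM_FILE_CREATE, 0600,
                                  &mapped_len, &is_pmem);
        if (dst == NULL) { perror("pmem_map_file"); return 1; }

        /* Non-temporal copy; durability is deferred to the drain below.
         * (If is_pmem were false, pmem_msync() would be needed instead.) */
        pmem_memcpy_nodrain(dst, "dummy WAL record", 16);
        pmem_drain();

        pmem_unmap(dst, mapped_len);
        return 0;
    }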
I'm sorry for the delay in replying to your mail.
Thu, 1 Mar 2018 18:40:05 +0800, Heikki Linnakangas wrote:
>Interesting. How does this compare with using good old mmap()?
The libpmem's pmem_map_file() supported 2M/1G(the size of huge page)
alignment, since it could reduce the number of page faults. …
On 01/03/18 12:40, Heikki Linnakangas wrote:
On 16/01/18 15:00, Yoshimi Ichiyanagi wrote:
These patches enable the use of the Persistent Memory Development Kit (PMDK)[1]
for reading/writing WAL logs on persistent memory (PMEM).
PMEM is next-generation storage with a number of nice features:
fast, byte-addressable and non-volatile. …
Thu, 1 Mar 2018 02:36:41 -0800, Andres Freund wrote
(Re: [HACKERS][PATCH] Applying PMDK to WAL operations for persistent memory):
>On 2018-02-05 09:59:25 +0900, Yoshimi Ichiyanagi wrote:
>> I added my patches to the CommitFest 2018-3. …
On 16/01/18 15:00, Yoshimi Ichiyanagi wrote:
Hi.
These patches enable the use of the Persistent Memory Development Kit (PMDK)[1]
for reading/writing WAL logs on persistent memory (PMEM).
PMEM is next-generation storage with a number of nice features:
fast, byte-addressable and non-volatile.
Interesting. How does this compare with using good old mmap()? …
On 2018-02-05 09:59:25 +0900, Yoshimi Ichiyanagi wrote:
> I added my patches to the CommitFest 2018-3.
> https://commitfest.postgresql.org/17/1485/
Unfortunately this is the last CF for the v11 development cycle. This is
a major project submitted late for v11, there's been no code level
review, the …
>On Tue, Jan 30, 2018 at 3:37 AM, Yoshimi Ichiyanagi
> wrote:
>> Oracle and Microsoft SQL Server supported PMEM [1][2].
>> I think it is not too early for PostgreSQL to support PMEM.
>
>I agree; it's good to have the option available for those who have
>access to the hardware.
>
>If you haven't added your patch …
On Tue, Jan 30, 2018 at 3:37 AM, Yoshimi Ichiyanagi
wrote:
> Oracle and Microsoft SQL Server supported PMEM [1][2].
> I think it is not too early for PostgreSQL to support PMEM.
I agree; it's good to have the option available for those who have
access to the hardware.
If you haven't added your patch …
Fri, 19 Jan 2018 09:42:25 -0500, Robert Haas wrote:
>
>I think that you really need to include the checkpoints in the tests.
>I would suggest setting max_wal_size and/or checkpoint_timeout so that
>you reliably complete 2 checkpoints in a 30-minute test, and then do a
>comparison on that basis.
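For example (illustrative values of mine, not numbers Robert specified; they
would need tuning to the actual write rate):

    checkpoint_timeout = 12min   # about two timed checkpoints per 30-minute run
    max_wal_size = 10GB          # large enough that the timeout, not WAL
                                 # volume, is what triggers them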
On Thu, Jan 25, 2018 at 8:54 PM, Tsunakawa, Takayuki
wrote:
> Yes, that's pg_test_fsync output. Isn't pg_test_fsync the tool to determine
> the value for wal_sync_method? Is this manual misleading?
Hmm. I hadn't thought about it as misleading, but now that you
mention it, I'd say that it probably …
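(For context, pg_test_fsync compares the raw sync primitives on a given
device, e.g.

    $ pg_test_fsync -f /mnt/pmem/pg_test_fsync.out -s 5

with an illustrative path; it says nothing about how WAL writes and flushes
overlap in a running server, which is the distinction being made here.)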
From: Robert Haas [mailto:robertmh...@gmail.com]
> If I understand correctly, those results are all just pg_test_fsync results.
> That's not reflective of what will happen when the database is actually
> running. When you use open_sync or open_datasync, you force WAL write and
> WAL flush to happen …
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> Or to put it short, the lack of granular syncs in ext3 kills performance
> for some workloads. Tomas Vondra's presentation on such matters is a really
> cool read by the way:
> https://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrf
On Thu, Jan 25, 2018 at 8:32 PM, Tsunakawa, Takayuki
wrote:
> As I showed previously, regular file writes on PCIe flash, *not writes using
> PMDK on persistent memory*, were 20% faster with open_datasync than with
> fdatasync.
If I understand correctly, those results are all just pg_test_fsync
results. …
From: Robert Haas [mailto:robertmh...@gmail.com]
> On Thu, Jan 25, 2018 at 7:08 PM, Tsunakawa, Takayuki wrote:
> > No, I'm not saying we should make the persistent memory mode the default.
> I'm simply asking whether it's time to make open_datasync the default
> setting. We can write a notice in the release note for users who still use
> ext3 …
On Thu, Jan 25, 2018 at 09:30:45AM -0500, Robert Haas wrote:
> On Wed, Jan 24, 2018 at 10:31 PM, Tsunakawa, Takayuki
> wrote:
>>> This is just a guess, of course. You didn't mention what the underlying
>>> storage for your test was?
>>
>> Uh, your guess was correct. My file system was ext3, where fsync() writes
>> all dirty buffers in page cache. …
On Thu, Jan 25, 2018 at 7:08 PM, Tsunakawa, Takayuki
wrote:
> No, I'm not saying we should make the persistent memory mode the default.
> I'm simply asking whether it's time to make open_datasync the default
> setting. We can write a notice in the release note for users who still use
> ext3 etc. …
From: Robert Haas [mailto:robertmh...@gmail.com]
> On Wed, Jan 24, 2018 at 10:31 PM, Tsunakawa, Takayuki
> wrote:
> > As you said, open_datasync was 20% faster than fdatasync on RHEL7.2, on
> > an LVM volume with ext4 (mounted with options noatime, nobarrier) on PCIe
> > flash memory.
>
> So does that …
On Wed, Jan 24, 2018 at 10:31 PM, Tsunakawa, Takayuki
wrote:
>> This is just a guess, of course. You didn't mention what the underlying
>> storage for your test was?
>
> Uh, your guess was correct. My file system was ext3, where fsync() writes
> all dirty buffers in page cache.
Oh, ext3 is terrible …
From: Robert Haas [mailto:robertmh...@gmail.com]
> I think open_datasync will be worse on systems where fsync() is expensive
> -- it forces the data out to disk immediately, even if the data doesn't
> need to be flushed immediately. That's bad, because we wait immediately
> when we could have deferred …
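In plain syscall terms, a toy sketch of the two behaviors (not PostgreSQL
code; filenames are made up):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192] = {0};

        /* open_datasync: every write() blocks until the data is durable. */
        int fd1 = open("wal_a", O_WRONLY | O_CREAT | O_DSYNC, 0600);
        (void) write(fd1, buf, sizeof(buf));   /* flushed right here */
        close(fd1);

        /* fdatasync: writes queue up and share one deferred flush. */
        int fd2 = open("wal_b", O_WRONLY | O_CREAT, 0600);
        (void) write(fd2, buf, sizeof(buf));
        (void) write(fd2, buf, sizeof(buf));   /* no flush forced yet */
        fdatasync(fd2);                        /* single flush at the end */
        close(fd2);
        return 0;
    }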
On Tue, Jan 23, 2018 at 8:07 PM, Tsunakawa, Takayuki
wrote:
> From: Robert Haas [mailto:robertmh...@gmail.com]
>> Oh, incidentally -- in our internal testing, we found that
>> wal_sync_method=open_datasync was significantly faster than
>> wal_sync_method=fdatasync. You might find that open_datasync isn't much
>> different from pmem_drain, even though they're both …
From: Robert Haas [mailto:robertmh...@gmail.com]
> Oh, incidentally -- in our internal testing, we found that
> wal_sync_method=open_datasync was significantly faster than
> wal_sync_method=fdatasync. You might find that open_datasync isn't much
> different from pmem_drain, even though they're both …
On Fri, Jan 19, 2018 at 9:42 AM, Robert Haas wrote:
> That's not necessarily an argument against this patch, which by the
> way I have not reviewed. Even a 5% speedup on this kind of workload
> is potentially worthwhile; everyone likes it when things go faster.
> I'm just not convinced you can get …
On Fri, Jan 19, 2018 at 4:56 AM, Yoshimi Ichiyanagi
wrote:
>>Was the only non-default configuration setting wal_sync_method? i.e.
>>synchronous_commit=on? No change to max_wal_size?
> No, I used the following parameter in postgresql.conf to prevent
> checkpoints from occurring while running the benchmarks. …
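(The parameter list is cut off above; a typical way to get that effect, my
guess rather than the actual settings used, would be something like

    checkpoint_timeout = 1d    # far longer than any benchmark run
    max_wal_size = 20GB        # keep size-triggered checkpoints away

so that neither the timeout nor WAL volume fires a checkpoint mid-run.)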
Thank you for your reply.
Wed, 17 Jan 2018 15:29:11 -0500, Robert Haas wrote:
>> Using pgbench, which is a general PostgreSQL benchmark, the postgres server
>> to which the patches are applied is about 5% faster than the original server.
>> And using my insert benchmark, it is up to 90% faster than the original one. …
On Tue, Jan 16, 2018 at 2:00 AM, Yoshimi Ichiyanagi
wrote:
> C-5. Running the 2 benchmarks (1. pgbench, 2. my insert benchmark)
> C-5-1. pgbench
> # numactl -N 1 pgbench -c 32 -j 8 -T 120 -M prepared [DB_NAME]
>
> The averages of running pgbench three times are:
> wal_sync_method=fdatasync: tps = …
On Tue, Jan 16, 2018 at 2:00 AM, Yoshimi Ichiyanagi
wrote:
> Using pgbench, which is a general PostgreSQL benchmark, the postgres server
> to which the patches are applied is about 5% faster than the original server.
> And using my insert benchmark, it is up to 90% faster than the original one.
> I will describe …
Hi.
These patches enable the use of the Persistent Memory Development Kit (PMDK)[1]
for reading/writing WAL logs on persistent memory (PMEM).
PMEM is next-generation storage with a number of nice features:
fast, byte-addressable and non-volatile.
Using pgbench, which is a general PostgreSQL benchmark, the postgres server to
which the patches are applied is about 5% faster than the original server. …