On Mon, 30 Mar 2020 12:34:04 -0400 (EDT)
Mikulas Patocka wrote:
> Hi
>
> I tested it on my rotational disk:
>
>
> On Thu, 27 Feb 2020, Lukas Straub wrote:
>
> > If a full metadata buffer is being written, don't read it first. This
> > prevents a read-modify-write cycle and increases performance on HDDs
> > considerably.
dd if=/dev/zero of=/dev/mapper/hdda_integ bs=64K count=$((16*1024*4))
conv=fsync oflag=direct status=progress
Without patch:
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 400.326 s, 10.7 MB/s
With patch:
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 41.2057 s, 104 MB/s
Signed-off-by: Lukas Straub
---
drivers/md/dm-integrity.c
On Tue, 25 Feb 2020 11:41:45 -0500 (EST)
Mikulas Patocka wrote:
> On Thu, 20 Feb 2020, Lukas Straub wrote:
>
> > If a full tag area is being written, don't read it first. This prevents a
> > read-modify-write cycle and increases performance on HDDs considerably.
> >
dd if=/dev/zero of=/dev/mapper/hdda_integ bs=64K count=$((16*1024*4))
conv=fsync oflag=direct status=progress
Without patch:
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 400.326 s, 10.7 MB/s
With patch:
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 41.2057 s, 104 MB/s
Signed-off-by: Lukas Straub
---
On Wed, 26 Feb 2020 09:14:31 -0500 (EST)
Mikulas Patocka wrote:
> On Wed, 26 Feb 2020, Lukas Straub wrote:
>
> > > > - data = dm_bufio_read(ic->bufio, *metadata_block, &b);
> > > > - if (IS_ERR(data))
> > >
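For illustration, the idea behind the removed dm_bufio_read() call quoted
above, as a hedged sketch rather than the exact upstream patch (the helper
name and the whole-buffer flag are made up here): dm_bufio_new() hands back
a buffer without reading it from disk, so a write that covers the entire
buffer can skip the read half of the read-modify-write cycle.

#include <linux/dm-bufio.h>

/* Sketch only: decide how to obtain a metadata buffer. */
static void *get_metadata_buffer(struct dm_bufio_client *bufio,
				 sector_t block, bool overwrites_whole_buffer,
				 struct dm_buffer **b)
{
	if (overwrites_whole_buffer)
		return dm_bufio_new(bufio, block, b);	/* no read I/O issued */

	return dm_bufio_read(bufio, block, b);	/* read-modify-write path */
}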
On Tue, 24 Mar 2020 13:18:22 -0400
Mike Snitzer wrote:
> On Thu, Feb 27 2020 at 8:26am -0500,
> Lukas Straub wrote:
>
> > If a full metadata buffer is being written, don't read it first. This
> > prevents a read-modify-write cycle and increases performance on HDDs
> > considerably.
On Tue, 24 Mar 2020 15:11:49 -0400
Mike Snitzer wrote:
> On Tue, Mar 24 2020 at 2:59pm -0400,
> Lukas Straub wrote:
>
> > On Tue, 24 Mar 2020 13:18:22 -0400
> > Mike Snitzer wrote:
> >
> > > On Thu, Feb 27 2020 at 8:26am -0500,
> > > Lukas Straub wrote:
Signed-off-by: Lukas Straub
---
drivers/md/dm-integrity.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 5a7a1b90e671..a26ed65869f6 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -2196,6 +2196,8 @@ s
On Sun, 20 Dec 2020 14:02:22 +0100
Lukas Straub wrote:
> With an external metadata device, flush requests aren't passed down
> to the data device.
>
> Fix this by issuing flush in the right places: In integrity_commit
> when not in journal mode, in do_journal_write after writing the journal
[...]
> In order to not degrade performance, we overlap the data device flush with
> the metadata device flush.
>
> Signed-off-by: Mikulas Patocka
> Reported-by: Lukas Straub
> Cc: stable@vger.kernel.org
Looks good to me.
Reviewed-by: Lukas Straub
Regards,
Lukas Straub
> ---
> drivers/md/dm-bufio.c | 6
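A hedged sketch of that overlap, using the bio API as of the v5.10 era
(the function names are invented for illustration; this is not the
upstream commit verbatim): submit an empty REQ_PREFLUSH bio to the data
device, write out the dirty metadata buffers while that flush is in
flight, then wait for the data flush to complete.

#include <linux/bio.h>
#include <linux/completion.h>
#include <linux/dm-bufio.h>

static void data_flush_end_io(struct bio *bio)
{
	complete((struct completion *)bio->bi_private);
}

/* Sketch only: flush the data device and the metadata device in parallel. */
static int flush_data_and_metadata(struct block_device *data_bdev,
				   struct dm_bufio_client *bufio)
{
	struct completion done;
	struct bio *bio = bio_alloc(GFP_NOIO, 0);	/* empty flush bio */
	int r;

	init_completion(&done);
	bio_set_dev(bio, data_bdev);
	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
	bio->bi_private = &done;
	bio->bi_end_io = data_flush_end_io;
	submit_bio(bio);				/* data flush now in flight */

	r = dm_bufio_write_dirty_buffers(bufio);	/* overlapped metadata flush */

	wait_for_completion(&done);			/* join the data flush */
	bio_put(bio);
	return r;
}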
just return the corrupted data without an I/O error, and a write could
write directly to the on-disk data to fix up the corruption everywhere.
btrfs could also check that the newly written data actually matches the
checksum.
However, in this btrfs use case the process still needs to be
CAP_SYS_ADMIN o
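A userspace-flavored sketch of that repair flow (every helper below is
hypothetical, not a btrfs API; the point is the ordering: the read returns
the bad bytes instead of -EIO, the rewrite fixes all on-disk copies, and a
re-read verifies the new data against the checksum):

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

/* Hypothetical helpers, declared only to make the sketch self-contained. */
void read_raw(uint64_t lba, uint8_t *buf);	/* returns data even on csum mismatch */
void write_raw(uint64_t lba, const uint8_t *buf); /* rewrites every on-disk copy */
uint32_t csum_of(const uint8_t *buf);
uint32_t stored_csum(uint64_t lba);

static bool repair_block(uint64_t lba, const uint8_t *good_copy)
{
	uint8_t buf[BLOCK_SIZE];

	read_raw(lba, buf);			/* no -EIO on checksum mismatch */
	if (csum_of(buf) == stored_csum(lba))
		return true;			/* nothing to repair */

	write_raw(lba, good_copy);		/* fix the corruption everywhere */

	read_raw(lba, buf);			/* verify the fix actually took */
	return csum_of(buf) == stored_csum(lba);
}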
CC'ing linux-raid mailing list, where md raid development happens.
dm-raid is just a different interface to md raid.
On Fri, 7 Jan 2022 10:30:56 +0800
Qu Wenruo wrote:
> Hi,
>
> Recently I'm working on refactoring btrfs raid56 (with the long-term
> objective to add a proper journal to solve the write hole),
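For context on the term: the write hole is the raid5/6 failure mode where
a crash lands between updating the data blocks and updating the matching
parity, leaving the stripe internally inconsistent, so a later disk
failure reconstructs garbage. A journal closes it by making the whole
stripe update durable before touching the array. Hypothetical pseudocode
only (none of these names are md or btrfs functions):

struct stripe;					/* opaque for this sketch */
void compute_parity(struct stripe *s);
void journal_append(struct stripe *s);		/* log new data + parity */
void journal_flush(void);			/* make the log durable */
void write_stripe_members(struct stripe *s);	/* in-place array update */
void journal_checkpoint(struct stripe *s);	/* retire the log entry */

void journaled_stripe_write(struct stripe *s)
{
	compute_parity(s);
	journal_append(s);	/* 1. log the full stripe update */
	journal_flush();	/* 2. durable before the array is touched */
	write_stripe_members(s);/* 3. a crash here is replayed from the log */
	journal_checkpoint(s);	/* 4. only now may the log entry be dropped */
}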
On Sat, 8 Jan 2022 19:52:59 +
Lukas Straub wrote:
> CC'ing linux-raid mailing list, where md raid development happens.
> dm-raid is just a different interface to md raid.
>
> On Fri, 7 Jan 2022 10:30:56 +0800
> Qu Wenruo wrote:
>
> > Hi,
> >
> > >
On Sun, 9 Jan 2022 20:13:36 +0800
Qu Wenruo wrote:
> On 2022/1/9 18:04, David Woodhouse wrote:
> > On Sun, 2022-01-09 at 07:55 +0800, Qu Wenruo wrote:
> >> On 2022/1/9 04:29, Lukas Straub wrote:
> >>> But there is an even simpler solution for btrfs: It could just
CC'ing Song Liu (md-raid maintainer) and linux-raid mailing list.
On Fri, 21 Jan 2022 16:38:03 +
Roger Willcocks wrote:
> Hi folks,
>
> we noticed a thirty percent drop in performance on one of our raid
> arrays when switching from CentOS 6.5 to 8.4; it uses raid0-like
> striping to balance
On Wed, 14 Jun 2023 17:29:17 -0400
Marc Smith wrote:
> Hi,
>
> I'm using dm-writecache via 'lvmcache' on Linux 5.4.229 (vanilla
> kernel.org source). I've been testing my storage server -- I'm using a
> couple NVMe drives in an MD RAID1 array that is the cache (fast)
> device, and using a 12-dri
On Fri, 16 Jun 2023 16:43:47 -0400
Marc Smith wrote:
> On Fri, Jun 16, 2023 at 12:33 PM Lukas Straub wrote:
> >
> > On Wed, 14 Jun 2023 17:29:17 -0400
> > Marc Smith wrote:
> >
> > > Hi,
> > >
> > > I'm using dm-writecache via 'lvmcache'