From: Sagi Grimberg
commit ca1ff67d0fb14f39cf0cc5102b1fbcc3b14f6fb9 upstream.
When a bio merges, we can get a request that spans multiple
bios, and the overall request payload size is the sum of
all bios. When we calculate how much we need to send
from the existing bio (and bvec), we did not take
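The snippet above is cut off, so the following is only a rough user-space model of the general hazard it describes, not the nvme-tcp code (all names and sizes are made up): if the length of a send is derived from the whole merged request instead of being capped at the current bio, the calculation walks past that bio's data.

#include <stddef.h>
#include <stdio.h>

/* Toy model: a "request" built from two merged "bios". */
struct bio_model { size_t len; };

/* Buggy calculation: only looks at the whole request payload when deciding
 * how much to send from the current bio, so it can overrun that bio. */
static size_t send_len_buggy(size_t request_bytes, size_t sent)
{
    return request_bytes - sent;
}

/* What the commit text implies is needed: also cap at what the current
 * bio still holds. */
static size_t send_len_capped(size_t request_bytes, size_t sent, size_t bio_remaining)
{
    size_t want = request_bytes - sent;
    return want < bio_remaining ? want : bio_remaining;
}

int main(void)
{
    struct bio_model bios[2] = { { 4096 }, { 8192 } };
    size_t request_bytes = bios[0].len + bios[1].len;   /* sum of all bios */

    printf("buggy first send: %zu (walks past bio 0's %zu bytes)\n",
           send_len_buggy(request_bytes, 0), bios[0].len);
    printf("capped first send: %zu\n",
           send_len_capped(request_bytes, 0, bios[0].len));
    return 0;
}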
Also vbuf will not contain the correct data which results
in the userspace emulation being wrong and hence undetected user data
corruption.
In the past we've been mostly lucky as vbuf has ended up aligned but
this is fragile and isn't always true. CONFIG_STACKPROTECTOR in
particular
From: Heiner Kallweit
[ Upstream commit ef9da46ddef071e1bbb943afbbe9b3877184 ]
Petr reported that after resume from suspend RTL8402 partially
truncates incoming packets, and re-initializing register RxConfig
before the actual chip re-initialization sequence is needed to avoid
the issue.
Rep
From: Rohith Surabattula
commit 62593011247c8a8cfeb0c86aff84688b196727c2 upstream.
TCP server info field server->total_read is modified in parallel by
demultiplex thread and decrypt offload worker thread. server->total_read
is used in calculation to discard the remaining data of PDU which is
not
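As a generic illustration of the hazard described above, not the cifs code (the thread names are just labels, and the real fix may use a different synchronization scheme): two threads advancing one shared byte counter without any locking lose updates, so anything computed from it, such as how many remaining PDU bytes to discard, can be wrong.

#include <pthread.h>
#include <stdio.h>

static long total_read;                       /* shared, like server->total_read */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *reader(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* without the lock, increments are */
        total_read += 1;                      /* lost and the discard math breaks */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t demultiplex, offload;

    pthread_create(&demultiplex, NULL, reader, NULL);
    pthread_create(&offload, NULL, reader, NULL);
    pthread_join(demultiplex, NULL);
    pthread_join(offload, NULL);
    printf("total_read = %ld (expected 200000)\n", total_read);
    return 0;
}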
From: Kirill A. Shutemov
[ Upstream commit c3e5ea6ee574ae5e845a40ac8198de1fb63bb3ab ]
Jeff Moyer has reported that one of xfstests triggers a warning when run
on DAX-enabled filesystem:
WARNING: CPU: 76 PID: 51024 at mm/memory.c:2317 wp_page_copy+0xc40/0xd50
...
wp_page_
From: Zeng Tao
[ Upstream commit 4a33691c4cea9eb0a7c66e87248be4637e14b180 ]
Currently there are only 10 bytes to store the cpu-topology 'name'
information. Only 10 bytes are copied into the cluster/thread/core names.
If the cluster ID exceeds a 2-digit number, it will result in data
corruption
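A small stand-alone sketch of the sizing problem described above, illustrative only and not the driver code: "cluster" already takes 7 characters, so a 10-byte buffer only leaves room for a 2-digit ID plus the terminating NUL.

#include <stdio.h>

int main(void)
{
    char name[10];
    int cluster_id = 128;                     /* a 3-digit cluster ID */

    int needed = snprintf(name, sizeof(name), "cluster%d", cluster_id);
    if (needed >= (int)sizeof(name))
        printf("\"cluster%d\" needs %d bytes, buffer has %zu: truncated (or overflowed with an unbounded copy)\n",
               cluster_id, needed + 1, sizeof(name));

    /* A buffer sized for the worst case avoids the problem. */
    char bigger[20];
    snprintf(bigger, sizeof(bigger), "cluster%d", cluster_id);
    printf("ok: %s\n", bigger);
    return 0;
}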
From: Dexuan Cui
The v4.4 stable kernel lacks this bugfix:
commit 327868212381 ("make skb_copy_datagram_msg() et.al. preserve ->msg_iter
on error").
As a result, the v4.4 kernel can deliver corrupt data to the application
when a corrupt UDP packet is closely followed by a valid UDP packet: the
s
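A toy user-space model of the iterator problem behind the missing fix, not kernel code (the structure and function names are invented): if the copy routine does not restore its position in the user buffer when a datagram fails checksum verification, bytes from the corrupt datagram end up in front of the data from the next, valid one.

#include <stdio.h>
#include <string.h>

struct iter { char *buf; size_t off; };

/* Copy a datagram into the user buffer; return -1 if its checksum is bad.
 * The buggy variant leaves the iterator advanced even on failure. */
static int copy_datagram(struct iter *it, const char *pkt, size_t len,
                         int checksum_ok, int restore_on_error)
{
    size_t saved = it->off;

    memcpy(it->buf + it->off, pkt, len);
    it->off += len;
    if (!checksum_ok) {
        if (restore_on_error)
            it->off = saved;      /* the fix: preserve the iterator on error */
        return -1;
    }
    return 0;
}

static void receive(int restore_on_error)
{
    char user[16] = { 0 };
    struct iter it = { user, 0 };

    /* A corrupt datagram arrives first, closely followed by a valid one. */
    if (copy_datagram(&it, "XXXX", 4, 0, restore_on_error) < 0)
        copy_datagram(&it, "GOOD", 4, 1, restore_on_error);

    printf("%s: application sees \"%.8s\"\n",
           restore_on_error ? "fixed" : "buggy", user);
}

int main(void)
{
    receive(0);   /* buggy: "XXXXGOOD", corrupt bytes are delivered */
    receive(1);   /* fixed: "GOOD" */
    return 0;
}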
Hi!
> After a lot of fiddling around it turned out that the problem goes away if
> doing "cp --sparse=never"
> when copying the files. This would to me exclude any hardware errors and
> feels more like something
> deeper inside the kernel.
If files contain random data, they are never sparse. It is
On Mon, Jun 29, 2020 at 01:55:40AM +0700, Sebastian Hyrwall wrote:
> Sorry if this is not the right place for this email but I can't think of
> another place (might be linux-fsdevel)
You can always CC the mailing lists of the filesystems.
> Someone here ought to be an expert in this.
>
> It a
Hi
Sorry if this is not the right place for this email but I can't think of
another place (might be linux-fsdevel)
Someone here ought to be an expert in this.
It all started with file corruptions inside VMs that then led to
a lot of testing that
resulted in reproducible results on the
least one block space, let's just write back raw data instead of
compressed one, this can fix data corruption when decompressing
incomplete stored compression data.
Fixes: 50cfa66f0de0 ("f2fs: compress: support zstd compress algorithm")
Signed-off-by: Daeho Jeong
Signed-off-by: Chao Yu
remained in intermediate buffer, it means that zstd algorithm can not
> save at least one block space, let's just write back raw data instead of
> compressed one, this can fix data corruption when decompressing
> incomplete stored compression data.
>
Fixes: 50cfa66f0de0 ("f2fs:
> >
> > > Could we save more memory space for these two cases like ZSTD?
> > > As you know, we are using 5 pages compression buffer for LZ4 and LZO
> > in
> > > compress_log_size=2,
> > > and if the compressed data doesn't fit in
compressed data doesn't fit in 3 pages, it returns -EAGAIN to
> > give up compressing that one.
> >
> > Thanks,
> >
> > On Fri, May 8, 2020 at 10:17 AM, Chao Yu <mailto:yuch...@huawei.com> wrote:
> >
> >> During zstd compress
may return non-zero value
>> because destination buffer is full, but there is still compressed data
>> remained in intermediate buffer, it means that zstd algorithm can not
>> save at least one block space, let's just write back raw data instead of
>> compressed one, this can
compressed one, this can fix data corruption when decompressing
incomplete stored compression data.
Signed-off-by: Daeho Jeong
Signed-off-by: Chao Yu
---
fs/f2fs/compress.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index c22cc0d37369..5e4947250262 1
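A minimal sketch of the decision discussed in this thread, not the f2fs code (the 4 KiB block size and the helper name are assumptions): only keep the compressed form when it actually saves at least one block; otherwise write the raw data back.

#include <stdbool.h>
#include <stdio.h>

#define BLOCK_SIZE 4096u

static bool store_compressed(unsigned raw_bytes, unsigned compressed_bytes)
{
    unsigned raw_blocks  = (raw_bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    unsigned comp_blocks = (compressed_bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;

    return comp_blocks + 1 <= raw_blocks;     /* must save at least one block */
}

int main(void)
{
    /* 16 KiB cluster compressing to 15 KiB still needs 4 blocks, so the raw
     * data is written back instead of a possibly truncated compressed stream. */
    printf("%d\n", store_compressed(4 * BLOCK_SIZE, 15 * 1024));   /* 0 */
    /* Compressing to 11 KiB needs 3 blocks and saves one: keep it. */
    printf("%d\n", store_compressed(4 * BLOCK_SIZE, 11 * 1024));   /* 1 */
    return 0;
}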
reload:
1. construct new target
2. suspend old target
3. resume new target
4. destroy old target
Metadata that were written by the old target between steps 1 and 2 would
not be visible by the new target.
Fix the data corruption by loading the metadata in the resume handler.
Also, validate block_size
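A toy user-space model of the reload ordering problem above, not device-mapper code (all names are invented): a metadata snapshot taken at construct time (step 1) misses whatever the old target writes before it is suspended (step 2), while a snapshot taken in the resume handler (step 3) sees it.

#include <stdio.h>

static int disk_metadata = 1;                 /* "on-disk" metadata version */

struct target { int cached_metadata; };

static void old_target_write(void)            { disk_metadata++; }
static void load_metadata(struct target *t)   { t->cached_metadata = disk_metadata; }

int main(void)
{
    struct target loaded_in_ctr = { 0 }, loaded_in_resume = { 0 };

    load_metadata(&loaded_in_ctr);            /* step 1: construct new target */
    old_target_write();                       /* old target is still live and writes */
                                              /* step 2: suspend old target */
    load_metadata(&loaded_in_resume);         /* step 3: resume new target (the fix) */
                                              /* step 4: destroy old target */

    printf("on disk: %d, ctr-time view: %d, resume-time view: %d\n",
           disk_metadata, loaded_in_ctr.cached_metadata,
           loaded_in_resume.cached_metadata);
    return 0;
}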
size of the second smallest - etc.
A change in Linux 3.14 unintentionally changed the layout for the
second and subsequent zones. All the correct data is still stored, but
each chunk may be assigned to a different device than in pre-3.14 kernels.
This can lead to data corruption.
It is not
3.16.74-rc1 review patch. If anyone has any objections, please let me know.
--
From: Lukas Czerner
commit 57a0da28ced8707cb9f79f071a016b9d005caf5a upstream.
Unaligned AIO must be serialized because the zeroing of partial blocks
of unaligned AIO can result in data corruption
result in data
corruption.
However it decides not to serialize if the potentially unaligned aio is
past i_size with the rationale that no pending writes are possible past
i_size. Unfortunately if the i_size is not block aligned and the second
unaligned write lands past i_size, but still into the
From: Lukas Czerner
commit 57a0da28ced8707cb9f79f071a016b9d005caf5a upstream.
Unaligned AIO must be serialized because the zeroing of partial blocks
of unaligned AIO can result in data corruption in case it's overlapping
another in flight IO.
Currently we wait for all unwritten extents b
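A small sketch of the alignment test implied above (an assumed predicate, not the ext4 code): a write counts as unaligned when its start or end does not fall on a filesystem block boundary, and such writes must be serialized against overlapping in-flight IO because the partial blocks around the copied data get zeroed.

#include <stdbool.h>
#include <stdio.h>

/* blocksize must be a power of two. */
static bool aio_is_unaligned(unsigned long long pos, unsigned long long len,
                             unsigned blocksize)
{
    return ((pos | (pos + len)) & (blocksize - 1)) != 0;
}

int main(void)
{
    printf("%d\n", aio_is_unaligned(4096, 4096, 4096));   /* 0: fully aligned */
    printf("%d\n", aio_is_unaligned(4096, 1000, 4096));   /* 1: unaligned end */
    return 0;
}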
will clear the uptodate flag. But the
> > > data in the buffer may be newer than the disk. In some cases, this
> > > will lead to data corruption.
> > >
> > > For example: if ext4 fails to flush metadata to disk, it will clear
> > > the uptodate flag. When a new coming c
On 4/8/2019 7:11 PM, Jan Kara wrote:
On Sat 06-04-19 15:13:13, ZhangXiaoxu wrote:
When the buffer write fails, 'end_buffer_write_sync' and
'end_buffer_async_write' will clear the uptodate flag. But the
data in the buffer may be newer than the disk. In some cases, this
will
On Sat 06-04-19 15:13:13, ZhangXiaoxu wrote:
> When the buffer write fails, 'end_buffer_write_sync' and
> 'end_buffer_async_write' will clear the uptodate flag. But the
> data in the buffer may be newer than the disk. In some cases, this
> will lead to data corruption
When the buffer write fails, 'end_buffer_write_sync' and
'end_buffer_async_write' will clear the uptodate flag. But the
data in the buffer may be newer than the disk. In some cases, this
will lead to data corruption.
For example: if ext4 fails to flush metadata to disk, it will clear
the u
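A toy model of the effect described in this thread, not the buffer_head code (names are invented): clearing the "cache holds valid data" flag after a failed writeback throws away the only copy that is newer than the disk, so the next read silently brings back stale on-disk data.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cached_block {
    char data[8];
    bool uptodate;                            /* cache holds valid, possibly newer, data */
};

static char disk[8] = "old";

static void read_block(struct cached_block *b)
{
    if (!b->uptodate) {                       /* flag cleared: re-read the stale disk copy */
        memcpy(b->data, disk, sizeof(disk));
        b->uptodate = true;
    }
}

int main(void)
{
    struct cached_block b = { "new", true };  /* in-memory data newer than disk */

    /* Writeback of "new" fails; clearing uptodate here discards the only copy
     * that is newer than what is on disk. */
    b.uptodate = false;

    read_block(&b);
    printf("reader now sees \"%s\" instead of \"new\"\n", b.data);   /* "old" */
    return 0;
}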
fork
reservation. This ultimately causes writeback to the shared extent
and data corruption that is detected across md5 checks of the
filesystem across a mount cycle.
The problem occurs when a buffered write lands over a shared extent
that crosses an extent size hint boundary and that also happens
deduplication that got recently
fixed by commit de02b9f6bb65 ("Btrfs: fix data corruption when
deduplicating between different files").
Fix this by not allowing such operations to be performed and return the
errno -EINVAL to user space. This is what XFS is doing as well at the VFS
level. Th
On 2019-01-07 1:27 p.m., mario.limoncie...@dell.com wrote:
..
> The xHCI overrun workaround should only be applied on TB16/TB16, correct.
>
> Can you double check the verbose information from lsusb for the r8153 device
> on your WD15?
Sure, see below for the full output.
> If it's the same infor
c_s...@realtek.com; linux-
> ker...@vger.kernel.org; linux-...@vger.kernel.org; ryan...@realtek.com
> Subject: Re: r8152: data corruption in various scenarios
>
>
> [EXTERNAL EMAIL]
>
> On 2019-01-07 11:01 a.m., mario.limoncie...@dell.com wrote:
> >
> > TB16 co
On 2019-01-07 11:01 a.m., mario.limoncie...@dell.com wrote:
>
> TB16 contains ASMedia host controller. It's a Thunderbolt dock and all USB
> devices
> are connected to ASMedia host controller in the dock.
>
> WD15 does not contain an ASMedia host controller, it connected to system's
> USB host c
u...@vger.kernel.org; Limonciello, Mario; Ryankao
> Subject: RE: r8152: data corruption in various scenarios
>
>
> [EXTERNAL EMAIL]
>
> Monday, January 07, 2019 5:17 AM
> [...]
> >> This is probably an xHC bug. A similar issue is fixed by commit
> >> 9da5a1092b13
On 2019-01-07 1:46 a.m., Kai Heng Feng wrote:
>
> Do you happen to use a Dell system? We can do some test here.
Yes. It is a Dell XPS 13 9360 i7-8550U notebook,
with the Dell WD15 USB-C dock.
--
Mark Lord
Real-Time Remedies Inc.
ml...@pobox.com
> On Jan 7, 2019, at 12:13, Mark Lord wrote:
>
> On 2019-01-06 11:09 p.m., Kai Heng Feng wrote:
>>
>>
>>> On Jan 7, 2019, at 05:16, Mark Lord wrote:
>>>
>>> On 2019-01-06 4:13 p.m., Mark Lord wrote:
On 2019-01-06 2:14 p.m., Kai Heng Feng wrote:>> On Jan 5, 2019, at 10:14
PM, Mar
On 2019-01-06 11:09 p.m., Kai Heng Feng wrote:
>
>
>> On Jan 7, 2019, at 05:16, Mark Lord wrote:
>>
>> On 2019-01-06 4:13 p.m., Mark Lord wrote:
>>> On 2019-01-06 2:14 p.m., Kai Heng Feng wrote:>> On Jan 5, 2019, at 10:14
>>> PM, Mark Lord
>>> wrote:
>>> ..
> There is even now a special ha
> On Jan 7, 2019, at 05:16, Mark Lord wrote:
>
> On 2019-01-06 4:13 p.m., Mark Lord wrote:
>> On 2019-01-06 2:14 p.m., Kai Heng Feng wrote:>> On Jan 5, 2019, at 10:14 PM,
>> Mark Lord
>> wrote:
>> ..
There is even now a special hack in the upstream r8152.c to attempt to
detect
>>>
Monday, January 07, 2019 5:17 AM
[...]
>> This is probably an xHC bug. A similar issue is fixed by commit 9da5a1092b13
>> ("xhci: Bad Ethernet performance plugged in ASM1042A host”).
>>
>>> I just got that exact message above, with the r8152 in my 1-day old WD15
>>> dock,
>>> with the TB16 "worka
On 2019-01-06 4:13 p.m., Mark Lord wrote:
> On 2019-01-06 2:14 p.m., Kai Heng Feng wrote:>> On Jan 5, 2019, at 10:14 PM,
> Mark Lord
> wrote:
> ..
>>> There is even now a special hack in the upstream r8152.c to attempt to
>>> detect
>>> a Dell TB16 dock and disable RX Aggregation in the driver t
On 2019-01-06 2:14 p.m., Kai Heng Feng wrote:>> On Jan 5, 2019, at 10:14 PM,
Mark Lord
wrote:
..
>> There is even now a special hack in the upstream r8152.c to attempt to detect
>> a Dell TB16 dock and disable RX Aggregation in the driver to prevent such
>> issues.
>>
>> Well.. I have a WD15 doc
> On Jan 5, 2019, at 10:14 PM, Mark Lord wrote:
>
> A couple of years back, I reported data corruption resulting from
> a change in kernel 3.16 which enabled hardware checksums in the r8152 driver.
> This was happening on an embedded system that was using a r8152 USB dongle.
On 2019-01-05 9:14 a.m., Mark Lord wrote:
> A couple of years back, I reported data corruption resulting from
> a change in kernel 3.16 which enabled hardware checksums in the r8152 driver.
> This was happening on an embedded system that was using a r8152 USB dongle.
>
> At the ti
A couple of years back, I reported data corruption resulting from
a change in kernel 3.16 which enabled hardware checksums in the r8152 driver.
This was happening on an embedded system that was using a r8152 USB dongle.
At the time, it was very difficult to figure out what could possibly be
From: Dave Chinner
[ Upstream commit 4721a6010990971440b4ffefbdf014976b8eda2f ]
When doing direct IO to a pipe for do_splice_direct(), the pipe is
trivial to fill up and overflow as it can only hold 16 pages. At
this point bio_iov_iter_get_pages() then returns -EFAULT, and we
abort the IO submi
3.16.61-rc1 review patch. If anyone has any objections, please let me know.
--
From: Herbert Xu
commit 46d8c4b28652d35dc6cfb5adf7f54e102fc04384 upstream.
This was detected by the self-test thanks to Ard's chunking patch.
I finally got around to testing this out on my ancient
On Tue, Nov 6, 2018 at 12:31 PM, Gregory Shapiro wrote:
>
> Hi Jack,
> I tested it in 4.9.102 and I checked the latest code from elixir
> (versions 4.19 and 4.20) and the error in code is still present there.
> More on the scenario and the bug:
> I experienced data corruption in my appli
Hi Jack,
I tested it in 4.9.102 and I checked the latest code from elixir
(versions 4.19 and 4.20) and the error in code is still present there.
More on the scenario and the bug:
I experienced data corruption in my application (nvme based storage).
The issue was caused by faulty hardware
On Mon, Nov 5, 2018 at 4:19 PM, Gregory Shapiro wrote:
>
> Hello, my name is Gregory Shapiro and I am a newbie on this list.
> I recently encountered data corruption as I got a kernel to
> acknowledge write ("io_getevents" system call with a correct number of
> bytes) but underg
Hello, my name is Gregory Shapiro and I am a newbie on this list.
I recently encountered data corruption: I got the kernel to
acknowledge a write (the "io_getevents" system call returned the correct number of
bytes) but the underlying write to disk failed.
After investigating the problem I found it is
> > > > > Direct IO can be used in case of hardware encryption. The following
> > > > > scenario results in a data corruption issue in this path -
> > > > >
> > > > > Thread A - Thread B-
> > >
ted cannot read
from the dropped device anymore. It prints lots of WARN_ON messages.
And it results in data corruption because existing stripes write
problematic data into its replacement device and update the progress.
# Erase disks (1MB + 2GB)
dd if=/dev/zero of=/dev/sda bs=1MB count=2049
dd if=/d
147 ae ae ae ae ae ae ae ae ae ae ae ae ae ae ae ae
*
11777540 ae ae ae ae ae ae ae ae
11777550
# The bytes in range 2515659 to 2519040 have a value of 0x00 and not a
# value of 0xae, data corruption happened due to the deduplication
# operation.
So fix this by rounding down, to the
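A minimal sketch of the "round down" fix mentioned above (the 4096-byte block size and the length are made-up examples, not values from the report): a deduplication length that is not block-aligned is rounded down to a whole number of blocks, so the unaligned tail keeps its original bytes instead of being overwritten.

#include <stdio.h>

#define ROUND_DOWN(x, align)  ((x) & ~((unsigned long long)(align) - 1))

int main(void)
{
    unsigned long long requested = 1000000;   /* hypothetical dedupe length */
    unsigned long long blocksize = 4096;      /* assumed filesystem block size */

    printf("dedupe %llu bytes -> %llu block-aligned bytes\n",
           requested, ROUND_DOWN(requested, blocksize));
    return 0;
}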
On Sat, 2018-08-04 at 11:01 +0200, Greg Kroah-Hartman wrote:
> 4.4-stable review patch. If anyone has any objections, please let me know.
>
> --
>
> From: Herbert Xu
>
> commit 46d8c4b28652d35dc6cfb5adf7f54e102fc04384 upstream.
>
> This was detected by the self-test thanks to
it will result in data corruption.
Actually, it's just an unhandled case of replacement. In commit
(md/raid5: fix interaction of 'replace' and 'recovery'.),
if a NeedReplace device is not UPTODATE then that is an error; the
commit just prints a WARN_ON but also marks
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Herbert Xu
commit 46d8c4b28652d35dc6cfb5adf7f54e102fc04384 upstream.
This was detected by the self-test thanks to Ard's chunking patch.
I finally got around to testing this out on my ancient
4.9-stable review patch. If anyone has any objections, please let me know.
--
From: Filipe Manana
commit bd3599a0e142cd73edd3b6801068ac3f48ac771a upstream.
When we clone a range into a file we can end up dropping existing
extent maps (or trimming them) and replacing them with
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Filipe Manana
commit bd3599a0e142cd73edd3b6801068ac3f48ac771a upstream.
When we clone a range into a file we can end up dropping existing
extent maps (or trimming them) and replacing them with