I am just an ordinary user of Linux and Ventoy.
Q)
https://github.com/ventoy/Ventoy/issues/2234
Is what I have suggested here meaningful?
Are there contraindications against doing it, or alternative suggestions?
Thoughts?
Ventoy, a piece of GPL software, uses a small kernel patch to achieve a small
remou
By moving crypt_free_buffer_pages() before crypt_alloc_buffer(), we no
longer need an extra forward declaration.
Signed-off-by: Yang Shi
---
drivers/md/dm-crypt.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
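The reordering itself is a common C cleanup; a minimal standalone sketch of
the pattern (illustrative names only, not the actual dm-crypt functions):

/* Illustrative sketch only, not the dm-crypt code itself. */

/* Before: the callee is defined after its caller, so the caller needs a
 * separate forward declaration. */
static void helper(void);               /* extra declaration */
static void caller_before(void) { helper(); }
static void helper(void) { /* ... */ }

/* After: define the callee first and the forward declaration goes away. */
static void helper2(void) { /* ... */ }
static void caller_after(void) { helper2(); }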
Changelog:
RFC -> v2:
* Added callback variant for page bulk allocator and mempool bulk allocator
per Mel Gorman.
* Used the callback version in dm-crypt driver.
* Some code cleanup and refactor to reduce duplicate code.
RFC:
https://lore.kernel.org/linux-mm/20221005180341.1738796-1-s
Since v5.13 the page bulk allocator has been available to allocate order-0
pages in bulk. There are a few mempool allocator callers which do order-0
page allocation in a loop, for example dm-crypt, f2fs compress, etc. A
mempool page bulk allocator seems useful, so introduce the mempool page bulk
allocator.
Currently the bulk allocator supports passing pages via a list or an array,
but neither is suitable for some use cases. For example, dm-crypt does not
need a list, but an array may be too big to fit on the stack. So add a new
bulk allocator API which passes in a callback function that deals with the
allocated pages.
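As a rough userspace illustration of the callback idea (the names and
signatures below are made up for the sketch and are not the API added by
this series):

/* Userspace mock of the callback idea only. */
#include <stdio.h>
#include <stdlib.h>

/* Callback invoked once per allocated "page", so the consumer never has to
 * hold all pages in a list or a large on-stack array. */
typedef void (*bulk_page_cb)(void *page, void *data);

/* Hand nr pages to cb() one by one; returns how many allocations succeeded.
 * A real bulk allocator would grab the pages in batches. */
static unsigned int alloc_pages_bulk_cb_mock(unsigned int nr,
                                             bulk_page_cb cb, void *data)
{
        unsigned int i;

        for (i = 0; i < nr; i++) {
                void *page = malloc(4096);      /* stand-in for a struct page */
                if (!page)
                        break;
                cb(page, data);
        }
        return i;
}

static void count_and_free(void *page, void *data)
{
        (*(unsigned int *)data)++;
        free(page);
}

int main(void)
{
        unsigned int got = 0;

        alloc_pages_bulk_cb_mock(8, count_and_free, &got);
        printf("got %u pages via the callback\n", got);
        return 0;
}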
Extract the common initialization code into __mempool_init() and
__mempool_create(), and extract the common allocation code into an internal
function. This will make the following patch easier and avoid duplicate
code.
Signed-off-by: Yang Shi
---
mm/mempool.c | 93
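The shape of that refactor, sketched in plain C with made-up names (the real
mm/mempool.c fields, locking and element preallocation are omitted): both
public constructors funnel into one internal __init helper, so a later
variant does not have to duplicate the setup.

/* Sketch of the refactoring pattern only, not the real mempool code. */
#include <stdlib.h>

struct pool {
        int min_nr;
        void *(*alloc_fn)(void *pool_data);
        void  (*free_fn)(void *element, void *pool_data);
        void *pool_data;
};

/* Common initialization shared by every constructor variant. */
static int __pool_init(struct pool *p, int min_nr,
                       void *(*alloc_fn)(void *),
                       void  (*free_fn)(void *, void *),
                       void *pool_data)
{
        p->min_nr    = min_nr;
        p->alloc_fn  = alloc_fn;
        p->free_fn   = free_fn;
        p->pool_data = pool_data;
        return 0;
}

/* Caller-provided storage, analogous to mempool_init(). */
int pool_init(struct pool *p, int min_nr,
              void *(*alloc_fn)(void *), void (*free_fn)(void *, void *),
              void *pool_data)
{
        return __pool_init(p, min_nr, alloc_fn, free_fn, pool_data);
}

/* Allocating variant, analogous to mempool_create(). */
struct pool *pool_create(int min_nr,
                         void *(*alloc_fn)(void *), void (*free_fn)(void *, void *),
                         void *pool_data)
{
        struct pool *p = malloc(sizeof(*p));

        if (p && __pool_init(p, min_nr, alloc_fn, free_fn, pool_data)) {
                free(p);
                p = NULL;
        }
        return p;
}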
When using dm-crypt for full-disk encryption, dm-crypt allocates an out bio
with the same number of pages as the in bio for encryption. It currently
allocates one page at a time in a loop, which is not efficient. So use the
mempool page bulk allocator instead of allocating one page at a time.
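Conceptually the change swaps a per-page loop for one batched request plus a
fallback; a userspace mock of that shape (mock_bulk_alloc() here only stands
in for the new mempool bulk API):

/* Userspace sketch of the shape of the change only. */
#include <stdlib.h>

#define MOCK_PAGE_SIZE 4096

/* Stand-in for a bulk allocator: returns how many pages it provided. */
static unsigned int mock_bulk_alloc(void **pages, unsigned int nr)
{
        unsigned int i;

        for (i = 0; i < nr; i++) {
                pages[i] = malloc(MOCK_PAGE_SIZE);
                if (!pages[i])
                        break;
        }
        return i;
}

/* Old shape: one allocation call per loop iteration.
 * New shape: request the whole batch in one call, then fall back to
 * single-page allocation only for whatever the bulk call did not provide. */
static unsigned int alloc_out_pages(void **pages, unsigned int nr)
{
        unsigned int got = mock_bulk_alloc(pages, nr);

        while (got < nr) {
                pages[got] = malloc(MOCK_PAGE_SIZE);
                if (!pages[got])
                        break;
                got++;
        }
        return got;
}

int main(void)
{
        void *pages[16];

        return alloc_out_pages(pages, 16) == 16 ? 0 : 1;
}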
I am just an ordinary user of Linux and Ventoy.
Q)
https://github.com/ventoy/Ventoy/issues/2234
Is what I have suggested here meaningful?
Are there contraindications against doing it, or alternative suggestions?
Thoughts?
Ventoy is a GPL-2.0, grub2-based environment for natively booting ISOs and vdisks.
Ventoy u
Hi Kyle,
On Mon, Feb 13, 2023 at 1:12 PM Kyle Sanderson wrote:
>
[...]
> >
> > > The benefit of this can be the data disks are all zoned, and you can
> > > have a fast parity disk and still maintain excellent performance in
> > > the array (limited only by the speed of the disk in question +
> >
> On Tue, Feb 14, 2023 at 2:28 PM Roger Heflin wrote:
>
> Such that you can lose any one data disk and parity can rebuild that
> disk. And if you lose several data disks, then you still have intact
> non-striped data on the remaining disks.
>
> It would almost seem that you would need to put a separa
I think he is wanting the parity across the data blocks on the
separate filesystems (some sort of parity across fs[1-8]/block0 to
parity/block0).
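If I understand that layout correctly, it is SnapRAID-style block parity:
parity block N is the XOR of block N from each data filesystem, so any one
lost disk can be rebuilt from the survivors plus parity. A conceptual sketch
only (not code from any of these patches):

#include <stddef.h>
#include <string.h>

#define NDATA 8
#define BLKSZ 4096

/* parity[i] = data[0][i] ^ data[1][i] ^ ... ^ data[NDATA-1][i] */
void compute_parity(const unsigned char data[NDATA][BLKSZ],
                    unsigned char parity[BLKSZ])
{
        size_t d, i;

        memset(parity, 0, BLKSZ);
        for (d = 0; d < NDATA; d++)
                for (i = 0; i < BLKSZ; i++)
                        parity[i] ^= data[d][i];
}

/* Rebuild one lost device's block by XORing parity with the survivors;
 * lose a second data device and only that device's data is gone, since the
 * filesystems are not striped across each other. */
void rebuild_block(const unsigned char data[NDATA][BLKSZ],
                   const unsigned char parity[BLKSZ],
                   size_t lost, unsigned char out[BLKSZ])
{
        size_t d, i;

        memcpy(out, parity, BLKSZ);
        for (d = 0; d < NDATA; d++) {
                if (d == lost)
                        continue;
                for (i = 0; i < BLKSZ; i++)
                        out[i] ^= data[d][i];
        }
}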
It is not clear to me how this setup would be enough better than the
current setups. Given that one could have 8 spinning disks + 1 SSD, or 12
spinning disks for the
Typically double mounts are done via bind mounts (not really double
mounted, just the same device showing up someplace else). Or one would do a
mount -o remount,rw to remount it read-write so you could write
to it.
A real double mount, where the kernel fs module manages both mounts as
if it was a separate devi
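For reference, both of the usual approaches map onto the mount(2) syscall; a
minimal sketch with placeholder paths, equivalent to "mount --bind" and
"mount -o remount,rw":

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* "Double mount" via a bind mount: the same filesystem instance
         * simply becomes visible at a second location. */
        if (mount("/mnt/data", "/srv/data", NULL, MS_BIND, NULL))
                perror("bind mount");

        /* Remount an existing read-only mount read-write in place. */
        if (mount(NULL, "/mnt/data", NULL, MS_REMOUNT, NULL))
                perror("remount rw");

        return 0;
}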
On 14/02/2023 22:28, Roger Heflin wrote:
On Tue, Feb 14, 2023 at 3:27 PM Heinz Mauelshagen wrote:
...which is RAID1 plus a parity disk, which seems superfluous as you already
achieve (N-1) resilience against single device failures without the latter.
What would you need such a parity disk f
On Tue, 14 Feb 2023, Yang Shi wrote:
>
> Changelog:
> RFC -> v2:
> * Added callback variant for page bulk allocator and mempool bulk allocator
> per Mel Gorman.
> * Used the callback version in dm-crypt driver.
> * Some code cleanup and refactor to reduce duplicate code.
>
> rfc:
>
On 15/02/2023 11:44, Roger Heflin wrote:
WOL: current SSDs are rated for around 1,000-2,000 write cycles. So a 1 TB
disk can sustain 1,000-2,000 TB of total writes. And filesystem blocks would
get rewritten more often than data blocks.
How well it would work would depend on how often the data
The SMART on the disk marks the disk as FAILED when you hit the
manufacturer's posted limit (1,000 or 2,000 write cycles on average). I am
sure using a "FAILED" disk would make a lot of people nervous.
The conclusion that you can write as fast as you can and it will take 3
years to wear out would be specifi
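A back-of-the-envelope way to see how strongly such a wear-out estimate
depends on the assumed write rate (the capacity, cycle count and rates below
are illustrative assumptions, not figures from this thread):

#include <stdio.h>

int main(void)
{
        double capacity_tb  = 1.0;       /* 1 TB drive */
        double write_cycles = 1000.0;    /* rated full-drive writes */
        double endurance_tb = capacity_tb * write_cycles;   /* ~1000 TB */
        double rates_mb_s[] = { 10.0, 100.0, 500.0 };        /* avg write rates */

        for (int i = 0; i < 3; i++) {
                /* wear-out time = rated endurance / average write rate */
                double seconds = endurance_tb * 1e6 / rates_mb_s[i];

                printf("%6.0f MB/s sustained -> ~%.1f years to rated limit\n",
                       rates_mb_s[i], seconds / (365.25 * 24 * 3600));
        }
        return 0;
}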
The block layer might merge together discard requests up until the
max_discard_segments limit is hit, but blk_insert_cloned_request checks
the segment count against max_segments regardless of the req op. This
can result in errors like the following when discards are issued through
a DM device and m
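One way to express an op-aware limit with existing block layer helpers
(req_op(), queue_max_discard_segments(), queue_max_segments()); this is a
kernel-context fragment, and whether the eventual fix takes exactly this
shape is not shown in the snippet:

/* Sketch: pick the segment limit based on the request op. */
static inline unsigned int rq_max_segments(struct request *rq)
{
        if (req_op(rq) == REQ_OP_DISCARD)
                return queue_max_discard_segments(rq->q);
        return queue_max_segments(rq->q);
}

/* ...and in blk_insert_cloned_request(), compare against that limit:
 *
 *      if (rq->nr_phys_segments > rq_max_segments(rq))
 *              return BLK_STS_IOERR;
 */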
On Wed, Feb 15, 2023 at 07:23:40PM +0800, Pingfan Liu wrote:
> Hi guys,
>
> I encountered a hang issue on a s390x system. The tested kernel is
> not preemptible and boots with "nr_cpus=1"
>
> The test steps:
> umount /home
> lvremove /dev/rhel_s390x-kvm-011/home
> ## uncomme
On Wed, Feb 15, 2023 at 01:15:08PM -0700, Uday Shankar wrote:
> The block layer might merge together discard requests up until the
> max_discard_segments limit is hit, but blk_insert_cloned_request checks
> the segment count against max_segments regardless of the req op. This
> can result in errors