22ada802ede8 ("block: use lcm_not_zero() when stacking chunk_sectors")
> Fixes: 07d098e6bbad ("block: allow 'chunk_sectors' to be non-power-of-2")
> Cc: sta...@vger.kernel.org
> Reported-by: John Dorminy
> Reported-by: Bruce Johnston
> Signed-off-by: Mike
> If you're going to cherry pick a portion of a commit header please
> reference the commit id and use quotes or indentation to make it clear
> what is being referenced, etc.
Apologies.
> Quite the tangent just to set up a toy example of, say, thinp with 256K
> blocksize/chunk_sectors on top of a
(Traditionally, I think the changes between patch versions go after the
--- marker, so they don't go in the change description of the commit.)
Thanks!
John Dorminy
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
) would result in
> blk_max_size_offset() splitting IO at 128 sectors rather than the
> required more restrictive 8 sectors.
>
> Fixes: 22ada802ede8 ("block: use lcm_not_zero() when stacking chunk_sectors")
> Cc: sta...@vger.kernel.org
> Reported-by: John Dorminy
> R
Greetings;
There are a lot of uses of PAGE_SIZE/SECTOR_SIZE scattered around, and
it seems like a medium improvement to be able to refer to it as
PAGE_SECTORS everywhere instead of just inside dm, bcache, and
null_blk. Did this change progress forward somewhere?
Thanks!
John Dorminy
On Mon
he
minimum.
Thanks!
John Dorminy
On Thu, Nov 19, 2020 at 3:37 PM Mikulas Patocka wrote:
>
> We get these I/O errors when we run md-raid1 on the top of dm-integrity on
> the top of ramdisk:
> device-mapper: integrity: Bio not aligned on 8 sectors: 0xff00, 0xff
> device-mapper: integ
On Thu, Sep 24, 2020 at 1:24 PM John Dorminy wrote:
>
> I am impressed at how much I read wrong...
>
> On Thu, Sep 24, 2020 at 1:00 PM Mike Snitzer wrote:
> >
> > On Thu, Sep 24 2020 at 12:45pm -0400,
> > John Dorminy wrote:
> >
> > > I don
I am impressed at how much I read wrong...
On Thu, Sep 24, 2020 at 1:00 PM Mike Snitzer wrote:
>
> On Thu, Sep 24 2020 at 12:45pm -0400,
> John Dorminy wrote:
>
> > I don't understand how this works...
> >
> > Can chunk_size_bytes be 0? If not, how is disc
I don't understand how this works...
Can chunk_size_bytes be 0? If not, how is discard_granularity being set to 0?
I think also limits is local to the ti in question here, initialized
by blk_set_stacking_limits() via dm-table.c, and therefore has only
default values and not anything to do with th
Your points are good. I don't know a good macrobenchmark at present,
but at least various latency numbers are easy to get out of fio.
I ran a similar set of tests on an Optane 900P with results below.
'clat' is, as fio reports, the completion latency, measured in usec.
'configuration' is [block si
For what it's worth, I just ran two tests on a machine with dm-crypt
using the cipher_null:ecb cipher. Results are mixed; not offloading IO
submission can result in a -27% to +23% change in throughput across a
selection of three IO patterns on HDDs and SSDs.
(Note that the IO submission thread also reorde
REQ_OP_FLUSH was being treated as a flag, but the operation
part of bio->bi_opf must be treated as a whole. Change to
accessing the operation part via bio_op(bio) and checking
for equality.
Signed-off-by: John Dorminy
---
drivers/md/dm-ebs-target.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
fixed order suitable for
storing on disk?
On Thu, Apr 2, 2020 at 11:52 AM John Dorminy wrote:
> That does make sense. May I request, then, using UUID_SIZE instead of 16?
> Perhaps with a compile-time assertion that UUID_SIZE has not changed from 16?
>
> On Thu, Apr 2, 2020 at 11:10 AM Han
That does make sense. May I request, then, using UUID_SIZE instead of 16?
Perhaps with a compile-time assertion that UUID_SIZE has not changed from 16?
On Thu, Apr 2, 2020 at 11:10 AM Hannes Reinecke wrote:
> On 4/2/20 4:53 PM, John Dorminy wrote:
> > I'm worried about hardcoding u
I'm worried about hardcoding uuid members as u8[16].
May I ask why you're not using uuid_t to define it in the on-disk
structure? It would save the casting of the uuid members to (uuid_t *)
every time you use a uuid.h function.
Possibly it is customary to use only raw datatypes on disk rather tha
> Also, the test "!dm_suspended(wc->ti)" in writecache_writeback is not
> sufficient, because dm_suspended returns zero while writecache_suspend is
> in progress. We add a variable wc->suspending and set it in
> writecache_suspend. Without this variable, drain_workqueue would spit
> warnings:
> wor
Yeah, that's a great point. Now that I've reviewed the code a little
more, I understand how it's not safe to do the thing I described in
the first place, and how this change is safe and correct.
Please feel free to add my
Reviewed-by: John Dorminy
Thanks!
On Fri, Feb 7,
On Fri, Feb 7, 2020 at 1:04 PM Mikulas Patocka wrote:
>
>
>
> On Fri, 7 Feb 2020, John Dorminy wrote:
>
> > > +/*
> > > + * Free the specified range of buffers. If a buffer is held by other process, it
> > > + * is not freed. If a
> +/*
> + * Free the specified range of buffers. If a buffer is held by other process, it
> + * is not freed. If a buffer is dirty, it is discarded without writeback.
> + * Finally, send the discard request to the device.
Might be clearer to say "After freeing, send a discard request for the
spe
I agree that adding uuid to all messages would be gross bloat, and a
bad idea to apply everywhere.
I didn't actually realize that devices could be renamed with dmsetup.
Thanks for pointing that out...
On Thu, Feb 6, 2020 at 8:42 PM Alasdair G Kergon wrote:
>
> On Fri, Feb 07, 2020 at 01:24:33AM
On Mon, Feb 3, 2020 at 11:54 AM Mike Snitzer wrote:
>
> On Fri, Jan 31 2020 at 7:55pm -0500,
> John Dorminy wrote:
>
> > While dm_device_name() returns the MAJOR:MINOR numbers of a device,
> > some targets would like to know the pretty name of a device, and
> > s
and the UUID at present, and this change exports the
function for use by GPLd modules.
Signed-off-by: John Dorminy
---
drivers/md/dm-ioctl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 1e03bc89e20f..711a46015696 100644
Makes sense, sorry I missed that detail.
Might it be better to just extend 'dmsetup targets' to take an optional
target-name parameter? When I saw this change, I thought 'dmsetup targets
<target-name>' surely worked already for the purpose, and was somewhat surprised
when experiment disagreed. Then list_versio
I'm confused here:
>...and then it fails on activation because DM table load detects old (or
missing) dm-crypt feature.
>(There was no way to get dm target version before table load if module is
not loaded.)
>And I tried to avoid modprobe calls from libcryptsetup.
I'm not understanding how this co
Having studied the code in question more thoroughly, my first email's
scenario can't occur, I believe. bio_put() contains an
atomic_dec_and_test(), which (according to
Documentation/atomic_t.txt), having a return value, is fully ordered
and therefore imposes a general memory barrier. A general memor
Thank you! I had not encountered that useful function, it does exactly
what I want. You're the best!
On Fri, Mar 22, 2019 at 9:21 AM Mikulas Patocka wrote:
>
>
>
> On Thu, 21 Mar 2019, John Dorminy wrote:
>
> > I'm thankful for this change making it explicit that
I'm thankful for this change making it explicit that this parameter is
not a max IO length but something else. I've been confused by the name
more than once when trying to figure out why IOs weren't coming in as
large as I expected. I wish there were a way for targets to say "I can
accept IO of up
I'm also worried about the other two versions, though:
memory-barriers.txt, line 1724:
(*) The compiler is within its rights to invent stores to a variable,
i.e. the compiler is free to decide __bio_chain_endio looks like this:
static struct bio *__bio_chain_endio(struct bio *bio)
{
struct bio
I am perhaps not understanding the intricacies here, or not seeing a
barrier protecting it, so forgive me if I'm off base. I think reading
parent->bi_status here is unsafe.
Consider the following sequence of events on two threads.
Thread 0 Thread 1
In __bio_chain_en
I didn't know such a thing existed... does it work on any block
device? Where do I read more about this?
On Fri, Feb 1, 2019 at 2:35 AM Christoph Hellwig wrote:
>
> On Thu, Jan 31, 2019 at 02:41:52PM -0500, John Dorminy wrote:
> > > On Wed, Jan 30, 2019 at 09:08:50AM -0500
On Thu, Jan 31, 2019 at 5:39 AM Christoph Hellwig wrote:
>
> On Wed, Jan 30, 2019 at 09:08:50AM -0500, John Dorminy wrote:
> > (I use WRITE_SAME to fill devices with a particular pattern in order
> > to catch failures to initialize disk structures appropriately,
> > perso
On Mon, Jan 28, 2019 at 11:54 PM Martin K. Petersen
wrote:
> We rounded up LBS when we created the DM device. And therefore the
> bv_len coming down is 4K. But one of the component devices has a LBS of
> 512 and fails this check.
>
> At first glance one could argue we should just nuke the BUG_ON s
device on top of a disk? Does it have a
filesystem on top, and if so, what filesystem?
Thank you!
John Dorminy
On Fri, Jan 25, 2019 at 9:53 AM Zhang Xiaoxu wrote:
>
> If the lvm is stacked on disks with different logical_block_size values,
> a WRITE SAME on it will hit a BUG_ON:
>
> kernel BUG
Resending as plain text, apologies.
On Thu, Jan 24, 2019 at 1:23 PM John Dorminy wrote:
>
> Adding dm-devel since it involves LVM.
>
> On Thu, Jan 24, 2019 at 1:14 PM Zhang Xiaoxu wrote:
>>
>> If the lvm is stacked by different logical_block_size disks,
>> when
Adding dm-devel since it involves LVM.
On Thu, Jan 24, 2019 at 1:14 PM Zhang Xiaoxu
wrote:
> If the lvm is stacked on disks with different logical_block_size values,
> a WRITE SAME on it will hit a BUG_ON:
>
> kernel BUG at drivers/scsi/sd.c:968!
> invalid opcode: [#1] SMP PTI
> CPU: 11 PID: 525 Comm: kw