Re: [PATCH v7 09/10] blk-mq: prevent offlining hk CPUs with associated online isolated CPUs

2025-07-07 Thread Ming Lei
On Wed, Jul 02, 2025 at 06:33:59PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled, and the last housekeeping CPU for a > given hctx goes offline, there would be no CPU left to handle I/O. To > prevent I/O stalls, prevent offlining housekeeping CPUs that are still > serving isolated
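A heavily hedged sketch of the check being described above: before a housekeeping CPU goes offline, detect whether it is the last online housekeeping CPU of a hardware context that still has online isolated CPUs mapped to it. The function name and structure below are assumptions for illustration, not the patch itself:

    #include <linux/cpumask.h>

    /* Illustrative only: would offlining 'dying_cpu' leave online isolated
     * CPUs of this hctx without any housekeeping CPU to serve their I/O?
     */
    static bool hctx_would_lose_last_hk_cpu(const struct cpumask *hctx_cpus,
                                            const struct cpumask *hk_mask,
                                            unsigned int dying_cpu)
    {
            cpumask_var_t tmp;
            bool last_hk, isolated_online;

            if (!zalloc_cpumask_var(&tmp, GFP_KERNEL))
                    return false;

            /* online housekeeping CPUs of this hctx, excluding the dying one */
            cpumask_and(tmp, hctx_cpus, hk_mask);
            cpumask_and(tmp, tmp, cpu_online_mask);
            cpumask_clear_cpu(dying_cpu, tmp);
            last_hk = cpumask_empty(tmp);

            /* isolated CPUs of this hctx that are still online */
            cpumask_andnot(tmp, hctx_cpus, hk_mask);
            isolated_online = cpumask_intersects(tmp, cpu_online_mask);

            free_cpumask_var(tmp);
            return last_hk && isolated_online;      /* true => veto the offline */
    }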

Re: [PATCH] Revert "block: don't reorder requests in blk_add_rq_to_plug"

2025-06-11 Thread Ming Lei
On Wed, Jun 11, 2025 at 12:14:54PM +, Hazem Mohamed Abuelfotoh wrote: > This reverts commit e70c301faece15b618e54b613b1fd6ece3dd05b4. > > Commit ("block: don't reorder requests in > blk_add_rq_to_plug") reversed how requests are stored in the blk_plug > list, this had significant impact on bi

Re: [PATCH] selftests: ublk: kublk: improve behavior on init failure

2025-06-03 Thread Ming Lei
t print > a (not very descriptive) log line when this happens. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v8 8/9] selftests: ublk: add stress test for per io daemons

2025-05-29 Thread Ming Lei
tps://lore.kernel.org/linux-block/aDgwGoGCEpwd1mFY@fedora/ > > Suggested-by: Ming Lei > Signed-off-by: Uday Shankar > --- > tools/testing/selftests/ublk/Makefile | 1 + > tools/testing/selftests/ublk/test_common.sh| 5 > tools/testing/selftests/ublk/test_stre

Re: [PATCH v8 1/9] ublk: have a per-io daemon instead of a per-queue daemon

2025-05-29 Thread Ming Lei
alleviating the issue > described above. > > Add the new UBLK_F_PER_IO_DAEMON feature to ublk_drv, which ublk servers > can use to essentially test for the presence of this change and tailor > their behavior accordingly. > > Signed-off-by: Uday Shankar > Reviewed-by: Caleb Sander Mateos Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v7 1/8] ublk: have a per-io daemon instead of a per-queue daemon

2025-05-29 Thread Ming Lei
On Tue, May 27, 2025 at 05:01:24PM -0600, Uday Shankar wrote: > Currently, ublk_drv associates to each hardware queue (hctx) a unique > task (called the queue's ubq_daemon) which is allowed to issue > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > to do so, the command fai

Re: [PATCH v7 8/8] Documentation: ublk: document UBLK_F_PER_IO_DAEMON

2025-05-29 Thread Ming Lei
ons, as the new > UBLK_F_PER_IO_DAEMON feature renders that concept obsolete. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v7 7/8] selftests: ublk: add test for per io daemons

2025-05-29 Thread Ming Lei
the future, the last check above may be strengthened to "verify that > all ublk server threads handle the same amount of I/O." However, this > requires some adjustments/bugfixes to tag allocation, so this work is > postponed to a followup. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v7 6/8] selftests: ublk: kublk: decouple ublk_queues from ublk server threads

2025-05-29 Thread Ming Lei
+130,7 @@ struct ublk_io { > unsigned short refs;/* used by target code only */ > > int tag; > + int buf_index; Both of the above can be 'unsigned short'; otherwise: Reviewed-by: Ming Lei
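A minimal sketch of the suggested narrowing, assuming both fields are bounded by the queue depth (UBLK_QUEUE_DEPTH is 1024 in kublk.h); the struct name and surrounding layout are illustrative, not the kublk.h definition:

    /* Narrowing tag and buf_index to unsigned short: both are bounded by
     * the queue depth, so 16 bits are enough and struct ublk_io stays small.
     */
    struct ublk_io_sketch {
            unsigned short refs;      /* used by target code only */
            unsigned short tag;       /* tag < queue depth */
            unsigned short buf_index; /* one registered buffer slot per tag */
    };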

Re: [PATCH v7 3/8] selftests: ublk: kublk: tie sqe allocation to io instead of queue

2025-05-29 Thread Ming Lei
t will > allocate from the io's thread's ring instead. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v6 7/8] selftests: ublk: kublk: decouple ublk_queues from ublk server threads

2025-05-09 Thread Ming Lei
On Wed, May 07, 2025 at 03:49:41PM -0600, Uday Shankar wrote: > Add support in kublk for decoupled ublk_queues and ublk server threads. > kublk now has two modes of operation: > > - (preexisting mode) threads and queues are paired 1:1, and each thread > services all the I/Os of one queue > - (ne

Re: [PATCH v6 7/8] selftests: ublk: kublk: decouple ublk_queues from ublk server threads

2025-05-09 Thread Ming Lei
On Wed, May 07, 2025 at 03:49:41PM -0600, Uday Shankar wrote: > Add support in kublk for decoupled ublk_queues and ublk server threads. > kublk now has two modes of operation: > > - (preexisting mode) threads and queues are paired 1:1, and each thread > services all the I/Os of one queue > - (ne

Re: [PATCH v6 6/8] selftests: ublk: kublk: move per-thread data out of ublk_queue

2025-05-09 Thread Ming Lei
} > > - qinfo[i].q = &dev->q[i]; > - qinfo[i].queue_sem = &queue_sem; > - qinfo[i].affinity = &affinity_buf[i]; > - pthread_create(&dev->q[i].thread, NULL, > + tinfo[i].dev = dev; > + tinfo[i].idx = i; > + tinfo[i].ready = &ready; > + tinfo[i].affinity = &affinity_buf[i]; > + pthread_create(&dev->threads[i].thread, NULL, > ublk_io_handler_fn, > - &qinfo[i]); > + &tinfo[i]); > } > > for (i = 0; i < dinfo->nr_hw_queues; i++) > - sem_wait(&queue_sem); > - free(qinfo); > + sem_wait(&ready); > + free(tinfo); > free(affinity_buf); > > /* everything is fine now, start us */ > @@ -889,7 +902,7 @@ static int ublk_start_daemon(const struct dev_ctx *ctx, > struct ublk_dev *dev) > > /* wait until we are terminated */ > for (i = 0; i < dinfo->nr_hw_queues; i++) > - pthread_join(dev->q[i].thread, &thread_ret); > + pthread_join(dev->threads[i].thread, &thread_ret); > fail: > for (i = 0; i < dinfo->nr_hw_queues; i++) > ublk_queue_deinit(&dev->q[i]); > diff --git a/tools/testing/selftests/ublk/kublk.h > b/tools/testing/selftests/ublk/kublk.h > index > 7c912116606429215af7dbc2a8ce6b40ef89bfbd..9eb2207fcebe96d34488d057c881db262b9767b3 > 100644 > --- a/tools/testing/selftests/ublk/kublk.h > +++ b/tools/testing/selftests/ublk/kublk.h > @@ -51,10 +51,12 @@ > #define UBLK_IO_MAX_BYTES (1 << 20) > #define UBLK_MAX_QUEUES_SHIFT5 > #define UBLK_MAX_QUEUES (1 << UBLK_MAX_QUEUES_SHIFT) > +#define UBLK_MAX_THREADS_SHIFT 5 > +#define UBLK_MAX_THREADS (1 << UBLK_MAX_THREADS_SHIFT) > #define UBLK_QUEUE_DEPTH1024 > > #define UBLK_DBG_DEV(1U << 0) > -#define UBLK_DBG_QUEUE (1U << 1) > +#define UBLK_DBG_THREAD (1U << 1) > #define UBLK_DBG_IO_CMD (1U << 2) > #define UBLK_DBG_IO (1U << 3) > #define UBLK_DBG_CTRL_CMD (1U << 4) > @@ -62,6 +64,7 @@ > > struct ublk_dev; > struct ublk_queue; > +struct ublk_thread; > > struct stripe_ctx { > /* stripe */ > @@ -120,6 +123,8 @@ struct ublk_io { > unsigned short refs;/* used by target code only */ > > struct ublk_queue *q; > + struct ublk_thread *t; Given you have to take static mapping between queue/tag and thread, 'struct ublk_thread' should have been figured out runtime easily, then we can save 8 bytes, also avoid memory indirect dereference. sizeof(struct ublk_io) need to be held in single L1 cacheline. But it can be one followup. Reviewed-by: Ming Lei thanks, Ming
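A rough illustration of the follow-up suggested here: with a fixed queue/tag-to-thread mapping, the serving thread can be recomputed on demand instead of caching a pointer in struct ublk_io. The round-robin formula and the *_sketch names below are assumptions, not the kublk code:

    #include <pthread.h>

    struct ublk_thread_sketch {        /* stand-in for kublk's struct ublk_thread */
            pthread_t thread;
            unsigned int idx;
    };

    struct ublk_dev_sketch {           /* illustrative subset of struct ublk_dev */
            unsigned int queue_depth;
            unsigned int nthreads;
            struct ublk_thread_sketch *threads;
    };

    /* Hypothetical helper: derive the serving thread from (q_id, tag),
     * assuming tags are spread round-robin across the server threads, so
     * struct ublk_io does not need to store a thread pointer at all.
     */
    static inline struct ublk_thread_sketch *
    io_to_thread(struct ublk_dev_sketch *dev, unsigned int q_id, unsigned int tag)
    {
            unsigned int idx = (q_id * dev->queue_depth + tag) % dev->nthreads;

            return &dev->threads[idx];
    }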

Re: [PATCH v6 5/8] selftests: ublk: kublk: lift queue initialization out of thread

2025-05-09 Thread Ming Lei
lly need to happen on the thread that will use the ring; that is > separated into a separate ublk_thread_init which is still called by each > I/O handler thread. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei thanks, Ming

Re: [PATCH v6 4/8] selftests: ublk: kublk: tie sqe allocation to io instead of queue

2025-05-09 Thread Ming Lei
On Wed, May 07, 2025 at 03:49:38PM -0600, Uday Shankar wrote: > We currently have a helper ublk_queue_alloc_sqes which the ublk targets > use to allocate SQEs for their own operations. However, as we move > towards decoupled ublk_queues and ublk server threads, this helper does > not make sense any

Re: [PATCH v6 3/8] selftests: ublk: kublk: plumb q_id in io_uring user_data

2025-05-09 Thread Ming Lei
he > associated SQE's user_data. > > Signed-off-by: Uday Shankar Looks fine, Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v6 1/8] ublk: have a per-io daemon instead of a per-queue daemon

2025-05-08 Thread Ming Lei
On Wed, May 07, 2025 at 03:49:35PM -0600, Uday Shankar wrote: > Currently, ublk_drv associates to each hardware queue (hctx) a unique > task (called the queue's ubq_daemon) which is allowed to issue > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > to do so, the command fai

Re: [PATCH v6 2/8] sbitmap: fix off-by-one when wrapping hint

2025-05-08 Thread Ming Lei
On Wed, May 07, 2025 at 03:49:36PM -0600, Uday Shankar wrote: > In update_alloc_hint_after_get, we wrap the new hint back to 0 one bit > too early. This breaks round robin tag allocation (BLK_MQ_F_TAG_RR) - > some tags get skipped, so we don't get round robin tags even in the > simple case of singl
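The shape of the off-by-one, as a standalone illustration; this is a simplified model of the hint wrap, not the sbitmap source:

    /* Simplified model of round-robin hint advancement after allocating
     * tag 'nr'. With the buggy bound the hint can never point at the last
     * tag, so round-robin allocation skips it.
     */
    static unsigned int advance_hint(unsigned int nr, unsigned int depth)
    {
            unsigned int hint = nr + 1;

            /* buggy: if (hint >= depth - 1) hint = 0;   wraps one tag early */
            if (hint >= depth)          /* fixed: wrap only past the end */
                    hint = 0;
            return hint;
    }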

Re: [PATCH v6 9/9] blk-mq: prevent offlining hk CPU with associated online isolated CPUs

2025-05-08 Thread Ming Lei
On Thu, Apr 24, 2025 at 08:19:48PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled, and the last housekeeping CPU for a > given hctx would go offline, there would be no CPU left which handles > the IOs. To prevent IO stalls, prevent offlining housekeeping CPUs which > are still seve

Re: [PATCH v6 8/9] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2025-05-08 Thread Ming Lei
On Thu, Apr 24, 2025 at 08:19:47PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled all hardware queues should run on > the housekeeping CPUs only. Thus ignore the affinity mask provided by > the driver. Also we can't use blk_mq_map_queues because it will map all > CPUs to first hctx

Re: [PATCH v6 7/9] lib/group_cpus: honor housekeeping config when grouping CPUs

2025-05-08 Thread Ming Lei
ly() is used when the isolcpus command line > + * argument is used with managed_irq option. In this case only the s/managed_irq/io_queue > + * housekeeping CPUs are considered. I'd suggest highlighting the difference, which is a fundamental point: originally all CPUs are covered, now only housekeeping CPUs are distributed. Otherwise, looks fine to me: Reviewed-by: Ming Lei Thanks, Ming
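To make the highlighted difference concrete, a hedged kernel-style sketch of which mask ends up being distributed; HK_TYPE_IO_QUEUE is the housekeeping type proposed by this series, and the helper below is illustrative rather than the patch's code:

    #include <linux/cpumask.h>
    #include <linux/sched/isolation.h>

    /* Illustrative only: the fundamental change is which CPU mask gets
     * distributed into groups.
     */
    static const struct cpumask *cpus_to_group(void)
    {
            /* isolcpus=io_queue set: distribute housekeeping CPUs only */
            if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
                    return housekeeping_cpumask(HK_TYPE_IO_QUEUE);

            /* default: all CPUs are covered, as before */
            return cpu_possible_mask;
    }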

Re: [PATCH v6 6/9] isolation: introduce io_queue isolcpus type

2025-05-08 Thread Ming Lei
On Fri, Apr 25, 2025 at 09:32:16AM +0200, Daniel Wagner wrote: > On Fri, Apr 25, 2025 at 08:26:22AM +0200, Hannes Reinecke wrote: > > On 4/24/25 20:19, Daniel Wagner wrote: > > > Multiqueue drivers spreading IO queues on all CPUs for optimal > > > performance. The drivers are not aware of the CPU i

Re: [PATCH v6 5/9] virtio: blk/scsi: use block layer helpers to calculate num of queues

2025-05-08 Thread Ming Lei
On Thu, Apr 24, 2025 at 08:19:44PM +0200, Daniel Wagner wrote: > Multiqueue devices should only allocate queues for the housekeeping CPUs > when isolcpus=io_queue is set. This avoids that the isolated CPUs get > disturbed with OS workload. With commit log fixed: Reviewed-by: Ming Lei

Re: [PATCH v6 4/9] scsi: use block layer helpers to calculate num of queues

2025-05-08 Thread Ming Lei
hat isn't what the patch is doing. Otherwise, looks fine to me: Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v6 3/9] nvme-pci: use block layer helpers to calculate num of queues

2025-05-08 Thread Ming Lei
On Thu, Apr 24, 2025 at 08:19:42PM +0200, Daniel Wagner wrote: > Multiqueue devices should only allocate queues for the housekeeping CPUs > when isolcpus=io_queue is set. This avoids that the isolated CPUs get > disturbed with OS workload. The commit log needs to be updated: - io_queue isn't intr

Re: [PATCH v6 2/9] blk-mq: add number of queue calc helper

2025-05-08 Thread Ming Lei
t log needs to be updated. Otherwise, looks fine: Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v6 1/9] lib/group_cpus: let group_cpu_evenly return number initialized masks

2025-05-08 Thread Ming Lei
which can be less than numgrps. > * > * Try to put close CPUs from viewpoint of CPU and NUMA locality into > * same group, and run two-stage grouping: > @@ -344,7 +346,8 @@ static int __group_cpus_evenly(unsigned int startgrp, > unsigned int numgrps, > * We guarantee in the result

Re: [PATCH v6 0/9] blk: honor isolcpus configuration

2025-05-06 Thread Ming Lei
On Thu, Apr 24, 2025 at 08:19:39PM +0200, Daniel Wagner wrote: > I've added back the isolcpus io_queue agrument. This avoids any semantic > changes of managed_irq. IMO, this is correct thing to do. > I don't like it but I haven't found a > better way to deal with it. Ming clearly stated managed_i

Re: [PATCH v2 1/3] selftests: ublk: kublk: build with -Werror iff WERROR!=0

2025-04-29 Thread Ming Lei
Jens decide if it is fine to pass -Werror by default: Reviewed-by: Ming Lei Otherwise, it can still be enabled conditionally with default off. Thanks, Ming

Re: [PATCH 3/3] selftests: ublk: kublk: fix include path

2025-04-28 Thread Ming Lei
o run under the kernel tree without installing headers system wide, nice! Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH 2/3] selftests: ublk: make test_generic_06 silent on success

2025-04-28 Thread Ming Lei
On Mon, Apr 28, 2025 at 05:10:21PM -0600, Uday Shankar wrote: > Convention dictates that tests should not log anything on success. Make > test_generic_06 follow this convention. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH 1/3] selftests: ublk: kublk: build with -Werror iff CONFIG_WERROR=y

2025-04-28 Thread Ming Lei
On Mon, Apr 28, 2025 at 05:10:20PM -0600, Uday Shankar wrote: > Compiler warnings can catch bugs at compile time. They can also produce > annoying false positives. Due to this duality, the kernel provides > CONFIG_WERROR so that the developer can choose whether or not they want > compiler warnings

Re: [PATCH 2/2] selftests: ublk: common: fix _get_disk_dev_t for pre-9.0 coreutils

2025-04-23 Thread Ming Lei
common.sh > @@ -17,8 +17,8 @@ _get_disk_dev_t() { > local minor > > dev=/dev/ublkb"${dev_id}" > - major=$(stat -c '%Hr' "$dev") > - minor=$(stat -c '%Lr' "$dev") > + major="0x"$(stat -c '%t' "$dev") > + minor="0x"$(stat -c '%T' "$dev") Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH 1/2] selftests: ublk: kublk: build with -Werror

2025-04-23 Thread Ming Lei
-O3 -Wl,-no-as-needed -Wall -Werror -I $(top_srcdir) > LDLIBS += -lpthread -lm -luring Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v3] ublk: improve detection and handling of ublk server exit

2025-04-10 Thread Ming Lei
On Mon, Apr 07, 2025 at 01:16:33PM -0600, Uday Shankar wrote: > On Sat, Apr 05, 2025 at 10:59:29PM +0800, Ming Lei wrote: > > On Thu, Apr 03, 2025 at 06:05:57PM -0600, Uday Shankar wrote: > > > There are currently two ways in which ublk server exit is detected by >

Re: [PATCH 1/4] selftests: ublk: kublk: use ioctl-encoded opcodes

2025-04-05 Thread Ming Lei
could easily require it to be set as a prerequisite for these selftests, > but since new applications should not be using the legacy opcodes, use > the ioctl-encoded opcodes everywhere in kublk. > > Signed-off-by: Uday Shankar Reviewed-by: Ming Lei -- Ming

Re: [PATCH 0/2] ublk: specify io_cmd_buf pointer type

2025-04-05 Thread Ming Lei
fy io_cmd_buf pointer type > > drivers/block/ublk_drv.c | 8 > tools/testing/selftests/ublk/kublk.c | 2 +- > tools/testing/selftests/ublk/kublk.h | 4 ++-- > 3 files changed, 7 insertions(+), 7 deletions(-) Reviewed-by: Ming Lei Thanks, Ming

Re: [PATCH v2] ublk: improve detection and handling of ublk server exit

2025-04-05 Thread Ming Lei
new test (and all other selftests, and all > ublksrv tests) pass: > > selftests: ublk: test_generic_04.sh > dev id is 0 > dd: error writing '/dev/ublkb0': Input/output error > 1+0 records in > 0+0 records out > 0 bytes copied, 0.0376731 s, 0.0 kB/s > DEAD > generic

Re: [PATCH v3] ublk: improve detection and handling of ublk server exit

2025-04-05 Thread Ming Lei
new test (and all other selftests, and all > ublksrv tests) pass: > > selftests: ublk: test_generic_04.sh > dev id is 0 > dd: error writing '/dev/ublkb0': Input/output error > 1+0 records in > 0+0 records out > 0 bytes copied, 0.0376731 s, 0.0 kB/s > DEAD > gene

Re: [PATCH 4/4] ublk: improve handling of saturated queues when ublk server exits

2025-04-01 Thread Ming Lei
On Mon, Mar 31, 2025 at 05:17:16PM -0600, Uday Shankar wrote: > On Thu, Mar 27, 2025 at 09:23:21AM +0800, Ming Lei wrote: > > On Wed, Mar 26, 2025 at 11:54:16AM -0600, Uday Shankar wrote: > > > On Wed, Mar 26, 2025 at 01:38:35PM +0800, Ming Lei wrote: > > > > On T

Re: [PATCH 4/4] ublk: improve handling of saturated queues when ublk server exits

2025-03-26 Thread Ming Lei
On Tue, Mar 25, 2025 at 04:19:34PM -0600, Uday Shankar wrote: > There are currently two ways in which ublk server exit is detected by > ublk_drv: > > 1. uring_cmd cancellation. If there are any outstanding uring_cmds which >have not been completed to the ublk server when it exits, io_uring >

Re: [PATCH 4/4] ublk: improve handling of saturated queues when ublk server exits

2025-03-26 Thread Ming Lei
On Wed, Mar 26, 2025 at 05:08:19PM -0600, Uday Shankar wrote: > On Wed, Mar 26, 2025 at 12:56:56PM -0600, Uday Shankar wrote: > > On Wed, Mar 26, 2025 at 11:54:16AM -0600, Uday Shankar wrote: > > > > ublk_abort_requests() should be called only in case of queue dying, > > > > since ublk server may o

Re: [PATCH 4/4] ublk: improve handling of saturated queues when ublk server exits

2025-03-26 Thread Ming Lei
On Wed, Mar 26, 2025 at 11:54:16AM -0600, Uday Shankar wrote: > On Wed, Mar 26, 2025 at 01:38:35PM +0800, Ming Lei wrote: > > On Tue, Mar 25, 2025 at 04:19:34PM -0600, Uday Shankar wrote: > > > There are currently two ways in which ublk server exit is detected by >

Re: [PATCH 4/4] ublk: improve handling of saturated queues when ublk server exits

2025-03-25 Thread Ming Lei
On Tue, Mar 25, 2025 at 04:19:34PM -0600, Uday Shankar wrote: > There are currently two ways in which ublk server exit is detected by > ublk_drv: > > 1. uring_cmd cancellation. If there are any outstanding uring_cmds which >have not been completed to the ublk server when it exits, io_uring >

Re: [PATCH 3/4] selftests: ublk: kublk: ignore SIGCHLD

2025-03-25 Thread Ming Lei
; > } > > + signal(SIGCHLD, SIG_IGN); Reviewed-by: Ming Lei BTW, the SIGCHLD signal is ignored by default; it still looks good to do it explicitly if the -EINTR from io_uring_enter() can be avoided this way. Thanks, Ming
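A small userspace sketch of the point above, assuming liburing: a SIGCHLD delivered while waiting in io_uring_enter() surfaces as -EINTR, which can be avoided either by ignoring the signal or by retrying the wait. This is illustrative, not the kublk code:

    #include <errno.h>
    #include <signal.h>
    #include <liburing.h>

    /* Option 1: keep SIGCHLD from interrupting io_uring_enter() at all. */
    static void ignore_sigchld(void)
    {
            signal(SIGCHLD, SIG_IGN);
    }

    /* Option 2: tolerate the interruption by retrying on -EINTR. */
    static int wait_cqe_retry(struct io_uring *ring, struct io_uring_cqe **cqe)
    {
            int ret;

            do {
                    ret = io_uring_wait_cqe(ring, cqe);
            } while (ret == -EINTR);
            return ret;
    }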

Re: [PATCH 2/4] selftests: ublk: kublk: fix an error log line

2025-03-25 Thread Ming Lei
ic void ublk_ctrl_dump(struct ublk_dev *dev) > > ret = ublk_ctrl_get_params(dev, &p); > if (ret < 0) { > - ublk_err("failed to get params %m\n"); > + ublk_err("failed to get params %d %s\n", ret, strerror(-ret)); > return; Reviewed-by: Ming Lei -- Ming

Re: [PATCH 00/11] selftests: ublk: bug fixes & consolidation

2025-03-15 Thread Ming Lei
On Mon, Mar 3, 2025 at 8:43 PM Ming Lei wrote: > > Hello Jens and guys, > > This patchset fixes several issues(1, 2, 4) and consolidate & improve > the tests in the following ways: > > - support shellcheck and fixes all warning > > - misc cleanup > > - improve

Re: [PATCH 00/11] selftests: ublk: bug fixes & consolidation

2025-03-11 Thread Ming Lei
On Mon, Mar 10, 2025 at 09:17:56AM -0600, Jens Axboe wrote: > On 3/10/25 9:09 AM, Ming Lei wrote: > > On Mon, Mar 3, 2025 at 8:43?PM Ming Lei wrote: > >> > >> Hello Jens and guys, > >> > >> This patchset fixes several issues(1, 2, 4) and consolidate &

[PATCH 06/11] selftests: ublk: don't pass ${dev_id} to _cleanup_test()

2025-03-03 Thread Ming Lei
More devices can be created in a single test, so simply remove all ublk devices in _cleanup_test(); meanwhile, remove the ${dev_id} argument of _cleanup_test(). Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/test_common.sh | 2 +- tools/testing/selftests/ublk/test_loop_01.sh | 2

[PATCH 08/11] selftests: ublk: load/unload ublk_drv when preparing & cleaning up tests

2025-03-03 Thread Ming Lei
Load ublk_drv module in _prep_test(), and unload it in _cleanup_test(), so that test can always be done in consistent state. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/test_common.sh | 5 + 1 file changed, 5 insertions(+) diff --git a/tools/testing/selftests/ublk

[PATCH 11/11] selftests: ublk: improve test usability

2025-03-03 Thread Ming Lei
tests, ...) Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/test_common.sh| 10 -- tools/testing/selftests/ublk/test_loop_01.sh | 2 +- tools/testing/selftests/ublk/test_loop_02.sh | 2 +- tools/testing/selftests/ublk/test_loop_03.sh | 2 +- tools/testing/selftests

[PATCH 10/11] selftests: ublk: add stress test for covering IO vs. killing ublk server

2025-03-03 Thread Ming Lei
Add stress_test_01 for running IO vs. killing the ublk server, so the io_uring exit & cancel code path can be covered, as well as ublk's cancel code path. IO buffer lifetime is especially important for ublk zero copy, and the added test can verify that this area works as expected. Signed-off-by:

[PATCH 07/11] selftests: ublk: move zero copy feature check into _add_ublk_dev()

2025-03-03 Thread Ming Lei
terminal shell. Meantime always return error code from _add_ublk_dev(). Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/test_common.sh | 56 tools/testing/selftests/ublk/test_loop_01.sh | 1 + tools/testing/selftests/ublk/test_loop_02.sh | 2 +- tools/testing

[PATCH 09/11] selftests: ublk: add one stress test for covering IO vs. removing device

2025-03-03 Thread Ming Lei
Add stress_test_01 for running IO vs. removing the device, verifying that ublk device removal works as expected while heavy IO workloads are in progress. null, loop and loop/zc are covered in this test. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile | 2 + tools

[PATCH 04/11] selftests: ublk: fix parsing '-a' argument

2025-03-03 Thread Ming Lei
The '-a' option doesn't take any value, so fix it by grouping it together with '-z'. Fixes: ed5820a7e918 ("selftests: ublk: add ublk zero copy test") Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/kublk.c | 2 +- 1 file changed, 1 insertion(+)
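An illustrative getopt sketch of that kind of fix, with an assumed option set rather than kublk's actual one: an option that takes no value must not be followed by ':' in the optstring, so it belongs with the other plain flags:

    #include <unistd.h>

    /* Hypothetical option parsing: '-a' and '-z' are plain flags, '-q'
     * takes a value. Writing the optstring as "a:zq:" (buggy) would make
     * getopt consume the next word as the argument of '-a'.
     */
    static void parse_args(int argc, char *argv[])
    {
            int opt;

            while ((opt = getopt(argc, argv, "azq:")) != -1) {
                    switch (opt) {
                    case 'a':       /* flag, no value */
                    case 'z':       /* flag, no value */
                            break;
                    case 'q':       /* takes a value, available in optarg */
                            break;
                    }
            }
    }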

[PATCH 05/11] selftests: ublk: support shellcheck and fix all warning

2025-03-03 Thread Ming Lei
Add shellcheck, meantime fixes all warnings. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 3 ++ tools/testing/selftests/ublk/test_common.sh | 57 +++- tools/testing/selftests/ublk/test_loop_01.sh | 10 ++-- tools/testing/selftests/ublk

[PATCH 03/11] selftests: ublk: add --foreground command line

2025-03-03 Thread Ming Lei
Add a --foreground command line option to help debugging. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/kublk.c | 17 + tools/testing/selftests/ublk/kublk.h | 1 + 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/ublk/kublk.c b/tools

[PATCH 00/11] selftests: ublk: bug fixes & consolidation

2025-03-03 Thread Ming Lei
ange for overriding skip_code, liburing uses 77 and kselftests takes 4 Ming Lei (11): selftests: ublk: make ublk_stop_io_daemon() more reliable selftests: ublk: fix build failure selftests: ublk: add --foreground command line selftests: ublk: fix parsing '-a' argument selfte

[PATCH 01/11] selftests: ublk: make ublk_stop_io_daemon() more reliable

2025-03-03 Thread Ming Lei
his way may reduce time of delete command a lot. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/kublk.c | 24 ++-- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/ublk/kublk.c b/tools/testing/selftests/ublk/kublk.c

[PATCH 02/11] selftests: ublk: fix build failure

2025-03-03 Thread Ming Lei
); | ^~~~ | O_DIRECTORY when trying to reuse this same utility for liburing test. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/kublk.h | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/testing/selftests/ublk

Re: [PATCH V3 0/3] selftests: add ublk selftests

2025-02-28 Thread Ming Lei
On Fri, Feb 28, 2025 at 09:37:47AM -0700, Jens Axboe wrote: > On 2/28/25 9:19 AM, Ming Lei wrote: > > Hello Jens, > > > > This patchset adds ublk kernel selftests, which is very handy for > > developer for verifying kernel change, especially ublk heavily depends >

[PATCH V3 3/3] selftests: ublk: add ublk zero copy test

2025-02-28 Thread Ming Lei
Enable zero copy on file backed target, meantime add one fio test for covering write verify, another test for mkfs/mount/umount. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 2 + tools/testing/selftests/ublk/file_backed.c | 104 +++ tools

[PATCH V3 2/3] selftests: ublk: add file backed ublk

2025-02-28 Thread Ming Lei
Add file backed ublk target code, meantime add one fio test for covering write verify, another test for mkfs/mount/umount. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 4 +- tools/testing/selftests/ublk/file_backed.c | 158 +++ tools/testing

[PATCH V3 1/3] selftests: ublk: add kernel selftests for ublk

2025-02-28 Thread Ming Lei
Both the ublk driver and its userspace heavily depend on the io_uring subsystem, and tools/testing/selftests/ should be the best place for holding such cross-subsystem tests. Add a basic read/write IO test over the ublk null disk, and make sure ublk is working. More tests will be added. Signed-off-by: Ming

[PATCH V3 0/3] selftests: add ublk selftests

2025-02-28 Thread Ming Lei
zero copy with io_link can pass - dump log in case of error - add one more test for mkfs/mount on zero copy Ming Lei (3): selftests: ublk: add kernel selftests for ublk selftests: ublk: add file backed ublk selftests: ublk: add ublk zero copy test MAINTAINERS

Re: [PATCH V2 3/3] selftests: ublk: add ublk zero copy test

2025-02-26 Thread Ming Lei
On Wed, Feb 26, 2025 at 10:41:43AM -0700, Keith Busch wrote: > On Wed, Feb 26, 2025 at 11:58:38PM +0800, Ming Lei wrote: > > + struct io_uring_sqe *reg; > > + struct io_uring_sqe *rw; > > + struct io_uring_sqe *ureg; > > + > > + if (!zc) { > >

[PATCH V2 0/3] selftests: add ublk selftests

2025-02-26 Thread Ming Lei
- make -C tools/testing/selftests TARGETS=ublk run_test Thanks, V2: - fix one sqe allocation bug, so ublk zero copy with io_link can pass - dump log in case of error - add one more test for mkfs/mount on zero copy Ming Lei (3): selftests: ublk: add kernel

[PATCH V2 3/3] selftests: ublk: add ublk zero copy test

2025-02-26 Thread Ming Lei
Add selftests for covering ublk zero copy feature. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 2 + tools/testing/selftests/ublk/kublk.c | 196 --- tools/testing/selftests/ublk/test_common.sh | 8 + tools/testing/selftests/ublk

[PATCH V2 1/3] selftests: ublk: add kernel selftests for ublk

2025-02-26 Thread Ming Lei
Both the ublk driver and its userspace heavily depend on the io_uring subsystem, and tools/testing/selftests/ should be the best place for holding such cross-subsystem tests. Add a basic read/write IO test over the ublk null disk, and make sure ublk is working. More tests will be added. Signed-off-by: Ming

[PATCH V2 2/3] selftests: ublk: add file backed ublk

2025-02-26 Thread Ming Lei
Add file backed ublk and IO verify test. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 2 + tools/testing/selftests/ublk/kublk.c | 172 ++- tools/testing/selftests/ublk/test_common.sh | 47 + tools/testing/selftests/ublk/test_loop_01

[PATCH 1/3] selftests: ublk: add kernel selftests for ublk

2025-02-26 Thread Ming Lei
Both the ublk driver and its userspace heavily depend on the io_uring subsystem, and tools/testing/selftests/ should be the best place for holding such cross-subsystem tests. Add a basic read/write IO test over the ublk null disk, and make sure ublk is working. More tests will be added. Signed-off-by: Ming

[PATCH 3/3] selftests: ublk: add ublk zero copy test

2025-02-26 Thread Ming Lei
Add selftests for covering ublk zero copy feature. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 1 + tools/testing/selftests/ublk/kublk.c | 204 +-- tools/testing/selftests/ublk/test_common.sh | 8 + tools/testing/selftests/ublk

[PATCH 2/3] selftests: ublk: add file backed ublk

2025-02-26 Thread Ming Lei
Add file backed ublk and IO verify test. Signed-off-by: Ming Lei --- tools/testing/selftests/ublk/Makefile| 2 + tools/testing/selftests/ublk/kublk.c | 172 ++- tools/testing/selftests/ublk/test_common.sh | 47 + tools/testing/selftests/ublk/test_loop_01

[PATCH 0/3] selftests: add ublk selftests

2025-02-26 Thread Ming Lei
/testing/selftests TARGETS=ublk run_test Thanks, Ming Lei (3): selftests: ublk: add kernel selftests for ublk selftests: ublk: add file backed ublk selftests: ublk: add ublk zero copy test MAINTAINERS |1 + tools/testing/selftests/Makefile |1

Re: [PATCH 1/2] loop: force GFP_NOIO for underlying file systems allocations

2025-01-17 Thread Ming Lei
On Fri, Jan 17, 2025 at 08:44:07AM +0100, Christoph Hellwig wrote: > File systems can and often do allocate memory in the read-write path. > If these allocations are done with __GFP_IO or __GFP_FS set they can > recurse into the file system or swap device on top of the loop device > and cause deadl
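For readers unfamiliar with the mechanism being discussed, a hedged sketch of scoped GFP_NOIO: memalloc_noio_save()/memalloc_noio_restore() mark a region so that allocations made underneath implicitly drop __GFP_IO, preventing reclaim from recursing back into the I/O path. The function below only illustrates the idea; it is not the loop patch:

    #include <linux/fs.h>
    #include <linux/sched/mm.h>

    /* Illustrative kernel-style sketch: force GFP_NOIO around a call into
     * the backing file so that memory reclaim triggered by the file system
     * cannot recurse into the loop device and deadlock.
     */
    static ssize_t lo_read_sketch(struct file *file, void *buf, size_t len,
                                  loff_t pos)
    {
            unsigned int noio_flags;
            ssize_t ret;

            noio_flags = memalloc_noio_save();   /* allocations below lose __GFP_IO */
            ret = kernel_read(file, buf, len, &pos);
            memalloc_noio_restore(noio_flags);

            return ret;
    }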

Re: [PATCH v5 8/9] blk-mq: issue warning when offlining hctx with online isolcpus

2025-01-10 Thread Ming Lei
Hi Daniel, On Fri, Jan 10, 2025 at 05:26:46PM +0100, Daniel Wagner wrote: > When isolcpus=managed_irq is enabled, and the last housekeeping CPU for > a given hardware context goes offline, there is no CPU left which > handles the IOs anymore. If isolated CPUs mapped to this hardware > context are

Re: [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled

2025-01-10 Thread Ming Lei
On Fri, Jan 10, 2025 at 10:21:49AM +0100, Daniel Wagner wrote: > Hi Ming, > > On Fri, Dec 20, 2024 at 04:54:21PM +0800, Ming Lei wrote: > > On Thu, Dec 19, 2024 at 04:38:43PM +0100, Daniel Wagner wrote: > > > > > When isolcpus=managed_irq is enabled all hardware

Re: [PATCH v4 9/9] blk-mq: issue warning when offlining hctx with online isolcpus

2024-12-20 Thread Ming Lei
On Tue, Dec 17, 2024 at 07:29:43PM +0100, Daniel Wagner wrote: > When we offlining a hardware context which also serves isolcpus mapped > to it, any IO issued by the isolcpus will stall as there is nothing > which handles the interrupts etc. > > This configuration/setup is not supported at this po

Re: [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled

2024-12-20 Thread Ming Lei
On Thu, Dec 19, 2024 at 05:20:44PM +0800, Ming Lei wrote: > > > + cpumask_andnot(isol_mask, > > > +cpu_possible_mask, > > > +housekeeping_cpumask(HK_TYPE_MANAGED_IRQ)); > > > + > > > + for_each_cpu(cpu, iso

Re: [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled

2024-12-19 Thread Ming Lei
On Tue, Dec 17, 2024 at 07:29:42PM +0100, Daniel Wagner wrote: > When isolcpus=managed_irq is enabled all hardware queues should run on > the housekeeping CPUs only. Thus ignore the affinity mask provided by > the driver. Also we can't use blk_mq_map_queues because it will map all > CPUs to first h

Re: [PATCH v5 8/8] blk-mq: remove unused queue mapping helpers

2024-11-20 Thread Ming Lei
On Fri, Nov 15, 2024 at 05:37:52PM +0100, Daniel Wagner wrote: > There are no users left of the pci and virtio queue mapping helpers. > Thus remove them. > > Reviewed-by: Christoph Hellwig > Reviewed-by: Hannes Reinecke > Signed-off-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v5 7/8] virtio: blk/scsi: replace blk_mq_virtio_map_queues with blk_mq_map_hw_queues

2024-11-20 Thread Ming Lei
ecke > Signed-off-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v5 6/8] nvme: replace blk_mq_pci_map_queues with blk_mq_map_hw_queues

2024-11-20 Thread Ming Lei
e > Signed-off-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v5 4/8] blk-mq: introduce blk_mq_map_hw_queues

2024-11-20 Thread Ming Lei
t; retrieved. Also, those functions are located in the block subsystem > where it doesn't really fit in. They are virtio and pci subsystem > specific. > > Thus introduce provide a generic mapping function which uses the > irq_get_affinity callback from bus_type. > > Originall

Re: [PATCH v5 5/8] scsi: replace blk_mq_pci_map_queues with blk_mq_map_hw_queues

2024-11-20 Thread Ming Lei
e > Signed-off-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v4 05/10] blk-mq: introduce blk_mq_hctx_map_queues

2024-11-14 Thread Ming Lei
On Thu, Nov 14, 2024 at 08:54:46AM +0100, Daniel Wagner wrote: > On Thu, Nov 14, 2024 at 09:58:25AM +0800, Ming Lei wrote: > > > +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap, > > > > Some drivers may not know hctx at all, maybe blk_mq_map_hw_queues()? >

Re: [PATCH v4 05/10] blk-mq: introduce blk_mq_hctx_map_queues

2024-11-13 Thread Ming Lei
t; retrieved. Also, those functions are located in the block subsystem > where it doesn't really fit in. They are virtio and pci subsystem > specific. > > Thus introduce provide a generic mapping function which uses the > irq_get_affinity callback from bus_type. > > Original

Re: [PATCH v4 04/10] virtio: hookup irq_get_affinity callback

2024-11-13 Thread Ming Lei
-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v4 03/10] PCI: hookup irq_get_affinity callback

2024-11-13 Thread Ming Lei
Reinecke > Signed-off-by: Daniel Wagner Reviewed-by: Ming Lei -- Ming

Re: [PATCH v4 02/10] driver core: add irq_get_affinity callback device_driver

2024-11-13 Thread Ming Lei
e_group **groups; > const struct attribute_group **dev_groups; The patch looks fine, but if you put 1, 2 and 5 into a single patch, it will become much easier to review; anyway: Reviewed-by: Ming Lei -- Ming

Re: [PATCH v4 01/10] driver core: bus: add irq_get_affinity callback to bus_type

2024-11-13 Thread Ming Lei
dev, > + unsigned int irq_vec); > > int (*online)(struct device *dev); > int (*offline)(struct device *dev); > Looks one nice abstraction, Reviewed-by: Ming Lei -- Ming

[PATCH] virtio-blk: don't keep queue frozen during system suspend

2024-11-12 Thread Ming Lei
to previous behavior by keeping queue quiesced during suspend. Cc: Yi Sun Cc: Michael S. Tsirkin Cc: Jason Wang Cc: Stefan Hajnoczi Cc: virtualizat...@lists.linux.dev Reported-by: Marek Szyprowski Signed-off-by: Ming Lei --- drivers/block/virtio_blk.c | 7 +-- 1 file changed, 5 insert

Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting managed interrupts

2024-10-31 Thread Ming Lei
On Thu, Oct 31, 2024 at 6:35 PM Thomas Gleixner wrote: > > On Thu, Oct 31 2024 at 15:46, guan...@linux.alibaba.com wrote: > > #ifdef CONFIG_SMP > > > > +static unsigned int __read_mostly managed_irqs_per_node; > > +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] > > __cacheline_aligned_i

Re: [PATCH v3 15/15] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2024-08-13 Thread Ming Lei
On Tue, Aug 06, 2024 at 02:06:47PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled all hardware queues should run on the > housekeeping CPUs only. Thus ignore the affinity mask provided by the > driver. Also we can't use blk_mq_map_queues because it will map all CPUs > to first hctx

Re: [PATCH v3 15/15] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2024-08-09 Thread Ming Lei
On Tue, Aug 06, 2024 at 02:06:47PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled all hardware queues should run on the > housekeeping CPUs only. Thus ignore the affinity mask provided by the > driver. Also we can't use blk_mq_map_queues because it will map all CPUs > to first hctx

Re: [PATCH v3 15/15] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2024-08-09 Thread Ming Lei
On Fri, Aug 09, 2024 at 09:22:11AM +0200, Daniel Wagner wrote: > On Thu, Aug 08, 2024 at 01:26:41PM GMT, Ming Lei wrote: > > Isolated CPUs are removed from queue mapping in this patchset, when someone > > submit IOs from the isolated CPU, what is the correct hctx used for handlin

Re: [PATCH v3 15/15] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2024-08-07 Thread Ming Lei
On Wed, Aug 07, 2024 at 02:40:11PM +0200, Daniel Wagner wrote: > On Tue, Aug 06, 2024 at 10:55:09PM GMT, Ming Lei wrote: > > On Tue, Aug 06, 2024 at 02:06:47PM +0200, Daniel Wagner wrote: > > > When isolcpus=io_queue is enabled all hardware queues should run on the > >

Re: [PATCH v3 15/15] blk-mq: use hk cpus only when isolcpus=io_queue is enabled

2024-08-06 Thread Ming Lei
On Tue, Aug 06, 2024 at 02:06:47PM +0200, Daniel Wagner wrote: > When isolcpus=io_queue is enabled all hardware queues should run on the > housekeeping CPUs only. Thus ignore the affinity mask provided by the > driver. Also we can't use blk_mq_map_queues because it will map all CPUs > to first hctx

Re: [PATCH v3 14/15] lib/group_cpus.c: honor housekeeping config when grouping CPUs

2024-08-06 Thread Ming Lei
On Tue, Aug 06, 2024 at 02:06:46PM +0200, Daniel Wagner wrote: > group_cpus_evenly distributes all present CPUs into groups. This ignores > the isolcpus configuration and assigns isolated CPUs into the groups. > > Make group_cpus_evenly aware of isolcpus configuration and use the > housekeeping CP

Re: [PATCH] virtio_blk: Fix device surprise removal

2024-02-18 Thread Ming Lei
On Sat, Feb 17, 2024 at 08:08:48PM +0200, Parav Pandit wrote: > When the PCI device is surprise removed, requests won't complete from > the device. These IOs are never completed and disk deletion hangs > indefinitely. > > Fix it by aborting the IOs which the device will never complete > when the V
