Your points are good. I don't know of a good macrobenchmark at present,
but at least various latency numbers are easy to get out of fio.
I ran a similar set of tests on an Optane 900P with results below.
'clat' is the completion latency as reported by fio, measured in usec.
'configuration' is [block si
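For reference, completion-latency ('clat') numbers like the ones discussed above can be pulled from fio with a small job file along these lines (the device path, block size, and runtime below are placeholders, not taken from the tests in this thread):

```ini
; minimal random-read latency probe; /dev/nvme0n1 is a placeholder device
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=1
time_based=1
runtime=30

[clat-probe]
filename=/dev/nvme0n1
; fio's summary reports clat average and percentiles (p99 etc.)
```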
When one iSCSI device logs in and logs out while "multipath -r" is
executed at the same time, a memory leak occurs in the multipathd
process.
The reason is as follows: when "multipath -r" is executed, the path
will be freed in the configure function. Before path_discovery is
executed, the iSCSI device logs out. The
Hi
I'm sorry for subject with Re. I will send it again.
On 2020/8/18 21:11, lixiaokeng wrote:
> When one iSCSI device logs in and logs out while "multipath -r" is
> executed at the same time, a memory leak occurs in the multipathd
> process.
>
> The reason is as follows: when "multipath -r" is execu
Just to bring in some more context: the primary trigger that made us look
into it was high p99 read latency on a random read workload on modern-ish
SATA SSD and NVMe disks. That is, on average things looked fine, but some
portion of the requests, which required a small chunk of data to be fetched
from
On Tue, Aug 18 2020 at 7:52pm -0400,
Ming Lei wrote:
> On Tue, Aug 18, 2020 at 11:20:22AM -0400, Mike Snitzer wrote:
> > On Tue, Aug 18 2020 at 10:50am -0400,
> > Jens Axboe wrote:
> >
> > > On 8/18/20 2:07 AM, Ming Lei wrote:
> > > > c616cbee97ae ("blk-mq: punt failed direct issue to dispatch
On Tue, Aug 18, 2020 at 11:20:22AM -0400, Mike Snitzer wrote:
> On Tue, Aug 18 2020 at 10:50am -0400,
> Jens Axboe wrote:
>
> > On 8/18/20 2:07 AM, Ming Lei wrote:
> > > c616cbee97ae ("blk-mq: punt failed direct issue to dispatch list")
> > > supposed
> > > to add request which has been through
On Tue, Aug 18, 2020 at 2:12 PM Ignat Korchagin wrote:
>
> Additionally if one cares about latency
I think everybody really deep down cares about latency; they just
don't always know it, and the benchmarks are very seldom about it
because it's so much harder to measure.
> they will not use HDDs
On Tue, Aug 18, 2020 at 1:40 PM John Dorminy wrote:
>
>The summary (for my FIO workloads focused on
> parallelism) is that offloading is useful for high IO depth random
> writes on SSDs, and for long sequential small writes on HDDs.
Do we have any non-microbenchmarks that might be somewhat
re
For what it's worth, I just ran two tests on a machine with dm-crypt
using the cipher_null:ecb cipher. Results are mixed; not offloading IO
submission can result in a -27% to +23% change in throughput, across a
selection of three IO patterns on HDDs and SSDs.
(Note that the IO submission thread also reorde
Hi Lixiaokeng,
On Tue, 2020-08-18 at 21:09 +0800, lixiaokeng wrote:
> There may be a race window here:
> 1. all paths gone, causing the map to be flushed both from multipathd
> and the kernel
> 2. paths regenerated, causing multipathd to create the map again.
>
> 1 will generate a remove uevent which can be han
On Tue, 2020-08-18 at 21:08 +0800, lixiaokeng wrote:
> Add reclear_pp_from_mpp in ev_remove_path to make sure that pp is
> cleared in mpp.
>
> When multipathd del path xxx, multipathd -v2, multipathd add path xxx
> and multipath -U
> dm-x are executed simultaneously, multipath -U dm-x will cause
>
On Tue, 2020-08-18 at 21:06 +0800, lixiaokeng wrote:
> I got a multipath segfault while running iscsi login/logout and
> following scripts in parallel:
>
> #!/bin/bash
> interval=1
> while true
> do
> multipath -F &> /dev/null
> multipath -r &> /dev/null
>
On Tue, 2020-08-18 at 21:02 +0800, lixiaokeng wrote:
> In set_ble_device func, if blist is NULL or ble is NULL,
> the vendor and product aren't freed. We think it is not
> reasonable that strdup(XXX) is used as the parameter of the
> set_ble_device and store_ble functions.
>
> Here we call strdup() in store_
On Tue, Aug 18 2020 at 10:50am -0400,
Jens Axboe wrote:
> On 8/18/20 2:07 AM, Ming Lei wrote:
> > c616cbee97ae ("blk-mq: punt failed direct issue to dispatch list") supposed
> > to add request which has been through ->queue_rq() to the hw queue dispatch
> > list, however it adds request running o
On 2020/8/17 16:35, Martin Wilck wrote:
> On Sun, 2020-08-16 at 09:44 +0800, Zhiqiang Liu wrote:
>> We adopt static char* array (sd_notify_status_msg) in
>> sd_notify_status func, so it looks simpler and easier
>> to expand.
>>
>> Signed-off-by: Zhiqiang Liu
>> Signed-off-by: lixiaokeng
>
On Sun, 2020-08-16 at 14:02 -0700, Tushar Sugandhi wrote:
> There are several device-mapper targets which contribute to verify
> the integrity of the mapped devices e.g. dm-integrity, dm-verity,
> dm-crypt etc.
>
> But they do not use the capabilities provided by kernel integrity
> subsystem (IMA)
On 2020-08-17 4:43 p.m., Mimi Zohar wrote:
On Mon, 2020-08-17 at 15:27 -0700, Tushar Sugandhi wrote:
scripts/Lindent isn't as prevalent as it used to be, but it's still
included in Documentation/process/coding-style.rst. Use it as a guide.
Thanks for the pointer. We'll use scripts/Lindent
On Mon, 2020-08-17 at 23:33 +0800, Zhiqiang Liu wrote:
>
> On 2020/8/17 16:35, Martin Wilck wrote:
> > On Sun, 2020-08-16 at 09:44 +0800, Zhiqiang Liu wrote:
> > > We adopt static char* array (sd_notify_status_msg) in
> > > sd_notify_status func, so it looks simpler and easier
> > > to expand
On 2020/8/17 23:44, Martin Wilck wrote:
> On Mon, 2020-08-17 at 23:33 +0800, Zhiqiang Liu wrote:
>>
>> On 2020/8/17 16:35, Martin Wilck wrote:
>>> On Sun, 2020-08-16 at 09:44 +0800, Zhiqiang Liu wrote:
We adopt static char* array (sd_notify_status_msg) in
sd_notify_status func, so it l
On 2020-08-17 2:46 p.m., Mimi Zohar wrote:
On Sun, 2020-08-16 at 14:02 -0700, Tushar Sugandhi wrote:
There are several device-mapper targets which contribute to verify
the integrity of the mapped devices e.g. dm-integrity, dm-verity,
dm-crypt etc.
But they do not use the capabilities provide
On Mon, 2020-08-17 at 15:27 -0700, Tushar Sugandhi wrote:
> > scripts/Lindent isn't as prevalent as it used to be, but it's still
> > included in Documentation/process/coding-style.rst. Use it as a guide.
> Thanks for the pointer. We'll use scripts/Lindent going forward
Please don't change exist
On Wed, 2020-08-12 at 12:31 -0700, Tushar Sugandhi wrote:
> There would be several candidate kernel components suitable for IMA
> measurement. Not all of them would be enlightened for IMA measurement.
> Also, system administrators may not want to measure data for all of
> them, even when they are e
On 2020/8/17 23:44, Martin Wilck wrote:
> On Mon, 2020-08-17 at 23:33 +0800, Zhiqiang Liu wrote:
>>
>> On 2020/8/17 16:35, Martin Wilck wrote:
>>> On Sun, 2020-08-16 at 09:44 +0800, Zhiqiang Liu wrote:
We adopt static char* array (sd_notify_status_msg) in
sd_notify_status func, so it l
On 2020-08-17 1:43 p.m., Mimi Zohar wrote:
On Wed, 2020-08-12 at 12:31 -0700, Tushar Sugandhi wrote:
There would be several candidate kernel components suitable for IMA
measurement. Not all of them would be enlightened for IMA measurement.
Also, system administrators may not want to measure d
On Sun, 2020-08-16 at 09:44 +0800, Zhiqiang Liu wrote:
> We adopt static char* array (sd_notify_status_msg) in
> sd_notify_status func, so it looks simpler and easier
> to expand.
>
> Signed-off-by: Zhiqiang Liu
> Signed-off-by: lixiaokeng
> ---
> multipathd/main.c | 26 ---
When one iSCSI device logs in and logs out while "multipath -r" is
executed at the same time, a memory leak occurs in the multipathd
process.
The reason is as follows: when "multipath -r" is executed, the path
will be freed in the configure function. Before path_discovery is
executed, the iSCSI device logs out. The
There may be a race window here:
1. all paths gone, causing the map to be flushed both from multipathd and
the kernel
2. paths regenerated, causing multipathd to create the map again.
1 will generate a remove uevent which can be handled after 2, so we can
disable queueing for the map created by 2 here temporaril
Add reclear_pp_from_mpp in ev_remove_path to make sure that pp is cleared in
mpp.
When multipathd del path xxx, multipathd -v2, multipathd add path xxx and
multipath -U
dm-x are executed simultaneously, multipath -U dm-x will cause a coredump.
The reason is that there are two paths with the same dev_t
I got a multipath segfault while running iscsi login/logout and following
scripts in parallel:
#!/bin/bash
interval=1
while true
do
multipath -F &> /dev/null
multipath -r &> /dev/null
multipath -v2 &> /dev/null
multipath -ll &> /dev/null
sleep $interval
done
In set_ble_device func, if blist is NULL or ble is NULL,
the vendor and product aren't freed. We think it is not
reasonable that strdup(XXX) is used as the parameter of the
set_ble_device and store_ble functions.
Here we call strdup() in the store_ble and set_ble_device
functions and the string will be freed if f
While learning the multipath-tools source code and testing it, I
found some bugs and fixed them.
repo: openSUSE/multipath-tools
repo link: https://github.com/openSUSE/multipath-tools
branch: upstream-queue
lixiaokeng (5):
libmultipath: fix a memory leak in set_ble_device
libmultipath: fix NULL derefe
On Mon, 2020-08-17 at 19:30 -0500, Benjamin Marzinski wrote:
> On Wed, Aug 12, 2020 at 01:36:01PM +0200, mwi...@suse.com wrote:
> > From: Martin Wilck
> >
> > A typo in a config file, assigning the same alias to multiple
> > WWIDs,
> > can cause massive confusion and even data corruption. Check a
On Mon, 2020-08-17 at 16:31 -0500, Benjamin Marzinski wrote:
> On Wed, Aug 12, 2020 at 01:35:40PM +0200, mwi...@suse.com wrote:
> > From: Martin Wilck
> >
> > If we are in the reconfigure() code path, and we encounter maps to
> > be reloaded, we usually set the DM_SUBSYSTEM_UDEV_FLAG0 flag to
> >