> -Original Message-
> From: Javier González
> Sent: Tuesday, 7 July 2020 10.43
> To: Matias Bjorling
> Cc: Damien Le Moal ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Niklas Cassel ; Hans
> Holmberg ; li
> -Original Message-
> From: Javier González
> Sent: Monday, 29 June 2020 21.39
> To: Damien Le Moal
> Cc: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Niklas Cassel ; Hans
> Holmberg ; li
> -Original Message-
> From: Bart Van Assche
> Sent: Monday, 29 June 2020 03.36
> To: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Damien Le Moal ;
> Niklas Cassel ; Hans Holmberg
>
> Cc:
> -Original Message-
> From: Niklas Cassel
> Sent: Monday, 29 June 2020 11.04
> To: Damien Le Moal
> Cc: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Hans Holmberg ;
> linux-s...@vger.
On 02/20/2016 08:52 AM, Matias Bjørling wrote:
> Hi Jens,
>
> Sorry, I was living in a fairy tale land, where patches are
> miraculously applied without being sent upstream. Leading me to test
> on top of the wrong base.
>
> I was missing three patches, which I should have sent for previous -rc
>
On 11/02/2015 04:37 PM, Jens Axboe wrote:
> On 11/02/2015 05:43 AM, Matias Bjorling wrote:
>> On 11/02/2015 02:16 AM, Randy Dunlap wrote:
>>> On 11/01/15 08:53, Stephen Rothwell wrote:
>>>> Hi all,
>>>>
>>>> I start again a day early, and thi
On 11/02/2015 02:16 AM, Randy Dunlap wrote:
> On 11/01/15 08:53, Stephen Rothwell wrote:
>> Hi all,
>>
>> I start again a day early, and this is how you all repay me? ;-)
>>
>> Changes since 20151022:
>>
>
> on i386:
>
> ../include/linux/lightnvm.h:143:4: error: width of 'resved' exceeds its type
Den 02-09-2015 kl. 20:39 skrev Ross Zwisler:
On Mon, Aug 31, 2015 at 02:17:18PM +0200, Matias Bjørling wrote:
From: Matias Bjørling
The driver was not freeing the memory allocated for internal nullb queues.
This patch frees the memory during driver unload.
You may want to consider devm_* style a
I don't think the current abuses of the block API are acceptable though.
The crazy deep merging shouldn't be too relevant for SSD-type devices
so I think you'd do better than trying to reuse the TYPE_FS level
blk-mq merging code. If you want to reuse the request
allocation/submission code that's
On 06/11/2015 12:29 PM, Christoph Hellwig wrote:
> On Wed, Jun 10, 2015 at 08:11:42PM +0200, Matias Bjorling wrote:
>> 1. A get/put flash block API, that user-space applications can use.
>> That will enable application-driven FTLs. E.g. RocksDB can be integrated
>> tightly w
On 06/09/2015 09:46 AM, Christoph Hellwig wrote:
> Hi Matias,
>
> I've been looking over this and I really think it needs a fundamental
> rearchitecture still. The design of using a separate stacking
> block device and all kinds of private hooks does not look very
> maintainable.
>
> Here is my
-
+#if defined(CONFIG_NVM)
+ struct bio_nvm_payload *bi_nvm; /* open-channel ssd backend */
+#endif
unsigned short bi_vcnt;/* how many bio_vec's */
Jens suggests this to implemented using a bio clone. Will do in the next
refresh.
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -39,6 +39,7 @@
#include
#include
#include
+#include
#include
#include
@@ -134,6 +135,11 @@ static inline void _nvme_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
BUILD_BUG_ON(siz
On Sat, Apr 18, 2015 at 08:45:19AM +0200, Matias Bjorling wrote:
The reason it shouldn't be under a single block device is that a target
should be able to provide a global address space.
That allows the address
space to grow/shrink dynamically with the disks. Allowing a continu
Den 17-04-2015 kl. 19:46 skrev Christoph Hellwig:
On Fri, Apr 17, 2015 at 10:15:46AM +0200, Matias Bjørling wrote:
Just the prep/unprep, or other pieces as well?
All of it - it's functionality that lies logically below the block
layer, so that's where it should be handled.
In fact it should p
Den 16-04-2015 kl. 16:55 skrev Keith Busch:
On Wed, 15 Apr 2015, Matias Bjørling wrote:
@@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
struct nvme_id_ctrl *ctrl;
void *mem;
dma_addr_t dma_addr;
-int shift = NVME_CAP_MPSMIN(readq(&dev->bar->cap)) + 12;
+u6
On 08/15/2014 01:09 AM, Keith Busch wrote:
The allocation and freeing of blk-mq parts seems a bit asymmetrical
to me. The 'tags' belong to the tagset, but any request_queue using
that tagset may free the tags. I looked to separate the tag allocation
concerns, but that's more time than I have, so
I haven't even tried debugging this next one: doing an insmod+rmmod
caused this warning followed by a panic:
I'll look into it. Thanks
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with sl
On 07/14/2014 02:41 PM, Christoph Hellwig wrote:
+static int nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+ unsigned int hctx_idx)
+ struct nvme_queue *nvmeq = dev->queues[
+ (hctx_idx % dev->queue_count) + 1];
+
+
On 07/22/2014 07:46 AM, Hannes Reinecke wrote:
On 07/21/2014 09:28 PM, Kent Overstreet wrote:
On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
On 07/18/2014 07:04 PM, John Utz wrote:
On 07/18/2014 05:31 AM, John Utz wrote:
Thankyou very much for the exhaustive answer! I forwar
Den 16-06-2014 17:57, Keith Busch skrev:
On Fri, 13 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
{
- struct nvme_dev *dev = pci_get_drvdata(pdev);
+struct nvme_de
On 06/04/2014 08:52 PM, Keith Busch wrote:
> On Wed, 4 Jun 2014, Jens Axboe wrote:
>> On 06/04/2014 12:28 PM, Keith Busch wrote:
>> Are you testing against 3.13? You really need the current tree for this,
>> otherwise I'm sure you'll run into issues (as you appear to be :-)
>
> I'm using Matias' c
On 06/03/2014 01:06 AM, Keith Busch wrote:
> Depending on the timing, it might fail in alloc instead of free:
>
> Jun 2 16:45:40 kbgrz1 kernel: [ 265.421243] NULL pointer dereference
> at (null)
> Jun 2 16:45:40 kbgrz1 kernel: [ 265.434284] PGD 429acf067 PUD
> 42ce28067 PMD 0
> Jun
On 05/30/2014 06:48 PM, Keith Busch wrote:
> On Thu, 29 May 2014, Matias Bjørling wrote:
>> This converts the current NVMe driver to utilize the blk-mq layer.
>
> I'm pretty darn sure this new nvme_remove can cause a process
> with an open reference to use queues after they're freed in the
> nvme_
On 05/30/2014 01:12 AM, Jens Axboe wrote:
> On 05/29/2014 05:06 PM, Jens Axboe wrote:
>> Ah I see, yes that code apparently got axed. The attached patch brings
>> it back. Totally untested, I'll try and synthetically hit it to ensure
>> that it does work. Note that it currently does unmap and iod f
On 04/03/2014 11:01 AM, Christoph Hellwig wrote:
> On Thu, Apr 03, 2014 at 09:45:11AM -0700, Matias Bjorling wrote:
>>> I'd still create a request_queue for the internal queue, just not register
>>> a block device for it. For example SCSI sets up queues for each LUN
>
On 04/03/2014 12:36 AM, Christoph Hellwig wrote:
> On Wed, Apr 02, 2014 at 09:10:12PM -0700, Matias Bjorling wrote:
>> For the nvme driver, there's a single admin queue, which is outside
>> blk-mq's control, and the X normal queues. Should we allow the shared
>> tag
On 04/02/2014 12:46 AM, Christoph Hellwig wrote:
> On Tue, Apr 01, 2014 at 05:16:21PM -0700, Matias Bjorling wrote:
>> Hi Christoph,
>>
>> Can you rebase it on top of 3.14. I have trouble applying it for testing.
>
> Hi Martin,
>
> the series is based on top of
On 03/31/2014 07:46 AM, Christoph Hellwig wrote:
> This series adds support for sharing tags (and thus requests) between
> multiple request_queues. We'll need this for SCSI, and I think Martin
> also wants something similar for nvme.
>
> Besides the mess with request constructors/destructors the m
On 03/24/2014 08:08 PM, David Lang wrote:
> On Fri, 21 Mar 2014, Matias Bjorling wrote:
>
>> On 03/21/2014 02:06 AM, Joe Thornber wrote:
>>> Hi Matias,
>>>
>>> This looks really interesting and I'd love to get involved. Do you
>>> have a
On 03/24/2014 07:22 PM, Akira Hayakawa wrote:
> Hi, Matias,
>
> Sorry for jumping in. I am interested in this new feature, too.
>
>>> Does it even make sense to expose the underlying devices as block
>>> devices? It surely would help to send this together with a driver
>>> that you plan to use i
On 03/23/2014 11:13 PM, Bart Van Assche wrote:
> On 03/21/14 16:37, Christoph Hellwig wrote:
>> Just curious: why do you think implementing this as a block remapper
>> inside device mapper is a better idea than as a blk-mq driver?
>>
>> At the request layer you already get a lot of infrastructure
On 03/21/2014 08:32 AM, Richard Weinberger wrote:
> On Fri, Mar 21, 2014 at 7:32 AM, Matias Bjørling wrote:
>
> This sounds very interesting!
>
> Is there also a way to expose the flash directly as MTD device?
> I'm thinking of UBI. Maybe both projects can benefit from each others.
>
Hi Richa
On 03/21/2014 08:37 AM, Christoph Hellwig wrote:
> Just curious: why do you think implementing this as a block remapper
> inside device mapper is a better idea than as a blk-mq driver?
Hi Christoph,
I imagine the layer to interact with a compatible SSD, that either uses
SATA, NVMe or PCI-e as ho
>> compatible device, the device will be queried upon initialization for the
>> relevant values.
>>
>> The last part is still in progress and a fully working prototype will be
>> presented in upcoming patches.
>>
>> Contributions to make this possible by the following pe
On 03/21/2014 02:06 AM, Joe Thornber wrote:
> Hi Matias,
>
> This looks really interesting and I'd love to get involved. Do you
> have any recommendations for what hardware I should pick up?
Hi Joe,
The most easily available platform is OpenSSD
(http://www.openssd-project.org). It's a little ol
fers() could return NULL on (1) bs > PAGE_SIZE
> + * (2) low memory case. Ensure that we don't dereference null ptr
> + */
> + BUG_ON(!head);
> bh = head;
> do {
> bh->b_state |= b_state;
>
Reviewed-by: Matias Bjorling
;
> + bs = PAGE_SIZE;
> + }
>
> if (queue_mode == NULL_Q_MQ && use_per_node_hctx) {
> if (submit_queues < nr_online_nodes) {
>
Reviewed-by: Matias Bjorling
On 01/20/2014 04:58 AM, Raghavendra K T wrote:
> If we load the null_blk module with bs=8k we get the following oops:
> [ 3819.812190] BUG: unable to handle kernel NULL pointer dereference at
> 0008
> [ 3819.812387] IP: [] create_empty_buffers+0x28/0xaf
> [ 3819.812527] PGD 219244067 PUD
On 01/17/2014 01:22 AM, Raghavendra K T wrote:
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index a2e69d2..6b0e049 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -535,6 +535,11 @@ static int null_add_dev(void)
if (!nullb)
retur
On 01/09/2014 10:33 PM, Muthu Kumar wrote:
Thanks Matias. Yes, Ming Lei's 4th patch does make the function internal.
So, which branch has the latest patches... I am checking for-3.14/core...
Depends on what you want to do. You may use Jens' for-linus branch
for the latest including blk pa
On 01/09/2014 07:54 PM, Muthu Kumar wrote:
Jens,
Compiling null_blk.ko failed with error that blk_mq_free_queue() was
defined implicitly. So, moved the declaration from block/blk-mq.h to
include/linux/blk-mq.h and exported it.
The patch from Ming Lei is missing in -rc6
4af48694451676403188a6
From: Matias Bjørling
Randy Dunlap reported a couple of grammar errors and unfortunate usages of
socket/node/core.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/Documentation/block
queues are mapped as node0[0,1],
node1[2,3], ...
If uneven, we are left with an uneven number of submit_queues that must be
mapped. These are mapped toward the first node and onward. E.g. 5
submit queues mapped onto 4 nodes are mapped as node0[0,1], node1[2], ...
Signed-off-by: Matias Bjorling
From: Matias Bjørling
The default for the module is to instantiate itself with blk-mq and a
submit queue for each node in the system.
To save resources, instead initialize with a single submit queue.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 9
Hi,
These three patches cover:
* Incorporated the feedback from Randy Dunlap into the documentation.
Can be merged with the previous documentation commit (6824518).
* Set use_per_node_hctx to false by default to save resources.
* Allow submit_queues and use_per_node_hctx to be used simultaneously.
parameter is ignored.
Thanks,
Matias
Matias Bjorling (3):
null_blk: documentation
null_blk: refactor init and init errors code paths
null_blk: warning on ignored submit_queues param
Documentation/block/null_blk.txt | 71
drivers/block/null_blk.c
Add description of module and its parameters.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 71
1 file changed, 71 insertions(+)
create mode 100644 Documentation/block/null_blk.txt
diff --git a/Documentation/block/null_blk.txt b
Let the user know when the number of submission queues is being
ignored.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index f0aeb2a..8f2e7c3 100644
paths.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 63 +---
1 file changed, 38 insertions(+), 25 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index f370fc1..f0aeb2a 100644
--- a/drivers/block/null_blk.c
garbage into struct request_queue's
mq_map.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index ea192ec..f370fc1 100644
--- a/drivers/block/null_blk.c
Now that the tags mapping allows shared mapping, update blk-mq to allow
the driver to control initialization of the tags structure.
Signed-off-by: Matias Bjorling
---
block/blk-mq.c | 19 +++
include/linux/blk-mq.h | 6 ++
2 files changed, 17 insertions(+), 8
ed by the NVMe, that has a limited number of queues,
shared by multiple request queues.
v1->v2
* Changed blk_mq_init_tags_shared to blk_mq_tags_get.
* Moved from EXPORT_SYMBOL_GPL to EXPORT_SYMBOL
* Moved to using a flag for defining when driver handles tags initialization.
Matias Bjorling (2
Some devices, such as NVMe, initialize a limited number of queues. These are
shared by all the block devices. Allow a driver to tap into the tags
structure and decide whether an existing tag structure should be used.
Signed-off-by: Matias Bjorling
---
block/blk-mq-tag.c | 22
These two patches enable a driver to control whether tags should be initialized
or an existing tags structure reused.
This is needed, for example, by the NVMe driver, which shares a limited set of
queues among the block devices it exposes.
Matias Bjorling (2):
blk-mq: allow request queues to share tags map
Now that the tags mapping allows shared mapping, update blk-mq to allow
the driver to control initialization of the tags structure.
Signed-off-by: Matias Bjorling
---
block/blk-mq.c | 11 +++
include/linux/blk-mq.h | 12
2 files changed, 19 insertions(+), 4
Den 22-10-2013 18:55, Keith Busch skrev:
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This
primarily
because of high lock congestion for high
On 10/18/2013 05:48 PM, Matthew Wilcox wrote:
On Fri, Oct 18, 2013 at 03:14:19PM +0200, Matias Bjorling wrote:
Performance study:
System: HGST Research NVMe prototype, Haswell i7-4770 3.4GHz, 32GB 1333MHz
I don't have one of these. Can you provide more details about it,
such as:
within blk-mq. Decide if mq has the responsibility or
layers higher up should be aware.
* Only issue doorbell on REQ_END.
* Understand if nvmeq->q_suspended is necessary with blk-mq.
* Only a single name-space is supported. Keith suggests extending gendisk to be
namespace aware.
Matias
The queue size of the admin queue should be defined as a constant for use in
multiple places.
Signed-off-by: Matias Bjorling
---
drivers/block/nvme-core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index da52092
blk-mq where possible.
Signed-off-by: Matias Bjorling
---
drivers/block/nvme-core.c | 765 +++---
drivers/block/nvme-scsi.c | 39 +--
include/linux/nvme.h | 7 +-
3 files changed, 385 insertions(+), 426 deletions(-)
diff --git a/drivers/block/nvme
The driver initializes itself using init_hctx and reverts using exit_hctx if
unsuccessful. exit_hctx is missing on normal hw queue teardown.
Signed-off-by: Matias Bjorling
---
block/blk-mq.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 923e9e1
previous patch with the updated. Sorry for the
inconvenience.
Changes in v2:
[1/1] Uses EXPORT_SYMBOL_GPL
Matias Bjorling (1):
percpu-refcount: Export symbols
lib/percpu-refcount.c | 3 +++
1 file changed, 3 insertions(+)
--
1.8.1.2
Export the interface to be used within modules.
Signed-off-by: Matias Bjorling
---
lib/percpu-refcount.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 7deeb62..1a53d49 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
These need to be exported to be used within modules.
Signed-off-by: Matias Bjorling
---
lib/percpu-refcount.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 7deeb62..25b9ac7 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
ERROR: "percpu_ref_kill_and_confirm" [...] undefined!
Working on a driver that needs them, but other modules might benefit as well.
Matias Bjorling (1):
percpu-refcount: Export symbols
lib/percpu-refcount.c | 3 +++
1 file changed, 3 insertions(+)
--
1.8.1.2
The queue size of the admin queue should be defined as a constant for use in
multiple places.
Signed-off-by: Matias Bjorling
---
drivers/block/nvme-core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 1e940e8
sq_tail.
Matias Bjorling (2):
NVMe: Refactor doorbell
NVMe: Extract admin queue size
drivers/block/nvme-core.c | 32 +++-
1 file changed, 15 insertions(+), 17 deletions(-)
--
1.8.1.2
The doorbell code is repeated in various places. Refactor it into its own
function for clarity.
Signed-off-by: Matias Bjorling
---
drivers/block/nvme-core.c | 29 +
1 file changed, 13 insertions(+), 16 deletions(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block
Two refactor patches that increase code clarity.
The upcoming blk-mq proposal patchset uses these as part of its transformation.
They are also useful for the current codebase and are therefore submitted upfront.
Matias Bjorling (2):
NVMe: Refactor doorbell
NVMe: Extract admin queue size
The queue size of the admin queue should be defined as a constant for use in
multiple places.
Signed-off-by: Matias Bjorling
---
drivers/block/nvme-core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 1e940e8