> On Tue, Jun 06, 2017 at 02:59:43PM +0300, Sagi Grimberg wrote:
>> Christoph,
>>
>> Can you place a stable tag on this (4.11+)?
>
> Added.
I do not see this one in nvme-4.13. Can we get it in, please?
We're seeing the races in our setup and this patch fixes it.
Thanks,
Marta
Hello Mellanox maintainers,
I'd like to ask you to OK backporting two patches in mlx5 driver to 4.9 stable
tree (they're in master for some time already).
We have multiple deployments on 4.9 that are running into the bug fixed by those
patches. We're deploying patched kernels and the issue disappears.
- Original Message -
> On Tue, Jan 30, 2018 at 10:12:51AM +0100, Marta Rybczynska wrote:
>> Hello Mellanox maintainers,
>> I'd like to ask you to OK backporting two patches in mlx5 driver to 4.9
>> stable
>> tree (they're in master for some time already).
> @@ -429,10 +429,7 @@ static void __nvme_submit_cmd(struct nvme_queue *nvmeq,
> {
> u16 tail = nvmeq->sq_tail;
>
> - if (nvmeq->sq_cmds_io)
> - memcpy_toio(&nvmeq->sq_cmds_io[tail], cmd, sizeof(*cmd));
> - else
> - memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
avoid the issue.
Signed-off-by: Marta Rybczynska
Signed-off-by: Pierre-Yves Kerbrat
---
drivers/nvme/host/pci.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b6f43b7..af53854 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
> On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
>> NVMe driver uses threads for the work at device reset, including enabling
>> the PCIe device. When multiple NVMe devices are initialized, their reset
>> works may be scheduled in parallel. Then pci_ena
> On Wed, Mar 21, 2018 at 11:48:09PM +0800, Ming Lei wrote:
>> On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote:
>> > > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
>> > >> NVMe driver uses threads for the work at device
> On Wed, Mar 21, 2018 at 05:10:56PM +0100, Marta Rybczynska wrote:
>>
>> The problem may happen also with other device doing its probe and
>> nvme running its workqueue (and we probably have seen it in practice
>> too). We were thinking about a lock in the pci gene
- Original Message -
> From: "Marta Rybczynska"
> To: "Keith Busch"
> Cc: "Ming Lei" , ax...@fb.com, h...@lst.de,
> s...@grimberg.me, linux-n...@lists.infradead.org,
> linux-kernel@vger.kernel.org, bhelg...@google.com, linux-...@vger.ker
- On 17 Aug, 2018, at 06:49, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
> This protects enable/disable operations using the state mutex to
> avoid races with, for example, concurrent enables on a bridge.
>
> The bus hierarchy is walked first before taking the lock to
> avoid l
http://lists.infradead.org/pipermail/linux-nvme/2018-June/018791.html
Signed-off-by: Marta Rybczynska
---
drivers/nvme/host/core.c | 66 ++---
include/uapi/linux/nvme_ioctl.h | 23 ++
2 files changed, 85 insertions(+), 4 deletions(-)
diff --
- On 22 Aug, 2019, at 02:06, Christoph Hellwig h...@lst.de wrote:
> On Fri, Aug 16, 2019 at 11:47:21AM +0200, Marta Rybczynska wrote:
>> It is not possible to get 64-bit results from the passthru commands,
>> which prevents reading the Capabilities (CAP) property value.
http://lists.infradead.org/pipermail/linux-nvme/2018-June/018791.html
Signed-off-by: Marta Rybczynska
---
drivers/nvme/host/core.c | 108 ++--
include/uapi/linux/nvme_ioctl.h | 23 +
2 files changed, 115 insertions(+), 16 deletions(-)
diff --git a/dr
n do an approach like this.
>
> On Fri, Aug 16, 2019 at 11:47:21AM +0200, Marta Rybczynska wrote:
>> It is not possible to get 64-bit results from the passthru commands,
>> which prevents reading the Capabilities (CAP) property value.
>>
>> As a result, it is
- On 10 Jul, 2019, at 18:38, Christoph Hellwig h...@lst.de wrote:
> On Wed, Jul 10, 2019 at 07:26:46AM +0200, Marta Rybczynska wrote:
>> Christoph, why would you like to put the use_ana function in the header?
>> It isn't used anywhere else outside of that file.
>
, including 64-bit results. The older ioctls stay
unchanged.
[1] http://lists.infradead.org/pipermail/linux-nvme/2018-June/018791.html
Signed-off-by: Marta Rybczynska
---
drivers/nvme/host/core.c | 98 -
include/uapi/linux/nvme_ioctl.h | 23 ++
2
[ 300.533264] Kernel panic - not syncing: Fatal exception
[ 300.534338] Kernel Offset: 0x17c00000 from 0xffffffff81000000 (relocation
range: 0xffffffff80000000-0xffffffffbfffffff)
[ 300.536227] ---[ end Kernel panic - not syncing: Fatal exception ]---
Condition check refactoring from Christoph Hellwig.
- On 2 Jul, 2019, at 11:31, Hannes Reinecke h...@suse.de wrote:
> On 7/1/19 12:10 PM, Marta Rybczynska wrote:
>> Fix a crash with multipath activated. It happens when the ANA log
>> page is larger than the MDTS and, because of that, ANA is disabled.
>> When connecting the
DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 300.533264] Kernel panic - not syncing: Fatal exception
[ 300.534338] Kernel Offset: 0x17c00000 from 0xffffffff81000000 (relocation
range: 0xffffffff80000000-0xffffffffbfffffff)
[ 300.536227] ---[ end Kernel panic - not syncing: Fatal exception ]---
Signed-off-by: Marta Rybczynska
Tested-by: Jean-Baptiste
- On 9 Jul, 2019, at 23:29, Christoph Hellwig h...@lst.de wrote:
> On Sat, Jul 06, 2019 at 01:06:44PM +0300, Max Gurtovoy wrote:
>>> + /* check if multipath is enabled and we have the capability */
>>> + if (!multipath)
>>> + return 0;
>>> + if (!ctrl->subsys || ((ctrl->subs
Fix issues with local_locks documentation:
- fix function names, local_lock.h has local_unlock_irqrestore(),
not local_lock_irqrestore()
- fix mapping table, local_unlock_irqrestore() maps to local_irq_restore(),
not _save()
Signed-off-by: Marta Rybczynska
---
Documentation/locking
> On 02/04/17 03:03 PM, Sinan Kaya wrote:
>> Push the decision all the way to the user. Let them decide whether they
>> want this feature to work on a root port connected port or under the
>> switch.
>
> Yes, I prefer this too. If other folks agree with that I'd be very happy
> to go back to user
> On Wed, Jun 21, 2017 at 05:14:41PM +0200, Marta Rybczynska wrote:
>> I do not see this one in nvme-4.13. Can we get it in, please?
>> We're seeing the races in our setup and this patch fixes it.
>
> I've added it. Note that your mail was whitespace damaged, so I
Assche.
Signed-off-by: Marta Rybczynska
Reviewed-by: Sagi Grimberg
---
Changes from v1:
* remove nvme_rdma_init_sig_count, put all into
nvme_rdma_queue_sig_limit
---
drivers/nvme/host/rdma.c | 21 -
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
Assche.
Signed-off-by: Marta Rybczynska
---
drivers/nvme/host/rdma.c | 31 +--
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 28bd255..80682f7 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
>> -static inline int nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
>> +static inline int nvme_rdma_init_sig_count(int queue_size)
>> {
>> - int sig_limit;
>> -
>> - /*
>> -* We signal completion every queue depth/2 and also handle the
>> -* degenerated case
- Original Message -
> From: "Marta Rybczynska"
> To: "Sagi Grimberg"
> Cc: ax...@fb.com, "Leon Romanovsky" ,
> linux-kernel@vger.kernel.org, linux-n...@lists.infradead.org,
> "keith busch" , "Doug Ledford" ,
> "Bart
> On 6/5/2017 12:45 PM, Marta Rybczynska wrote:
>> This patch improves the way the RDMA IB signalling is done
>> by using atomic operations for the signalling variable. This
>> avoids race conditions on sig_count.
>>
>> The signalling interval changes slightly a
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 92b4e9f11a636d1723cc0866bf8b9111b1e24339
Gitweb:        https://git.kernel.org/tip/92b4e9f11a636d1723cc0866bf8b9111b1e24339
Author:        Marta Rybczynska
AuthorDate:    Sun, 26 Jul 2020 20:54:40 +02:00