On 3/3/2014 6:41 AM, Mike Christie wrote:
On 02/27/2014 05:13 AM, Sagi Grimberg wrote:
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index 4046241..a58a6bb 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -395,6 +395,10 @@ static int iscsi_prep_scsi_cmd_pdu(
On 3/3/2014 6:44 AM, Mike Christie wrote:
On 02/27/2014 05:13 AM, Sagi Grimberg wrote:
diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c
b/drivers/infiniband/ulp/iser/iser_initiator.c
index 58e14c7..7fd95fe 100644
--- a/drivers/infiniband/ulp/iser/iser_initiator.c
+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
On Monday, February 24, 2014 9:02 AM Alexander Gordeev
wrote:
> As a result of the deprecation of the MSI-X/MSI enablement functions
> pci_enable_msix() and pci_enable_msi_block(), all drivers
> using these two interfaces need to be updated to use the
> new pci_enable_msi_range() or pci_enable_msi_exact()
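For illustration, here is a minimal sketch of the kind of conversion the
series asks for. pci_enable_msi_range() is the helper named above; the
function and variable names around it (my_setup_msi, maxvec) are
hypothetical and not taken from any posted patch.

#include <linux/pci.h>

/*
 * Hypothetical example only: a driver that used to loop around
 * pci_enable_msi_block() now requests a vector range in one call.
 */
static int my_setup_msi(struct pci_dev *pdev, int maxvec)
{
	int nvec;

	/* Grants between 1 and maxvec vectors, or returns a negative errno. */
	nvec = pci_enable_msi_range(pdev, 1, maxvec);
	if (nvec < 0)
		return nvec;	/* caller can fall back to legacy INTx */

	return nvec;		/* number of MSI vectors actually enabled */
}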
Hello,
I am sending this email here as I think it is probably the most
appropriate place. If not, please disregard it, and my apologies.
Recently I bought a new Dell PowerEdge VRTX enclosure
(http://www.dell.com/us/business/p/poweredge-vrtx/pd) that houses a Dell
PowerEdge M520 blade server
On Mon, 3 Mar 2014 14:53:36 +,
Istvan Hubay Cebrian wrote:
> lspci -nn (on Dell PowerEdge M520 Blade)
> ...
> 01:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID
> SAS 2008 [Falcon] [1000:0073] (rev 03)
This one is supported by the megaraid_sas driver, provided it's rec
I can't tell you exactly which one is causing me problems but I can
assume it is the unsupported one.
Basically the M520 Blades also have internal storage. I can see and
mount that storage.
[root@hostname ~]# lsmod | grep raid
megaraid_sas 87177 2
[root@hostname ~]# modinfo megaraid_sas
On Mon, 3 Mar 2014 15:22:13 +,
Istvan Hubay Cebrian wrote:
> I can't however see the Shared storage provided by the VRTX Enclosure.
> I would assume one controller is for the internal storage of the M520
> Blade and the other controller for the Shared storage.
Yes, that's probably the case
So I do have ESXi on one of the blade servers. Curiously "lspci -nn"
output is very different. I for example don't see two RAID controllers
but one:
...
00:10.0 SCSI storage controller [0100]: LSI Logic / Symbios Logic
53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI [1000:0030] (rev 01)
...
Also, lsm
On Mon, 3 Mar 2014 16:57:44 +,
Istvan Hubay Cebrian wrote:
> So I do have ESXi on one of the blade servers. Curiously "lspci -nn"
> output is very different. I for example don't see two RAID controllers
> but one:
>
> ...
> 00:10.0 SCSI storage controller [0100]: LSI Logic / Symbios Logic
You are absolutely correct. I was on the wrong machine entirely. Sorry
(this is what lack of sleep causes). So here is what I could get from
the ESXi machine:
--
00:00:1f.2 SATA controller Mass storage controller: Intel Corporation
Patsburg 6 Port SATA AHCI Controller [vmhba0]
Class 0106: 8086:1d
On Mon, 3 Mar 2014 17:56:04 +,
Istvan Hubay Cebrian wrote:
> ~ # esxcli system module get -m megaraid_sas
>Module: megaraid_sas
>Module File: /usr/lib/vmware/vmkmod/megaraid_sas
>License: GPL
>Version: Version 06.801.52.00, Build: 472560, Interface: 9.2 Built
> on: Feb 7 20
Here you go (ran modinfo on my Fedora machine):
[icebrian@laptop megaraid]$ modinfo megaraid_sas
filename:       /lib/modules/3.13.5-200.fc20.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko
description:    LSI MegaRAID SAS Driver
author:         megaraidli...@lsi.com
version:        06.700.06.00-rc1
Sorry, I just realized this is not what you wanted. That was for my
own megaraid module on my own machine. There apparently is no
megaraid_sas.ko file on the ESXi machine, only a "megaraid_sas", which I
can't seem to run modinfo on.
--
Istvan Hubay Cebrian
http://icebrian.net
On 3 March 2014 18:14,
Btw, one other thing: how can we be sure it's the megaraid_sas
module being used to provide support for the
> 00:08:00.0 RAID bus controller Mass storage controller: LSI Logic /
> Symbios Logic Shared PERC 8 Mini [vmhba2]
> Class 0104: 1000:002f
One thing I noticed is the other controller:
> 0
On Mon, 3 Mar 2014 18:44:50 +, you wrote:
> Specifically stated "MegaRAID" whilst the Shared PERC 8 does not. In
> the module listing I can't clearly identify any module that would be
> responsible for providing support to the shared PERC 8 controller
Well I went through the module list an
Istvan/Emmanuel,
>> 08:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID
SAS 2208 IOV [Thunderbolt] [1000:002f] (rev 05)
I will be sending a driver patch for megaraid_sas to support the Dell
PowerEdge VRTX/Shared PERC8 device later this week. This will allow
you to run Linux with the megaraid_sas in Virtual Function (VF) mode on
the VRTX blades.
> I will be sending a driver patch for megaraid_sas to support the Dell
> PowerEdge VRTX/Shared PERC8 device later this week. This will allow
> you to run Linux with the megaraid_sas in Virtual Function (VF) mode
> on the VRTX blades.
That is great news, Adam! Many thanks for that. Might I just ask
From: Alan Stern
Evidently some wacky USB-ATA bridges don't recognize the SYNCHRONIZE
CACHE command, as shown in this email thread:
http://marc.info/?t=13897835622&r=1&w=2
The fact that we can't tell them to drain their caches shouldn't
prevent the system from going into suspend. T
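A rough sketch of the idea being described (this is not Alan's actual
patch; sd_sync_cache_stub() and the surrounding names are placeholders):
if the flush fails because the device simply does not implement the
command, treat that as harmless for the purpose of suspending.

#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_eh.h>

/* Hypothetical stand-in for sd's cache-flush helper. */
static int sd_sync_cache_stub(struct scsi_device *sdev,
			      struct scsi_sense_hdr *sshdr);

static int example_flush_before_suspend(struct scsi_device *sdev)
{
	struct scsi_sense_hdr sshdr;
	int ret;

	ret = sd_sync_cache_stub(sdev, &sshdr);
	/* An unsupported SYNCHRONIZE CACHE means there is nothing to flush,
	 * so it should not block the power transition. */
	if (ret && scsi_sense_valid(&sshdr) &&
	    sshdr.sense_key == ILLEGAL_REQUEST)
		ret = 0;

	return ret;
}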
On Mon, 17 Feb 2014, James Bottomley wrote:
> > You can tell by the way the stack trace doesn't mention USB at all. In
> > fact, this is a known SCSI problem. It has been fixed by these two
> > patches:
> >
> > http://marc.info/?l=linux-scsi&m=139031645920152&w=2
> > http://marc.info/
Dear friend,
I am Dr. Zuliu Hu, Independent Non-Executive Director of Hang Seng Bank
Ltd, Hong Kong. I have a business transaction of $54.5 million US dollars,
and I will offer you 30% compensation for your assistance in this
transaction. If interested, contact me
From: Nicholas Bellinger
This patch fixes the incorrect setting of ->post_send_buf_count
related to RDMA WRITEs + READs where isert_rdma_rw->send_wr_num
was not being taken into account.
This includes incrementing ->post_send_buf_count within
isert_put_datain() + isert_get_dataout(), decrementin
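To illustrate the accounting being described (only post_send_buf_count
and send_wr_num come from the patch description; the struct and helper
names below are invented): the outstanding-send counter has to move by
the full length of the posted WR chain, in both directions.

#include <linux/atomic.h>
#include <rdma/ib_verbs.h>

struct example_rdma_wr {
	struct ib_send_wr *send_wr;	/* head of the WRITE/READ chain */
	u32 send_wr_num;		/* number of WRs in the chain */
};

static int example_post_rdma(struct ib_qp *qp, atomic_t *post_send_buf_count,
			     struct example_rdma_wr *wr)
{
	struct ib_send_wr *bad_wr;
	int ret;

	/* Account for every WR in the chain, not just one. */
	atomic_add(wr->send_wr_num, post_send_buf_count);

	ret = ib_post_send(qp, wr->send_wr, &bad_wr);
	if (ret)
		atomic_sub(wr->send_wr_num, post_send_buf_count);

	/* The TX completion path subtracts send_wr_num the same way. */
	return ret;
}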
From: Nicholas Bellinger
This patch fixes a bug in iscsit_get_tpg_from_np() where the
tpg->tpg_state sanity check was looking for TPG_STATE_FREE,
instead of != TPG_STATE_ACTIVE.
The latter is expected during a normal TPG shutdown once the
tpg_state goes into TPG_STATE_INACTIVE in order to reject
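Paraphrasing the check being described (a sketch under assumptions, not
the literal hunk; the enum below only mirrors the state names quoted
above): the lookup should reject any TPG whose state is anything other
than ACTIVE, rather than only rejecting TPG_STATE_FREE.

#include <linux/types.h>

enum example_tpg_state {
	TPG_STATE_FREE,
	TPG_STATE_ACTIVE,
	TPG_STATE_INACTIVE,
};

static bool example_tpg_usable(enum example_tpg_state state)
{
	/* Before: only "state == TPG_STATE_FREE" was rejected, so a TPG
	 * that had gone INACTIVE during shutdown still matched.
	 * After: anything that is not ACTIVE is rejected. */
	return state == TPG_STATE_ACTIVE;
}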
From: Nicholas Bellinger
There are a handful of places in iser-target code that use
list_empty() on cmd->i_conn_node to check whether a cmd is still
on the per-connection list.
This patch changes all uses of list_del -> list_del_init in order
to ensure that list_empty() returns true once the cmd has been removed
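A generic illustration of the pattern (not the iser-target code itself),
showing why list_del_init() is the right call when list_empty() is later
used on the same node; example_cmd is a made-up type reusing the field
name from the text.

#include <linux/list.h>

struct example_cmd {
	struct list_head i_conn_node;
};

static void example_remove_cmd(struct example_cmd *cmd)
{
	/* list_del() poisons the entry, so a later list_empty() check on it
	 * would not report "already removed".  list_del_init() leaves the
	 * node self-linked, so this guard stays safe to repeat. */
	if (!list_empty(&cmd->i_conn_node))
		list_del_init(&cmd->i_conn_node);
}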
From: Nicholas Bellinger
Hi Or & Sagi,
This series addresses a number of active I/O shutdown related issues
in iser-target code that have come up recently during stress testing.
Note there is still a separate iser-target network portal shutdown
bug being tracked down, but this series addresses
From: Nicholas Bellinger
This patch addresses a number of active I/O shutdown issues
related to isert_cmd descriptors being leaked that are part
of a completion interrupt coalescing batch.
This includes adding logic in isert_cq_tx_comp_err() to
drain any associated tx_desc->comp_llnode_batch, as
From: Nicholas Bellinger
This patch addresses a couple of different hung shutdown issues
related to wait_event() + isert_conn->state. First, it changes
isert_conn->conn_wait + isert_conn->conn_wait_comp_err from
waitqueues to completions, and sets ISER_CONN_TERMINATING from
within isert_disconnec
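A generic before/after illustration of the waitqueue-to-completion
conversion being described (the names are placeholders, not the actual
isert_conn fields): the waiter blocks on a struct completion and the
disconnect path signals it with complete().

#include <linux/completion.h>

struct example_conn {
	struct completion conn_wait;	/* was: wait_queue_head_t + state check */
};

static void example_conn_init(struct example_conn *conn)
{
	init_completion(&conn->conn_wait);
}

/* Shutdown path: block until the disconnect side signals completion. */
static void example_wait_conn(struct example_conn *conn)
{
	wait_for_completion(&conn->conn_wait);
}

/* Disconnect path: wake the waiter exactly once. */
static void example_disconnect(struct example_conn *conn)
{
	complete(&conn->conn_wait);
}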
From: Nicholas Bellinger
This patch changes IB_WR_FAST_REG_MR + IB_WR_LOCAL_INV related
work requests to include a ISER_FRWR_LI_WRID value in order to
signal isert_cq_tx_work() that these requests should be ignored.
This is necessary because even though IB_SEND_SIGNALED is not
set for either work request
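A sketch of the mechanism described above (the constant's value and the
helper names are assumptions, not the actual patch): tag the
FAST_REG_MR / LOCAL_INV work requests with a sentinel wr_id so the TX
completion handler can skip them when a QP failure flushes them back
with error status.

#include <rdma/ib_verbs.h>

#define EXAMPLE_FRWR_LI_WRID	0xffffffffffffffffULL	/* sentinel wr_id */

static void example_tag_reg_wrs(struct ib_send_wr *fr_wr,
				struct ib_send_wr *inv_wr)
{
	fr_wr->wr_id  = EXAMPLE_FRWR_LI_WRID;
	inv_wr->wr_id = EXAMPLE_FRWR_LI_WRID;
}

static void example_handle_tx_wc(struct ib_wc *wc)
{
	/* Flushed FRWR/LOCAL_INV requests carry no tx_desc to complete. */
	if (wc->wr_id == EXAMPLE_FRWR_LI_WRID)
		return;

	/* ... normal tx_desc completion / error handling ... */
}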
On 04/03/2014 02:01, Nicholas A. Bellinger wrote:
This is necessary because even though IB_SEND_SIGNALED is
not set for RDMA WRITEs + READs, during a QP failure event
the work requests will be returned with exception status
from the TX completion queue.
Impossible... for rdma reads we must ask