On 06/24/13 04:36, James Bottomley wrote:
On Wed, 2013-06-12 at 14:51 +0200, Bart Van Assche wrote:
Now that all scsi_request_fn() callers hold a reference on the
SCSI device that function is invoked for
What makes you think that this is a true statement? The usual caller is
the block layer,
On Sun, Jun 23 2013, Ingo Molnar wrote:
> I'm wondering why this makes such a performance difference.
The key ingredient here is simply not going to sleep, only to get an
IRQ and get woken up very shortly again. NAPI and similar approaches
work great for high IOPS cases, where you maintain a cert
On Sun, Jun 23 2013, Linus Torvalds wrote:
> nothing in common. Networking very very seldom
> has the kind of "submit and wait for immediate result" issues that
> disk reads do.
>
> That said, I dislike the patch intensely. I do not think it's at all a
> good idea to look at "need_resched" to say
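For context, the patch being debated here is roughly of the following shape: spin for a short while checking for completion, give the CPU back as soon as the scheduler wants it, and only then fall back to the usual IRQ-driven sleep. This is a simplified sketch, not the actual patch; struct my_request, io_completed() and the spin budget are invented for illustration, while need_resched(), cpu_relax() and wait_event_interruptible() are real kernel interfaces.

#include <linux/sched.h>
#include <linux/types.h>
#include <linux/wait.h>

struct my_request {                     /* illustrative request type */
    wait_queue_head_t wait;
    bool done;
};

static bool io_completed(struct my_request *rq) /* invented helper */
{
    return rq->done;
}

static int wait_for_io(struct my_request *rq)
{
    unsigned int budget = 64;           /* arbitrary spin budget */

    while (budget--) {
        if (io_completed(rq))
            return 0;                   /* finished without sleeping */
        if (need_resched())
            break;                      /* someone else wants this CPU */
        cpu_relax();
    }
    /* fall back to the normal IRQ-driven sleep path */
    return wait_event_interruptible(rq->wait, io_completed(rq));
}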
On 06/23/13 23:13, Mike Christie wrote:
> On 06/12/2013 08:28 AM, Bart Van Assche wrote:
>> +/*
>> + * It can occur that after fast_io_fail_tmo expired and before
>> + * dev_loss_tmo expired that the SCSI error handler has
>> + * offlined one or more
* Linus Torvalds wrote:
> On Sun, Jun 23, 2013 at 12:09 AM, Ingo Molnar wrote:
> >
> > The spinning approach you add has the disadvantage of actively wasting
> > CPU time, which could be used to run other tasks. In general it's much
> > better to make sure the completion IRQs are rate-limited
* Jens Axboe wrote:
> - With the former note, the app either needs to opt in (and hence
> willingly sacrifice CPU cycles of its scheduling slice) or it needs to
> be nicer in when it gives up and goes back to irq driven IO.
The scheduler could look at sleep latency averages of the task in
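A minimal sketch of that heuristic, assuming the scheduler kept a per-task exponentially weighted average of recent IO completion latencies and compared it against an assumed context-switch cost; none of these names exist in the kernel, they are purely illustrative.

#include <linux/types.h>

#define EXAMPLE_CTX_SWITCH_NS   4000ULL /* assumed switch cost, in ns */

/* Fold a new completion-latency sample into a 1/8-weight moving average. */
static inline void example_update_io_latency(u64 *avg_ns, u64 sample_ns)
{
    *avg_ns = (*avg_ns * 7 + sample_ns) / 8;
}

/* Spin only when the expected wait is on the order of a context switch. */
static inline bool example_should_poll(u64 avg_ns)
{
    return avg_ns < 2 * EXAMPLE_CTX_SWITCH_NS;
}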
* David Ahern wrote:
> On 6/23/13 3:09 AM, Ingo Molnar wrote:
> >If an IO driver is implemented properly then it will batch up requests for
> >the controller, and gets IRQ-notified on a (sub-)batch of buffers
> >completed.
> >
> >If there's any spinning done then it should be NAPI-alike polling:
> @@ -646,14 +703,20 @@ static int scsi_try_target_reset(struct scsi_cmnd *scmd)
> static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
> {
> int rtn;
> - struct scsi_host_template *hostt = scmd->device->host->hostt;
> + struct Scsi_Host *host = scmd->device->host;
> +
On 06/24/13 12:17, Jack Wang wrote:
@@ -646,14 +703,20 @@ static int scsi_try_target_reset(struct scsi_cmnd *scmd)
static int scsi_try_bus_device_reset(struct scsi_cmnd *scmd)
{
int rtn;
- struct scsi_host_template *hostt = scmd->device->host->hostt;
+ struct Scsi_Host *h
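The hunk above is quoted twice and truncated both times, so the rest of the change is not visible here. The general pattern the series is concerned with, keeping the Scsi_Host pinned while one of its error-handler callbacks runs, looks roughly like the sketch below; the function name is invented, while scsi_host_get()/scsi_host_put() and eh_device_reset_handler are real interfaces.

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Illustration only; not the actual scsi_try_bus_device_reset() change. */
static int example_try_device_reset(struct scsi_cmnd *scmd)
{
    struct Scsi_Host *host = scmd->device->host;
    int rtn;

    if (!host->hostt->eh_device_reset_handler)
        return FAILED;

    if (!scsi_host_get(host))           /* host is already going away */
        return FAILED;

    rtn = host->hostt->eh_device_reset_handler(scmd);

    scsi_host_put(host);
    return rtn;
}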
On Mon, 2013-06-24 at 09:13 +0200, Bart Van Assche wrote:
> On 06/24/13 04:36, James Bottomley wrote:
> > On Wed, 2013-06-12 at 14:51 +0200, Bart Van Assche wrote:
> >> Now that all scsi_request_fn() callers hold a reference on the
> >> SCSI device that function is invoked for
> >
> > What makes yo
> I'm not sure it's possible to avoid such a race without introducing
> a new mutex. How about something like the (untested) SCSI core patch
> below, and invoking scsi_block_eh() and scsi_unblock_eh() around any
> reconnect activity not initiated from the SCSI EH thread ?
>
> [PATCH] Add scsi_bloc
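The attached patch is cut off right after its subject line, so only the helper names survive here. A purely speculative sketch of what they could look like, assuming they simply serialize transport reconnect work against the EH thread through a per-host mutex; the eh_block_mutex member is invented for illustration and does not exist in the SCSI core.

#include <linux/mutex.h>
#include <scsi/scsi_host.h>

/* Speculative sketch only; eh_block_mutex is an invented Scsi_Host member. */
void scsi_block_eh(struct Scsi_Host *shost)
{
    mutex_lock(&shost->eh_block_mutex);
}

void scsi_unblock_eh(struct Scsi_Host *shost)
{
    mutex_unlock(&shost->eh_block_mutex);
}

/* The EH thread would take the same mutex around each recovery pass. */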
On Wed, 2013-06-19 at 18:48, James Bottomley wrote:
> On Wed, 2013-06-19 at 13:42 -0400, Ewan D. Milne wrote:
> > From: "Ewan D. Milne"
> >
> > Generate a uevent on the scsi_target object when the following
> > Unit Attention ASC/ASCQ code is received:
> >
> > 3F/0E REPORTED LUNS DATA
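The patch body is truncated above. Mechanically, emitting such an event on a scsi_target comes down to a kobject_uevent_env() call on the target's embedded device, roughly as sketched below; the SDEV_UA string is illustrative and may not match what the series actually sends.

#include <linux/kobject.h>
#include <scsi/scsi_device.h>

/* Illustration of raising a "reported LUNs data changed" uevent. */
static void example_report_lun_change(struct scsi_target *starget)
{
    char *envp[] = { "SDEV_UA=REPORTED_LUNS_DATA_HAS_CHANGED", NULL };

    kobject_uevent_env(&starget->dev.kobj, KOBJ_CHANGE, envp);
}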
On Wed, 2013-06-19 at 18:36, James Bottomley wrote:
> On Wed, 2013-06-19 at 13:42 -0400, Ewan D. Milne wrote:
> > From: "Ewan D. Milne"
> >
> > The names of the struct and some of the functions for scsi_device
> > events are too generic and do not match the comments in the source.
> > Chang
On Mon, 2013-06-24 at 10:11 -0400, Ewan Milne wrote:
> On Wed, 2013-06-19 at 18:48, James Bottomley wrote:
> > On Wed, 2013-06-19 at 13:42 -0400, Ewan D. Milne wrote:
> > > From: "Ewan D. Milne"
> > >
> > > Generate a uevent on the scsi_target object when the following
> > > Unit Attention
On Wed, 2013-06-12 at 14:49 +0200, Bart Van Assche wrote:
> scsi_run_queue() examines all SCSI devices that are present on
> the starved list. Since scsi_run_queue() unlocks the SCSI host
> lock before running a queue a SCSI device can get removed after
> it has been removed from the starved list a
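The description stops mid-sentence, but the race it points at is visible: once the host lock is dropped to run a queue, nothing keeps the sdev that was just taken off the starved list alive. One common way to close such a race, whether or not it matches the actual patch, is to take a reference before dropping the lock, as in this sketch:

#include <linux/blkdev.h>
#include <linux/device.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Sketch only; the real scsi_run_queue() has additional busy checks. */
static void example_run_starved_queues(struct Scsi_Host *shost)
{
    struct scsi_device *sdev;
    unsigned long flags;

    spin_lock_irqsave(shost->host_lock, flags);
    while (!list_empty(&shost->starved_list)) {
        sdev = list_first_entry(&shost->starved_list,
                                struct scsi_device, starved_entry);
        list_del_init(&sdev->starved_entry);
        get_device(&sdev->sdev_gendev);  /* pin sdev across the unlock */
        spin_unlock_irqrestore(shost->host_lock, flags);

        blk_run_queue(sdev->request_queue);

        put_device(&sdev->sdev_gendev);
        spin_lock_irqsave(shost->host_lock, flags);
    }
    spin_unlock_irqrestore(shost->host_lock, flags);
}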
On 06/24/13 15:34, James Bottomley wrote:
On Mon, 2013-06-24 at 09:13 +0200, Bart Van Assche wrote:
On 06/24/13 04:36, James Bottomley wrote:
On Wed, 2013-06-12 at 14:51 +0200, Bart Van Assche wrote:
Now that all scsi_request_fn() callers hold a reference on the
SCSI device that function is in
My static checker complains about a possible array overflow in
__iscsi_conn_send_pdu().
drivers/scsi/libiscsi.c
743 if (data_size) {
744 memcpy(task->data, data, data_size);
745 task->data_count = data_size;
746 } else
747
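The worry is that the memcpy() at line 744 copies data_size bytes into task->data without checking data_size against the size of that buffer. A defensive check of the following shape would address it, written as a continuation of the quoted function; ISCSI_DEF_MAX_RECV_SEG_LEN is assumed here to be the bound on task->data, which may not be what an eventual fix uses.

        /* Reject oversized pdu data instead of overflowing task->data. */
        if (data_size > ISCSI_DEF_MAX_RECV_SEG_LEN)     /* assumed bound */
                return NULL;
        if (data_size) {
                memcpy(task->data, data, data_size);
                task->data_count = data_size;
        } else {
                task->data_count = 0;
        }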
We need to free "payload" before returning.
Signed-off-by: Dan Carpenter
diff --git a/drivers/target/iscsi/iscsi_target.c
b/drivers/target/iscsi/iscsi_target.c
index c1106bb..1e59630 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3426,6 +3426,7 @@
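The hunk is truncated right after its header, so the exact context is not visible. The general shape of this kind of leak fix is simply a kfree() on the early-return error path; everything in the sketch below (the function names, the flow) is a stand-in rather than the real iscsi_target.c code.

#include <linux/slab.h>

static int example_transmit(void *buf, size_t len)
{
    return 0;                   /* stand-in for the real transmit routine */
}

/* Generic shape of the leak fix; not the actual hunk. */
static int example_send_with_payload(size_t len)
{
    void *payload;
    int ret;

    payload = kzalloc(len, GFP_KERNEL);
    if (!payload)
        return -ENOMEM;

    ret = example_transmit(payload, len);
    if (ret < 0) {
        kfree(payload);         /* the previously missing free */
        return ret;
    }

    kfree(payload);             /* the normal path releases it as well */
    return 0;
}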
On 06/24/13 15:48, Jack Wang wrote:
I'm not sure it's possible to avoid such a race without introducing
a new mutex. How about something like the (untested) SCSI core patch
below, and invoking scsi_block_eh() and scsi_unblock_eh() around any
reconnect activity not initiated from the SCSI EH threa
On 06/24/2013 05:50 PM, Bart Van Assche wrote:
> On 06/24/13 15:48, Jack Wang wrote:
>>> I'm not sure it's possible to avoid such a race without introducing
>>> a new mutex. How about something like the (untested) SCSI core patch
>>> below, and invoking scsi_block_eh() and scsi_unblock_eh() around
On 06/24/13 17:38, James Bottomley wrote:
I really don't like this because it's shuffling potentially fragile
lifetime rules since you now have to have the sdev deleted from the
starved list before final put. That becomes an unstated assumption
within the code.
The theory is that the starved li
On Mon, 2013-06-24 at 18:16 +0200, Bart Van Assche wrote:
> On 06/24/13 17:38, James Bottomley wrote:
> > I really don't like this because it's shuffling potentially fragile
> > lifetime rules since you now have to have the sdev deleted from the
> > starved list before final put. That becomes an u
On 06/24/2013 10:38 AM, James Bottomley wrote:
> On Wed, 2013-06-12 at 14:49 +0200, Bart Van Assche wrote:
>> scsi_run_queue() examines all SCSI devices that are present on
>> the starved list. Since scsi_run_queue() unlocks the SCSI host
>> lock before running a queue a SCSI device can get removed
On Wed, 2013-06-12 at 14:52 +0200, Bart Van Assche wrote:
> SCSI devices are added to the shost->__devices list from inside
> scsi_alloc_sdev(). If something goes wrong during LUN scanning,
> e.g. a transport layer failure occurs, then __scsi_remove_device()
> can get invoked by the LUN scanning co
On Mon, 2013-06-24 at 12:24 -0500, Mike Christie wrote:
> On 06/24/2013 10:38 AM, James Bottomley wrote:
> > On Wed, 2013-06-12 at 14:49 +0200, Bart Van Assche wrote:
> >> scsi_run_queue() examines all SCSI devices that are present on
> >> the starved list. Since scsi_run_queue() unlocks the SCSI h
On Wed, 2013-06-12 at 14:53 +0200, Bart Van Assche wrote:
> Changing the state of a SCSI device via sysfs into "cancel" or
> "deleted" prevents removal of these devices by scsi_remove_host().
> Hence do not allow this. Also, introduce the symbolic name
> INVALID_SDEV_STATE, representing a value dif
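The description is cut off, but the visible gist is: refuse "cancel" and "deleted" when they arrive through the sysfs state attribute, and introduce INVALID_SDEV_STATE (whose exact role is truncated above). The restriction itself boils down to a small predicate like the illustrative one below; the real patch presumably edits the state store function in scsi_sysfs.c.

#include <linux/types.h>
#include <scsi/scsi_device.h>

/*
 * Illustrative check: SDEV_CANCEL and SDEV_DEL are reserved for the
 * internal removal paths and should not be reachable via sysfs.
 */
static bool example_sysfs_state_allowed(enum scsi_device_state state)
{
    return state != SDEV_CANCEL && state != SDEV_DEL;
}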
On Wed, 2013-06-12 at 14:55 +0200, Bart Van Assche wrote:
> A SCSI LLD may start cleaning up host resources as soon as
> scsi_remove_host() returns. These host resources may be needed by
> the LLD in an implementation of one of the eh_* functions. So if
> one of the eh_* functions is in progress wh
On 06/24/2013 02:19 PM, James Bottomley wrote:
> On Wed, 2013-06-12 at 14:55 +0200, Bart Van Assche wrote:
>> A SCSI LLD may start cleaning up host resources as soon as
>> scsi_remove_host() returns. These host resources may be needed by
>> the LLD in an implementation of one of the eh_* functions.
This patchset ports buslogic driver to 64-bit.
The current buslogic driver is composed of two components: the SCCB manager,
which communicates with the adapter to execute SCSI commands (contained in
FlashPoint.c), and the Linux driver part that interfaces with the rest of the
kernel (contained in BusLogic.c). SCCB ma
On Mon, 2013-06-24 at 14:25 -0600, Khalid Aziz wrote:
> This patchset ports buslogic driver to 64-bit.
OK, thought long and hard about this. I'll take it on the proviso that
you're the new buslogic maintainer. The reason being that without
someone to sort through any bug reports, the only option
On Mon, Jun 24, 2013 at 02:26:00PM -0600, Khalid Aziz wrote:
> @@ -821,7 +821,7 @@ struct blogic_ccb {
> unsigned char cdblen; /* Byte 2 */
> unsigned char sense_datalen;/* Byte 3 */
> u32 datalen;
On 06/24/2013 03:07 PM, Dave Jones wrote:
On Mon, Jun 24, 2013 at 02:26:00PM -0600, Khalid Aziz wrote:
> @@ -821,7 +821,7 @@ struct blogic_ccb {
> unsigned char cdblen; /* Byte 2 */
> unsigned char sense_datalen;/* Byte 3 */
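One thing a 64-bit port of a driver like this has to get right is that any structure the adapter itself reads must use fixed-width, packed types rather than longs or pointers whose size differs between 32- and 64-bit builds, which is what the byte-numbered fields quoted above are about. A generic illustration, not the actual struct blogic_ccb layout:

#include <linux/types.h>

/* Generic hardware-visible descriptor; not the real blogic_ccb. */
struct example_hw_ccb {
    u8  opcode;                 /* Byte 0 */
    u8  datadir;                /* Byte 1 */
    u8  cdblen;                 /* Byte 2 */
    u8  sense_datalen;          /* Byte 3 */
    u32 datalen;                /* Bytes 4-7: fixed 32-bit length */
    u32 data;                   /* Bytes 8-11: 32-bit bus address */
} __packed;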
On 06/24/2013 02:55 PM, James Bottomley wrote:
On Mon, 2013-06-24 at 14:25 -0600, Khalid Aziz wrote:
This patchset ports buslogic driver to 64-bit.
OK, thought long and hard about this. I'll take it on the proviso that
you're the new buslogic maintainer. The reason being that without
someone
On Mon, 2013-06-24 at 15:17 -0600, Khalid Aziz wrote:
> On 06/24/2013 02:55 PM, James Bottomley wrote:
> > On Mon, 2013-06-24 at 14:25 -0600, Khalid Aziz wrote:
> >> This patchset ports buslogic driver to 64-bit.
> >
> > OK, thought long and hard about this. I'll take it on the proviso that
> > yo
On Mon, 2013-06-24 at 15:04 -0500, Mike Christie wrote:
> On 06/24/2013 02:19 PM, James Bottomley wrote:
> > On Wed, 2013-06-12 at 14:55 +0200, Bart Van Assche wrote:
> >> A SCSI LLD may start cleaning up host resources as soon as
> >> scsi_remove_host() returns. These host resources may be needed
On Sun, Jun 23, 2013 at 09:37:39PM +0900, Akinobu Mita wrote:
> The only difference between sg_pcopy_{from,to}_buffer() and
> sg_copy_{from,to}_buffer() is an additional argument that specifies
> the number of bytes to skip the SG list before copying.
>
> Signed-off-by: Akinobu Mita
> Cc: Tejun H
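As described, the only difference from sg_copy_{from,to}_buffer() is the extra argument giving a byte offset to skip into the SG list before copying starts. A typical call would look like the sketch below, using the signature as posted in the series; the prototype that finally lands may differ.

#include <linux/scatterlist.h>

/* Copy 'len' bytes from 'buf' into the SG list, starting 'skip' bytes in. */
static size_t example_sg_write_at(struct scatterlist *sgl, unsigned int nents,
                                  void *buf, size_t len, off_t skip)
{
    return sg_pcopy_from_buffer(sgl, nents, buf, len, skip);
}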
On 13-06-23 02:37 PM, Akinobu Mita wrote:
do_device_access() is a function that abstracts copying SG list from/to
ramdisk storage (fake_storep).
It must deal with the ranges exceeding actual fake_storep size, because
such ranges are valid if virtual_gb is set greater than zero, and they
should b
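The wrap-around requirement can be sketched independently of the sg helpers: when virtual_gb makes the advertised capacity larger than the backing store, an access that runs past the end of fake_storep has to be split into a piece up to the end of the store and a piece that wraps back to offset zero. The names below (store, store_size, the function itself) are stand-ins, not the scsi_debug code.

#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative wrap-around copy; assumes offset < store_size. */
static void example_fake_store_access(unsigned char *store, size_t store_size,
                                      size_t offset, size_t len,
                                      unsigned char *buf, bool is_write)
{
    size_t first = min(len, store_size - offset);

    if (is_write) {
        memcpy(store + offset, buf, first);
        if (len > first)                /* remainder wraps to the start */
            memcpy(store, buf + first, len - first);
    } else {
        memcpy(buf, store + offset, first);
        if (len > first)
            memcpy(buf + first, store, len - first);
    }
}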
On Mon, Jun 24, 2013 at 09:17:18AM +0200, Jens Axboe wrote:
> On Sun, Jun 23 2013, Linus Torvalds wrote:
> >
> > You could try to do that either *in* the idle thread (which would take
> > the context switch overhead - maybe negating some of the advantages),
> > or alternatively hook into the sched
On 06/24/2013 05:27 PM, James Bottomley wrote:
>>> However, what's the reasoning behind wanting to do this? In theory all
>>> necessary resources for the eh thread should only be freed in the
>>> release callback. That means they aren't freed until all error recovery
>>> completes.
>>
>> I think
On Jun 24, 2013, at 9:26 PM, Mike Christie wrote:
> On 06/24/2013 05:27 PM, James Bottomley wrote:
However, what's the reasoning behind wanting to do this? In theory all
necessary resources for the eh thread should only be freed in the
release callback. That means they aren't fre
On Mon, Jun 24, 2013 at 09:15:45AM +0200, Jens Axboe wrote:
> Willy, I think the general design is fine, hooking in via the bdi is the
> only way to get back to the right place from where you need to sleep.
> Some thoughts:
>
> - This should be hooked in via blk-iopoll, both of them should call in
On Mon, Jun 24, 2013 at 08:11:02PM -0400, Steven Rostedt wrote:
> What about hooking into the idle_balance code? That happens if we are
> about to go to idle but before the full schedule switch to the idle
> task.
>
>
> In __schedule(void):
>
> if (unlikely(!rq->nr_running))
>
On Mon, Jun 24, 2013 at 10:07:51AM +0200, Ingo Molnar wrote:
> I'm wondering, how will this scheme work if the IO completion latency is a
> lot more than the 5 usecs in the testcase? What if it takes 20 usecs or
> 100 usecs or more?
There's clearly a threshold at which it stops making sense, and
On Mon, 2013-06-24 at 18:46 +0300, Dan Carpenter wrote:
> We need to free "payload" before returning.
>
> Signed-off-by: Dan Carpenter
>
> diff --git a/drivers/target/iscsi/iscsi_target.c
> b/drivers/target/iscsi/iscsi_target.c
> index c1106bb..1e59630 100644
> --- a/drivers/target/iscsi/iscsi_