On 04/09/2012 22:11, Nicholas A. Bellinger wrote:
>>> As tgt->tgt_lock is taken in virtscsi_queuecommand_multi() before the
>>> atomic_inc_return(tgt->reqs) check, it seems like using atomic_dec() w/o
>>> smp_mb__after_atomic_dec or tgt_lock access here is not using atomic.h
>>> accessors properly.
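(For context, the completion path under discussion looks roughly like the sketch below. This is a reconstruction from the hunks quoted in this thread, not the actual patch: the vscsi->tgt[] lookup and the tgt->reqs counter are assumptions based on the fragments above. The open question is whether the plain atomic_dec() needs a barrier such as smp_mb__after_atomic_dec(), or the tgt_lock, to pair with the submission-side check.)

/* Sketch of the completion side; not the actual patch. */
static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
{
	struct virtio_scsi_cmd *cmd = buf;
	struct scsi_cmnd *sc = cmd->sc;
	struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
	struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];

	/* ... translate resp into sc->result and complete sc as before ... */

	/*
	 * Drop the per-target count that the submission path incremented
	 * under tgt->tgt_lock.  A bare atomic_dec() gives no ordering
	 * against earlier stores; adding smp_mb__after_atomic_dec() (or
	 * taking tgt_lock) is exactly what the comment above asks about.
	 */
	atomic_dec(&tgt->reqs);
	smp_mb__after_atomic_dec();
}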
On Tue, 2012-09-04 at 08:46 +0200, Paolo Bonzini wrote:
> On 04/09/2012 04:21, Nicholas A. Bellinger wrote:
> >> @@ -112,6 +118,9 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
> >> struct virtio_scsi_cmd *cmd = buf;
> >> struct scsi_cmnd *sc = cmd->sc;
On Tue, Sep 04, 2012 at 04:55:56PM +0200, Paolo Bonzini wrote:
> On 04/09/2012 16:47, Michael S. Tsirkin wrote:
> >> > static void virtscsi_init_vq(struct virtio_scsi_vq *virtscsi_vq,
> >> > - struct virtqueue *vq)
> >> > + struct virtqueue *vq, bool affinity)
On 04/09/2012 16:47, Michael S. Tsirkin wrote:
>> > static void virtscsi_init_vq(struct virtio_scsi_vq *virtscsi_vq,
>> > - struct virtqueue *vq)
>> > + struct virtqueue *vq, bool affinity)
>> > {
>> > spin_lock_init(&virtscsi_vq->vq_lock);
>> >
On Tue, Aug 28, 2012 at 01:54:17PM +0200, Paolo Bonzini wrote:
> @@ -575,15 +630,19 @@ static struct scsi_host_template virtscsi_host_template = {
> &__val, sizeof(__val)); \
> })
>
> +
Please don't add empty lines.
> static void virtscsi_init_vq(struct virtio_scsi_vq *virtscsi_vq,
On Tue, Sep 04, 2012 at 04:30:35PM +0200, Paolo Bonzini wrote:
> On 04/09/2012 16:21, Michael S. Tsirkin wrote:
> > > One reason is that, even though in practice I expect roughly the same
> > > number of targets and VCPUs, hotplug means the number of targets is
> > > difficult to predict and is usually fixed to 256.
On 04/09/2012 16:21, Michael S. Tsirkin wrote:
> > One reason is that, even though in practice I expect roughly the same
> > number of targets and VCPUs, hotplug means the number of targets is
> > difficult to predict and is usually fixed to 256.
> >
> > The other reason is that per-target vq
On 04/09/2012 16:19, Michael S. Tsirkin wrote:
> > > Also - some kind of comment explaining why a similar race can not happen
> > > with this lock in place would be nice: I see why this specific race can
> > > not trigger but since lock is dropped later before you submit command, I
> > > have
On Tue, Sep 04, 2012 at 03:49:42PM +0200, Paolo Bonzini wrote:
> On 04/09/2012 14:48, Michael S. Tsirkin wrote:
> >> > This patch adds queue steering to virtio-scsi. When a target is sent
> >> > multiple requests, we always drive them to the same queue so that FIFO
> >> > processing order is kept.
On Tue, Sep 04, 2012 at 03:45:57PM +0200, Paolo Bonzini wrote:
> > Also - some kind of comment explaining why a similar race can not happen
> > with this lock in place would be nice: I see why this specific race can
> > not trigger but since lock is dropped later before you submit command, I
> > ha
On 04/09/2012 14:48, Michael S. Tsirkin wrote:
>> > This patch adds queue steering to virtio-scsi. When a target is sent
>> > multiple requests, we always drive them to the same queue so that FIFO
>> > processing order is kept. However, if a target was idle, we can choose
>> > a queue arbitrarily.
On 04/09/2012 15:35, Michael S. Tsirkin wrote:
> I see. I guess you can rewrite this as:
> atomic_inc
> if (atomic_read() == 1)
> which is a bit cheaper, and make the fact
> that you do not need increment and return to be atomic,
> explicit.
It seems more complicated to me for hardly any reason.
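(For readers following along, the two forms being compared are roughly the following. This is a sketch only: the helper names are hypothetical, and the tgt->reqs / tgt_lock names are taken from the snippets quoted in this thread, not from the final driver.)

/* Form in the patch: one atomic read-modify-write whose return value
 * says whether the target was idle (count went 0 -> 1). */
static bool virtscsi_target_was_idle(struct virtio_scsi_target_state *tgt)
{
	return atomic_inc_return(&tgt->reqs) == 1;
}

/* Suggested rewrite: increment and test separately.  The caller holds
 * tgt->tgt_lock, so concurrent submitters are serialized; the point of
 * the suggestion is that the increment and the check do not have to be
 * one atomic operation.  The completion path's atomic_dec() still runs
 * outside the lock, though, so the two forms are not obviously
 * interchangeable, which may be part of why the reply above does not
 * find the rewrite simpler. */
static bool virtscsi_target_was_idle_split(struct virtio_scsi_target_state *tgt)
{
	atomic_inc(&tgt->reqs);
	return atomic_read(&tgt->reqs) == 1;
}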
On Tue, Sep 04, 2012 at 01:18:31PM +0200, Paolo Bonzini wrote:
> On 04/09/2012 13:09, Michael S. Tsirkin wrote:
> >> > queuecommand on CPU #0          queuecommand #2 on CPU #1
> >> > ------------------------------------------------------------
> >> > atomic_inc_return(...) == 1
> >
On Tue, Aug 28, 2012 at 01:54:17PM +0200, Paolo Bonzini wrote:
> This patch adds queue steering to virtio-scsi. When a target is sent
> multiple requests, we always drive them to the same queue so that FIFO
> processing order is kept. However, if a target was idle, we can choose
> a queue arbitrarily.
On 04/09/2012 13:09, Michael S. Tsirkin wrote:
>> > queuecommand on CPU #0          queuecommand #2 on CPU #1
>> > ------------------------------------------------------------
>> > atomic_inc_return(...) == 1
>> >                                 atomic_inc_return(...) == 2
>> >
On Tue, Sep 04, 2012 at 12:25:03PM +0200, Paolo Bonzini wrote:
> On 04/09/2012 10:46, Michael S. Tsirkin wrote:
> +static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
> + struct scsi_cmnd *sc)
> +{
> + struct virtio_scsi *vscsi = shost_priv(sh);
On 04/09/2012 10:46, Michael S. Tsirkin wrote:
+static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
+ struct scsi_cmnd *sc)
+{
+ struct virtio_scsi *vscsi = shost_priv(sh);
+ struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];
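(Putting the quoted fragments together, the submission path under discussion looks roughly like this. It is a hedged sketch, not the actual patch: the struct layout, the req_vqs/num_queues fields and the call into a common virtscsi_queuecommand() helper are reconstructed from the snippets in this thread, and the CPU-to-queue mapping is only one plausible policy. It also shows how holding tgt_lock across the check and the req_vq assignment closes the two-CPU race diagrammed earlier in the thread.)

/* Assumed per-target state; names taken from the quoted snippets. */
struct virtio_scsi_target_state {
	spinlock_t tgt_lock;		/* serializes queue selection */
	atomic_t reqs;			/* requests outstanding to this target */
	struct virtio_scsi_vq *req_vq;	/* queue this target currently uses */
};

static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
				       struct scsi_cmnd *sc)
{
	struct virtio_scsi *vscsi = shost_priv(sh);
	struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];
	struct virtio_scsi_vq *req_vq;
	unsigned long flags;

	spin_lock_irqsave(&tgt->tgt_lock, flags);
	if (atomic_inc_return(&tgt->reqs) == 1) {
		/*
		 * Target was idle: steer it to a queue derived from the
		 * current CPU, so an idle target follows the submitting VCPU.
		 */
		tgt->req_vq = &vscsi->req_vqs[smp_processor_id() %
					      vscsi->num_queues];
	}
	/*
	 * Otherwise keep using tgt->req_vq so FIFO order is preserved.
	 * Because the counter check and the req_vq read/write both happen
	 * under tgt_lock, a second queuecommand that observes reqs == 2
	 * cannot read req_vq before the first one has written it; this is
	 * the race from the CPU #0 / CPU #1 diagram above.
	 */
	req_vq = tgt->req_vq;
	spin_unlock_irqrestore(&tgt->tgt_lock, flags);

	return virtscsi_queuecommand(vscsi, req_vq, sc);
}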
On Tue, Sep 04, 2012 at 08:46:12AM +0200, Paolo Bonzini wrote:
> On 04/09/2012 04:21, Nicholas A. Bellinger wrote:
> >> @@ -112,6 +118,9 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
> >> struct virtio_scsi_cmd *cmd = buf;
> >> struct scsi_cmnd *sc = cmd->sc;
On 04/09/2012 04:21, Nicholas A. Bellinger wrote:
>> @@ -112,6 +118,9 @@ static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
>> struct virtio_scsi_cmd *cmd = buf;
>> struct scsi_cmnd *sc = cmd->sc;
>> struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
>>
On Tue, 2012-08-28 at 13:54 +0200, Paolo Bonzini wrote:
> This patch adds queue steering to virtio-scsi. When a target is sent
> multiple requests, we always drive them to the same queue so that FIFO
> processing order is kept. However, if a target was idle, we can choose
> a queue arbitrarily.
This patch adds queue steering to virtio-scsi. When a target is sent
multiple requests, we always drive them to the same queue so that FIFO
processing order is kept. However, if a target was idle, we can choose
a queue arbitrarily. In this case the queue is chosen according to the
current VCPU,