To maintain backward compatibility we would have to add a config option
here, unfortunately. I do like the idea, however. We can make VirtIO
SCSI the default and keep VirtIO-blk as an alternative for existing
installations.
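
For reference, the difference on the libvirt side is small. A hypothetical
snippet contrasting the two disk definitions (file paths and target names
here are made up for illustration, not taken from CloudStack's generated XML):

```xml
<!-- Illustrative libvirt disk definitions; paths and device names are examples. -->

<!-- Current behaviour: virtio-blk, shows up as /dev/vdX in the guest -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/rootdisk.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- Proposed default: virtio-scsi, shows up as /dev/sdX and can pass DISCARD -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/rootdisk.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

The discard='unmap' attribute on the driver element is what lets a guest
fstrim propagate down to the QCOW2/RBD image.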

On Mon, Jan 23, 2017 at 8:05 AM, Wido den Hollander <w...@widodh.nl> wrote:

>
> > On 21 January 2017 at 23:50, Wido den Hollander <w...@widodh.nl> wrote:
> >
> >
> >
> >
> > > On 21 Jan 2017, at 22:59, Syed Ahmed <sah...@cloudops.com> wrote:
> > >
> > > Exposing this via an API would be tricky but it can definitely be
> added as
> > > a cluster-wide or a global setting in my opinion. By enabling that,
> all the
> > > instances would be using VirtIO SCSI. Is there a reason you'd want some
> > > instances to use VirtIO and others to use VirtIO SCSI?
> > >
> >
> > Even a global setting would be a bit of work and hacky as well.
> >
> > I do not see any reason to keep VirtIO, it is just that devices will be
> > named sdX instead of vdX in the guest.
>
> To add, the Qemu wiki [0] says:
>
> "A virtio storage interface for efficient I/O that overcomes virtio-blk
> limitations and supports advanced SCSI hardware."
>
> At OpenStack [1] they also say:
>
> "It has been designed to replace virtio-blk, increase it's performance and
> improve scalability."
>
> So it seems that virtio-blk is set to be replaced. I'd say we switch to
> VirtIO SCSI at version 5.X? :)
>
> Wido
>
> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> [1]: https://wiki.openstack.org/wiki/LibvirtVirtioScsi
>
> >
> > That might break existing Instances if they do not use labels or UUIDs
> > when mounting.
> >
> > Wido
> >
> > >
> > >> On Sat, Jan 21, 2017 at 4:22 PM, Simon Weller <swel...@ena.com>
> wrote:
> > >>
> > >> For the record, we've been looking into this as well.
> > >> Has anyone tried it with Windows VMs before? The standard virtio
> > >> driver doesn't support spanned disks, and that's something we'd really
> > >> like to enable for our customers.
> > >>
> > >>
> > >>
> > >> Simon Weller / 615-312-6068
> > >>
> > >>
> > >> -----Original Message-----
> > >> *From:* Wido den Hollander [w...@widodh.nl]
> > >> *Received:* Saturday, 21 Jan 2017, 2:56PM
> > >> *To:* Syed Ahmed [sah...@cloudops.com]; dev@cloudstack.apache.org [
> > >> dev@cloudstack.apache.org]
> > >> *Subject:* Re: Adding VirtIO SCSI to KVM hypervisors
> > >>
> > >>
> > >>> On 21 January 2017 at 16:15, Syed Ahmed <sah...@cloudops.com> wrote:
> > >>>
> > >>>
> > >>> Wido,
> > >>>
> > >>> Were you thinking of adding this as a global setting? I can see why
> > >>> it will be useful. I'm happy to review any ideas you might have around
> > >>> this.
> > >>>
> > >>
> > >> Well, not really. We don't have any structure in place right now to
> > >> define what type of driver/disk we present to a guest.
> > >>
> > >> See my answer below.
> > >>
> > >>> Thanks,
> > >>> -Syed
> > >>> On Sat, Jan 21, 2017 at 04:46 Laszlo Hornyak <
> laszlo.horn...@gmail.com>
> > >>> wrote:
> > >>>
> > >>>> Hi Wido,
> > >>>>
> > >>>> If I understand correctly from the documentation and your examples,
> > >>>> virtio provides a virtio interface to the guest while virtio-scsi
> > >>>> provides a SCSI interface, therefore an IaaS service should not
> > >>>> replace it without user request/approval. It would probably be better
> > >>>> to let the user set what kind of IO interface the VM needs.
> > >>>>
> > >>
> > >> You'd think so, but we already do this. Some Operating Systems get an
> > >> IDE disk, others a SCSI disk, and when a Linux guest supports it
> > >> according to our database we use VirtIO.
> > >>
> > >> CloudStack has no way of specifying how to present a volume to a
> > >> guest. I think it would be a bit too much to just make that
> > >> configurable. That would mean extra database entries and API calls. A
> > >> bit overkill imho in this case.
> > >>
> > >> VirtIO SCSI has been supported by all Linux distributions for a very
> > >> long time.
> > >>
> > >> Wido
> > >>
> > >>>> Best regards,
> > >>>> Laszlo
> > >>>>
> > >>>> On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <
> w...@widodh.nl>
> > >>>> wrote:
> > >>>>
> > >>>>> Hi,
> > >>>>>
> > >>>>> VirtIO SCSI [0] has been supported for a while now by Linux and all
> > >>>>> kernels, but inside CloudStack we are not using it. There is an
> > >>>>> issue for this [1].
> > >>>>>
> > >>>>> It would bring more (theoretical) performance to VMs, but one of
> > >>>>> the motivators (for me) is that we can support TRIM/DISCARD [2].
> > >>>>>
> > >>>>> This would allow RBD images on Ceph to shrink, but it can also give
> > >>>>> back free space on QCOW2 images if guests run fstrim, something all
> > >>>>> modern distributions do weekly via a cron job.
> > >>>>>
> > >>>>> Now, it is simple to swap VirtIO for VirtIO SCSI. This would
> > >>>>> however mean that disks inside VMs are then called /dev/sdX instead
> > >>>>> of /dev/vdX.
> > >>>>>
> > >>>>> For GRUB and such this is no problem. It usually works on UUIDs
> > >>>>> and/or labels, but for static mounts on /dev/vdb1, for example,
> > >>>>> things break.
> > >>>>>
> > >>>>> We currently don't have any way to configure how we want to present
> > >>>>> a disk to a guest, so when attaching a volume we can't say that we
> > >>>>> want to use a different driver. If we think that an Operating System
> > >>>>> supports VirtIO, we use that driver in KVM.
> > >>>>>
> > >>>>> Any suggestion on how to add VirtIO SCSI support?
> > >>>>>
> > >>>>> Wido
> > >>>>>
> > >>>>>
> > >>>>> [0]: http://wiki.qemu.org/Features/VirtioSCSI
> > >>>>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-8239
> > >>>>> [2]: https://issues.apache.org/jira/browse/CLOUDSTACK-8104
> > >>>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>>
> > >>>> EOF
> > >>>>
> > >>
>
