On a VM we try to emulate real hardware )))
So every device honors its specification.
In this case, PCI :)

To be honest, we can increase the limits by adding multifunction devices or
migrating to virtio-scsi (instead of virtio-blk).

But as for me, 14 disks is more than enough for now.


About device id 3 for the CDROM: I will check. I think the CDROM is emulated
as an IDE device, not via virtio-blk.

For device id 0, the ROOT volume: interesting. In that case we can easily add
1 more DATA disk :)
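
Just to illustrate the counting, a rough sketch (not the actual CloudStack
code; the reserved ids here are an assumption based on this thread):

import java.util.List;

public class DeviceIdSketch {
    static final int MAX_DEVICE_ID = 15;                  // highest id handed out today
    static final List<Integer> RESERVED = List.of(0, 3);  // ROOT volume + CDROM (assumed)

    public static void main(String[] args) {
        int dataSlots = 0;
        for (int id = 0; id <= MAX_DEVICE_ID; id++) {
            if (!RESERVED.contains(id)) {
                dataSlots++;            // ids 1, 2 and 4..15 are usable for DATA
            }
        }
        // Prints 14; reclaiming id 0 or 3 for DATA would give 15.
        System.out.println("DATA slots: " + dataSlots);
    }
}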

Wed, 15 Feb 2017 at 19:24, Rafael Weingärtner <
rafaelweingart...@gmail.com>:

> I thought that on a VM we would not be bound by PCI limitations.
> Interesting explanations, thanks.
>
>
> On Wed, Feb 15, 2017 at 12:19 PM, Voloshanenko Igor <
> igor.voloshane...@gmail.com> wrote:
>
> > I think the explanation is quite simple.
> > A PCI bus itself can handle only up to 32 devices.
> >
> > If you run lspci inside an empty (freshly created) VM, you will see that 8
> > slots are already occupied:
> > [root@test-virtio-blk ~]# lspci
> > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
> > 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
> > 00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
> >
> > [root@test-virtio-blk ~]# lspci | wc -l
> > 8
> >
> > So, 7 system devices + 1 ROOT disk
> >
> > In the current implementation we use virtio-blk, which can handle only 1
> > disk per PCI slot.
> >
> > So, we have 32-8 == 24 free slots...
> >
> > As CloudStack supports more than 1 network card, 8 of those slots are
> > reserved for NICs and 16 remain available for virtio-blk.
> >
> > So the practical limit is 16 devices (for DATA disks).
> >
> > Why 2 device ids (0 and 3) are excluded is an interesting question... I
> > will try to research it and post an explanation.
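> >
> > To make the arithmetic explicit, a tiny sketch (my numbers from the lspci
> > output above, plus the 8 slots reserved for NICs):
> >
> > public class PciSlotSketch {
> >     public static void main(String[] args) {
> >         int totalSlots = 32;       // devices addressable on one PCI bus
> >         int bootDevices = 8;       // 7 system devices + 1 ROOT disk (see lspci above)
> >         int reservedForNics = 8;   // CloudStack can attach several network cards
> >         int freeForVirtioBlk = totalSlots - bootDevices - reservedForNics;
> >         System.out.println("Slots for virtio-blk DATA disks: " + freeForVirtioBlk); // 16
> >     }
> > }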
> >
> > 2017-02-15 18:27 GMT+02:00 Rafael Weingärtner <rafaelweingart...@gmail.com>:
> >
> > > I hate to say this, but probably no one knows why.
> > > I looked at the history and this method has always been like this.
> > >
> > > The device ID 3 seems to be something reserved, probably for Xen tools
> > > (big guess here)?
> > >
> > > Also, regarding the limit, I can speculate on two explanations. A
> > > developer did not get the full specs and decided to do whatever he/she
> > > wanted. Or, maybe, at the time of coding (a long, long time ago) there
> > > was a hypervisor that limited (and maybe still limits) the number of
> > > devices that could be plugged into a VM, and the first developers decided
> > > to level everything to that spec.
> > >
> > > It may be worth checking whether KVM, XenServer, Hyper-V, and VMware have
> > > such a limitation on the number of disks that can be attached to a VM. If
> > > they do not, we could remove the limit, or at least externalize it as a
> > > parameter.
> > >
> > > On Wed, Feb 15, 2017 at 5:54 AM, Friðvin Logi Oddbjörnsson <
> > > frid...@greenqloud.com> wrote:
> > >
> > > > CloudStack currently limits the number of data volumes that can be
> > > > attached to an instance to 14.
> > > > More specifically, this limitation relates to the device ids that
> > > > CloudStack considers valid for data volumes.
> > > > In the method VolumeApiServiceImpl.getDeviceId(long, Long), only device
> > > > ids 1, 2, and 4-15 are considered valid.
> > > > What I would like to know is: is there a reason for this limitation
> > > > (of not going higher than device id 15)?
> > > >
> > > > Note that the current number of attached data volumes is already being
> > > > checked against the maximum number of data volumes per instance, as
> > > > specified by the relevant hypervisor’s capabilities.
> > > > E.g. if the relevant hypervisor’s capabilities specify that it only
> > > > supports 6 data volumes per instance, CloudStack rejects attaching a
> > > > seventh data volume.
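> > > >
> > > > For illustration, a minimal sketch of those two checks as I understand
> > > > them (hypothetical names, not the actual VolumeApiServiceImpl code):
> > > >
> > > > public class AttachVolumeCheckSketch {
> > > >
> > > >     // Device ids accepted for DATA volumes today: 1, 2 and 4-15 (14 ids in total).
> > > >     static boolean isValidDataDeviceId(long deviceId) {
> > > >         return deviceId == 1 || deviceId == 2 || (deviceId >= 4 && deviceId <= 15);
> > > >     }
> > > >
> > > >     // Separate check: reject the attach once the hypervisor's advertised maximum
> > > >     // is reached, e.g. a seventh DATA volume when the hypervisor reports 6.
> > > >     static void checkCanAttach(int attachedDataVolumes, int hypervisorMax) {
> > > >         if (attachedDataVolumes >= hypervisorMax) {
> > > >             throw new IllegalStateException("Maximum data volumes per instance reached");
> > > >         }
> > > >     }
> > > >
> > > >     public static void main(String[] args) {
> > > >         System.out.println(isValidDataDeviceId(3));   // false (reserved)
> > > >         System.out.println(isValidDataDeviceId(16));  // false -> the 14-volume ceiling
> > > >         checkCanAttach(5, 6);                         // a sixth volume is still allowed
> > > >     }
> > > > }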
> > > >
> > > >
> > > > Friðvin Logi Oddbjörnsson
> > > >
> > > > Senior Developer
> > > >
> > > > Tel: (+354) 415 0200 | frid...@greenqloud.com
> > > >
> > > > Mobile: (+354) 696 6528 | PGP Key: 57CA1B00
> > > > <https://sks-keyservers.net/pks/lookup?op=vindex&search=frid...@greenqloud.com>
> > > >
> > > > Twitter: @greenqloud <https://twitter.com/greenqloud> | @qstackcloud
> > > > <https://twitter.com/qstackcloud>
> > > >
> > > > www.greenqloud.com | www.qstack.com
> > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>
>
>
> --
> Rafael Weingärtner
>
