Hello,

I am also curious about LXC support on hosts in ACS. TBH, LXD (essentially
the manager that uses liblxc to spin up MCs) is the main way Canonical
machine containers (MCs) are run nowadays; I doubt anyone is using LXC
directly anymore. (For example, I have a 3-node Kubernetes cluster running
on LXD-backed MCs for testing etc.)
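For anyone unfamiliar with the LXD workflow mentioned above, a minimal
sketch of spinning up a machine container looks like this (container name
and image alias are just examples; assumes the `lxd` snap is installed and
initialized):

```shell
# Launch a machine container from the official Ubuntu image server
lxc launch ubuntu:20.04 test-mc

# Check that it is running
lxc list test-mc

# Run a command inside the container
lxc exec test-mc -- uname -a

# Tear it down when done
lxc stop test-mc
lxc delete test-mc
```

Note the CLI binary is called `lxc` even though it talks to the LXD daemon,
which is part of the naming confusion between LXC and LXD.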

Has anyone here used LXC (on ACS hosts) to actually spin up MCs, I mean,
in real production?


*--*
*Makrand*



On Thu, Feb 25, 2021 at 5:37 PM Andrija Panic <andrija.pa...@gmail.com>
wrote:

> Hi folks,
>
> in our official documentation, we state that we support MANY things that, I
> assume, have not been tested by almost anyone, not being used widely by
> CloudStack users.
>
> My question: should we make a big note (in the documentation) that "the
> following ... might work, but are not actively tested", or something along
> those lines?
>
> Subjects to discuss below:
> ###################################
>
>    - LXC Host Containers on RHEL 7
>    - Windows Server 2012 R2 (with Hyper-V Role enabled)
>    - Hyper-V 2012 R2
>    - Oracle VM 3.0+
>    - Bare metal hosts (which have no hypervisor); these hosts can run the
>      following operating systems:
>       - Fedora 17
>       - Ubuntu 12.04
>
> Supported External Devices
>
>    - Netscaler VPX and MPX versions 9.3, 10.1e and 10.5
>    - Netscaler SDX version 9.3, 10.1e and 10.5
>    - SRX (Model srx100b) versions 10.3 to 10.4 R7.5
>    - F5 11.X
>    - Force 10 Switch version S4810 for Baremetal Advanced Networks
>
> #########################################
>
> My point is that we discontinued support for, e.g., VMware 6.0 (because
> VMware stopped supporting it a while ago; a valid reason) while in reality
> it works very well (I know 4.13 runs in production environments with
> VMware 6.0). Yet we keep claiming support for things that probably nobody
> has tested, nor is using at all: the ones listed above.
>
> Opinions, suggestions?
>
> --
>
> Andrija Panić
>
