Re: Next CloudStack EU user group date

2018-07-09 Thread Rakesh v
Hello

Is it possible to schedule it after the second week of September? I missed the 
last event too and don't want to miss this one.

Sent from my iPhone

> On 09-Jul-2018, at 3:18 PM, Steve Roles  wrote:
> 
> Hi all - I am about to schedule the next CloudStack European User Group for 
> Thursday, September 13 here in London.  
> 
> Sorry for the delay, I'd usually have it all scheduled and publicised by now but 
> I've had some trouble with venues! 
> 
> Steve Roles
> 
> steve.ro...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HS, UK
> @shapeblue
> 
> 
> 
> 
> -Original Message-
> From: Andrija Panic  
> Sent: 09 July 2018 14:10
> To: dev 
> Cc: users 
> Subject: Re: Next CloudStack EU user group date
> 
> And Open Source Summit Europe in October 22-24th...
> 
> On Mon, 9 Jul 2018 at 15:04, Rafael Weingärtner 
> wrote:
> 
>> Do not forget that we have CCC/ApacheCon in September 22-28.
>> 
>>> On Mon, Jul 9, 2018 at 10:01 AM, Sven Vogel  wrote:
>>> 
>>> Hi Ivan,
>>> 
>>> 
>>> 
>>> Early September or October?  What do you think?
>>> 
>>> 
>>> 
>>> We were thinking of 18 October. Would that also be possible?
>>> 
>>> 
>>> 
>>> Thanks
>>> 
>>> 
>>> 
>>> Sven
>>> 
>>> 
>>> 
>>> 
>>> __
>>> 
>>> Sven Vogel
>>> Cloud Solutions Architect
>>> 
>>> EWERK RZ GmbH
>>> Brühl 24, D-04109 Leipzig
>>> P +49 341 42649 - 11
>>> F +49 341 42649 - 18
>>> s.vo...@ewerk.com
>>> www.ewerk.com
>>> 
>>> Geschäftsführer:
>>> Dr. Erik Wende, Hendrik Schubert, Frank Richter, Gerhard Hoyer
>>> Registergericht: Leipzig HRB 17023
>>> 
>>> Zertifiziert nach:
>>> ISO/IEC 27001:2013
>>> DIN EN ISO 9001:2015
>>> DIN ISO/IEC 2-1:2011
>>> 
>>> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>>> 
>>> On 09.07.18 at 13:45, "Ivan Kudryavtsev" wrote:
>>> 
>>> 
>>> 
>>>    Early September would be great, by the way.
>>> 
>>> 
>>> 
>>>    I would like to share some information about Bitworks' supported ACS
>>>    plugins for guest ELK logging, key-value storage for VM configuration,
>>>    Cloudstack-UI status and progress, and the self-registration plugin. We
>>>    would also like to grant them to the community under the Apache 2 license.
>>> 
>>> 
>>> 
>>>    Mon, 9 Jul 2018, 18:38 Ivan Kudryavtsev wrote:
>>> 
>>> 
>>> 
 Hi Sven, great! I would love to join if the meetup happens. If the dates
 can be established, it will work for me, because a visa is required and it
 takes time to apply and get approval...
>>> 
 
>>> 
 Mon, 9 Jul 2018, 18:34 Sven Vogel wrote:
>>> 
 
>>> 
> Hi Ivan,
>>> 
> 
>>> 
> I would offer our location in Leipzig, Germany, hosted by EWERK
>>> 
> 
>>> 
> www.ewerk.com
>>> 
> 
>>> 
> https://goo.gl/maps/PMQgXcJ73ZC2
>>> 
> 
>>> 
> Greetings
>>> 
> 
>>> 
> Sven Vogel
>>> 
> 
>>> 
>>> 
> On Saturday, 07/07/2018 at 07:10, Ivan Kudryavtsev wrote:
>>> 
> 
>>> 
> 
>>> 
> Hello, guys.
>>> 
> 
>>> 
> Do you have any ideas about the next CS EU User Group meetup date and
> location? I would like to participate, so I want to arrange my plans.
>>> 
> 
>>> 
 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> --
>> Rafael Weingärtner
>> 
> 
> 
>

Re: Multiple networks support in SG zone - half baked?

2019-04-03 Thread Rakesh v
Not 4.11.3. Porting the changes to 4.11.2 is in progress in our company.

Sent from my iPhone

> On 03-Apr-2019, at 5:26 PM, Nux!  wrote:
> 
> Hi Wei,
> 
> 4.11.3 you mean? 4.11.2 is already released.
> 
> Anyway, last I tried, a couple of years back, this was not supported at all, 
> hence me getting my hopes up.
> You could add multiple networks, but the VM could connect to only one at any 
> given time.
> Looking forward to your port.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Wei ZHOU" 
>> To: "dev" 
>> Sent: Wednesday, 3 April, 2019 15:51:49
>> Subject: Re: Multiple networks support in SG zone - half baked?
> 
>> Was it supported before? I do not think so.
>> 
>> We have made some changes to support it in cloudstack 4.7.1. We are going
>> to port our changes to cloudstack 4.11.2 LTS.
>> 
>> -Wei
>> 
>> Nux!  于2019年4月3日周三 下午4:47写道:
>> 
>>> Hi,
>>> 
>>> (ACS 4.11.2, KVM)
>>> 
>>> I noticed (with enthusiasm) that I can now add a second network to a VM in
>>> Adv + SG zones.[1]
>>> I then noticed (with less enthusiasm) that while indeed a new NIC into the
>>> said network is added, Security Groups for it are not set up at all, i.e.
>>> DHCP is broken.
>>> 
>>> Can anyone shed some light on this feature, where it's going and so on?
>>> 
>>> I would have killed for this feature a few years back and it was one of
>>> the reasons we failed to adopt Cloudstack properly, as we require a
>>> secondary network for private/backup usage.
>>> 
>>> Also noticed the "Add L2 network" button, but it gives an error[2], so I reckon
>>> it can't be used.
>>> 
>>> Are things any better in 4.12? With improved IPv6 support I am already
>>> considering ditching the LTS for it.
>>> 
>>> 
>>> [1] http://tmp.nux.ro/J4M-Selection_058.png
>>> [2] http://tmp.nux.ro/Mx9-Selection_059.png
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro


Re: Secondary Storage VM timeout issue every hour

2019-07-25 Thread Rakesh v
Yes, I have set the IPs of the three MGT servers in the "host" field.

Sent from my iPhone

> On 25-Jul-2019, at 2:14 PM, Pierre-Luc Dion  wrote:
> 
> Do you have a load balancer in front of cloudstack? Did you set the global
> settings "host" to the ip of the mgmt server?
> 
> 
> On Thu, 25 Jul 2019 at 03:24, Rakesh Venkatesh 
> wrote:
> 
>> Hello People
>> 
>> 
>> I have a strange issue where mgt server times out to send a command to
>> secondary storage VM every hour and because of this UI won't be accessible
>> for a short duration of time. Sometimes I have to restart mgt server to get
>> it back to working state and sometimes I don't need to restart it. I also
>> see some exceptions while fetching the storage stats.
>> 
>> 
>> The log says secondary storage VM is lagging behind mgt server in ping and
>> it sends a disconnect message to other components. Can you let me know how
>> to troubleshoot this issue? I destroyed the secondary storage VM but the
>> issue still persists. I checked the date/time on the mgt server and SSVM
>> and they are same. This is happening for quite a few days now. Below are
>> the logs
>> 
>> 
>> 
>> 2019-07-25 04:01:22,769 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Found the following agents
>> behind on ping: [183]
>> 2019-07-25 04:01:22,775 WARN  [c.c.a.m.AgentManagerImpl]
>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Disconnect agent for
>> CPVM/SSVM due to physical connection close. host: 183
>> 2019-07-25 04:01:22,778 INFO  [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Host 183 is disconnecting
>> with event ShutdownRequested
>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) The next status of agent
>> 183is Disconnected, current status is Up
>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Deregistering link for 183
>> with state Disconnected
>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Remove Agent : 183
>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.ConnectedAgentAttache]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Processing Disconnect.
>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentAttache]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Seq
>> 183-7541559051008607242: Sending disconnect to class
>> com.cloud.agent.manager.SynchronousListener
>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.hypervisor.xenserver.discoverer.XcpServerDiscoverer
>> 2019-07-25 04:01:22,782 DEBUG [c.c.u.n.NioConnection]
>> (pool-2-thread-1:null) (logid:) Closing socket Socket[addr=/172.30.32.16
>> ,port=38250,localport=8250]
>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentAttache]
>> (StatsCollector-2:ctx-b55657a9) (logid:dafc4881) Seq
>> 183-7541559051008607242: Waiting some more time because this is the current
>> command
>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.hypervisor.hyperv.discoverer.HypervServerDiscoverer
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentAttache]
>> (StatsCollector-2:ctx-b55657a9) (logid:dafc4881) Seq
>> 183-7541559051008607242: Waiting some more time because this is the current
>> command
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.deploy.DeploymentPlanningManagerImpl
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.network.security.SecurityGroupListener
>> 2019-07-25 04:01:22,783 INFO  [c.c.u.e.CSExceptionErrorCode]
>> (StatsCollector-2:ctx-b55657a9) (logid:dafc4881) Could not find exception:
>> com.cloud.exception.OperationTimedoutException in error code list for
>> exceptions
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: org.apache.cloudstack.engine.orchestration.NetworkOrchestrator
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.vm.ClusteredVirtualMachineManagerImpl
>> 2019-07-25 04:01:22,783 WARN  [c.c.a.m.AgentAttache]
>> (StatsCollector-2:ctx-b55657a9) (logid:dafc4881) Seq
>> 183-7541559051008607242: Timed out on null
>> 2019-07-25 04:01:22,783 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>> listener: com.cloud.storage.listener.StoragePoolMonitor
>> 2019-07-25 04:01:22,784 DEBUG [c.c.a.m.AgentAttache]
>> (StatsCollector-2:ctx-b55657a9) (logid:dafc4881) Seq
>> 183-7541559051008607242: Cancell

Re: Secondary Storage VM timeout issue every hour

2019-07-25 Thread Rakesh v
Yes, I was monitoring it continuously. Below are the checks I was performing 
when the issue happened.


1. Ping from MGT server to ssvm
2. Ping from ssvm to secondary storage ip
3. Ping from ssvm to public IP like 8.8.8.8
4. Ping from MGT server to node in which ssvm was running


Out of all these, ping drops were observed from the MGT server to the SSVM and 
from the MGT server to the nodes. Basically, all nodes lost connection. Then it 
recovered by itself after 1 minute.
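(For anyone who wants to automate this kind of check matrix, a rough sketch in Python follows. The host names are placeholders, and note that checks 2 and 3 would really have to run from inside the SSVM itself, not from the management server.)

```python
import subprocess

def reachable(host, timeout_s=2):
    """True if `host` answers a single ICMP ping (requires the ping binary)."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except FileNotFoundError:
        return False

# The four checks from the list above; hosts are placeholder names.
CHECKS = [
    ("mgmt -> ssvm", "ssvm.example.internal"),
    ("ssvm -> secondary storage", "nfs.example.internal"),
    ("ssvm -> public ip", "8.8.8.8"),
    ("mgmt -> kvm node", "node1.example.internal"),
]

def failed_checks(probe=reachable):
    """Run every check through `probe` and return the names that failed."""
    return [name for name, host in CHECKS if not probe(host)]
```

Injecting `probe` keeps the check matrix testable without a live network; in practice you would just run `failed_checks()` from cron and alert on a non-empty result.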


Sent from my iPhone

> On 25-Jul-2019, at 3:48 PM, Andrija Panic  wrote:
> 
> Can you observe the status of SSVM (is it UP/Connecting/Disconnected/Down)
> while you have issues?
> 
> I would advise checking your Secondary Storage itself - and also running
> the SSVM diagnose script  /usr/local/cloud/systemvm/ssvm-check.sh - observe
> if any errors with NFS or others.
> 
> Lastly - and don't laugh - check that you don't have issues with networking
> equipment (some of us had VERY strange issues in connectivity some years
> ago with crappy QCT/Quanta Switches in MLAG setup)
> 
> Andrija
> 
>> On Thu, 25 Jul 2019 at 15:42, Rakesh v  wrote:
>> 
>> Yes I have set the ip's of the three MGT servers in the "host" field
>> 
>> Sent from my iPhone
>> 
>>> On 25-Jul-2019, at 2:14 PM, Pierre-Luc Dion  wrote:
>>> 
>>> Do you have a load balancer in front of cloudstack? Did you set the
>> global
>>> settings "host" to the ip of the mgmt server?
>>> 
>>> 
>>> Le jeu. 25 juill. 2019 03 h 24, Rakesh Venkatesh <
>> www.rakeshv@gmail.com>
>>> a écrit :
>>> 
>>>> Hello People
>>>> 
>>>> 
>>>> I have a strange issue where mgt server times out to send a command to
>>>> secondary storage VM every hour and because of this UI won't be
>> accessible
>>>> for a short duration of time. Sometimes I have to restart mgt server to
>> get
>>>> it back to working state and sometimes I don't need to restart it. I
>> also
>>>> see some exceptions while fetching the storage stats.
>>>> 
>>>> 
>>>> The log says secondary storage VM is lagging behind mgt server in ping
>> and
>>>> it sends a disconnect message to other components. Can you let me know
>> how
>>>> to troubleshoot this issue? I destroyed the secondary storage VM but the
>>>> issue still persists. I checked the date/time on the mgt server and SSVM
>>>> and they are same. This is happening for quite a few days now. Below are
>>>> the logs
>>>> 
>>>> 
>>>> 
>>>> 2019-07-25 04:01:22,769 INFO  [c.c.a.m.AgentManagerImpl]
>>>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Found the following
>> agents
>>>> behind on ping: [183]
>>>> 2019-07-25 04:01:22,775 WARN  [c.c.a.m.AgentManagerImpl]
>>>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Disconnect agent for
>>>> CPVM/SSVM due to physical connection close. host: 183
>>>> 2019-07-25 04:01:22,778 INFO  [c.c.a.m.AgentManagerImpl]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Host 183 is
>> disconnecting
>>>> with event ShutdownRequested
>>>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) The next status of agent
>>>> 183is Disconnected, current status is Up
>>>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Deregistering link for
>> 183
>>>> with state Disconnected
>>>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.AgentManagerImpl]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Remove Agent : 183
>>>> 2019-07-25 04:01:22,781 DEBUG [c.c.a.m.ConnectedAgentAttache]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Processing Disconnect.
>>>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentAttache]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Seq
>>>> 183-7541559051008607242: Sending disconnect to class
>>>> com.cloud.agent.manager.SynchronousListener
>>>> 2019-07-25 04:01:22,782 DEBUG [c.c.a.m.AgentManagerImpl]
>>>> (AgentTaskPool-1:ctx-66de2057) (logid:841d2a63) Sending Disconnect to
>>>> listener: com.cloud.hypervisor.xenserver.discoverer.XcpServerDiscoverer
>>>> 2019-07-25 04:01:22,782 DEBUG [c.c.u.n.NioConnection]
>>>> (pool-2-thread-1:null) (logid:) Closing socket Socket[addr=/

Re: Secondary Storage VM timeout issue every hour

2019-07-25 Thread Rakesh v
The ping between the mgt server and the SSVM fails because the mgt server sends 
a disconnect message to all nodes. If you look at the logs I pasted in the first 
email, the mgt server thinks the SSVM is lagging behind on ping and sends a 
disconnect message to all nodes without further investigation. It also happens 
at the beginning of every hour.


So I'm sure network is not the issue here.
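(For context on how the "behind on ping" decision is made: as far as I understand it, the management server flags an agent whose last ping is older than roughly ping.timeout × ping.interval seconds, both global settings. The defaults below are from memory, so verify them against your own installation. A small sketch:)

```python
def behind_on_ping(last_ping_ts, now_ts, ping_interval=60, ping_timeout=2.5):
    """Sketch of the cutoff I believe the agent manager uses:
    ping.timeout acts as a multiplier on ping.interval (seconds); an
    agent whose last ping predates the cutoff is flagged and
    disconnected. Timestamps are plain epoch seconds."""
    cutoff = now_ts - ping_timeout * ping_interval
    return last_ping_ts < cutoff
```

If the hourly pattern lines up with these settings, or with some hourly task starving the ping threads, tuning them or finding what blocks pings at the top of the hour would be the next step.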

Sent from my iPhone

> On 25-Jul-2019, at 4:46 PM, Andrija Panic  wrote:
> 
> since basic network connectivity (ping failures) was down between mgmts and
> nodes (and SSVM on it)  - I would point my finger to your networking
> equipment - i.e. I expect zero problems with ACS (since pings fail).
> 
> Let us know how it goes.
> 
> Andrija
> 
>> On Thu, 25 Jul 2019 at 16:04, Rakesh v  wrote:
>> 
>> Yes I was monitoring it continuously. Below are the steps which I was
>> doing when issue happened
>> 
>> 
>> 1. Ping from MGT server to ssvm
>> 2. Ping from ssvm to secondary storage ip
>> 3. Ping from ssvm to public IP like 8.8.8.8
>> 4. Ping from MGT server to node in which ssvm was running
>> 
>> 
>> Out of all these, the ping drops were observed from MGT server to ssvm and
>> mgt server to nodes. Basically all nodes lost connection. Then it recovered
>> itself after 1 minute.
>> 
>> 
>> Sent from my iPhone
>> 
>>> On 25-Jul-2019, at 3:48 PM, Andrija Panic 
>> wrote:
>>> 
>>> Can you observe the status of SSVM (is it
>> UP/Connecting/Disconnected/Down)
>>> while you have issues?
>>> 
>>> I would advise checking your Secondary Storage itself - and also running
>>> the SSVM diagnose script  /usr/local/cloud/systemvm/ssvm-check.sh -
>> observe
>>> if any errors with NFS or others.
>>> 
>>> Lastly - and don't laugh - check that you don't have issues with
>> networking
>>> equipment (some of us had VRY strange issues in connectivity some
>> years
>>> ago with crappy QCT/Quanta Switches in MLAG setup)
>>> 
>>> Andrija
>>> 
>>>> On Thu, 25 Jul 2019 at 15:42, Rakesh v 
>> wrote:
>>>> 
>>>> Yes I have set the ip's of the three MGT servers in the "host" field
>>>> 
>>>> Sent from my iPhone
>>>> 
>>>>> On 25-Jul-2019, at 2:14 PM, Pierre-Luc Dion 
>> wrote:
>>>>> 
>>>>> Do you have a load balancer in front of cloudstack? Did you set the
>>>> global
>>>>> settings "host" to the ip of the mgmt server?
>>>>> 
>>>>> 
>>>>> Le jeu. 25 juill. 2019 03 h 24, Rakesh Venkatesh <
>>>> www.rakeshv@gmail.com>
>>>>> a écrit :
>>>>> 
>>>>>> Hello People
>>>>>> 
>>>>>> 
>>>>>> I have a strange issue where mgt server times out to send a command to
>>>>>> secondary storage VM every hour and because of this UI won't be
>>>> accessible
>>>>>> for a short duration of time. Sometimes I have to restart mgt server
>> to
>>>> get
>>>>>> it back to working state and sometimes I don't need to restart it. I
>>>> also
>>>>>> see some exceptions while fetching the storage stats.
>>>>>> 
>>>>>> 
>>>>>> The log says secondary storage VM is lagging behind mgt server in ping
>>>> and
>>>>>> it sends a disconnect message to other components. Can you let me know
>>>> how
>>>>>> to troubleshoot this issue? I destroyed the secondary storage VM but
>> the
>>>>>> issue still persists. I checked the date/time on the mgt server and
>> SSVM
>>>>>> and they are same. This is happening for quite a few days now. Below
>> are
>>>>>> the logs
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 2019-07-25 04:01:22,769 INFO  [c.c.a.m.AgentManagerImpl]
>>>>>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Found the following
>>>> agents
>>>>>> behind on ping: [183]
>>>>>> 2019-07-25 04:01:22,775 WARN  [c.c.a.m.AgentManagerImpl]
>>>>>> (AgentMonitor-1:ctx-c33dbe74) (logid:5442158c) Disconnect agent for
>>>>>> CPVM/SSVM due to physical connection close. host: 183
>>>>>> 2019-07-25 04:01:22,778 INFO  [c.c.a.m.AgentManagerImpl

Re: Secondary Storage VM timeout issue every hour

2019-07-25 Thread Rakesh v
True, but since the mgt server sends a disconnect to the SSVM, I thought it 
would reset the interface on which it is connecting.

Sent from my iPhone

> On 25-Jul-2019, at 4:57 PM, Andrija Panic  wrote:
> 
> In your previous mail, I understood that you used the OS tool "ping" and
> were NOT referring to internal ACS pings?
> "Out of all these, the ping drops were observed from MGT server to ssvm and
> mgt server to nodes. Basically all nodes lost connection. Then it recovered
> itself after 1 minute."
> 
> 
>> On Thu, 25 Jul 2019 at 16:55, Rakesh v  wrote:
>> 
>> The ping between mgt server and ssvm fails because mgt sends disconnect
>> message to all nodes. If you look at the logs I pasted in first email, the
>> mgt server thinks ssvm is lagging behind on ping and sends a disconnect
>> message without investigation for all nodes. Also it happens at the
>> beginning of every hour.
>> 
>> 
>> So I'm sure network is not the issue here.
>> 
>> Sent from my iPhone
>> 
>>> On 25-Jul-2019, at 4:46 PM, Andrija Panic 
>> wrote:
>>> 
>>> since basic network connectivity (ping failures) was down between mgmts
>> and
>>> nodes (and SSVM on it)  - I would point my finger to your networking
>>> equipment - i.e. I expect zero problems with ACS (since pings fail).
>>> 
>>> Let us know how it goes.
>>> 
>>> Andrija
>>> 
>>>> On Thu, 25 Jul 2019 at 16:04, Rakesh v 
>> wrote:
>>>> 
>>>> Yes I was monitoring it continuously. Below are the steps which I was
>>>> doing when issue happened
>>>> 
>>>> 
>>>> 1. Ping from MGT server to ssvm
>>>> 2. Ping from ssvm to secondary storage ip
>>>> 3. Ping from ssvm to public IP like 8.8.8.8
>>>> 4. Ping from MGT server to node in which ssvm was running
>>>> 
>>>> 
>>>> Out of all these, the ping drops were observed from MGT server to ssvm
>> and
>>>> mgt server to nodes. Basically all nodes lost connection. Then it
>> recovered
>>>> itself after 1 minute.
>>>> 
>>>> 
>>>> Sent from my iPhone
>>>> 
>>>>> On 25-Jul-2019, at 3:48 PM, Andrija Panic 
>>>> wrote:
>>>>> 
>>>>> Can you observe the status of SSVM (is it
>>>> UP/Connecting/Disconnected/Down)
>>>>> while you have issues?
>>>>> 
>>>>> I would advise checking your Secondary Storage itself - and also
>> running
>>>>> the SSVM diagnose script  /usr/local/cloud/systemvm/ssvm-check.sh -
>>>> observe
>>>>> if any errors with NFS or others.
>>>>> 
>>>>> Lastly - and don't laugh - check that you don't have issues with
>>>> networking
>>>>> equipment (some of us had VRY strange issues in connectivity some
>>>> years
>>>>> ago with crappy QCT/Quanta Switches in MLAG setup)
>>>>> 
>>>>> Andrija
>>>>> 
>>>>>> On Thu, 25 Jul 2019 at 15:42, Rakesh v 
>>>> wrote:
>>>>>> 
>>>>>> Yes I have set the ip's of the three MGT servers in the "host" field
>>>>>> 
>>>>>> Sent from my iPhone
>>>>>> 
>>>>>>> On 25-Jul-2019, at 2:14 PM, Pierre-Luc Dion 
>>>> wrote:
>>>>>>> 
>>>>>>> Do you have a load balancer in front of cloudstack? Did you set the
>>>>>> global
>>>>>>> settings "host" to the ip of the mgmt server?
>>>>>>> 
>>>>>>> 
>>>>>>> Le jeu. 25 juill. 2019 03 h 24, Rakesh Venkatesh <
>>>>>> www.rakeshv@gmail.com>
>>>>>>> a écrit :
>>>>>>> 
>>>>>>>> Hello People
>>>>>>>> 
>>>>>>>> 
>>>>>>>> I have a strange issue where mgt server times out to send a command
>> to
>>>>>>>> secondary storage VM every hour and because of this UI won't be
>>>>>> accessible
>>>>>>>> for a short duration of time. Sometimes I have to restart mgt server
>>>> to
>>>>>> get
>>>>>>>> it back to working state and sometimes I don't need to restart it. I
>>>>>> also

Re: Querying async job result

2019-08-08 Thread Rakesh v
Hello Anurag


Thanks for the reply. The host does transition to maintenance mode eventually, 
but the asynchronous job status never changes. Right now I'm periodically 
fetching the resource_state from the DB to see if it changes to "Maintenance". 
Is there a better way to do this, like using triggers or events instead of 
periodic polling?
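(Short of an event hook, a bounded polling loop is the usual fallback. A minimal sketch follows; the `check` callable would wrap however you read the state, e.g. the DB query or an API call, so everything named here is illustrative:)

```python
import time

def wait_for(check, interval_s=10, timeout_s=1800, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```

Usage would look like `wait_for(lambda: host_state() == "Maintenance")`, where `host_state` is your own lookup; the injected `sleep` just makes the loop easy to test.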

Sent from my iPhone

> On 08-Aug-2019, at 10:52 AM, Anurag Awasthi  
> wrote:
> 
> Hi Rakesh,
> 
> You seem to be doing the right thing. I think what you have encountered is a 
> bug in prepareForMaintenance API. The host tends to be stuck in that state in 
> some scenarios. Perhaps, when a VM enters an error state. I would advise 
> canceling maintenance mode and examining what states the VMs are in. Ensure 
> there are no unexpected errors on VMs, clean them up manually if needed. Then 
> retry prepare for maintenance mode.
> 
> There is an open PR for fixing this issue as well - 
> https://github.com/apache/cloudstack/pull/3425 . While this got sidetracked 
> as we worked on 4.13.0, this will make it in 4.13.1.
> 
> Kind Regards,
> Anurag
> 
> From: Rakesh Venkatesh 
> Sent: Thursday, August 8, 2019 2:09 PM
> To: us...@cloudstack.apache.org ; 
> dev@cloudstack.apache.org 
> Subject: Querying async job result
> 
> Hello
> 
> 
> I want to know what is the best way to query the async job result using
> queryAsyncJobResult api. According to the documentation in
> http://docs.cloudstack.apache.org/projects/archived-cloudstack-getting-started/en/latest/dev.html
> ,
> the "jobstatus" of 1 means the command completed, but I'm facing an issue
> where even though the command is still running, the "jobstatus" is always 1.
> 
> I'm running the "prepareHostForMaintenance" command, which returns the jobid.
> When I run queryAsyncJobResult for this jobid, the jobstatus will always be
> 1 even though the hypervisor is still not in maintenance mode.
> 
> So can anyone tell me what is the best way to check if the hypervisor is in
> maintenance mode or not? I'm using version 4.11.
> 
> 
> Below are the result which I get
> 
> 
> "resourcestate": "PrepareForMaintenance",
> "jobresultcode": 0,
>  "jobresulttype": "object",
>  "jobstatus": 1,
> 
> --
> Thanks and regards
> Rakesh venkatesh
> 
> anurag.awas...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DP, UK
> @shapeblue
> 
> 
> 


Re: Querying async job result

2019-08-08 Thread Rakesh v
So then I guess the correct way to check if the host is in maintenance mode is 
by querying the DB.
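(The suggestion quoted below, to watch the host's `resourcestate`, can also be done via the listHosts API rather than the DB. A tiny sketch of pulling that field out of a listHosts-style response; the dict in the test is hand-built for illustration, so check the exact response shape against your CloudStack version:)

```python
def host_resource_state(list_hosts_response, host_id):
    """Pull `resourcestate` for one host out of a listHosts-style reply."""
    for host in list_hosts_response.get("host", []):
        if host.get("id") == host_id:
            return host.get("resourcestate")
    return None

def in_maintenance(list_hosts_response, host_id):
    return host_resource_state(list_hosts_response, host_id) == "Maintenance"
```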

Sent from my iPhone

> On 08-Aug-2019, at 12:03 PM, Anurag Awasthi  
> wrote:
> 
> Hi Rakesh,
> 
> Andrija is correct. Internally, all the API call does is move the host to a 
> different state. Periodically (ping.interval duration apart) MS would attempt 
> migration of VMs. Once the host has zero running VMs and no VM in 
> failure/error state it would be marked in maintenance mode. 
> 
> Regarding your 2nd question - how to track if maintenance state, one option 
> is that you query DB to see the state. The other option could be to see on 
> event bus for "MAINT.PREPARE" in completed state. I haven't seen this in 
> practice but perhaps you can dig in a bit to explore.
> 
> Regards,
> Anurag
> 
> On 8/8/19, 3:17 PM, "Andrija Panic"  wrote:
> 
>Rakesh,
> 
>I'm not quite sure if this is a bug/misbehaviour, but it is indeed a
>confusing one.
> 
>When you ask a host to go to maintenance mode, , you are using
>prepareHostForMaintenance as you said, and this will trigger the host to go
>    into the "PrepareForMaintenance" state... so the job does indeed complete
>    within 2-3 sec usually, as you can actually see in the GUI, after 2-3 secs
>    of spinning circle and confirmation that it has been done.
> 
>Now, after the host has reached the PrepareForMaintenance state, ACS will
>migrate away VMs, and I can only assume that the mgmt server will mark it
>as in "Maintenance" state once it has zero VMs.
>So you can query for the status of the host for the "resourcestate" and
>observe when it goes into "Maintenance" state.
> 
>Regards
>Andrija
> 
> 
> anurag.awas...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DP, UK
> @shapeblue
> 
> 
> 
>> On Thu, 8 Aug 2019 at 11:18, Rakesh v  wrote:
>> 
>> Hello Anurag
>> 
>> 
>> Thanks for the reply. The host does transit to  maintenance mode
>> eventually but the asynchronous job status never changes. Right now I'm
>> periodically fetching the resource_state from DB to see if it changes to
>> "Maintenance". Is there any better way to do it like using triggers or
>> events instead of periodic polling?
>> 
>> Sent from my iPhone
>> 
>>> On 08-Aug-2019, at 10:52 AM, Anurag Awasthi <
>> anurag.awas...@shapeblue.com> wrote:
>>> 
>>> Hi Rakesh,
>>> 
>>> You seem to be doing the right thing. I think what you have encoutered
>> is a bug in prepareForMaintenance API. The host tends to be stuck in that
>> state in some scenarios. Perhaps, when a VM enters an error state. I would
>> advise canceling maintenance mode and examining what states the VMs are in.
>> Ensure there are no unexpected errors on VMs, clean them up manually if
>> needed. Then retry prepare for maintenance mode.
>>> 
>>> There is an open PR for fixing this issue as well -
>> https://github.com/apache/cloudstack/pull/3425 . While this got
>> sidetracked as we worked on 4.13.0, this will make it in 4.13.1.
>>> 
>>> Kind Regards,
>>> Anurag
>>> 
>>> From: Rakesh Venkatesh 
>>> Sent: Thursday, August 8, 2019 2:09 PM
>>> To: us...@cloudstack.apache.org ;
>> dev@cloudstack.apache.org 
>>> Subject: Querying async job result
>>> 
>>> Hello
>>> 
>>> 
>>> I want to know what is the best way to query the async job result using
>>> queryAsyncJobResult api. According to the documentation in
>>> 
>> http://docs.cloudstack.apache.org/projects/archived-cloudstack-getting-started/en/latest/dev.html
>>> ,
>>> the "jobstatus" of 1 means the command completed but im facing an issue
>>> where even though the command is still running, the "jobstatus" is
>> always 1.
>>> 
>>> Im running "prepareHostForMaintenance" command which returns the jobid.
>>> When I run queryAsyncJobResult for this jobid, the jobstatus will always
>> be
>>> 1 even though the hypervisor is still not in maintenance mode.
>>> 
>>> So can anyone tell me what is the best way to check if the hypervisor is
>> in
>>> maintenance mode or not? Im using 4.11 version
>>> 
>>> 
>>> Below are the result which I get
>>> 
>>> 
>>> "resourcestate": "PrepareForMaintenance",
>>> "jobresultcode": 0,
>>> "jobresulttype": "object",
>>> "jobstatus": 1,
>>> 
>>> --
>>> Thanks and regards
>>> Rakesh venkatesh
>>> 
>>> anurag.awas...@shapeblue.com
>>> www.shapeblue.com
>>> Amadeus House, Floral Street, London  WC2E 9DP, UK
>>> @shapeblue
>>> 
>>> 
>>> 
>> 
> 
> 
>-- 
> 
>Andrija Panić
> 
> 


Re: [DISCUSS] JDK11 for CloudStack

2019-09-27 Thread Rakesh v
Hello Rohit

Wanted to know if you are planning to use GraalVM for compilation, or have any 
plans to use it?

Sent from my iPhone

> On 27-Sep-2019, at 6:13 PM, Rohit Yadav  wrote:
> 
> All,
> 
> I started this PR during CCCNA19 hackathon to move to JDK11:
> https://github.com/apache/cloudstack/pull/3601
> 
> So far I'm able to build the codebase (along with tests), deploy-db and run 
> management server and deploy a zone (vm etc) against simulator, with 
> -DskipTests=true.
> 
> However, many tests fail due to issues with powermock and mockito 
> dependencies. I tried bumping up their versions but it led to a new different 
> set of errors.
> 
> Any advice on how to fix the test failures? Thanks.
> 
> 
> Regards,
> 
> Rohit Yadav
> 
> Software Architect, ShapeBlue
> 
> https://www.shapeblue.com
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DP, UK
> @shapeblue
> 
> 
> 


Re: Number of hosts in a cluster

2019-10-30 Thread Rakesh v
Nice, thanks.

Sent from my iPhone

> On 30-Oct-2019, at 12:52 PM, Andrija Panic  wrote:
> 
> No limit for KVM (the docs are wrong), but don't abuse it, e.g. too many KVM
> hosts accessing the same cluster-wide primary storage etc.
> But otherwise, no technical limits.
> 
>> On Wed, 30 Oct 2019 at 11:34, Rakesh Venkatesh 
>> wrote:
>> 
>> Hello
>> 
>> 
>> In the documentation, I see that there is a configuration requirement of
>> not adding more than 16 hosts in a cluster: "For KVM, do not put more than
>> 16 hosts in a cluster."
>> 
>> In our setup, we have more than 18 nodes in a cluster without any issues so
>> far, so I wanted to know the actual limit of hosts in a cluster. I didn't
>> see any such value in the global settings either. If there is no such
>> limitation, then we should update the documentation.
>> 
>> --
>> Thanks and regards
>> Rakesh venkatesh
>> 
> 
> 
> -- 
> 
> Andrija Panić


Re: Cloudmonkey pypi page

2021-10-11 Thread Rakesh v
I think going forward the community might use go-cloudstack/cmk, but I'm not sure.

Sent from my iPhone

> On Oct 11, 2021, at 6:03 PM, Sina Kashipazha 
>  wrote:
> 
> 
> Hey there,
> 
> I googled cloudmonkey today to find simple install instructions for a 
> friend. The following page was among the top results, but it is the legacy 
> cloudmonkey. It is hard for new users to find useful information there. I 
> suggest updating this page and adding a link to the new cloudmonkey repo.
> https://pypi.org/project/cloudmonkey/
> 
> Kind regards,
> Sina
> 


Re: Docker images

2021-11-18 Thread Rakesh v
Hello Marcus

I was actively using the Dockerfiles to deploy my own changes in Kubernetes, and 
also to deploy multiple pods with different versions, but I haven't experimented 
with reducing the image size. I changed the Dockerfiles to build just the 
components I changed rather than the entire package, and I was using 
skaffold to auto-trigger docker builds.

Sent from my iPhone

> On Nov 9, 2021, at 6:15 PM, Marcus  wrote:
> 
> Hi all, I've been familiarizing myself with the Docker image tooling in
> CloudStack, and I have a few questions.  I've been playing with a
> multi-stage build that shrinks the image from ~4Gi to ~800Mi and packages just
> the jar, some UI, and a JDK, thinking that it might be more usable.
> 
> 1) Is there anyone actively using these Dockerfiles? It might be
> interesting to know what workflows they're a part of and whether they can
> be changed or if new files should be created.
> 
> 2) I see the Dockerfile.marvin points to a 'builds.cloudstack.org' to pull
> a Marvin bundle, which seems to be down.  Do these artifacts need to be
> moved to 'download.cloudstack.org' or is this just a temporary outage, or
> is it only reachable from CI? I do see the 'latest' tag pulls (which is
> three years old).


Re: [VOTE] Release Apache CloudStack CloudMonkey 6.1.0

2020-07-17 Thread Rakesh v
Downloaded the binary and tested it on Ubuntu. All looked good

Sent from my iPhone

> On 17-Jul-2020, at 7:32 PM, Rohit Yadav  wrote:
> 
> Hi all,
> 
> Thanks for participating and vote for CloudStack CloudMonkey v6.1.0 *passes* 
> with
> 4 PMC + 2 non-PMC votes.
> 
> +1 (PMC / binding)
> 4 person (Sven, Daan, Wei and Gabriel)
> 
> +1 (non binding)
> 2 person (Nicolas and Gregor)
> 
> 0
> none
> 
> -1
> none
> 
> I will send the release announcement shortly. Thanks everyone for 
> participating.
> 
> 
> Regards.
> 
> 
> From: Riepl, Gregor (SWISS TXT) 
> Sent: Wednesday, July 15, 2020 14:13
> To: dev@cloudstack.apache.org ; 
> us...@cloudstack.apache.org 
> Subject: Re: [VOTE] Release Apache CloudStack CloudMonkey 6.1.0
> 
> +1 (non-binding)
> 
> I tested a few common and less-common things and found no regressions.
> Note: I built the binary locally, didn't try the binaries on the Github 
> release page.
> 
> From: Rohit Yadav 
> Sent: 01 July 2020 06:51
> To: dev@cloudstack.apache.org ; 
> us...@cloudstack.apache.org 
> Subject: [VOTE] Release Apache CloudStack CloudMonkey 6.1.0
> 
> Hi All,
> 
> I've created a 6.1.0 release of CloudMonkey, with the following artifacts
> up for a vote:
> 
> Git Branch:
> https://github.com/apache/cloudstack-cloudmonkey/commits/abc31929e74a9f5b07507db203e75393fffc9f3e
> Commit: abc31929e74a9f5b07507db203e75393fffc9f3e
> 
> Commits since last release 6.0.0:
> https://github.com/apache/cloudstack-cloudmonkey/compare/6.0.0...abc31929e74a9f5b07507db203e75393fffc9f3e
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/cloudmonkey-6.1.0
> 
> To facilitate voting and testing, the builds are uploaded in this
> pre-release:
> https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0
> 
> List of changes:
> https://github.com/apache/cloudstack-cloudmonkey/blob/master/CHANGES.md
> 
> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
> 
> The vote will be open till the end of next week (10 July 2020);
> otherwise, it will be extended until we reach lazy consensus. Thanks.
> 
> Regards.
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
> 
> 
> 


Re: [DISCUSS] CloudStack Kubernetes Cluster Auto-Scaler support

2020-10-12 Thread Rakesh v
I prefer providing an API to customers with the necessary parameters rather than 
handing them yaml files. With an API we can also automate things, and editing 
yaml files can get messy.

Sent from my iPhone
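As a sketch of the "API keys as Kubernetes secrets" approach discussed in this thread (option 1 in the proposal quoted below): the user would base64-encode their CloudStack keys into a Secret manifest for the autoscaler pod to consume. The secret name, namespace, and field names here are hypothetical; the real cloud-provider deployment would define its own.

```python
import base64
import json

def b64(s: str) -> str:
    # Kubernetes Secret 'data' values must be base64-encoded
    return base64.b64encode(s.encode()).decode()

def autoscaler_secret(api_key: str, secret_key: str,
                      name: str = "cloudstack-secret") -> dict:
    """Build a hypothetical Secret manifest holding CloudStack API credentials."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": "kube-system"},
        "data": {"api-key": b64(api_key), "secret-key": b64(secret_key)},
    }

manifest = autoscaler_secret("APIKEY", "SECRETKEY")
print(json.dumps(manifest, indent=2))
```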

> On 12-Oct-2020, at 1:13 PM, David Jumani  wrote:
> 
> Hi Daan,
> 
> Thanks for your feedback!
> Wrt the ideas, Submitting a yaml to an API would be redundant since the user 
> can deploy it himself.
> The API proposal was to simplify it for the user so they can just pass min / 
> max size as well as API keys if needed (so no tweaking a yaml file)
> The scaleAPI could have a flag to indicate whether it enables autoscaling or 
> not, and if enabled, the additional fields provided.
> 
> Thanks,
> David
> 
> From: Daan Hoogland 
> Sent: Monday, October 12, 2020 4:36 PM
> To: dev 
> Subject: Re: [DISCUSS] CloudStack Kubernetes Cluster Auto-Scaler support
> 
> David,
> as a general principle, an API called scale should not be used to
> configure autoscaling of  in my opinion.
> So option 1 seems the best to me (a submitYamlForKubernetes API?). However,
> instead of requiring a yaml, we could just ask for the required fields
> 
>> On Mon, Oct 12, 2020 at 12:51 PM David Jumani 
>> wrote:
>> 
>> Hi,
>> 
>> I'm currently working on adding support for CloudStack as a cloud provider
>> for Kubernetes to allow it to dynamically scale the cluster size based on
>> capacity requirements.
>> It runs as a separate pod in its own deployment and requires an API and
>> Secret key to communicate with CloudStack.
>> 
>> While that's going on, I'd like some feedback on how it can be integrated
>> and even deployed from the CloudStack side. I have three proposals and
>> would like your input :
>> 
>>  1.  Provide the deployment yaml file to the user, have them change the
>> min and max cluster size to suit their requirement, provide the API keys as
>> Kubernetes secrets and deploy it themselves. (Most flexible as the user can
>> change several autoscaling parameters as well)
>>  2.  Deploy it via the scaleKubernetesCluster API. This will require
>> adding additional parameters to the API such as minsize, maxsize, apikey
>> and secretkey for the service to communicate with CloudStack. (Uses default
>> autoscaling parameters, api keys provided by the user)
>>  3.  Deploy it via the scaleKubernetesCluster API, but also create a
>> service account and use its API keys to communicate with CloudStack. The
>> user will still need to provide the minsize and maxsize to the API. (Uses
>> default autoscaling parameters, api keys generated and used by a service
>> account, which if deleted could cause issues)
>> 
>> The design document can be found here :
>> 
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cluster+Autoscaler+for+CloudStack+Kubernetes+Service
>> 
>> Additional info can be found here :
>> 
>> https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md
>> 
>> Look forward to hearing from you!
>> 
>> Thanks,
>> David
>> 
>> david.jum...@shapeblue.com
>> www.shapeblue.com
>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>> @shapeblue
>> 
>> 
>> 
>> 
> 
> --
> Daan
> 
> 


Re: [DISCUSS] CloudStack Kubernetes Cluster Auto-Scaler support

2020-10-13 Thread Rakesh v
Service account best suits the need. We can probably apply some RBAC on the 
account if possible

Sent from my iPhone

> On 12-Oct-2020, at 2:19 PM, David Jumani  wrote:
> 
> Thanks Rakesh.
> Do you think it would be better to have the user provide the API keys or 
> create a service account and use its keys?
> 
> ____
> From: Rakesh v 
> Sent: Monday, October 12, 2020 5:12 PM
> To: dev@cloudstack.apache.org 
> Subject: Re: [DISCUSS] CloudStack Kubernetes Cluster Auto-Scaler support
> 
> I prefer providing an API to customers with the necessary parameters rather than 
> handing them yaml files. With an API we can also automate things, and editing 
> yaml files can get messy.
> 
> Sent from my iPhone
> 
> 
> david.jum...@shapeblue.com 
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
> 
> 
> 
>> On 12-Oct-2020, at 1:13 PM, David Jumani  wrote:
>> 
>> Hi Daan,
>> 
>> Thanks for your feedback!
>> Wrt the ideas, Submitting a yaml to an API would be redundant since the user 
>> can deploy it himself.
>> The API proposal was to simplify it for the user so they can just pass min / 
>> max size as well as API keys if needed (so no tweaking a yaml file)
>> The scaleAPI could have a flag to indicate whether it enables autoscaling or 
>> not, and if enabled, the additional fields provided.
>> 
>> Thanks,
>> David
>> 
>> From: Daan Hoogland 
>> Sent: Monday, October 12, 2020 4:36 PM
>> To: dev 
>> Subject: Re: [DISCUSS] CloudStack Kubernetes Cluster Auto-Scaler support
>> 
>> David,
>> as a general principle, an API called scale should not be used to
>> configure autoscaling of  in my opinion.
>> So option 1 seems the best to me (a submitYamlForKubernetes API?). However,
>> instead of requiring a yaml, we could just ask for the required fields
>> 
>>> On Mon, Oct 12, 2020 at 12:51 PM David Jumani 
>>> wrote:
>>> 
>>> Hi,
>>> 
>>> I'm currently working on adding support for CloudStack as a cloud provider
>>> for Kubernetes to allow it to dynamically scale the cluster size based on
>>> capacity requirements.
>>> It runs as a separate pod in its own deployment and requires an API and
>>> Secret key to communicate with CloudStack.
>>> 
>>> While that's going on, I'd like some feedback on how it can be integrated
>>> and even deployed from the CloudStack side. I have three proposals and
>>> would like your input :
>>> 
>>> 1.  Provide the deployment yaml file to the user, have them change the
>>> min and max cluster size to suit their requirement, provide the API keys as
>>> Kubernetes secrets and deploy it themselves. (Most flexible as the user can
>>> change several autoscaling parameters as well)
>>> 2.  Deploy it via the scaleKubernetesCluster API. This will require
>>> adding additional parameters to the API such as minsize, maxsize, apikey
>>> and secretkey for the service to communicate with CloudStack. (Uses default
>>> autoscaling parameters, api keys provided by the user)
>>> 3.  Deploy it via the scaleKubernetesCluster API, but also create a
>>> service account and use its API keys to communicate with CloudStack. The
>>> user will still need to provide the minsize and maxsize to the API. (Uses
>>> default autoscaling parameters, api keys generated and used by a service
>>> account, which if deleted could cause issues)
>>> 
>>> The design document can be found here :
>>> 
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cluster+Autoscaler+for+CloudStack+Kubernetes+Service
>>> 
>>> Additional info can be found here :
>>> 
>>> https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md
>>> 
>>> Look forward to hearing from you!
>>> 
>>> Thanks,
>>> David
>>> 
>>> david.jum...@shapeblue.com
>>> www.shapeblue.com<http://www.shapeblue.com>
>>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>>> @shapeblue
>>> 
>>> 
>>> 
>>> 
>> 
>> --
>> Daan
>> 
>> david.jum...@shapeblue.com
>> www.shapeblue.com<http://www.shapeblue.com>
>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>> @shapeblue
>> 
>> 
>> 


Re: Automatically applying DB schema changes

2020-10-17 Thread Rakesh v
How about using Flyway on the master branch? We currently use it in our fork, and 
whenever we make new schema changes they are applied automatically, with no need 
for manual intervention.
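For anyone unfamiliar with Flyway: its core idea is a schema-version table recording which migrations have already run, so re-running the migrator applies only what is missing and is otherwise a no-op. A minimal illustration of that idea follows; this is not Flyway itself (which is a Java tool driven by versioned SQL files), and the table/column names are just examples echoing the image_store.readonly change quoted in this thread.

```python
import sqlite3

# Ordered, versioned migrations; Flyway does the same with V1__*.sql files.
MIGRATIONS = {
    1: "CREATE TABLE image_store (id INTEGER PRIMARY KEY)",
    2: "ALTER TABLE image_store ADD COLUMN readonly INTEGER DEFAULT 0",
}

def migrate(conn: sqlite3.Connection) -> list:
    """Apply any not-yet-applied migrations in order; return versions run."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_version")}
    ran = []
    for version in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(MIGRATIONS[version])
            conn.execute(
                "INSERT INTO schema_version (version) VALUES (?)", (version,)
            )
            ran.append(version)
    conn.commit()
    return ran

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # -> [1, 2]
print(migrate(conn))  # second run is a no-op -> []
```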

Sent from my iPhone

> On 16-Oct-2020, at 4:21 PM, Andrija Panic  wrote:
> 
> No way automatically, manually is OK and the only way forward since you are
> on the same version all the time.
> 
> Best,
> 
>> On Fri, 16 Oct 2020, 10:00 Darrin Hüsselmann, <
>> darrin.husselm...@shapeblue.com> wrote:
>> 
>> Hi Rakesh,
>> 
>> db changes are applied automatically if the CloudStack version changes and
>> there is an upgrade path in the code; I think you may not have changed the
>> version.
>> 
>> Cheers
>> Darrin
>> 
>> From: Rakesh Venkatesh 
>> Sent: Friday, October 16, 2020 9:56 AM
>> To: users ; dev 
>> Subject: Automatically applying DB schema changes
>> 
>> Hello Users and Dev
>> 
>> Is there a way that new DB schema changes can be applied automatically
>> whenever I install new packages? My setup was running with two-month-old
>> changes of 4.15, and when I deployed new packages with the latest changes,
>> the recent db schema changes were not applied and I had to run them manually.
>> How do I avoid this, and how do you guys do it?
>> 
>> For example: This is the error I get
>> 
>> Caused by: java.sql.SQLSyntaxErrorException: Unknown column
>> 'image_store.readonly' in 'field list'
>> 
>> and a big stack trace
>> 
>> This was fixed by applying changes from
>> 
>> https://github.com/apache/cloudstack/blob/master/engine/schema/src/main/resources/META-INF/db/schema-41400to41500.sql#L198-L222
>> 
>> 
>> Another error was
>> 
>> Caused by: java.sql.SQLSyntaxErrorException: Unknown column
>> 'project_invitations.account_role' in 'field list'
>> 
>> 
>> So I had to apply the schema needed for project_role related queries
>> --
>> Thanks and regards
>> Rakesh
>> 
>> darrin.husselm...@shapeblue.com
>> www.shapeblue.com
>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>> @shapeblue
>> 
>> 
>> 
>> 


Re: [DISCUSS] [IMPORTANT] CloudStack 4.14 release WITH updated UI

2020-12-04 Thread Rakesh v
I agree. Having one package which deploys both the backend and the frontend is 
better.

Sent from my iPhone

> On 03-Dec-2020, at 7:09 PM, pau...@apache.org wrote:
> 
> PMC members, PLEASE ONLY RESPOND ON THE DEV THREAD.
> 
> 
> 
> Hi all,
> 
> 
> 
> Please read all of this, I know it's a bit wordy..
> 
> 
> 
> We're pretty much there wrt RC1 of CloudStack and the updated UI.  We
> have one issue remaining, and that is about the packaging and voting on
> the CloudStack 'engine' and its UI.
> 
> The UI has been developed asynchronously, but at the time of a CloudStack
> release, we really need to have a definite link between the two
> codebases so that we release 'one thing' when we release CloudStack.
> 
> 
> 
> A while back, I created a proposal [1], which I'd like to again put forward
> as the default process unless there are any objections.
> 
> 
> 
> In addition;
> 
> 
> 
> 1. I think that the repo 'apache/cloudstack-primate' should be renamed to
> '/apache/cloudstack-ui', to keep everything just 'cloudstack'.
> 
> 
> 
> 2. In the repo RPM/DEB packaging make cloudstack-ui a dependency of
> cloudstack - and finally, when creating the repo, include both the
> 'cloudstack' and 'cloudstack-ui'  RPMs/DEBs so that there is one repo
> (http://download.cloudstack.org/centos7/4.15/) which contains both.
> 
> 
> 
> PS - PMC members, PLEASE ONLY RESPOND ON THE DEV THREAD.
> 
> 
> 
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=165222727
> 
> 
> 
> Kind regards
> 
> 
> 
> Paul Angus
> 


Re: SSVM and CPVM agent unable to start after console proxy SSL certificate update

2020-12-27 Thread Rakesh v
Perhaps try destroying them once? Are there any warn or error messages in the 
management server logs or in the SSVM's /var/log/messages?

Sent from my iPhone

> On 26-Dec-2020, at 5:12 AM, Cloud List  wrote:
> 
> Hi,
> 
> Merry Christmas to all.
> 
> We are using Cloudstack with KVM hypervisor. Since our console proxy SSL
> certificate has expired, we updated our new SSL certificate using below
> method:
> 
> http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.9/systemvm.html#using-a-ssl-certificate-for-the-console-proxy
> 
> We have done the above method in the past years without any issues, however
> this time round, both the SSVM and CPVM agents are not able to start after
> the update.
> 
> The state for both VMs are up but agents are in "disconnected" state. We
> are still able to login to the SSVM, and found out that the cloud service
> is not running.
> 
> root@s-4200-VM:~# service cloud status
> CloudStack cloud service is not running
> 
> Tried to start the service:
> 
> root@s-4200-VM:~# service cloud start
> Starting CloudStack cloud service (type=secstorage) Success
> 
> But the service is not started:
> 
> root@s-4200-VM:~# service cloud status
> CloudStack cloud service is not running
> 
> Below is the logs from /var/log/cloud.log:
> 
> =
> Sat Dec 26 03:45:04 UTC 2020 Executing cloud-early-config
> Sat Dec 26 03:45:04 UTC 2020 Detected that we are running inside kvm guest
> Sat Dec 26 03:45:04 UTC 2020 Found a non empty cmdline file. Will now exit
> the loop and proceed with configuration.
> Sat Dec 26 03:45:04 UTC 2020 Patching  cloud service
> Sat Dec 26 03:45:10 UTC 2020 Updating log4j-cloud.xml
> Sat Dec 26 03:45:10 UTC 2020 Setting up secondary storage system vm
> Sat Dec 26 03:45:10 UTC 2020 checking that eth0 has IP
> Sat Dec 26 03:45:11 UTC 2020 waiting for eth0 interface setup with ip
> timer=0
> Sat Dec 26 03:45:11 UTC 2020 checking that eth1 has IP
> Sat Dec 26 03:45:11 UTC 2020 checking that eth2 has IP
> Sat Dec 26 03:45:20 UTC 2020 checking that eth3 has IP
> Sat Dec 26 03:45:20 UTC 2020 Successfully setup storage network with
> STORAGE_IP:10.19.22.67, STORAGE_NETMASK:255.255.240.0, STORAGE_CIDR:
> Sat Dec 26 03:45:20 UTC 2020 Setting up route of RFC1918 space to 10.19.16.1
> Sat Dec 26 03:45:20 UTC 2020 Setting up apache web server
> Sat Dec 26 03:45:20 UTC 2020 setting up apache2 for post upload of
> volume/template
> Sat Dec 26 03:45:20 UTC 2020 rewrite rules already exist in file
> /etc/apache2/sites-available/default-ssl
> Sat Dec 26 03:45:20 UTC 2020 adding cors rules to file:
> /etc/apache2/sites-available/default-ssl
> Sat Dec 26 03:45:21 UTC 2020 cloud: disable rp_filter
> Sat Dec 26 03:45:21 UTC 2020 disable rpfilter
> Sat Dec 26 03:45:21 UTC 2020 cloud: enable_fwding = 0
> Sat Dec 26 03:45:21 UTC 2020 enable_fwding = 0
> Sat Dec 26 03:45:21 UTC 2020 Enable service haproxy = 0
> Sat Dec 26 03:45:21 UTC 2020 Processors = 1  Enable service  = 0
> Sat Dec 26 03:45:21 UTC 2020 Enable service dnsmasq = 0
> Sat Dec 26 03:45:21 UTC 2020 Enable service cloud-passwd-srvr = 0
> Sat Dec 26 03:45:21 UTC 2020 Enable service cloud = 1
> =
> 
> Result of /usr/local/cloud/systemvm/ssvm-check.sh:
> 
> =
> root@s-4200-VM:/var/log# /usr/local/cloud/systemvm/ssvm-check.sh
> 
> First DNS server is  8.8.8.8
> PING 8.8.8.8 (8.8.8.8): 48 data bytes
> 56 bytes from 8.8.8.8: icmp_seq=0 ttl=122 time=0.531 ms
> 56 bytes from 8.8.8.8: icmp_seq=1 ttl=122 time=0.676 ms
> --- 8.8.8.8 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.531/0.604/0.676/0.073 ms
> Good: Can ping DNS server
> 
> Good: DNS resolves download.cloud.com
> 
> ERROR: NFS is not currently mounted
> Try manually mounting from inside the VM
> NFS server is  X.X.201.1
> PING X.X.201.1 (X.X.201.1): 48 data bytes
> 56 bytes from X.X.201.1: icmp_seq=0 ttl=255 time=0.463 ms
> 56 bytes from X.X.201.1: icmp_seq=1 ttl=255 time=0.482 ms
> --- X.X.201.1 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.463/0.473/0.482/0.000 ms
> Good: Can ping nfs server
> 
> Management server is 10.237.3.8. Checking connectivity.
> Good: Can connect to management server port 8250
> 
> ERROR: Java process not running.  Try restarting the SSVM.
> root@s-4200-VM:/var/log#
> =
> 
> The result is OK except the NFS test, but we checked and the IP address is not
> correct (X.X.201.1, which is the public IP address of the gateway rather
> than the actual NFS server IP). We tested mounting the actual NFS server
> and it works fine.
> 
> Have tried stopping and starting back the SSVM and the issue still persists.
> 
> Anyone can help to advice how we can resolve the problem?
> 
> Looking forward to your advice.

Re: How to get internet traffic to instance using Isolated network

2021-01-10 Thread Rakesh v
You can get internet traffic to a VM using one of two options that I know of: 
enabling static NAT on a public IP assigned to the VM, or port forwarding.

1. Acquire a public IP from the isolated network, enable static NAT on it, and 
select the VM that pops up.

2. Enable port forwarding for the public IP on the VR of the isolated network 
and select the proper port of the VM.
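Both options are plain CloudStack API calls, so they can also be scripted instead of done through the UI. A sketch of building a signed enableStaticNat request using CloudStack's standard signing scheme (sort the parameters, lowercase the query, HMAC-SHA1 with the secret key, base64-encode); the ids and keys below are placeholders, and you should check the exact encoding rules against your CloudStack version's API docs.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, secret_key: str) -> str:
    """Build a signed CloudStack API query string (illustrative sketch)."""
    # Sort parameters by key and URL-encode the values
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    # HMAC-SHA1 over the lowercased query, then base64
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

# Hypothetical static-NAT call; ids and keys are placeholders.
qs = sign_request(
    {
        "command": "enableStaticNat",
        "ipaddressid": "ip-uuid",
        "virtualmachineid": "vm-uuid",
        "apikey": "APIKEY",
        "response": "json",
    },
    "SECRETKEY",
)
print(qs)
```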

Sent from my iPhone

> On 10-Jan-2021, at 12:22 PM, Support Admin  
> wrote:
> 
> 
> Hello,
> 
> I have already set up an Advanced zone using VLANs: public VLAN 500 and guest 
> VLANs 600-999.
> 
> Public ip range : 185.158.20.0/22
> Gateway : 185.158.20.1
> Netmask : 255.255.252.0
> IP Range : 185.158.20.2 - 185.158.23.254
> 
> POD gateway : 192.168.1.1
> IP Range : 192.168.1.50-192.168.1.60
> Guest ip range : 10.10.10.0/8
> 
> My KVM host has two interfaces: em1 (LAN) for CT and em2 (WAN) for public 
> traffic.
> 
> All is working fine.
> 
> I created an isolated network 192.168.30.0/24 named LAN3 and added this 
> interface to the instance.
> 
> 
> 
> So eth1 got the IP 192.168.30.5 from this network, but eth0 can't get an IP 
> from my guest network.
> 
> 
> 
> When I restart the network, I see an error that an IP can't be assigned to the eth0 interface.
> 
> 
> 
> This is my virtual router's interface on the LAN3 isolated network 
> 192.168.30.0/24, and public internet ping from this router is OK.
> 
> 
> 
> Also, I have set up egress rules.
> 
> 
> 
> 
> I have two queries:
> 1. How do I get internet traffic into my instance using my isolated network? Or,
> 2. Can I attach a direct public interface to my instance?
> 
> I can't understand what the problem is in my environment.
> 
> -- 
> 
> Thanks & Regards.
> Support Admin
> 
> Facebook | Twitter | Website
> 
> 116/1 West Malibagh, D. I. T Road
> 
> Dhaka-1217, Bangladesh
> 
> Mob : +088 01716915504
> 
> Email : support.ad...@technologyrss.com
> 
> Web : www.technologyrss.com


Re: [DISCUSS] Marvin tests interaction

2021-03-29 Thread Rakesh v
I have added my thoughts on the linked issue. Hope that's useful to you.

Sent from my iPhone
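As a rough illustration of the "run tests and report results back" idea proposed below, here is a minimal sketch that executes a test suite and packages the outcome as JSON that a management server UI could display. The payload shape and names are invented for illustration, not taken from the proposal.

```python
import json
import unittest

class SampleTest(unittest.TestCase):
    # Stand-in for a real Marvin smoke test
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

def run_and_report(suite: unittest.TestSuite) -> dict:
    """Run the suite and package the results as a JSON-friendly summary."""
    result = unittest.TestResult()
    suite.run(result)
    return {
        "run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "success": result.wasSuccessful(),
    }

suite = unittest.TestLoader().loadTestsFromTestCase(SampleTest)
print(json.dumps(run_and_report(suite)))
# -> {"run": 1, "failures": 0, "errors": 0, "success": true}
```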

> On Mar 29, 2021, at 4:51 AM, Nicolas Vazquez  
> wrote:
> 
> Hi,
> 
> I would like to propose an idea to improve the interaction with the marvin 
> tests through the management server. This could be useful for development and 
> test environments in which tests could be easily started, configured and 
> their results monitored through the UI.
> 
> This could be achieved by creating a new service in charge of the execution 
> of the tests and sending results back to the management server, so it can 
> display them. A more detailed description: 
> https://github.com/apache/cloudstack/issues/4799
> 
> I would like to hear your thoughts and ideas about it. Would you find this 
> useful?
> 
> 
> Regards,
> 
> Nicolas Vazquez
> 
> nicolas.vazq...@shapeblue.com 
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
> 
> 
>