Dedicated hosts for Domain/Account

2019-08-12 Thread Rakesh Venkatesh
Hello

In my CloudStack setup, I have three KVM hypervisors, two of which are
dedicated to the Root/admin account while the third is not dedicated. When I
enable maintenance mode on a dedicated hypervisor, it always migrates the
VMs from the dedicated to the non-dedicated hypervisor, but never to the
second dedicated hypervisor. I don't think this is the expected behavior.
Can anyone please verify? The dedicated hypervisors are added to the avoid
set, and the deployment planning manager skips these hypervisors.

If I dedicate the third hypervisor to a different domain and enable
maintenance mode on the first hypervisor, then all the VMs are stopped
instead of being migrated to the second dedicated hypervisor of the same
domain/account.


The relevant log lines are included below. You can see from the logs that
the hosts with IDs 17 and 20 are dedicated, but host 26 is not. When
maintenance mode is enabled on host 20, the planner skips 17 and 20 and
migrates the VMs to host 26.



2019-08-12 14:35:23,754 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Deploy avoids pods: null, clusters: null, hosts: [20],
pools: null
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 16000, requested ram: 8589934592
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Is ROOT volume READY (pool already allocated)?: Yes
2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) This VM has last host_id specified, trying to choose the
same host: 20
2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) The last host of this VM is in avoid set
2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Cannot choose the last host to deploy this VM
2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Searching resources only under specified Pod: 8
2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
(logid:a16d7711) Listing clusters in order of aggregate capacity, that have
(atleast one host with) enough CPU and RAM capacity under this Pod: 8
2019-08-12 14:35:23,761 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
pools: null
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) Is ROOT volume READY (pool already allocated)?: Yes
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
(logid:bbb870bf) This VM has last host_id specified, trying to choose the
same host: 26
2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
pools: null
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) DeploymentPlanner allocation algorithm:
com.cloud.deploy.FirstFitPlanner@6fecace4
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
(logid:b7e8e3a2) Trying to allocate a host and storage pools from dc:8,
pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
(Work-Job-Executor-8:ctx-1cc07ab1 job-2461

Re: Dedicated hosts for Domain/Account

2019-08-12 Thread Andrija Panic
Considering that manual live VM migrations via CloudStack from
non-dedicated to dedicated hosts should (and do) work, I would say this is
an "unhandled" case, which indeed should be handled: a live migration should
happen instead of stopping the VMs.

I assume someone else might jump in, but if not, please raise a GitHub
issue as a bug report.


Thx


Re: Dedicated hosts for Domain/Account

2019-08-12 Thread Rakesh Venkatesh
Thanks for the quick reply.
I was browsing through the code and found the following:


// check affinity group of type Explicit dedication exists. If No put
// dedicated pod/cluster/host in avoid list
List<AffinityGroupVMMapVO> vmGroupMappings =
        _affinityGroupVMMapDao.findByVmIdType(vm.getId(), "ExplicitDedication");

if (vmGroupMappings != null && !vmGroupMappings.isEmpty()) {
    isExplicit = true;
}


So this feature only works if the VMs are associated with affinity groups.
I created two VMs with the same explicit-dedication affinity group, and
after enabling maintenance mode they were migrated to the other dedicated
hypervisor. So there is no need to create a GitHub issue, I guess.
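
For anyone else hitting this, here is a minimal sketch of how I read that
gating logic (the types below are illustrative placeholders, not the actual
CloudStack classes): without an "ExplicitDedication" affinity group mapping,
every dedicated host ends up in the avoid set, so the planner can only pick
non-dedicated hosts as migration targets.

import java.util.List;
import java.util.Set;

// Sketch only: placeholder types, not the real CloudStack planner code.
class ExplicitDedicationSketch {

    // Stand-in for _affinityGroupVMMapDao.findByVmIdType(vmId, type).
    interface AffinityGroupVMMapDao {
        List<Long> findByVmIdType(long vmId, String type);
    }

    static void applyDedicationAvoidRules(long vmId, AffinityGroupVMMapDao dao,
                                          Set<Long> dedicatedHostIds,
                                          Set<Long> avoidHostIds) {
        List<Long> mappings = dao.findByVmIdType(vmId, "ExplicitDedication");
        boolean isExplicit = mappings != null && !mappings.isEmpty();

        if (!isExplicit) {
            // VM is not explicitly dedicated: all dedicated hosts are avoided,
            // which is why my VMs were migrated to the non-dedicated host 26.
            avoidHostIds.addAll(dedicatedHostIds);
        }
        // With an ExplicitDedication affinity group, the dedicated hosts stay
        // eligible, which matches what I saw after adding the group.
    }
}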


VR Loses Instance IP Address

2019-08-12 Thread li jerry
Hi All,

After our ACS was upgraded to 4.11.3, VMs on the shared network often lose
their IP address (the guest VM cannot obtain an IP address).

Analyzing cloud.log in the VR, we found that restarting or deleting VM A
will sometimes cause VM B's entry to disappear from /etc/dhcphosts.txt.
Because /etc/dhcphosts.txt is then inconsistent with the data in
/var/lib/misc/dnsmasq.leases, delete_leases removes the IP address of VM B
from the DHCP server.
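
To illustrate the failure mode, here is a rough sketch of that reconciliation
step as we understand it from the log (the real implementation is the VR's
Python code in CsDhcp.py; the Java below and its matching on MAC addresses
are only an illustration): any lease whose MAC is no longer listed in
/etc/dhcphosts.txt is treated as stale and released, so when VM B's entry is
dropped from dhcphosts.txt by mistake, its lease is released as well.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch only (not the VR's actual CsDhcp.py code): leases whose MAC no
// longer appears in /etc/dhcphosts.txt are considered stale and released.
class DeleteLeasesSketch {

    // leasesByMac: MAC -> IP taken from /var/lib/misc/dnsmasq.leases
    // macsInDhcpHosts: MACs currently listed in /etc/dhcphosts.txt
    static List<String> leasesToRelease(Map<String, String> leasesByMac,
                                        Set<String> macsInDhcpHosts) {
        List<String> ipsToRelease = new ArrayList<>();
        for (Map.Entry<String, String> lease : leasesByMac.entrySet()) {
            if (!macsInDhcpHosts.contains(lease.getKey())) {
                // If dhcphosts.txt was rewritten without VM B's entry, B's
                // lease lands here and its IP is passed to dhcp_release.
                ipsToRelease.add(lease.getValue());
            }
        }
        return ipsToRelease;
    }
}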



2019-07-25 12:03:58,519  CsHelper.py execute:193 Command 'ip link show eth0 | 
grep 'state DOWN'' returned non-zero exit status 1
2019-07-25 12:03:58,529  CsRoute.py add_network_route:73 Adding route: dev eth0 
table: Table_eth0 network: 10.40.51.0/24 if not present
2019-07-25 12:03:58,530  CsHelper.py execute:188 Executing: ip route show type 
throw 10.40.51.0/24 table Table_eth0 proto static
2019-07-25 12:03:58,544  CsHelper.py execute:188 Executing: sudo ip route flush 
cache
2019-07-25 12:03:58,582  CsHelper.py execute:188 Executing: systemctl start 
cloud-password-server@10.40.51.252
2019-07-25 12:03:58,603  CsHelper.py service:225 Service 
cloud-password-server@10.40.51.252 start
2019-07-25 12:03:58,604  CsRoute.py defaultroute_exists:115 Checking if default 
ipv4 route is present
2019-07-25 12:03:58,604  CsHelper.py execute:188 Executing: ip -4 route list 0/0
2019-07-25 12:03:58,617  CsRoute.py defaultroute_exists:119 Default route 
found: default via 10.40.51.1 dev eth0
2019-07-25 12:03:58,619  CsHelper.py execute:188 Executing: ip addr show
2019-07-25 12:03:58,635  CsFile.py commit:60 Nothing to commit. The 
/etc/dnsmasq.d/cloud.conf file did not change
2019-07-25 12:03:58,635  CsFile.py commit:66 Wrote edited file 
/etc/dhcphosts.txt
2019-07-25 12:03:58,635  CsFile.py commit:68 Updated file in-cache configuration
2019-07-25 12:03:58,635  CsFile.py commit:60 Nothing to commit. The 
/etc/dhcpopts.txt file did not change
2019-07-25 12:03:58,636  CsDhcp.py delete_leases:122 Attempting to delete entries from dnsmasq.leases file for VMs which are not on dhcphosts file
2019-07-25 12:03:58,636  CsDhcp.py delete_leases:133 dhcp_release $(ip route get 10.40.51.231 | grep eth | head -1 | awk '{print $3}') 10.40.51.231 1e:00:94:00:04:40
2019-07-25 12:03:58,636  CsHelper.py execute:188 Executing: dhcp_release $(ip route get 10.40.51.231 | grep eth | head -1 | awk '{print $3}') 10.40.51.231 1e:00:94:00:04:40
2019-07-25 12:03:58,660  CsDhcp.py delete_leases:137 Deleted 1 entries from 
dnsmasq.leases file
2019-07-25 12:03:58,661  CsFile.py commit:66 Wrote edited file /etc/hosts
2019-07-25 12:03:58,661  CsFile.py commit:68 Updated file in-cache configuration
2019-07-25 12:03:58,661  CsDhcp.py write_hosts:156 Updated hosts file
2019-07-25 12:03:58,662  CsHelper.py execute:188 Executing: systemctl restart 
dnsmasq
2019-07-25 12:03:58,772  CsHelper.py service:225 Service dnsmasq restart
2019-07-25 12:03:58,772  CsHelper.py execute:188 Executing: systemctl stop 
conntrackd
2019-07-25 12:03:58,793  CsHelper.py service:225 Service conntrackd stop
2019-07-25 12:03:58,793  CsHelper.py execute:188 Executing: systemctl stop 
keepalived
2019-07-25 12:03:58,813  CsHelper.py service:225 Service keepalived stop
2019-07-25 12:03:58,813  CsHelper.py execute:188 Executing: mount
2019-07-25 12:04:31,229  update_config.py :146 update_config.py :: Processing incoming file => vm_dhcp_entry.json.41460506-6ea7-4474-a970-b923726889b8



Incorrect packages being maintained at download.cloudstack.org

2019-08-12 Thread Syed Mushtaq
Hi All,

We have found that the RPMs for 4.12.0.0 on http://download.cloudstack.org/
are incorrect. For example, the package "cloudstack-common" has files that
are not present in the 4.12 branch. Does anyone know who maintains them?
We need to fix this, as it is the official URL for installation.

Thanks,
-Syed


Re: Incorrect packages being maintained at download.cloudstack.org

2019-08-12 Thread Gabriel Beims Bräscher
Hello Syed,

What files are you referring to?

Thanks,
Gabriel.



Re: Incorrect packages being maintained at download.cloudstack.org

2019-08-12 Thread Siddhartha Kattoju
Hi Gabriel,


Specifically, there is a reference to patchviasocket.py
(https://github.com/apache/cloudstack/blob/4.10/scripts/vm/hypervisor/kvm/patchviasocket.py),
which was removed in 4.11. The cloudstack-common package
(http://download.cloudstack.org/centos7/4.12/cloudstack-common-4.12.0.0-1.el7.centos.x86_64.rpm)
is definitely problematic, but it is likely that they are all being built
from the wrong branch or tag.


Best Regards,


Sid




Re: Incorrect packages being maintained at download.cloudstack.org

2019-08-12 Thread Gabriel Beims Bräscher
Hello Syed and Sid,

There is a conceptual misunderstanding here. A "git branch" is different from
a "git tag"; each release (4.11.z.s, 4.12.0.0, and so on) is based on a tag,
not a branch. The branch is created based on 4.y.0.0; then, as other versions
are added and released, the branch is updated for further releases from that
branch.

You are right that the file 'patchviasocket.py' is not on branches 4.11 and
4.12; however, the same is not true for tags 4.11.0.0, 4.11.1.0, 4.11.2.0,
and 4.12.0.0.

You can verify that the commit [1], removing 'patchviasocket.py', was
merged after releases 4.11.2.0 and 4.12.0.0, but in time for 4.11.3.0 and
master (which will be tagged and released as 4.13.0.0).

You will see that 'patchviasocket.py' is on 4.11.2 (released in November
2018):
https://github.com/apache/cloudstack/blob/4.11.2.0/scripts/vm/hypervisor/kvm/patchviasocket.py

And also on 4.12.0.0 (released in March 2019):
https://github.com/apache/cloudstack/blob/4.12.0.0/scripts/vm/hypervisor/kvm/patchviasocket.py

4.11.3 (released in June 2019), on the other hand, does not have that file:
https://github.com/apache/cloudstack/blob/4.11.3.0/scripts/vm/hypervisor/kvm/patchviasocket.py

Therefore, there is nothing wrong with the packages released for 4.12.0.0.
They are aligned with the tag 4.12.0.0 that we have on GitHub.

Does that clear up your doubts?

Regards,
Gabriel.

[1]
https://github.com/apache/cloudstack/commit/0700d91a685701bd25a8d1e5a9e46780e971
