Github Issues
Hi All,

We have been trialling replacing Jira with Github Issues. I think that we should have a conversation about it before it becomes the new standard by default.

From my perspective, I don't like it. Searching has become far more difficult, and so has categorising. A bug fix can only be targeted at a single version, which makes fixes affecting several versions easy to lose track of; when looking at milestones, issues and PRs get jumbled up; and people are commenting on issues when the comment should be on the PR, and vice-versa (yes, I've done it too). In summary, from an administrative point of view it causes a lot more problems than it solves.

I yield the floor to hear other people's opinions...

Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
Re: Github Issues
Hi Paul,

My 2 cents on the topic.

> people are commenting on issues when it should be on the PR and vice-versa

I think this is simply due to the fact that with one login you can do both, whereas before you had to have a JIRA login, which people may have tried to avoid, preferring to use Github directly so the conversation would only be on the PR. Most of the issues in Jira didn't have any conversation at all.

I do also feel the pain of searching issues on Github, as it's more free-hand than a Jira system. At the same time it's easier and quicker to navigate, so it eases the pain as well ;-)

I would say that the current labels aren't organized well enough to search the way you can in Jira, but they could be. For example, every label could carry a prefix describing the Jira attribute type (component, version, ...). A bot scanning the issue content could then set some of them, as other open source projects do. The downside is that you might end up with too many labels. Maybe @resmo can give his point of view on how things are managed in Ansible (https://github.com/ansible/ansible/pulls - lots of labels, lots of issues and PRs). I don't know if that's the solution, but labels seem to be the only way to organize things.

Marc-Aurèle
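Marc-Aurèle's prefixed-label idea can be sketched concretely. The prefixes and the `labels_for_issue` helper below are purely illustrative assumptions, not an agreed convention for the project:

```python
# Sketch of the prefixed-label scheme described above: each former Jira
# attribute (component, affected version, issue type) becomes a GitHub
# label carrying a prefix, so label filtering can emulate Jira-style
# field search. Names and prefixes here are hypothetical.

def labels_for_issue(component=None, versions=None, issue_type=None):
    """Build the set of prefixed labels a triage bot might apply."""
    labels = set()
    if component:
        labels.add(f"component:{component.lower()}")
    for version in versions or []:
        # One label per affected version works around the limitation Paul
        # mentions: a GitHub milestone targets only a single version.
        labels.add(f"version:{version}")
    if issue_type:
        labels.add(f"type:{issue_type.lower()}")
    return labels

# Example: a hypothetical KVM bug affecting two releases.
print(sorted(labels_for_issue(component="KVM",
                              versions=["4.11.1", "4.12.0"],
                              issue_type="Bug")))
# → ['component:kvm', 'type:bug', 'version:4.11.1', 'version:4.12.0']
```

With labels like these, GitHub's built-in search qualifiers get closer to Jira field search, e.g. something like `is:issue label:"version:4.11.1" label:"component:kvm"`.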
Re: Github Issues
I may have voiced my concerns earlier, but as a user I think that JIRA issues are easier to follow than PRs.
- As Paul said, an issue may affect more than one version.
- It may also require more than one PR to fully resolve the issue.
- Issues tend to be described in terms of a problem that the user would recognize, while PRs are most often described in terms of what was done to fix the problem. The JIRA issue can be much easier to relate to what the user is seeing, and more likely to show up in Google.

Ron

--
Ron Wheeler
President
Artifact Software Inc
email: rwhee...@artifact-software.com
skype: ronaldmwheeler
phone: 866-970-2435, ext 102
Re: Github Issues
Ron, keep in mind that PRs on Github are different from Issues. They are two different features.

There will be a much cleaner, tighter integration between issues and the solution when everything is on Github.

will
Re: Github Issues
I sort of agree with Marc-Aurèle and Will, and like Github issues way better than Jira. It is definitely easier that both the issues and the fixes for those issues live in the same place and can easily be referenced from one another. The only thing is that we need to come up with a good set of labels (for both issues and PRs) for tracking purposes.

Discussing the issue at hand under the issue itself can even be a good thing: it leaves a trail of what was discussed around the issue and led to the fix, and the discussion can potentially be continued under the PR itself. Essentially they are targeting the same "problem".

As for the point Ron brought up, if one issue is so big that it requires multiple PRs to be fixed, it only makes sense to me to create a set of sub-issues, all referencing the "parent" issue, with each individual PR fixing one of those smaller issues.

Khosrow
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Hey Mike, I got the branch 4.11 to start fixing the problem we discussed, but I do not think my commit was backported to 4.11. I mean, I am at "VirtualMachineManagerImpl" and the code is not there. I also checked the commit (https://github.com/apache/cloudstack/commit/f2efbcececb3cfb06a51e5d3a2e77417c19c667f) that introduced those changes to master, and according to Github, it is only in the master branch, not in 4.11.

I checked the "VirtualMachineManagerImpl" class in the 4.11 branch of the Apache CloudStack remote repository, and as you can see, the code there is the "old" one:
https://github.com/apache/cloudstack/blob/4.11/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java

I got a little confused now. Did you detect the problem in 4.11 or in master?

On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike wrote:

> Another comment here: The part that is broken is if you try to let CloudStack pick the primary storage on the destination side. That code no longer exists in 4.11.1.
>
> To follow up on this a bit: Yes, you should be able to migrate a VM and its storage from one cluster to another today using non-managed (traditional) primary storage with XenServer (both the source and destination primary storages would be cluster scoped). However, that is one of the features that was broken in 4.11.1 that we are discussing in this thread.
>
> For a bit of info on what managed storage is, please take a look at this document:
> https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0
>
> The short answer is that you can have zone-wide managed storage (for XenServer, VMware, and KVM). However, there is no current zone-wide non-managed storage for XenServer.
>
> On 7/16/18, 6:20 PM, "Yiping Zhang" wrote:
>
>> I assume by "managed storage", you guys mean primary storages, either zone-wide or cluster-wide.
>>
>> For Xen hypervisor, ACS does not support "zone-wide" primary storage yet. Still, I can live migrate a VM with data disks between clusters with storage migration from the web GUI, today. So, your statement below does not reflect the current behavior of the code.
>>
>>> - If I want to migrate a VM across clusters, but if at least one of its volumes is placed in a cluster-wide managed storage, the migration is not allowed. Is that it?
>>
>> [Mike] Correct

--
Rafael Weingärtner
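The "was it backported?" question Rafael answers by browsing Github can also be settled locally with `git branch --contains <sha>`. A minimal sketch (assuming git is installed; the throwaway repository and helper names are made up for illustration — against a real CloudStack checkout you would run the same command with the actual commit hash from the thread):

```python
# Demonstrate `git branch --contains` in a tiny throwaway repository:
# a release branch forks off, a later commit lands only on the default
# branch, and --contains shows the release branch never received it.
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)

def commit(msg):
    # Identity passed inline so the sketch works without global git config.
    git("-c", "user.email=dev@example.com", "-c", "user.name=dev",
        "commit", "--allow-empty", "-m", msg, cwd=repo)

commit("base")                    # shared history
git("branch", "4.11", cwd=repo)   # release branch forks here
commit("fix applied to master only")

sha = git("rev-parse", "HEAD", cwd=repo).strip()
containing = git("branch", "--contains", sha, cwd=repo)
print(containing)  # lists only the default branch; "4.11" is absent,
                   # i.e. the fix was never backported
```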
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
I only noticed it in master. The example code I was comparing it against was from 4.11.0. I never checked against 4.11.1.
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Hi, Mike, Rafael:

Thanks for the clarification of what "managed storage" is and working on fixing the broken bits.

Yiping
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Ok, thanks. I had the impression that we said it was backported to 4.11.

I will get master and work on it then.

--
Rafael Weingärtner
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Cool, if it’s just in master, then that makes it easier.

Also, it means we did not have a process issue by introducing enhancement code in between release candidates.

It would mean, however, that our documentation is a bit incorrect if, in fact, it states that that feature exists in 4.11.1.
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Correct. I do think the problem here is only in the release notes.

Just to confirm, you found the problem while testing 4.12 (from master), right?

--
Rafael Weingärtner
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)
Correct, I happened to find it while testing a PR of mine targeted at master.