Sounds good... We don't want to add technical debt if it's going to make our
work a lot harder in the future.


On Mon, Jun 17, 2013 at 3:14 PM, John Burwell <jburw...@basho.com> wrote:

> Mike,
>
> My goal is to not incur further technical debt in 4.2 by adding more
> Storage->Hypervisor dependencies that need to be inverted.  Recognizing
> that we are close to 4.2, the question becomes: is there a simple approach
> that will permit this dependency to be inverted?   I will dig into the code
> tomorrow to see if there is something straightforward we can do for 4.2.  I
> invite others to do the same ...
>
> Thanks,
> -John
>
> On Jun 17, 2013, at 5:09 PM, Mike Tutkowski <mike.tutkow...@solidfire.com>
> wrote:
>
> > I think a hack-day session on this would be great.
> >
> > To me, since we're so late in the game for 4.2, I think we need to take
> > two approaches here: 1) a short-term solution for 4.2 (that hopefully
> > will not make future refactoring work too much more difficult than it
> > might already be) and 2) a long-term solution such as what John is
> > talking about.
> >
> >
> > On Mon, Jun 17, 2013 at 3:03 PM, John Burwell <jburw...@basho.com>
> wrote:
> >
> >> Edison,
> >>
> >> As part of the hack day discussion, I think we need to determine how to
> >> establish that layer and invert these dependencies.  Hypervisors must
> >> know about storage and network devices.  A VM is the nexus of a
> >> particular set of storage devices/volumes and network devices/interfaces.
> >> From an architectural perspective, we sustain a system of circular
> >> dependencies between these layers.  Since a VM must know about storage
> >> and networking, I want to invert the dependencies such that storage and
> >> network are hypervisor agnostic.  I believe it is entirely feasible, and
> >> will yield a more robust, general-purpose storage layer with wider
> >> potential use than just to support hypervisors.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 17, 2013, at 4:54 PM, Edison Su <edison...@citrix.com> wrote:
> >>
> >>> But currently there is no such hypervisor layer yet, and to me this is
> >>> related to storage, not to the hypervisor. It's a property of the
> >>> storage to support one hypervisor, two hypervisors, or all hypervisors,
> >>> not a property of the hypervisor.
> >>> I agree that adding a hypervisor type on the StoragePoolCmd is not a
> >>> proper solution; as we already see, it's not flexible enough for
> >>> SolidFire.
> >>> How about adding a getSupportedHypervisors() method on the storage
> >>> plugin, which would return an ImmutableSet<HypervisorType>?
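That suggestion could be sketched as below — the types here are stand-ins for the real CloudStack interfaces (the actual plugin API differs), and the example plugin's supported set is purely illustrative:

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

public class StorageSideSketch {

    // Stand-in for CloudStack's Hypervisor.HypervisorType enum.
    public enum HypervisorType { XenServer, KVM, VMware, Any }

    // Hypothetical hook on the storage plugin: the storage side declares
    // which hypervisors it can serve, so the allocator needs nothing more
    // hypervisor-specific than a set-membership check.
    public interface StoragePlugin {
        Set<HypervisorType> getSupportedHypervisors();
    }

    // Example: a SAN-backed plugin usable from XenServer and VMware.
    public static final StoragePlugin SAN_PLUGIN = new StoragePlugin() {
        @Override
        public Set<HypervisorType> getSupportedHypervisors() {
            return Collections.unmodifiableSet(
                    EnumSet.of(HypervisorType.XenServer, HypervisorType.VMware));
        }
    };

    // What the allocator-side test would reduce to.
    public static boolean supports(StoragePlugin plugin, HypervisorType type) {
        Set<HypervisorType> supported = plugin.getSupportedHypervisors();
        return supported.contains(HypervisorType.Any) || supported.contains(type);
    }
}
```

With something like this in place, a zone-wide allocator could ask the plugin rather than reading a hypervisor column off the pool row.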
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: John Burwell [mailto:jburw...@basho.com]
> >>>> Sent: Monday, June 17, 2013 1:42 PM
> >>>> To: dev@cloudstack.apache.org
> >>>> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> >> Storage?
> >>>>
> >>>> Edison,
> >>>>
> >>>> For me, this issue comes back to the whole notion of the overloaded
> >>>> StoragePoolType.  A hypervisor plugin should declare a method akin to
> >>>> getSupportedStorageProtocols() : ImmutableSet<StorageProtocol> which
> >>>> the Hypervisor layer can use to filter the available DataStores from
> >>>> the Storage subsystem.  For example, as RBD support expands to other
> >>>> hypervisors, we should only have to modify those hypervisor plugins --
> >>>> not the Hypervisor orchestration components or any aspect of the
> >>>> Storage layer.
> >>>>
> >>>> Thanks,
> >>>> -John
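Sketched with stand-in types (not the actual CloudStack plugin API), the inverted dependency might look like this: the hypervisor plugin declares the protocols it speaks, and the Hypervisor layer filters DataStores against that set, so adding RBD support to a hypervisor touches only its plugin:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class HypervisorSideSketch {

    public enum StorageProtocol { NFS, ISCSI, RBD, VMFS }

    // A DataStore knows only its protocol; it carries no hypervisor type.
    public static class DataStore {
        final String name;
        final StorageProtocol protocol;
        public DataStore(String name, StorageProtocol protocol) {
            this.name = name;
            this.protocol = protocol;
        }
    }

    // Hypothetical hypervisor-plugin hook, per the suggestion above.
    public interface HypervisorPlugin {
        Set<StorageProtocol> getSupportedStorageProtocols();
    }

    // e.g. a KVM plugin that has grown RBD support.
    public static final HypervisorPlugin KVM_PLUGIN = new HypervisorPlugin() {
        @Override
        public Set<StorageProtocol> getSupportedStorageProtocols() {
            return Collections.unmodifiableSet(
                    EnumSet.of(StorageProtocol.NFS, StorageProtocol.ISCSI,
                               StorageProtocol.RBD));
        }
    };

    // The Hypervisor layer filters; the Storage layer never names a hypervisor.
    public static List<DataStore> usableStores(HypervisorPlugin plugin,
                                               List<DataStore> candidates) {
        Set<StorageProtocol> supported = plugin.getSupportedStorageProtocols();
        List<DataStore> result = new ArrayList<DataStore>();
        for (DataStore ds : candidates) {
            if (supported.contains(ds.protocol)) {
                result.add(ds);
            }
        }
        return result;
    }
}
```

The direction of knowledge is the point: the Storage layer exposes protocols, and only the hypervisor plugin knows which of them it can consume.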
> >>>>
> >>>> On Jun 17, 2013, at 4:27 PM, Edison Su <edison...@citrix.com> wrote:
> >>>>
> >>>>> There are storages which can only work with one hypervisor. For
> >>>>> example, Ceph currently only works on KVM, and a data store created
> >>>>> in VCenter can only work with VMware.
> >>>>>
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >>>>>> Sent: Monday, June 17, 2013 1:12 PM
> >>>>>> To: dev@cloudstack.apache.org
> >>>>>> Subject: Re: Hypervisor Host Type Required at Zone Level for Primary
> >>>> Storage?
> >>>>>>
> >>>>>> I figured you might have something to say about this, John. :)
> >>>>>>
> >>>>>> Yeah, I have no insight into the motivation for this change other
> >>>>>> than what Edison just said in a recent e-mail.
> >>>>>>
> >>>>>> It sounds like this change went in so that the allocators could look
> >>>>>> at the VM characteristics and see the hypervisor type. With this
> >>>>>> info, the allocator can decide if a particular zone-wide storage is
> >>>>>> acceptable. This doesn't apply to my situation, as I'm dealing with
> >>>>>> a SAN, but some zone-wide storage is static (just a volume "out
> >>>>>> there" somewhere). Once such a volume is used for, say, XenServer
> >>>>>> purposes, it can only be used for XenServer going forward.
> >>>>>>
> >>>>>> For more details, I would recommend that Edison comment.
> >>>>>>
> >>>>>>
> >>>>>> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell <jburw...@basho.com>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> Mike,
> >>>>>>>
> >>>>>>> I know my thoughts will come as a galloping shock, but the idea of
> >>>>>>> a hypervisor type being attached to a volume is the type of
> >>>>>>> dependency I think we need to remove from the Storage layer.  What
> >>>>>>> attributes of a DataStore/StoragePool require association to a
> >>>>>>> hypervisor type?  My thought is that we should expose query methods
> >>>>>>> that allow the Hypervisor layer to determine if a
> >>>>>>> DataStore/StoragePool requires such a reservation, and we track
> >>>>>>> that reservation in the Hypervisor layer.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> -John
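That reservation idea could be sketched with stand-in types — requiresHypervisorReservation() and the reservation tracker below are hypothetical names, not existing CloudStack APIs. The store only answers whether it needs a reservation; which hypervisor ends up holding it is tracked outside the Storage layer:

```java
import java.util.HashMap;
import java.util.Map;

public class ReservationSketch {

    public enum HypervisorType { XenServer, VMware, KVM }

    // Hypothetical query method on the store: "once used by one hypervisor
    // type, am I stuck with it?" The Storage layer answers the question but
    // never records which hypervisor that turned out to be.
    public interface DataStore {
        String id();
        boolean requiresHypervisorReservation();
    }

    // Convenience factory for illustration.
    public static DataStore store(final String id, final boolean sticky) {
        return new DataStore() {
            public String id() { return id; }
            public boolean requiresHypervisorReservation() { return sticky; }
        };
    }

    // The reservation itself lives in the Hypervisor layer.
    public static class HypervisorLayerReservations {
        private final Map<String, HypervisorType> reserved =
                new HashMap<String, HypervisorType>();

        /** Returns true if the store may be used by this hypervisor type,
         *  reserving it on first use when the store demands exclusivity. */
        public boolean tryUse(DataStore store, HypervisorType type) {
            if (!store.requiresHypervisorReservation()) {
                return true; // freely shareable, nothing to track
            }
            HypervisorType holder = reserved.get(store.id());
            if (holder == null) {
                reserved.put(store.id(), type); // first use wins
                return true;
            }
            return holder == type;
        }
    }
}
```

The Storage layer stays hypervisor-agnostic: it only flags that exclusivity is required, while the Hypervisor layer remembers who holds each reservation.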
> >>>>>>>
> >>>>>>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski
> >>>>>>> <mike.tutkow...@solidfire.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> Hi Edison,
> >>>>>>>>
> >>>>>>>> How about I add this logic into ZoneWideStoragePoolAllocator
> >>>>>>>> (below)?
> >>>>>>>>
> >>>>>>>> After filtering storage pools by tags, it saves off the ones that
> >>>>>>>> are for any hypervisor.
> >>>>>>>>
> >>>>>>>> Next, we filter the list down further by hypervisor.
> >>>>>>>>
> >>>>>>>> Then, we add back into the list the storage pools that were for
> >>>>>>>> any hypervisor.
> >>>>>>>>
> >>>>>>>> @Override
> >>>>>>>> protected List<StoragePool> select(DiskProfile dskCh,
> >>>>>>>>         VirtualMachineProfile<? extends VirtualMachine> vmProfile,
> >>>>>>>>         DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
> >>>>>>>>     s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
> >>>>>>>>
> >>>>>>>>     List<StoragePool> suitablePools = new ArrayList<StoragePool>();
> >>>>>>>>
> >>>>>>>>     List<StoragePoolVO> storagePools =
> >>>>>>>>         _storagePoolDao.findZoneWideStoragePoolsByTags(
> >>>>>>>>             plan.getDataCenterId(), dskCh.getTags());
> >>>>>>>>
> >>>>>>>>     if (storagePools == null) {
> >>>>>>>>         storagePools = new ArrayList<StoragePoolVO>();
> >>>>>>>>     }
> >>>>>>>>
> >>>>>>>>     // save off the pools that are usable by any hypervisor
> >>>>>>>>     List<StoragePoolVO> anyHypervisorStoragePools =
> >>>>>>>>         new ArrayList<StoragePoolVO>();
> >>>>>>>>
> >>>>>>>>     for (StoragePoolVO storagePool : storagePools) {
> >>>>>>>>         if (HypervisorType.Any.equals(storagePool.getHypervisor())) {
> >>>>>>>>             anyHypervisorStoragePools.add(storagePool);
> >>>>>>>>         }
> >>>>>>>>     }
> >>>>>>>>
> >>>>>>>>     // narrow to pools matching the VM's hypervisor type, then add
> >>>>>>>>     // the hypervisor-agnostic pools back in
> >>>>>>>>     List<StoragePoolVO> storagePoolsByHypervisor =
> >>>>>>>>         _storagePoolDao.findZoneWideStoragePoolsByHypervisor(
> >>>>>>>>             plan.getDataCenterId(), dskCh.getHypervisorType());
> >>>>>>>>
> >>>>>>>>     storagePools.retainAll(storagePoolsByHypervisor);
> >>>>>>>>     storagePools.addAll(anyHypervisorStoragePools);
> >>>>>>>>
> >>>>>>>>     // add remaining pools in zone, that did not match tags, to
> >>>>>>>>     // the avoid set
> >>>>>>>>     List<StoragePoolVO> allPools =
> >>>>>>>>         _storagePoolDao.findZoneWideStoragePoolsByTags(
> >>>>>>>>             plan.getDataCenterId(), null);
> >>>>>>>>
> >>>>>>>>     allPools.removeAll(storagePools);
> >>>>>>>>
> >>>>>>>>     for (StoragePoolVO pool : allPools) {
> >>>>>>>>         avoid.addPool(pool.getId());
> >>>>>>>>     }
> >>>>>>>>
> >>>>>>>>     for (StoragePoolVO storage : storagePools) {
> >>>>>>>>         if (suitablePools.size() == returnUpTo) {
> >>>>>>>>             break;
> >>>>>>>>         }
> >>>>>>>>
> >>>>>>>>         StoragePool pool = (StoragePool) this.dataStoreMgr
> >>>>>>>>             .getPrimaryDataStore(storage.getId());
> >>>>>>>>
> >>>>>>>>         if (filter(avoid, pool, dskCh, plan)) {
> >>>>>>>>             suitablePools.add(pool);
> >>>>>>>>         } else {
> >>>>>>>>             avoid.addPool(pool.getId());
> >>>>>>>>         }
> >>>>>>>>     }
> >>>>>>>>
> >>>>>>>>     return suitablePools;
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
> >>>>>>>> mike.tutkow...@solidfire.com> wrote:
> >>>>>>>>
> >>>>>>>>> Hi Edison,
> >>>>>>>>>
> >>>>>>>>> I haven't looked into this much, so maybe what I suggest here
> >>>>>>>>> won't make sense, but here goes:
> >>>>>>>>>
> >>>>>>>>> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might
> >>>>>>>>> not be the name of the enumeration...I forget). The
> >>>>>>>>> ZoneWideStoragePoolAllocator could use this to be less choosy
> >>>>>>>>> about whether a storage pool qualifies to be used.
> >>>>>>>>>
> >>>>>>>>> Does that make any sense?
> >>>>>>>>>
> >>>>>>>>> Thanks!
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su <
> edison...@citrix.com>
> >>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>> I think it's due to this:
> >>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
> >>>>>>>>>> There are zone-wide storages that may only work with one
> >>>>>>>>>> particular hypervisor. For example, a data store created on
> >>>>>>>>>> VCenter can be shared by all the clusters in a DC, but only for
> >>>>>>>>>> VMware. And CloudStack supports multiple hypervisors in one
> >>>>>>>>>> zone, so, somehow, we need a way to tell the mgmt server that a
> >>>>>>>>>> particular zone-wide storage can only work with certain
> >>>>>>>>>> hypervisors.
> >>>>>>>>>> You can treat the hypervisor type on the storage pool as another
> >>>>>>>>>> tag to help the storage pool allocator find a proper storage
> >>>>>>>>>> pool. But it seems a hypervisor type is not enough for your
> >>>>>>>>>> case, as your storage pool can work with both VMware and
> >>>>>>>>>> XenServer, but not with other hypervisors (that's a limitation
> >>>>>>>>>> of your current code's implementation, not something your
> >>>>>>>>>> storage itself can't do).
> >>>>>>>>>> So I'd think you need to extend ZoneWideStoragePoolAllocator
> >>>>>>>>>> with, maybe, a new allocator called
> >>>>>>>>>> SolidFireZoneWideStoragePoolAllocator, and replace the following
> >>>>>>>>>> line in applicationContext.xml:
> >>>>>>>>>>
> >>>>>>>>>> <bean id="zoneWideStoragePoolAllocator"
> >>>>>>>>>>       class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator" />
> >>>>>>>>>>
> >>>>>>>>>> with your SolidFireZoneWideStoragePoolAllocator.
> >>>>>>>>>> It also means that, for each CloudStack mgmt server deployment,
> >>>>>>>>>> the admin needs to configure applicationContext.xml for their
> >>>>>>>>>> needs.
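The subclass-and-swap suggestion might look like the sketch below. The base class here is a stand-in with an invented poolMatches() hook, not the real ZoneWideStoragePoolAllocator, and the widened hypervisor pair is specific to the SolidFire case discussed in this thread:

```java
import java.util.ArrayList;
import java.util.List;

public class AllocatorSketch {

    public enum HypervisorType { XenServer, VMware, KVM }

    public static class StoragePoolVO {
        final long id;
        final HypervisorType hypervisor;
        public StoragePoolVO(long id, HypervisorType hypervisor) {
            this.id = id;
            this.hypervisor = hypervisor;
        }
    }

    // Stand-in for ZoneWideStoragePoolAllocator with a protected hook a
    // subclass can override (the real class exposes no such hook today).
    public static class ZoneWideAllocator {
        protected boolean poolMatches(StoragePoolVO pool, HypervisorType wanted) {
            return pool.hypervisor == wanted; // stock behavior: exact match
        }

        public List<StoragePoolVO> select(List<StoragePoolVO> pools,
                                          HypervisorType wanted) {
            List<StoragePoolVO> suitable = new ArrayList<StoragePoolVO>();
            for (StoragePoolVO pool : pools) {
                if (poolMatches(pool, wanted)) {
                    suitable.add(pool);
                }
            }
            return suitable;
        }
    }

    // SolidFire variant: its pools are usable from XenServer or VMware,
    // so the hypervisor check widens instead of requiring an exact match.
    public static class SolidFireZoneWideAllocator extends ZoneWideAllocator {
        @Override
        protected boolean poolMatches(StoragePoolVO pool, HypervisorType wanted) {
            return wanted == HypervisorType.XenServer
                    || wanted == HypervisorType.VMware;
        }
    }
}
```

Swapping the Spring bean as described above would then substitute this subclass for the stock allocator.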
> >>>>>>>>>>
> >>>>>>>>>>> -----Original Message-----
> >>>>>>>>>>> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com]
> >>>>>>>>>>> Sent: Saturday, June 15, 2013 11:34 AM
> >>>>>>>>>>> To: dev@cloudstack.apache.org
> >>>>>>>>>>> Subject: Hypervisor Host Type Required at Zone Level for
> Primary
> >>>>>>>>>> Storage?
> >>>>>>>>>>>
> >>>>>>>>>>> Hi,
> >>>>>>>>>>>
> >>>>>>>>>>> I recently updated my local repo and noticed that we now
> >>>>>>>>>>> require a hypervisor type to be associated with zone-wide
> >>>>>>>>>>> primary storage.
> >>>>>>>>>>>
> >>>>>>>>>>> I was wondering what the motivation for this might be?
> >>>>>>>>>>>
> >>>>>>>>>>> In my case, my zone-wide primary storage represents a SAN.
> >>>>>>>>>>> Volumes are carved out of the SAN as needed and can currently
> >>>>>>>>>>> be utilized on both Xen and VMware (although, of course, once
> >>>>>>>>>>> you've used a given volume on one hypervisor type or the
> >>>>>>>>>>> other, you can only continue to use it with that hypervisor
> >>>>>>>>>>> type).
> >>>>>>>>>>>
> >>>>>>>>>>> I guess the point being my primary storage can be associated
> >>>>>>>>>>> with more than one hypervisor type because of its dynamic
> >>>>>>>>>>> nature.
> >>>>>>>>>>>
> >>>>>>>>>>> Can someone fill me in on the reasons behind this recent
> >>>>>>>>>>> change and on how I should proceed here?
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks!
> >>>>>>>>>>>
> >>>>>>>>>>> --
> >>>>>>>>>>> *Mike Tutkowski*
> >>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>>>> e: mike.tutkow...@solidfire.com
> >>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>> Advancing the way the world uses the cloud
> >>>>>>>>>>> <http://solidfire.com/solution/overview/?video=play> *(tm)*
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>
> >>
> >>
> >
> >
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solidfire.com/solution/overview/?video=play> *™*
