Here's what I see if I look at this from a matter-of-fact standpoint.

When Nova works with libvirt, libvirt may hold things Nova doesn't know about, but Nova doesn't care: Nova's database is the only world Nova cares about. This gives Nova a single source of data.

With Magnum, if we take data from both our database and the k8s API, we will have a split view of the world. This has both positives and negatives.

On the positive side, it allows end-users to do whatever they want with their cluster; they don't have to go through Magnum to do things, yet Magnum will still report the correct status. It lets end-users choose the tooling they prefer. Another positive is that, because each clustering service is architected slightly differently, each service can report what it natively knows, without Magnum trying to guess at some commonality between them.

A problem I see arising is the added complexity of gathering data from separate clusters. If I have one of every cluster type, what happens when I need to get my list of containers? I would rather make one call to the DB and get them; otherwise I'll have to call k8s, then call swarm, then mesos, and aggregate all of the results before returning. I don't know whether the only things we will be retrieving from k8s are k8s-unique objects, but this is a situation that comes to my mind. Another negative is the possibility that the API does not perform as well as the DB: if the Nova instance running the k8s API is heavily overloaded, a k8s API call will take longer than a call to the DB.
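To make the fan-out concern concrete, here is a minimal sketch of what aggregating a container list across backends could look like. The three `list_*_containers` functions are hypothetical stand-ins for the real k8s/swarm/mesos client calls, not Magnum's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-backend listing functions; real code would call the
# k8s, swarm, and mesos client libraries here.
def list_k8s_containers():
    return ["k8s-pod-1", "k8s-pod-2"]

def list_swarm_containers():
    return ["swarm-c-1"]

def list_mesos_containers():
    return ["mesos-t-1"]

def list_all_containers():
    """Fan out to every cluster backend and aggregate the results.

    Compare with the single-DB alternative, which is one query
    (roughly: SELECT * FROM containers).
    """
    backends = [list_k8s_containers, list_swarm_containers,
                list_mesos_containers]
    containers = []
    # The calls run concurrently, but total latency is still bounded by
    # the slowest backend -- e.g. an overloaded k8s API server.
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        for result in pool.map(lambda fn: fn(), backends):
            containers.extend(result)
    return containers
```

Even with the calls parallelized, the aggregate is only as fast as the slowest cluster API, which is the performance point above.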

Let me know if I'm way off-base in any of these points. I'm not going to give an opinion at this point, this is just how I see things.

On 9/17/2015 7:53 AM, Jay Lau wrote:
Does anyone have comments/suggestions on this? Thanks!

On Mon, Sep 14, 2015 at 3:57 PM, Jay Lau <jay.lau....@gmail.com> wrote:

Hi Vikas,

Thanks for starting this thread. Here are some of my comments.

There are two reasons that Magnum wants to get k8s resources via the k8s
API:
1) Native client support.
2) With the current implementation, we cannot get the pods for a
replication controller, because Magnum only persists the replication
controller info in the Magnum DB.

With the objects-from-bay bp, Magnum will always call the k8s API to
get all pod/service/rc objects. Can you please share your concerns
about why we need to persist those objects in the Magnum DB? If we
persist two copies of the objects, we may need to sync the Magnum DB
with k8s periodically.
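To illustrate the sync burden that keeping two copies would impose, here is a minimal sketch of one reconciliation pass. The dict-of-specs shape and the function name are assumptions for illustration, not Magnum code; a real sync would also have to run per bay and handle conflicts:

```python
def sync_magnum_db_with_k8s(db_objects, k8s_objects):
    """One reconciliation pass: make the persisted copy match k8s.

    Both arguments are hypothetical {name: spec} mappings. If Magnum
    persisted its own copy of pods/services/rcs, something like this
    would have to run periodically for every bay.
    """
    # Objects created or changed out-of-band via the native k8s client.
    for name, spec in k8s_objects.items():
        if db_objects.get(name) != spec:
            db_objects[name] = spec          # create or refresh stale row
    # Objects deleted behind Magnum's back.
    for name in list(db_objects):
        if name not in k8s_objects:
            del db_objects[name]             # purge orphaned row
    return db_objects
```

Always reading through the k8s API, as objects-from-bay proposes, avoids this reconciliation loop entirely.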

Thanks!

<https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>

2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvika...@gmail.com>:

Hi Team,

As per the objects-from-bay blueprint implementation [1], all calls to the Magnum DB
are being skipped, for example pod.create() etc.

Are we not going to use the Magnum DB at all for pods/services/rc?


Thanks
Vikas Choudhary


[1] https://review.openstack.org/#/c/213368/


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,

Jay Lau (Guangya Liu)






--
Thanks,

Ryan Rossiter (rlrossit)


