No worries, happy to share. Answers below.

Michael


On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS <gael.ther...@gmail.com> wrote:
>
> Hi Michael,
>
> Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a
> release target for the API versus config-drive mechanism? I'll have a look at
> an instance as soon as I'm able to log into one of my amphorae.

No, I have no timeline for the amphora-agent config update API. Either
way, the initial configuration will be installed via config drive; the
API is intended for runtime updates.
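
For reference, here is a minimal sketch of the kind of file the
controller generates and nova drops onto the config drive (option names
are from the Octavia configuration reference; the addresses, key, and
path are placeholders):

    [health_manager]
    controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555
    heartbeat_key = insecure

    [amphora_agent]
    # CA the agent uses to authenticate the controllers (assumed default path)
    agent_server_ca = /etc/octavia/certs/client_ca.pem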
>
> By the way, three sub-questions remain:
>
> 1°/ - What is the best place to push documentation improvements?

Patches are welcome!  All of our documentation is included in the
source code repository here:
https://github.com/openstack/octavia/tree/master/doc/source

Our patches follow the normal OpenStack gerrit review process
(OpenStack does not use pull requests).
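
If you haven't used gerrit before, the flow is roughly as follows
(assuming you have the git-review tool installed):

    git clone https://github.com/openstack/octavia
    cd octavia
    git checkout -b doc-improvements
    # edit files under doc/source, then:
    git commit -a
    git review   # uploads the change to gerrit for review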

> 2°/ - Is the amphora-agent an auto-generated file at image build time, or do I
> need to create one and give it to the diskimage-builder process?

The amphora-agent code itself is installed automatically by the
diskimage-builder process via the "amphora-agent" element.
The amphora-agent configuration file is auto-generated by the
controller and only installed at amphora boot time by nova using the
config drive capability.
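
If you want to verify this on a running amphora, something like the
following should show both pieces (nova labels the config drive
"config-2"; exact paths may vary by image):

    # the rendered config the agent is actually using:
    cat /etc/octavia/amphora-agent.conf
    # the raw config drive nova attached at boot:
    sudo mount -r -L config-2 /mnt && ls /mnt/openstack/latest/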

> 3°/ - The amphora agent source code is available at
> https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent
> isn't it?

Yes, the agent code that runs in the amphora instance is all under
https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends
in the main octavia repository.

>
> Sorry for the volume of questions, but I prefer to really understand the
> underlying mechanisms before we go live with the solution.
>
> G.
>
> On Wed, Aug 1, 2018 at 2:36 AM Michael Johnson <johnso...@gmail.com> wrote:
>>
>> Hi Flint,
>>
>> Happy to help.
>>
>> Right now the list of controller endpoints is pushed at boot time and
>> loaded into the amphora via config drive/nova.
>> In the future we plan to be able to update this list via the amphora
>> API, but it has not been developed yet.
>>
>> I am pretty sure CentOS is getting the config file, as our gate job
>> that runs with a CentOS 7 amphora has been passing. It should be in the
>> same /etc/octavia/amphora-agent.conf location as the Ubuntu-based
>> amphorae.
>>
>> Michael
>>
>>
>>
>> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS <gael.ther...@gmail.com> wrote:
>> >
>> > Hi Michael, thanks a lot for that explanation, it's actually how I
>> > envisioned the flow.
>> >
>> > I'll have to produce a diagram for my peers' understanding; maybe I can
>> > share it with you.
>> >
>> > There is still one point that seems to be a little bit odd to me.
>> >
>> > How does the amphora agent know where to find the health manager and worker
>> > services? Is that because the worker sends the agent some catalog
>> > information, or because we set that at diskimage-create time?
>> >
>> > If so, I think the CentOS-based amphora is missing the agent.conf, because
>> > currently my VMs don't have any.
>> >
>> > Once again thanks for your help!
>> > On Tue, Jul 31, 2018 at 6:15 PM Michael Johnson <johnso...@gmail.com> wrote:
>> >>
>> >> Hi Flint,
>> >>
>> >> We don't have a logical network diagram at this time (it's still on
>> >> the to-do list), but I can talk you through it.
>> >>
>> >> The Octavia worker, health manager, and housekeeping processes need to
>> >> be able to reach the amphorae (service VMs at this point) over the
>> >> lb-mgmt-net on TCP 9443. They know the amphora IP addresses on the
>> >> lb-mgmt-net via the database and the information we save from the
>> >> compute driver (i.e., which IP was assigned to the instance).
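>> >>
>> >> If you are curious, that mapping is visible in the octavia database; a
>> >> rough illustrative query (table and column names may differ between
>> >> releases):
>> >>
>> >>     SELECT id, lb_network_ip, status FROM amphora;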
>> >>
>> >> The Octavia API process does not need to be connected to the
>> >> lb-mgmt-net at this time. It only connects to the messaging bus and
>> >> the Octavia database. Provider drivers may have other connectivity
>> >> requirements for the Octavia API.
>> >>
>> >> The amphorae also send UDP packets back to the health manager on port
>> >> 5555. This is the heartbeat packet from the amphora; it contains the
>> >> health and statistics from that amphora. Each amphora knows its list
>> >> of health manager endpoints from the "controller_ip_port_list" option
>> >> in its configuration file
>> >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list).
>> >> Each amphora will rotate through that list of endpoints to reduce the
>> >> chance of a network split impacting the heartbeat messages.
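>> >>
>> >> Conceptually the rotation is just a round-robin over the configured
>> >> endpoints. A toy Python sketch of the idea (not the actual agent code;
>> >> the real heartbeats are HMAC-signed with the configured heartbeat key):
>> >>
>> >>     import itertools, socket, time
>> >>
>> >>     # parsed from controller_ip_port_list (placeholder addresses)
>> >>     endpoints = [("192.0.2.10", 5555), ("192.0.2.11", 5555)]
>> >>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>> >>     for addr in itertools.cycle(endpoints):
>> >>         sock.sendto(b"<signed health/stats payload>", addr)
>> >>         time.sleep(10)  # the real interval is configurable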
>> >>
>> >> This is the only traffic that passes over this network. All of it is
>> >> IP based and can be routed (it does not require L2 connectivity).
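>> >>
>> >> In practice that means only two sets of rules are needed on the
>> >> lb-mgmt-net. For example (the security group names here are just the
>> >> ones our devstack plugin uses; yours may differ):
>> >>
>> >>     # controllers -> amphorae (agent REST API)
>> >>     openstack security group rule create --protocol tcp \
>> >>         --dst-port 9443 lb-mgmt-sec-grp
>> >>     # amphorae -> health manager (heartbeats)
>> >>     openstack security group rule create --protocol udp \
>> >>         --dst-port 5555 lb-health-mgr-sec-grp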
>> >>
>> >> Michael
>> >>
>> >> On Tue, Jul 31, 2018 at 2:00 AM Flint WALRUS <gael.ther...@gmail.com> 
>> >> wrote:
>> >> >
>> >> > Hi Folks,
>> >> >
>> >> > I'm currently deploying the Octavia component into our testing
>> >> > environment, which is based on Kolla.
>> >> >
>> >> > So far I'm quite enjoying it as it is pretty much straightforward
>> >> > (except for some documentation pitfalls), but I'm now facing a weird
>> >> > and hard-to-debug situation.
>> >> >
>> >> > I actually have a hard time understanding how amphorae communicate
>> >> > back and forth with the control plane components.
>> >> >
>> >> > From my understanding, as soon as I create a new LB, the control plane
>> >> > spawns an instance using the configured Octavia flavor and image,
>> >> > attaches it to the lb-mgmt-net and to the user-provided subnet.
>> >> >
>> >> > What I think I'm misunderstanding is the discussion that follows
>> >> > between the amphora and the different components such as the
>> >> > health manager/housekeeping, the API, and the worker.
>> >> >
>> >> > How is the amphora agent able to find my control plane? Does the
>> >> > health manager or the Octavia worker initiate the communication to the
>> >> > amphora on port 9443 and thereby give the agent the API/control plane
>> >> > internalURL?
>> >> >
>> >> > If anyone has a diagram of the workflow I would be more than happy ^^
>> >> >
>> >> > Thanks a lot in advance to anyone willing to help :D
>> >> >

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
