Hello all,
I had a follow-up query; apologies if it is an obvious question. Once I
implement the socket call as a sidecar process, say as a Lua script that
reads the new configuration from the portal into a variable, how would I
then update that variable within haproxy so it can be used by the Lua
script loaded with the lua_load directive? I understand we can do it
through the stats socket via the CLI and socat, but is there a way to do
it from within the Lua script itself (the one running as the sidecar
process), so that I can check and update the portal config variable every
5 minutes automatically?
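
To make the question concrete, the sidecar loop I have in mind would look
roughly like this (a sketch only: fetch_portal_config is a placeholder for
my existing portal call, the map and socket paths are examples, and it
assumes a stats socket configured at admin level plus socat installed):

```lua
-- Sidecar sketch: runs under a normal Lua interpreter as a separate
-- process, NOT inside haproxy.
while true do
  local cfg = fetch_portal_config() -- placeholder: the existing portal call
  -- Push the new value into haproxy through the stats socket; the map
  -- entry then becomes visible to the Lua code loaded with lua_load.
  os.execute(string.format(
    "echo 'set map /etc/haproxy/portal.map portal_cfg %s'" ..
    " | socat stdio /var/run/haproxy.sock", cfg))
  os.execute("sleep 300") -- re-check every 5 minutes
end
```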


Thank you

On Thu, 27 May 2021, 17:37 reshma r, <[email protected]> wrote:

> Hi, thank you for the detailed and informative reply! It definitely helped
> clarify things.
> Indeed I had disabled chroot to read the files at runtime, but I have since
> switched to reading during the init phase. Thank you for the tips on the
> architecture aspects as well. I will study and explore these options.
>
> Thanks,
> Reshma
>
> On Thu, May 27, 2021 at 11:50 AM Willy Tarreau <[email protected]> wrote:
>
>> Hi,
>>
>> On Wed, May 26, 2021 at 10:43:17PM +0530, reshma r wrote:
>> > Hi Tim, thanks a lot for the reply. I am not familiar with what a
>> > sidecar process is; I will look into it. If it is specific to haproxy,
>> > could you point to some relevant documentation? That would be helpful.
>>
>> It's not specific to haproxy; it's a general principle consisting in
>> having another process deal with certain tasks. See it as an assistant
>> if you want. We can draw a parallel at lower layers so that it might be
>> clearer. Your kernel deals with routing tables, yet the kernel never
>> manipulates files by itself, nor does it stop processing packets to
>> dump a routing table update into a file. Instead it's up to separate
>> processes to perform such slow tasks and to keep everything in sync.
>>
>> > > I am making a socket call which periodically checks whether the
>> > > portal has been changed (from within a haproxy action).
>> >
>> > Leaving aside the writing-to-file bit for a moment, is it otherwise okay
>> > to do the above within haproxy alone, and read the config fetched from
>> > the portal into a global variable instead of saving it to a file? Or is
>> > that not an advisable solution? Actually this is what I am doing at
>> > present and I have not observed any issues with performance...
>>
>> What you must never ever do is read/write files at run time, as this
>> is extremely slow and will pause your traffic. Loading a file from Lua
>> during boot can be acceptable, for example, since at that point there
>> is no network processing in progress. We only slightly discourage doing
>> so because most often the code starts by reading during init, and two
>> months later it ends up being done at runtime (and people start to
>> disable chroots and permission drops in order to do this).
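>>
>> For illustration, such a boot-time read from Lua could look like this
>> (a sketch; the file path and fetch name are arbitrary). Top-level code
>> in a file passed to lua_load runs during init, before any traffic:
>>
>> ```lua
>> -- portal.lua, referenced by "lua_load /etc/haproxy/portal.lua".
>> -- This top level runs once at init, so a blocking read is harmless here.
>> local f = io.open("/etc/haproxy/portal_cfg.txt", "r")
>> local portal_cfg = f and f:read("*a") or ""
>> if f then f:close() end
>>
>> core.register_fetches("portal_cfg", function(txn)
>>   return portal_cfg -- served from memory at runtime, no file I/O
>> end)
>> ```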
>>
>> It's possible to read/set process-wide variables from the CLI, so maybe
>> you can send some events there. Also, as Tim mentioned, it's possible to
>> read/set maps from the CLI, and those are also readable from Lua. That
>> may be another option to pass live info between an external process and
>> your haproxy config or Lua code. In fact, nowadays plenty of people are
>> (ab)using maps as dynamic routing tables or to store dynamic thresholds.
>> What is convenient with them is that they're loaded during boot, and you
>> can feed the whole file over the CLI at runtime to pass updates. Maybe
>> that can match your needs.
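>>
>> As a small sketch of that map-based option (the file path, fetch name
>> and key are arbitrary):
>>
>> ```lua
>> -- In the Lua code loaded by haproxy: the map file is read once at boot.
>> local portal_map = Map.new("/etc/haproxy/portal.map", Map._str)
>>
>> core.register_fetches("portal_lookup", function(txn, key)
>>   -- An external process can change the entry at runtime with e.g.:
>>   --   echo "set map /etc/haproxy/portal.map <key> <value>" \
>>   --     | socat stdio /var/run/haproxy.sock
>>   return portal_map:lookup(key) -- always reflects the latest CLI update
>> end)
>> ```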
>>
>> Last point: as a general architecture rule, as soon as you're using
>> multiple components (portal, LB, agents, etc.), it's critically important
>> to define which one is authoritative over the others. Once you do that,
>> you need to make sure that the other ones can be sacrificed and
>> restarted. In your case I suspect the authority is the portal and that
>> the rest can be killed and restarted at any time. This means that the
>> trust you put in such components must always be lower than the trust you
>> put in the authority (the portal, I presume).
>>
>> Thus these components must not play games like dumping files by
>> themselves. In the best case they could be consulted to retrieve a
>> current state to be reused in case of a reload. But your portal should
>> be the one imposing its desired state on the others. For example,
>> imagine you face a bug, a crash, an out-of-memory condition, or any
>> other situation where your haproxy dies in the middle of a dump to this
>> file. Your file is ruined and you cannot reuse it. Possibly you can't
>> even restart the service anymore because your corrupted file causes
>> startup errors.
>>
>> This means you'll necessarily have to make sure that a fresh new copy
>> can be instantly delivered by the portal just to cover this unlikely
>> case. If you implement this capability in your portal, then it should
>> become the standard way to produce that file (you don't want the file
>> to come from two different sources, do you?). Then you can simply have
>> a sidecar process dedicated to communication with this portal in charge
>> of feeding such updates via the CLI at runtime (or maybe you can retrieve
>> them directly from the portal using Lua or whatever other solution that
>> best suits your needs).
>>
>> Reasoning in terms of worst-case scenarios will help you figure out the
>> best control flow and design a solid solution that covers all use cases
>> at once.
>>
>> Hoping this helps,
>> Willy
>>
>
