On 24. 08. 22 17:48, hamid wrote:
> Such a use case (authoritative data) is fine; I was merely speaking about a caching server before.

> Understood. Interesting. Was my understanding of DynDB correct? It reads from a backend DB into memory, to eliminate the latency?

Not necessarily. The DynDB API simply allows you to hook into BIND internals and gives you access to all the gory details of the database API.

It's up to the dyndb module implementation to decide what to do with it.

A complex example is:
https://pagure.io/bind-dyndb-ldap

It establishes a persistent connection to an RFC 4533-compliant LDAP server (that's no-SQL, see! :-)), pulls all the data into BIND server memory, and keeps monitoring the server for changes. Any changes on the LDAP side are applied to the DNS side in real time, and it also works the other way around, writing DNS updates back into LDAP.

This is one possible way it can be implemented. With all the data in memory it is "simple" to turn on inline-signing and thus get proper DNSSEC support "for free". Doing so with incomplete data would be far more challenging to implement in BIND.
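For illustration, loading a dyndb module from named.conf looks roughly like this. The module name, driver path, and parameters below are invented for this sketch; the real ones come from the module's own documentation (bind-dyndb-ldap ships its own parameter list):

```
/* Hypothetical named.conf fragment: load a dyndb module.
   Driver path and parameters are illustrative only. */
dyndb "example-ldap" "/usr/lib64/bind/ldap.so" {
    uri "ldap://ldap.example.com";
    base "cn=dns, dc=example, dc=com";
};
```

The parameter block between the braces is passed verbatim to the module; BIND itself does not interpret it.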


> In my proposed "workaround" (a hidden primary server with DLZ and multiple secondary caching nodes) we gain speed but lose the dynamic element. Meaning if a record is updated in the DB, the change will be reflected on the caching servers only after the TTL expires.

It sounds almost like you are confusing the authoritative and recursive roles? Auth servers can be updated at any time using dynamic updates or zone transfers. All that's needed is for the primary ("master") to send out a NOTIFY message to the secondaries. There is no need to wait for the TTL to expire.
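As a sketch of that setup (all zone names and addresses below are invented for illustration; the type primary/secondary and primaries keywords are the newer BIND 9.16+ spellings of master/slave/masters):

```
/* Hidden primary: push NOTIFY to the secondaries on every update. */
zone "example.com" {
    type primary;
    file "example.com.db";
    also-notify { 192.0.2.10; 192.0.2.11; };
    allow-transfer { 192.0.2.10; 192.0.2.11; };
};

/* On each public secondary: transfer the zone when NOTIFY arrives
   (or at the SOA refresh interval as a fallback). */
zone "example.com" {
    type secondary;
    primaries { 192.0.2.1; };   /* the hidden primary */
    file "secondary/example.com.db";
};
```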


> With DynDB, a reload would be necessary after any update, if I understand correctly.

No, see above.


> Trying to understand how a hidden primary with DLZ, paired with multiple secondaries with dynamic zones leveraging nsupdate, would work. Do the secondaries poll the primary? Or does the primary send updates to the secondaries when a record is updated? I believe it's the former, just trying to clarify.

Depending on the amount of data, the source system/protocol, and other local conditions, you might be able to use ways other than DLZ or DynDB to export data in DNS format. For example, implementing "something" which pretends to do only AXFR/IXFR/NOTIFY might be another option. Yet another option might be _something else_ based on AXFR/nsupdate.
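For the nsupdate-based option, the exporting "something" could emit a batch file of dynamic updates and feed it to nsupdate whenever a record changes in the backend. A minimal sketch (server address, zone, and record are invented; the update is normally authenticated with a TSIG key):

```
server 192.0.2.1
zone example.com
update delete www.example.com. A
update add www.example.com. 300 A 192.0.2.50
send
```

Saved as, say, update.txt, this could be sent with something like `nsupdate -k /path/to/tsig.key update.txt`; the primary then NOTIFYs its secondaries as usual.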

I hope it helps.
Petr Špaček



Regards
Hamid Maadani


-------- Original message --------
From: Ondřej Surý <ond...@isc.org>
Date: 8/24/22 02:32 (GMT-08:00)
To: hamid <ha...@dexo.tech>
Cc: ML BIND Users <bind-users@lists.isc.org>
Subject: Re: Thread handling

On 24. 8. 2022, at 11:01, hamid <ha...@dexo.tech> wrote:

> Perhaps, describing the use case first (why do you want to use MongoDB at all) might have the benefit of not wasting time on your end.

Forgot to answer this. My use case would be the same as that of someone who uses a SQL DB backend, I imagine: to be able to configure multiple BIND endpoints using the same backend DB instead of configuration files, so there is no need to worry about change propagation or configuration management tools like Chef, Ansible, etc.
I just prefer no-sql backends like MongoDB or Amazon's DocumentDB.

If there is any specific downside to using no-sql databases, or any reason it would not make sense, I would appreciate it if you could explain it a bit. I am aware of the latency it would introduce, but was under the impression that DynDB was introduced to address that.

Such a use case (authoritative data) is fine; I was merely speaking about a caching server before.

You have to calculate the benefit-cost ratio yourself, compared to other provisioning systems; e.g. a hidden primary with multiple secondaries updated with nsupdate works reasonably well in smaller deployments.

Cheers,
Ondrej
--
Ondřej Surý (He/Him)
ond...@isc.org

My working hours and your working hours may be different. Please do not feel obligated to reply outside your normal working hours.


Regards
Hamid Maadani


-------- Original message --------
From: Hamid Maadani <ha...@dexo.tech>
Date: 8/24/22 01:08 (GMT-08:00)
To: Ondřej Surý <ond...@isc.org>, Evan Hunt <e...@isc.org>
Cc: bind-users@lists.isc.org
Subject: Re: Thread handling

> BIND does have dyndb support, since 9.11.
> As far as I know, though, the only two dyndb modules in existence are
> the bind-dyndb-ldap module that was written by Red Hat as part of
> FreeIPA, and a toy module used for testing. If you were interested in
> writing your MongoDB module for dyndb instead of DLZ, I'd be quite
> excited about that, I've long hoped the API would get more use.

Interesting. I looked in the contrib directory and only found the DLZ modules there. Can you please point me in the direction of the source code for that toy module? I would definitely work on a mongo-dyndb implementation as well, when time permits.

> I am fairly confident that any advantage from shared cache will be lost because of the extra latency caused by communication with MongoDB (or any other no-sql system).

> Perhaps, describing the use case first (why do you want to use MongoDB at all) might have the benefit of not wasting time on your end.

This is a bit confusing to me. As I understand it, a normal BIND zone is loaded into memory, and requests are served from that in-memory copy, hence super fast. DLZ modules, however, make a call to the backend database per query, which introduces latency (in my tests, 50ms when using an Atlas cluster in the same geographical region). However, why would this be specific to no-sql databases? Doesn't this also apply to any SQL-based DB?

Now, an advantage of DLZ is that any change you make to the backend DB takes effect immediately, without the need to reload the server. Not a huge advantage in my personal opinion, but I do see the use case for it. While looking for a way to load queried records from a backend database into memory to speed up responses, I found posts about the DynDB API. If Evan can kindly point me in the right direction there, I will develop both DLZ and DynDB modules for MongoDB, as I do see use cases for each one.

The caching question that I asked was more about having a workaround without DynDB, because I was under the impression that the DynDB API is not available at the moment.

My understanding of a BIND caching server (and admittedly, I am by no means an advanced user when it comes to BIND) is that it queries records from another server, caches them for the life (TTL) of each record, and serves them from that cache. This cache exists in memory, correct? So in theory, if I were to use DLZ to keep my records in a backend DB, I could create a BIND server with the DLZ enabled (say, a docker image), and then put a caching server in front of it which is "customer facing". That way, all queries would come to the caching server and be served super fast because they are cached in memory, but the actual records would live in a backend DB somewhere.

Long story short, I was trying to see if the same can be achieved with one single instance instead of two, which sounds like it cannot be done.
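One way the two-instance setup described above is often sketched is a customer-facing resolver that forwards every query to the hidden DLZ-backed server and caches the answers for their TTL. The address below is invented, and whether such a forward-only frontend is appropriate depends on local policy for recursion:

```
/* Customer-facing caching server (illustrative sketch). */
options {
    recursion yes;
    forward only;
    forwarders { 192.0.2.1; };   /* the hidden DLZ-backed BIND instance */
};
```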

Regards
Hamid Maadani

August 24, 2022 12:40 AM, "Ondřej Surý" <ond...@isc.org> wrote:

    On 24. 8. 2022, at 8:48, Evan Hunt <e...@isc.org> wrote:

    > In the absence of that, is caching from DLZ a possible configuration
    > on a single BIND server?

    Not DLZ, no. And I'm not sure dyndb can be used for the cache database,
    either; do you know something about it that I don't?

    It would definitely be easier to *make* dyndb work for the cache;
    it has all the necessary API calls, and DLZ doesn't. But I don't
    know a way to configure it to take the place of the cache currently.
    If you do, please educate me.

    I am fairly confident that any advantage from shared cache will be
    lost because of the extra latency caused by communication with
    MongoDB (or any other no-sql system).

    Perhaps, describing the use case first (why do you want to use
    MongoDB at all) might have the benefit of not wasting time on your
    end.

    Ondrej


--
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.

