> Such use case (authoritative data) is fine, I was merely speaking about
> caching server before.

Understood. Interesting. Was my understanding of DynDB correct? It reads from a
backend DB into memory, to eliminate the latency?

In my proposed "workaround" (a hidden primary server with DLZ and multiple
secondary caching nodes) we gain speed but lose the dynamic element, meaning
that if a record is updated in the DB, the change will only be reflected on the
caching servers after the TTL expires. With DynDB, if I understand correctly, a
reload would be necessary after any update.

I am trying to understand how a hidden primary with DLZ, paired with multiple
secondaries with dynamic zones leveraging nsupdate, would work. Do the
secondaries poll the primary, or does the primary send updates to the
secondaries when a record is updated? I believe it's the former, just trying to
clarify.

Regards
Hamid Maadani
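For context, here is roughly the layout I have in mind. Names, addresses and
the key are placeholders, and I am assuming ordinary zone transfers between the
hidden primary and the secondaries (so the secondaries refresh on the SOA
timers and can additionally be nudged by NOTIFY); in my case the primary's data
would come from the DLZ module rather than a zone file, assuming the driver
supports transfers:

  # hidden primary (not listed in the zone's NS records)
  zone "example.com" {
      type primary;
      file "db.example.com";
      allow-transfer { 192.0.2.11; 192.0.2.12; };
      also-notify { 192.0.2.11; 192.0.2.12; };  # push NOTIFY on every change
      # for the nsupdate variant the zone would also need something like:
      # allow-update { key "ddns-key"; };
  };

  # each public-facing secondary
  zone "example.com" {
      type secondary;
      primaries { 192.0.2.10; };  # polled per the SOA refresh, plus NOTIFY
      file "secondary/db.example.com";
  };

In the nsupdate variant, a record change would then be pushed to the hidden
primary with something like (key file and record made up):

  $ nsupdate -k /etc/bind/ddns.key
  server 192.0.2.10
  zone example.com
  update add www.example.com. 300 IN A 192.0.2.50
  send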
-------- Original message --------
From: Ondřej Surý <ond...@isc.org>
Date: 8/24/22 02:32 (GMT-08:00)
To: hamid <ha...@dexo.tech>
Cc: ML BIND Users <bind-users@lists.isc.org>
Subject: Re: Thread handling

On 24. 8. 2022, at 11:01, hamid <ha...@dexo.tech> wrote:

>> Perhaps, describing the use case first (why do you want to use MongoDB at
>> all) might have the benefit of not wasting time on your end.
>
> Forgot to answer this, my use case would be the same as someone who uses a
> SQL DB backend, I imagine: to be able to configure multiple BIND endpoints
> using the same backend DB instead of configuration files, so there is no
> need to worry about change propagation and the use of configuration
> management tools like Chef, Ansible, etc. I just prefer to use NoSQL
> backends like MongoDB or Amazon's DocumentDB.
>
> If there is any specific downside to using NoSQL databases, or any reason it
> would not make sense, I would appreciate it if you could explain it a bit. I
> am aware of the latency it would introduce, but was under the impression
> that DynDB was introduced to address that.

Such use case (authoritative data) is fine, I was merely speaking about caching
server before.

You have to calculate the benefit-cost ratio yourself compared to other
provisioning systems - f.e. a hidden primary with multiple secondaries updated
with nsupdate works reasonably well in smaller deployments.

Cheers,
Ondrej
--
Ondřej Surý (He/Him)
ondrej@isc.org

My working hours and your working hours may be different. Please do not feel
obligated to reply outside your normal working hours.

Regards
Hamid Maadani
-------- Original message --------
From: Hamid Maadani <ha...@dexo.tech>
Date: 8/24/22 01:08 (GMT-08:00)
To: Ondřej Surý <ond...@isc.org>, Evan Hunt <e...@isc.org>
Cc: bind-us...@lists.isc.org
Subject: Re: Thread handling

> BIND does have dyndb support, since 9.11.
> As far as I know, though, the only two dyndb modules in existence are
> the bind-dyndb-ldap module that was written by Red Hat as part of
> FreeIPA, and a toy module used for testing. If you were interested in
> writing your MongoDB module for dyndb instead of DLZ, I'd be quite
> excited about that, I've long hoped the API would get more use.

Interesting. I looked in the contrib directory and only found the DLZ modules
there. Can you please point me in the direction of the source code for that toy
module? I would definitely work on a mongo-dyndb implementation as well, when
time permits.
> I am fairly confident that any advantage from shared cache will be lost
> because of the extra latency caused by communication with the MongoDB (or
> any other no-sql systems).
> Perhaps, describing the use case first (why do you want to use MongoDB at
> all) might have the benefit of not wasting time on your end.

This is a bit confusing to me. As I understand it, a normal BIND zone is loaded
into memory, and requests are served from that in-memory copy, hence super
fast. DLZ modules, however, make a call to the backend database per query,
which introduces latency (in my tests, 50 ms when using an Atlas cluster in the
same geographical region). However, why would this be specific to NoSQL
databases? Doesn't this also apply to any SQL-based DB?

Now, an advantage of DLZ is that any change you make to the backend DB takes
effect immediately, without the need to reload the server. Not a huge advantage
in my personal opinion, but I do see the use case for it.
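For reference, this is roughly how the module is loaded today, via the dlopen
DLZ driver (the module path and connection string are placeholders for my
setup):

  dlz "mongodb" {
      # every query for zones served by this module goes through the shared
      # object, which is where the ~50 ms round trip to the backend comes from
      database "dlopen /usr/lib/bind/dlz_mongodb.so mongodb://db.example.internal:27017/dns";
  };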
Looking for a way to load queried records from the backend database into memory
to speed up the responses, I found posts about the DynDB API. If Evan can
kindly point me in the right direction there, I will develop both DLZ and DynDB
modules for MongoDB, as I do see use cases for each one.

The caching question that I asked was more about having a workaround without
DynDB, because I was under the impression that the DynDB API is not available
at the moment. My understanding of a BIND caching server (and admittedly, I am
by no means an advanced user when it comes to BIND) is that it queries records
from another server, caches them for the life (TTL) of each record, and serves
them from there. This cache exists in memory, correct?
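As a sanity check of that mental model, I would expect repeated queries against
such a caching server to show the TTL counting down while the answer is served
from memory (hypothetical record and addresses):

  $ dig @192.0.2.53 www.example.com A +noall +answer
  www.example.com.    300  IN  A  192.0.2.50
  $ dig @192.0.2.53 www.example.com A +noall +answer
  www.example.com.    287  IN  A  192.0.2.50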
So in theory, if I were to use DLZ to keep my records in a backend DB, I could
create a BIND server with the DLZ module enabled (let's say a Docker image),
and then put a caching server in front of it which is "customer facing". That
way, all queries would come to the caching server and be served super fast
because they are cached in memory, while the actual records live in a backend
DB somewhere.

Long story short, I was trying to see if the same can be achieved with one
single instance instead of two, which sounds like it cannot be done.
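Concretely, I picture the customer-facing instance as a plain caching/forwarding
server pointed at the DLZ-backed container, something along these lines
(untested; addresses and the zone name are placeholders):

  # customer-facing caching instance
  options {
      recursion yes;              # answer clients from the in-memory cache
      allow-query { any; };
  };

  zone "example.com" {
      type forward;
      forward only;
      forwarders { 10.0.0.2; };   # the hidden DLZ-backed container
  };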
Regards
Hamid Maadani

August 24, 2022 12:40 AM, "Ondřej Surý" <ond...@isc.org> wrote:

On 24. 8. 2022, at 8:48, Evan Hunt <e...@isc.org> wrote:

>> In the absence of that, is caching from DLZ a possible configuration
>> on a single BIND server?
>
> Not DLZ, no. And I'm not sure dyndb can be used for the cache database,
> either; do you know something about it that I don't?
>
> It would definitely be easier to *make* dyndb work for the cache; it has all
> the necessary API calls, and DLZ doesn't. But I don't know a way to configure
> it to take the place of the cache currently. If you do, please educate me.

I am fairly confident that any advantage from shared cache will be lost because
of the extra latency caused by communication with the MongoDB (or any other
no-sql systems).

Perhaps, describing the use case first (why do you want to use MongoDB at all)
might have the benefit of not wasting time on your end.

Ondrej
--
Ondřej Surý (He/Him)
ondrej@isc.org

My working hours and your working hours may be different. Please do not feel
obligated to reply outside your normal working hours.
--
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from
this list
ISC funds the development of this software with paid support subscriptions.
Contact us at https://www.isc.org/contact/ for more information.
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users