RE: [URL Verdict: Neutral][Non-DoD Source] Re: Attempting to configure an ISC BIND repository on Red Hat Linux 7.9

2022-04-29 Thread DeCaro, James John (Jim) CIV DISA FE (USA) via bind-users
I set the repository up manually because the system does not have dnf 
installed, and the ISC instructions provided an alternative method to download 
the repository file.  The "download" actually resulted in a text display of the 
copr repo file contents, which I added both in whole and in part to the locally 
created repo file for testing.  All variations resulted in the same error.
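(For reference, a copr .repo stanza generally looks something like the sketch 
below; the section name, baseurl, and gpgkey here are illustrative, based on the 
epel-7 variant of the isc/bind repository, not a copy of the exact file I 
created.)

# illustrative sketch of a copr repo file for /etc/yum.repos.d/ (epel-7 variant)
[copr:copr.fedorainfracloud.org:isc:bind]
name=Copr repo for bind owned by isc
baseurl=https://download.copr.fedorainfracloud.org/results/isc/bind/epel-7-$basearch/
type=rpm-md
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/isc/bind/pubkey.gpg
repo_gpgcheck=0
skip_if_unavailable=True
enabled=1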

Thank you so much for your input, I will hopefully test it sometime today.

V/R
Jim DeCaro

-Original Message-
From: Michał Kępień  
Sent: Thursday, April 28, 2022 4:55 PM
To: DeCaro, James John (Jim) CIV DISA FE (USA) 
Cc: bind-users@lists.isc.org; Mcallister, Reginald CTR DISA FE (USA) 

Subject: [URL Verdict: Neutral][Non-DoD Source] Re: Attempting to configure an 
ISC BIND repository on Red Hat Linux 7.9


> dnf is not available, therefore I am using yum.
> 
> Red Hat Linux 7.9 virtual machine on VMware; it has internet connectivity.
> 
> Set up a local repository in 
> /etc/yum.repos.d/download.copr.fedorainfracloud.org_results_isc_bind_epel-8-_.repo:

Is something (e.g. policy) forcing you to set this repository up
manually?  IMHO it would have been simpler with the "copr" yum plugin.
CentOS 7 allows installing it via "yum install yum-plugin-copr", though
RHEL 7 seems to not have heard of a "yum-plugin-copr" package, so you
have to prod it a bit (similarly for EPEL, which you are going to need
for libnghttp2 if you plan to use the stable "bind" repository, which
currently contains BIND 9.18):

# yum install http://mirror.centos.org/centos/7/os/x86_64/Packages/yum-plugin-copr-1.1.31-54.el7_8.noarch.rpm
# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum copr enable isc/bind
# yum install isc-bind

(I just tested these commands on a fresh RHEL 7 Docker image.)
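
If the plugin route works, the new repository should then show up among the 
enabled repositories - something like the check below; the exact repository id 
the plugin assigns may differ:

# yum repolist enabled | grep -iE 'copr|bind'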

> now receiving error: 
> "https://download.copr.fedorainfracloud.org/results/isc/bind/epel-8-x86_64/repodata/repomd.xml:
>  [Errno 14] HTTPS Error 503 - Service Unavailable" for each of the sites in 
> isc: 
> https://download.copr.fedorainfracloud.org/results/isc/bind/epel-8-x86_64/
> (i.e. repeats 10 x)
> 
> curl -k 
> https://download.copr.fedorainfracloud.org/results/isc/bind/epel-8-x86_64/
> shows web page content so the connection is good

And does:

curl -v -k https://download.copr.fedorainfracloud.org/results/isc/bind/epel-7-x86_64/repodata/repomd.xml

output an XML file?  What IP is it trying to connect to?  Are you able
to verify that yum tries to reach the same IP when you try to install
packages?
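
One way to compare is sketched below (illustrative commands; this does not 
capture yum's own lookup, but it shows what the system resolver returns and 
which address curl actually connects to):

getent hosts download.copr.fedorainfracloud.org
curl -v -k https://download.copr.fedorainfracloud.org/results/isc/bind/epel-7-x86_64/repodata/repomd.xml 2>&1 | grep -E 'Trying|Connected to'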

> internet search indicates a possible issue with the target site (which I 
> doubt)

It is certainly within the realm of possibility.  Copr is backed by a
CDN, so I can imagine a situation in which the specific host you are
connecting to from your vantage point is dysfunctional in some way while
others are working just fine.

-- 
Best regards,
Michał Kępień
-- 
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.


bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Re: Tuning Authoritative Memory Usage

2022-04-29 Thread Matt Corallo
Pulled the memory stats for two servers - one with 1G of non-swap memory where BIND is using ~300-400M RSS, 
one with 2G of non-swap memory where BIND is using ~1-1.2G RSS.


Both report near-identical memory usage via the stats channel - TotalUse of 2,500,551,839, InUse of 
1,761,884,984 (and Malloced about the same), ContextSize of 93,232, Lost 0. Most of the memory is in 
the two zonemgr-pools that each host has - roughly 1,272,678,490 per pool on each host.


Matt

On 4/28/22 10:43 AM, Ondřej Surý wrote:

Pull the memory stats from the statschannel (json or xml). Also make sure you 
run 9.18 with jemalloc (you can use jemalloc with 9.16, but it needs to be 
linked explicitly with LDFLAGS or pre-loaded).
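
(If you go the pre-load route on 9.16, it is usually something along the lines 
of the sketch below; the library path is distro-specific and illustrative here - 
it is where Debian's libjemalloc2 package puts it. The LDFLAGS route would be 
passing something like -ljemalloc at build time.)

# run named in the foreground with jemalloc pre-loaded; adjust the library path
# and config location to your system
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 named -f -c /etc/bind/named.conf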

Ondřej
--
Ondřej Surý — ISC (He/Him)

My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.


On 28. 4. 2022, at 18:48, Matt Corallo  wrote:

Gah, I'm a blind fool. The original and post-config-restoration numbers quoted 
here are correct; the 1024M stat came from looking at the wrong process. Apologies 
for that - it appears the max-cache-size knob does *not* change the total 
memory usage of the process after a restart; it is ~300M on the host either way.

Matt


On 4/28/22 9:44 AM, Matt Corallo wrote:
And then I restarted it with the original setting and it jumped right up to 
~300M, a bit higher than it was before (though before it had been running for a 
bit). In any case it does look like the max-cache-size setting drives memory 
usage up a little bit, but there's quite some noise.

FWIW, happy to enable AXFR for the zones/catalog, but there's nothing 
particularly strange about the setup, and the full configs are in the OP, so I'm 
not sure it'll make things much more visible than just cat'ing /dev/urandom 
into a zonefile. Let me know if there's further debugging that makes sense here.
Matt

On 4/28/22 9:38 AM, Matt Corallo wrote:
Hmm, they all have max-cache-size set to 8M (see config snippets in OP) but 
still show the divergent memory usage.

That said, I tried bumping one to 1024M on one of the smaller hosts and usage 
increased from ~270MB to ~437MB.

Matt

On 4/28/22 8:44 AM, Ondřej Surý wrote:

Off the top of my head - try setting the max-cache-size to infinite.  The 
internal views might still pre-allocate some stuff based on available memory.
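
(In named.conf terms that would be something like the snippet below - "unlimited" 
is the keyword named accepts for this option; the snippet is a sketch of just 
this one setting, not a full options block:)

options {
        // no explicit cap on cache size, per the suggestion above
        max-cache-size unlimited;
};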

Ondrej
--
Ondřej Surý (He/Him)
ond...@isc.org

My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.


On 28. 4. 2022, at 17:26, Matt Corallo  wrote:



On 4/27/22 9:19 AM, Petr Špaček wrote:

On 27. 04. 22 16:04, Matt Corallo wrote:

I run a number of BIND9 (9.16.27-1~deb11u1 - Debian Stable) secondaries with some large 
zones (10s of DNSSEC-signed zones with ~100k records, not counting signatures, plus a 
smattering of other zones). Somewhat to my surprise, even with "recursion no" 
the memory usage of the instances is highly correlated with the host's available memory - 
BIND9 uses ~400M RSS on hosts with 1G of non-swap memory, but 2.3G on hosts with 4G of 
non-swap memory, all with identical configs and the same zones.

Before we dive in, the general recommendation is:
"If you are concerned about memory usage, upgrade to BIND 9.18." It has lot 
smaller memory footprint than 9.16.
It can have many reasons, but **if the memory usage is not growing without 
bounds** then I'm betting it is just an artifact of the old memory allocator. 
It has a design quirk which causes it not return memory to OS (if it allocated 
in small blocks). As a result, the memory usage visible on OS level peaks at 
some value and then stays there.
If that's what's happening you should see it in internal BIND statistics: Stats 
channel at URL /json/v1 shows value memory/InUse which will be significantly 
smaller than value seen by OS.
In case the two values are close then you are seeing some other quirk and we 
need to dig deeper.
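
A quick way to compare the two is sketched below, assuming a statistics channel 
is already configured (e.g. inet 127.0.0.1 port 8080 in named.conf) and jq is 
available; adjust the address and port to your setup:

# what named itself thinks it is using, in bytes
curl -s http://127.0.0.1:8080/json/v1 | jq '.memory | {TotalUse, InUse}'
# what the OS sees for the named process (RSS, in kilobytes)
ps -o rss= -C named
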
Petr Špaček
P.S. BIND 9.18 does not suffer from this, so I suggest you just upgrade and see.


Upgraded to 9.18.2 and indeed memory usage is down by double-digit percentages, but 
the surprising host-dependent memory usage is still there - on hosts with 1G of 
non-swap memory BIND is eating 470M, on hosts with 4G of non-swap memory 1.9G.

This is right after startup, but at least with 9.16 I wasn't seeing any 
evidence of leaks. Indeed, heap fragmentation meant the memory usage increased 
a bit over time and then plateaued, but not by much, and ultimately the peak 
memory usage was still highly dependent on the host's available memory.

Matt