What you see in 'top' is what the relink compile does to the system: snmpd just
freaks out, and so does the rest.
This is VMware. Storage below is vSAN.
bgpd stretches 4 arms - to fw1 and 3 remote VPSes. No big deal here. Private
stuff, no massive peering. No peering at all, except the ones mentioned.
The compile sucks up all the RSS, and I don't think it's OK to have this
machine in line, handling traffic, while that runs.
If I had only one node, with active connections, I'd effectively be offline
while the compile is active.
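If the box really must not carry traffic while the relink runs, one crude workaround is to wait for the background linker to exit before putting it back in line. A rough sketch, assuming the relink shows up in the process list as 'ld' (as in the top output quoted below) and pgrep(1) from base:

```shell
#!/bin/sh
# Poll until no 'ld' process (the background kernel relink) is running,
# then report that the machine can be put back in line.
while pgrep -x ld >/dev/null; do
    sleep 10
done
echo "relink finished"
```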

//mxb

On Thu, 20 Jun 2019 at 13:05, Stuart Henderson <[email protected]> wrote:

> On 2019-06-20, Otto Moerbeek <[email protected]> wrote:
> > On Wed, Jun 19, 2019 at 11:29:32PM +0200, Maxim Bourmistrov wrote:
> >
> >> Hey,
> >>
> >> long story short: reboot and re-link is not practical.
> >>
> >> Long story:
> >> Time to upgrade 6.4 to 6.5.
> >> If re-linking was active in 6.4 (I don't remember), I never noticed it.
> >> Installed via the NOT RECOMMENDED WAY (following upgrade65.html) -
> >> scripting on steroids (ansible).
> >> All down. Reboot.
> >> And now I get a SLOW system - why?! - it's compiling the new kernel:
> >>
> >> load averages:  3.25,  1.45,  0.60
> >>
> >> 53 processes: 1 running, 49 idle, 3 on processor
> >>
> >>                      up  0:04
> >> CPU0 states:  0.0% user,  0.0% nice, 21.0% sys, 63.7% spin,  0.6% intr, 14.7% idle
> >> CPU1 states:  0.5% user,  0.0% nice, 22.3% sys, 56.2% spin,  0.0% intr, 20.9% idle
> >> CPU2 states:  0.7% user,  0.0% nice, 71.5% sys, 19.6% spin,  0.0% intr,  8.3% idle
> >> CPU3 states:  0.5% user,  0.0% nice,  6.3% sys, 63.3% spin,  0.0% intr, 29.9% idle
> >> Memory: Real: 382M/792M act/tot Free: 1177M Cache: 310M Swap: 0K/1279M
> >>
> >>   PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
> >> 51958 _snmpd    64    0  956K 3148K run/0     -         3:25 119.87% snmpd
> >> 17683 root      64    0  166M  174M onproc/2  -         3:10  99.41% ld
> >> 59133 root       2    0 1404K 4248K sleep/0   select    0:08  16.70% sshd
> >> 39714 root      18    0  908K  988K sleep/1   pause     0:05  12.55% ksh
> >> 69806 _tor       2    0   29M   41M sleep/3   kqread    0:28   8.15% tor
> >> 56629 _pflogd    4    0  744K  576K sleep/3   bpf       0:19   7.57% pflogd
> >> 92193 _iscsid    2    0  732K 1256K sleep/3   kqread    0:15   4.64% iscsid
> >>   288 _squid     2    0   17M   14M sleep/0   kqread    0:11   4.00% squid
> >> 53448 _lldpd     2    0 2656K 3848K sleep/3   kqread    0:07   3.32% lldpd
> >> 42939 _syslogd   2    0 1108K 1692K sleep/3   kqread    0:03   1.66% syslogd
> >>  2842 _bgpd     10    0 1172K 1896K onproc/1  -         0:03   1.46% bgpd
> >>
> >>
> >> I don't think THIS IS OK.
> >> I'm lucky - this is only the secondary (but what if it were the ONLY, primary one??)
> >>
> >>
> >> For whatever reason, after rebooting, I got back the 6.4 kernel.
> >> (I'd like to hear some great explanation here, and MORE around the
> >> <subject>)
> >
> > Why not investigate why your system is slow? To me it looks like at
> > least snmpd is having a problem. The ld will disappear at some point.
>
> Depends on what bgpd is being used for, but there's a high chance snmpd is
> churning while bgpd adds new routes. If so then try "filter-routes yes" in
> snmpd.conf.
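
For reference, the knob Stuart mentions is a one-liner in /etc/snmpd.conf
(see snmpd.conf(5)); it stops snmpd from tracking kernel routing messages:

    filter-routes yes

Restart snmpd afterwards (rcctl restart snmpd) for it to take effect.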
>
> But there are certainly some situations where the relinking is very slow
> and a major resource drain indeed. On this system there's plenty of RAM
> so maybe just slow disks or cpu (but can't really say much as there's
> NO DMESG...*sigh*). On systems with <=256MB running the relink in the
> background can be quite a problem depending on what daemons are running.
>
> > You could start with following the proper upgrade procedure.
> >
> > What's difficult about booting into bsd.rd and doing an upgrade?
>
> Again depends on what bgpd is being used for, but prior to sysupgrade
> (which isn't relevant to OP yet as it's on 6.5), getting network in bsd.rd
> in order to do a standard upgrade can be quite a challenge.
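
For what it's worth, bringing up a static address in the bsd.rd shell before
the upgrade is usually just two commands (interface name and addresses below
are made up; substitute your own):

    ifconfig em0 inet 192.0.2.10 netmask 255.255.255.0
    route add default 192.0.2.1

The catch Stuart alludes to is exactly that: with bgpd-learned routes, the
default in bsd.rd has to point somewhere that works without bgpd running.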
>
>
>
