More attempts at tracking this down. The suggested dtrace command
usually aborts with:

  Assertion failed: (buf->dtbd_timestamp >= first_timestamp),
file /usr/src/cddl/contrib/opensolaris/lib/libdtrace/common/dt_consume.c,
    line 3330.

but with some luck, soon after each machine reboot I can leave dtrace
running for at most 10 or 20 seconds before terminating it with a ^C,
and succeed in collecting the report. If I miss that window (leaving
dtrace running just long enough to collect useful info, but not long
enough for it to hit the assertion check), then any further attempt
to run the dtrace script hits the assertion failure immediately.
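
In case it helps anyone reproducing this: instead of racing the ^C by
hand, the script can be told to terminate itself with a tick probe
(just a sketch; the 10s interval is my guess at a safe window):

  # dtrace -n 'dtmalloc::solaris:malloc {@allocs[stack(), args[3]] = count()}
      dtmalloc::solaris:free {@frees[stack(), args[3]] = count()}
      tick-10s {exit(0)}'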

Btw (just in case), I have recompiled the kernel from source
(base/release/11.2.0) with debugging symbols, although the behaviour
has not changed:

  FreeBSD floki.ijs.si 11.2-RELEASE FreeBSD 11.2-RELEASE #0 r337238:
Fri Aug 3 17:29:42 CEST 2018 m...@xxx.ijs.si:/usr/obj/usr/src/sys/FLOKI amd64


Anyway, after several attempts I was able to collect useful output
from the suggested dtrace script:

# dtrace -n 'dtmalloc::solaris:malloc {@allocs[stack(), args[3]] =
  count()} dtmalloc::solaris:free {@frees[stack(), args[3]] = count()}'

while running "zpool list" repeatedly in another terminal screen:

# (while true; do zpool list -Hp >/dev/null; vmstat -m | fgrep solaris; \
      sleep 0.2; done) | awk '{print $2-a; a=$2}'
454303
570
570
570
570
570
570
570
570
570
570
570
570
570
570
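
(The awk part just prints the difference between successive InUse
values from vmstat -m, i.e. the net number of "solaris" allocations
added per "zpool list" invocation; the first large number is the
initial absolute count.)

A variant that also tracks the leaked bytes could look like this,
assuming the third vmstat -m column, MemUse, carries a trailing "K"
(just a sketch):

  # (while true; do zpool list -Hp >/dev/null; vmstat -m | fgrep solaris; \
        sleep 0.2; done) | \
    awk '{sub("K","",$3); print $2-a, ($3-b) "K"; a=$2; b=$3}'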

Two samples of the collected dtrace output (after about 15 seconds)
are at:

  https://www.ijs.si/usr/mark/tmp/dtrace1.out.bz2
  https://www.ijs.si/usr/mark/tmp/dtrace2.out.bz2

(dtrace2.out is probably cleaner: I made sure no services other than
 my sshd and syslog were running)

Not really sure what I'm looking at, but a couple of large entries
stand out:

$ awk '/^ .*[0-9]+ .*[0-9]$/' dtrace2.out | sort -k1n | tail -5
           114688              138
           114688              138
           114688              138
           114688              138
           114688              138
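
If I read the aggregation output correctly (the first column being the
size key, args[3], and the second the count), each of those entries
accounts for 114688 * 138 bytes, i.e. about 15.8 MB. Something like
this should pull out the call stacks that precede those lines in the
report (a sketch; the -B 40 context size is a guess at the stack
depth):

  $ grep -B 40 '114688' dtrace2.out | less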

Thanks in advance for looking into it,
  Mark

2018-08-01 09:12, myself wrote:
On Tue, Jul 31, 2018 at 11:54:29PM +0200, Mark Martinec wrote:
I have now upgraded this host from 11.1-RELEASE-p11 to 11.2-RELEASE
and the situation has not improved. I have also turned off all services.
ZFS is still leaking memory at about 30 MB per hour, until the host
runs out of memory and swap space and crashes; to avoid that I have
to reboot it every four days.

Any advice before I try to get rid of that pool with the faulted disk
(or downgrade to 10.3, which was stable)?

2018-08-01 00:09, Mark Johnston wrote:
If you're able to use dtrace, it would be useful to try tracking
allocations with the solaris tag:

# dtrace -n 'dtmalloc::solaris:malloc {@allocs[stack(), args[3]] =
count()} dtmalloc::solaris:free {@frees[stack(), args[3]] = count();}'

Try letting that run for one minute, then kill it and paste the output.
Ideally the host will be as close to idle as possible while still
demonstrating the leak.

Good and bad news:

The suggested dtrace command bails out:

# dtrace -n 'dtmalloc::solaris:malloc {@allocs[stack(), args[3]] =
count()} dtmalloc::solaris:free {@frees[stack(), args[3]] = count();}'
dtrace: description 'dtmalloc::solaris:malloc ' matched 2 probes
Assertion failed: (buf->dtbd_timestamp >= first_timestamp), file
/usr/src/cddl/contrib/opensolaris/lib/libdtrace/common/dt_consume.c,
line 3330.
Abort trap

But I did get one step further and localized the culprit.

I realized that the "solaris" malloc count goes up in sync with the
polls of the 'telegraf' monitoring service, which has a ZFS plugin
that monitors the zfs pool and the ARC. This plugin runs
'zpool list -Hp' periodically.
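
A way to confirm this correlation directly, assuming the leaked
allocations happen in the kernel context of the zpool process itself,
would be to filter the malloc probe by execname (just a sketch):

  # dtrace -n 'dtmalloc::solaris:malloc /execname == "zpool"/
      {@allocs[stack(), args[3]] = count()}'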

So after stopping telegraf (and the other remaining services),
'vmstat -m' shows that the InUse count for "solaris" goes up by 552
every time I run "zpool list -Hp":

# (while true; do zpool list -Hp >/dev/null; vmstat -m | \
    fgrep solaris; sleep 1; done) | awk '{print $2-a; a=$2}'
6664427
541
552
552
552
552
552
552
552
552