On 12/22/2020 10:09 AM, mike tancsa wrote:
> On 12/22/2020 10:07 AM, Mark Johnston wrote:
>> Could you go to frame 11 and print zone->uz_name and
>> bucket->ub_bucket[18]?  I'm wondering if the item pointer was mangled
>> somehow.
> Thank you for looking!
>
> (kgdb) frame 11
>
> #11 0xffffffff80ca47d4 in bucket_drain (zone=0xfffff800037da000,
> bucket=0xfffff801c7fd5200) at /usr/src/sys/vm/uma_core.c:758
> 758             zone->uz_release(zone->uz_arg, bucket->ub_bucket,
> bucket->ub_cnt);
> (kgdb) p zone->uz_name
> $1 = 0xffffffff8102118a "mbuf_jumbo_9k"
> (kgdb) p bucket->ub_bucket[18]
> $2 = (void *) 0xfffff80de4654000
> (kgdb) p bucket->ub_bucket   
> $3 = 0xfffff801c7fd5218
>
> (kgdb)
>
Not sure if it's a coincidence or not, but previously I was running with
the ARC limited to ~30G of the 64G of RAM in the box.  I removed that
limit a few weeks ago after upgrading the box to RELENG_12 to pull in
the OpenSSL changes.  The panic seems to happen under disk load: I have
3 ZFS pools that are pretty busy receiving snapshots, and one day a week
we write a full set to a 4th ZFS pool on some geli-attached drives via
USB for offsite cold storage.  The crashes happened with that extra
level of disk work.  gstat shows most of the 12 drives off the 2 mrsas
controllers at or close to 100% busy during the 18 hrs it takes to dump
out the files.

Trying a new cold storage run now with the arc limit back to
vfs.zfs.arc_max=29334498304
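For reference, a minimal sketch of how that cap can be re-applied; the
tunable name is the standard FreeBSD one and the value is the one from
this report (29334498304 bytes, roughly 27.3 GiB / 29.3 GB):

```shell
# Re-apply the ARC cap at runtime (takes effect immediately on 12.x):
sysctl vfs.zfs.arc_max=29334498304

# To persist it across reboots, the same value goes in /boot/loader.conf:
#   vfs.zfs.arc_max="29334498304"
```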

    ---Mike



_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
