On Thu, Jun 05, 2008 at 01:53:37AM +0800, Tz-Huan Huang wrote:
> On Thu, Jun 5, 2008 at 12:31 AM, Dag-Erling Smørgrav <[EMAIL PROTECTED]> 
> wrote:
> > "Tz-Huan Huang" <[EMAIL PROTECTED]> writes:
> >> vfs.zfs.arc_max was originally set to 512M; the machine survived for
> >> 4 days and panicked this morning. Now vfs.zfs.arc_max is set to 64M
> >> at Oliver's suggestion; let's see how long it survives. :-)
> >
> > [EMAIL PROTECTED] ~% uname -a
> > FreeBSD ds4.des.no 8.0-CURRENT FreeBSD 8.0-CURRENT #27: Sat Feb 23 01:24:32 
> > CET 2008     [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ds4  amd64
> > [EMAIL PROTECTED] ~% sysctl -h vm.kmem_size_min vm.kmem_size_max 
> > vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max
> > vm.kmem_size_min: 1,073,741,824
> > vm.kmem_size_max: 1,073,741,824
> > vm.kmem_size: 1,073,741,824
> > vfs.zfs.arc_min: 67,108,864
> > vfs.zfs.arc_max: 536,870,912
> > [EMAIL PROTECTED] ~% zpool list
> > NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> > raid                   1.45T    435G   1.03T    29%  ONLINE     -
> > [EMAIL PROTECTED] ~% zfs list | wc -l
> >     210
> >
> > Haven't had a single panic in over six months.
> 
> Thanks for the information. The major differences are that we run on
> 7-STABLE and our ZFS pool is much bigger.

I don't think the panics are related to pool size; they're more likely
related to the load and the characteristics of your workload.
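
For the record, the ARC cap is a boot-time tunable, so it goes into
/boot/loader.conf rather than being set at runtime. A minimal example
(the 64M below is just the value Oliver suggested, not a general
recommendation; a reboot is needed for it to take effect):

    # /boot/loader.conf -- example value only
    vfs.zfs.arc_max="64M"    # cap the ZFS ARC at boot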

> [EMAIL PROTECTED] uname -a
> FreeBSD cml2.csie.ntu.edu.tw 7.0-STABLE FreeBSD 7.0-STABLE #40: Sat
> May 31 10:29:16 CST 2008
> [EMAIL PROTECTED]:/usr/local/obj/usr/local/src/sys/CML2  amd64
> [EMAIL PROTECTED] sysctl -h vm.kmem_size_min vm.kmem_size_max vm.kmem_size
> vfs.zfs.arc_min vfs.zfs.arc_max
> vm.kmem_size_min: 0
> vm.kmem_size_max: 1,610,612,736
> vm.kmem_size: 1,610,612,736
> vfs.zfs.arc_min: 16,777,216
> vfs.zfs.arc_max: 67,108,864
> [EMAIL PROTECTED] zpool list
> NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
> sun                    11.3T   9.03T   2.30T    79%  ONLINE     -
> [EMAIL PROTECTED] zfs list | wc -l
>      295

If we're comparing who has bigger... :)

beast:root:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    732G    604G    128G    82%  ONLINE     -

but:

beast:root:~# zfs list | wc -l
    1932

No panics.

PS. I'm quite sure the ZFS version I have in Perforce will fix most,
if not all, of the 'kmem_map too small' panics. It's not yet committed,
but I do want to MFC it into RELENG_7.
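
Until then, the usual workaround for those panics is to enlarge the
kernel memory map at boot and keep the ARC well below it; something
like this in /boot/loader.conf (example values only, adjust them to
your amount of RAM):

    # /boot/loader.conf -- workaround, example values only
    vm.kmem_size="1536M"        # enlarge the kernel memory map
    vm.kmem_size_max="1536M"
    vfs.zfs.arc_max="512M"      # keep the ARC well below kmem_size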

-- 
Pawel Jakub Dawidek                       http://www.wheel.pl
[EMAIL PROTECTED]                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
