On Wed, Oct 20, 2010 at 11:27:38AM -0400, Sean Thomas Caron wrote:
> I've been playing with ZFS in 8.1-RELEASE (amd64) on a Sun Fire
> X4500 with 16 GB RAM and in general it seems to work well when used
> as recommended.
> 
> BUT...
> 
> In spite of the suggestion of Sun and FreeBSD developers to the
> contrary, I have been trying to create some raidz pools of more
> than 9 disks, and it seems to give some trouble.
> 
> If I try to create a pool containing more than 9 [500 GB] disks,
> doesn't matter if it is raidz1 or raidz2, the system seems to reboot
> given any amount of sustained reads from the pool (haven't tested
> mirrored pools). Now, I am not sure of:
> 
> - Whether the reboot in the raidz1 case is caused by exactly the
> same issue as the reboot in the raidz2 case
> 
> - Whether this is an issue of total number of member disks, or total
> amount of disk space in the pool. All I have to work with at the
> moment is 500 GB drives.
> 
> I am not doing any sysctl tuning; just running with the defaults or
> what the system automatically sizes. I tried playing around with
> tuning some sysctl parameters including setting arc_max to be very
> small and it didn't seem to help any; pools of greater than 9 disks
> in size are always unstable.
> 
> Writes seem to work OK; I can, say, pull stuff from over the network
> and save it to the pool, or I can do something like,
> 
> dd if=/dev/random of=/mybigpool/bigfile bs=1024 count=10240
> 
> and it will write data all day pretty happily. But if I try to read
> back from the pool, for example,
> 
> dd if=/mybigpool/bigfile of=/dev/null bs=1024
> 
> or even to just do something like,
> 
> cp /mybigpool/bigfile /mybigpool/bigfile_2
> 
> the system reboots pretty much immediately. I never see anything on
> the console at all; it just reboots.
> 
> Even if I build a new kernel with debugging options:
> 
> options KDB
> options DDB
> 
> the system still just reboots; I never see anything on the console
> and I never get to the debugger.
> 
> So, as I say, very easy to reproduce the problem, just create a
> raidz pool of any type with more than 9 member disks, dump some data
> to it, then try to read it back, and the machine will reboot.
> 
> If I create a pool with 9 or fewer disks, the system seems perfectly
> stable. I was never able to reproduce the reboot behavior as long as
> the pools contained 9 or fewer drives, beating on it fairly hard
> with iozone and multiple concurrent dd operations writing large files
> to and from memory.
> 
> Just wondering if anyone's seen this problem before and as to
> whether or not it is a known bug and may have been fixed in STABLE
> or CURRENT? Should I report this as a bug? Should I just create
> pools of 9 or fewer drives? Not sure if my customer is going to want
> to use STABLE or CURRENT in production but I wanted to run this by
> the list just to see.

There are users here using FreeBSD ZFS with *lots* of disks (I think
someone was using 32 disks at one point) reliably.  Some of them post
here regularly (with other issues that don't consist of sporadic
reboots).

The kernel options may not be sufficient.  I'm used to using these:

# Debugging options
options         BREAK_TO_DEBUGGER       # Sending a serial BREAK drops to DDB
options         KDB                     # Enable kernel debugger support
options         KDB_TRACE               # Print stack trace automatically on panic
options         DDB                     # Support DDB
options         GDB                     # Support remote GDB

And in /etc/rc.conf, setting:

ddb_enable="yes"
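
For what it's worth, a separate kernel config keeps these out of GENERIC.
A sketch, assuming amd64 and a config name (DEBUG) of my own choosing:

```
# /usr/src/sys/amd64/conf/DEBUG  (file name is an assumption)
include         GENERIC
ident           DEBUG

# Debugging options
options         BREAK_TO_DEBUGGER       # Sending a serial BREAK drops to DDB
options         KDB                     # Enable kernel debugger support
options         KDB_TRACE               # Print stack trace automatically on panic
options         DDB                     # Support DDB
options         GDB                     # Support remote GDB
```

Then build/install it the usual way from /usr/src with
"make buildkernel KERNCONF=DEBUG && make installkernel KERNCONF=DEBUG".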

Next: arc_max isn't technically a sysctl; it's a loader tunable, meaning
it can't be changed at runtime, so I'm not sure how you managed to do
that.  Validation:

sysctl: oid 'vfs.zfs.arc_max' is a read only tunable
sysctl: Tunable values are set in /boot/loader.conf

Your system may be reporting something related to kmem exhaustion but
then auto-rebooting so fast that you can't see the message on the VGA
console.  Do you have a serial console?
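
If the box does have one, pointing the console at it is usually the only
way to catch a panic this fast.  A sketch for /boot/loader.conf (the
9600 speed is an assumption; match whatever your terminal uses):

```
console="comconsole"
comconsole_speed="9600"
```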

Please try setting the following tunables in /boot/loader.conf and
reboot the machine, then see if the same problem persists.

vm.kmem_size="16384M"
vfs.zfs.arc_max="14336M"
vfs.zfs.prefetch_disable="1"
vfs.zfs.zio.use_uma="0"
vfs.zfs.txg.timeout="5"
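
The arithmetic behind the two size values is just "all of RAM for kmem,
ARC capped a couple GB below it".  A sketch (the 2 GB headroom figure is
my reading of the numbers above, not a hard rule):

```shell
# Derive the tunables above from total RAM (sketch; 2 GB headroom assumed).
ram_mb=16384                    # 16 GB machine
kmem_mb=$ram_mb                 # vm.kmem_size: all of RAM
arc_mb=$((ram_mb - 2048))       # vfs.zfs.arc_max: leave 2 GB for everything else
echo "vm.kmem_size=\"${kmem_mb}M\""
echo "vfs.zfs.arc_max=\"${arc_mb}M\""
```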

I would also advocate trying 8.1-STABLE, as there have been many changes
in ZFS since 8.1-RELEASE (and I'm not just referring to the v15 import),
including how the ARC gets sized/adjusted.  CURRENT is highly
bleeding-edge, so I would start or stick with STABLE.
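
If you go that route, tracking 8-STABLE means csup(1).  A sketch of a
supfile (the host and file location are my own choices; any cvsup
mirror works):

```
# /root/stable-supfile (sketch), then run: csup /root/stable-supfile
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_8
*default delete use-rel-suffix
src-all
```

Followed by the usual buildworld/buildkernel/installkernel/installworld
cycle from /usr/src.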

Finally, there's always the possibility that the PSU has some sort of
load problem with that many disks all being accessed at the same time.
I imagine the power draw of that system is quite high.  I can't imagine
Sun shipping a box with an insufficient PSU, but then again power draw
changes depending on the RPM of the disks used and many other things.

-- 
| Jeremy Chadwick                                   j...@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
