This sounds a lot like the problems I've been having with a flaky
SATA controller and/or motherboard.  See my thread from last week
about copying large amounts of data forcing a reboot.  Lots of
good info from engineers and users in that thread.
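
Before blaming ZFS itself, it may be worth ruling out the hardware
path.  On OpenSolaris something along these lines should show whether
the drives or the controller are logging errors (both are standard
commands, though the output details vary by build):

  # iostat -En | grep -i error    # soft/hard/transport error counters per device
  # fmdump -eV | tail -40         # recent fault-management error telemetry

If transport errors are climbing on several disks behind the same
controller, that points at the controller or cabling rather than ZFS.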



On Sun, Mar 15, 2009 at 1:17 PM, Markus Denhoff <denh...@net-bite.net> wrote:
> Hi there,
>
> We set up an OpenSolaris/ZFS-based storage server with two zpools: rpool is
> a mirror for the operating system, and tank is a raidz1 for data storage.
>
> The system is used to store large video files and has 12x1TB SATA
> drives attached (2 mirrored for the operating system).  Every time large
> files are copied around, the system hangs for no apparent reason, with
> 50% kernel CPU usage (so one core is fully occupied) and about 2GB of
> free RAM (8GB installed).  When idle, nothing crashes.  Furthermore,
> every scrub on tank hangs the system before it reaches 1% completion.
> Neither /var/adm/messages nor /var/log/syslog contains any errors or
> warnings.  We limited the ZFS ARC to 4GB with an entry in /etc/system
> (of the form shown below).
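>
> (For reference, an ARC limit like this is set with a single line in
> /etc/system; the value below is 0x100000000 bytes = 4GB, shown as an
> illustration, and a reboot is required for it to take effect:)
>
>   set zfs:zfs_arc_max = 0x100000000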
>
> Does anyone have an idea what's happening here and how to solve the problem?
>
> Below are some outputs which may help.
>
> Thanks and greetings from Germany,
>
> Markus Denhoff,
> Sebastian Friederichs
>
> # zpool status tank
>  pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME         STATE     READ WRITE CKSUM
>        tank         ONLINE       0     0     0
>          raidz1     ONLINE       0     0     0
>            c6t2d0   ONLINE       0     0     0
>            c6t3d0   ONLINE       0     0     0
>            c6t4d0   ONLINE       0     0     0
>            c6t5d0   ONLINE       0     0     0
>            c6t6d0   ONLINE       0     0     0
>            c6t7d0   ONLINE       0     0     0
>            c6t8d0   ONLINE       0     0     0
>            c6t9d0   ONLINE       0     0     0
>            c6t10d0  ONLINE       0     0     0
>            c6t11d0  ONLINE       0     0     0
>
> errors: No known data errors
>
> # zpool iostat
>               capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> rpool       37.8G   890G      3      2  94.7K  17.4K
> tank        2.03T  7.03T    112      0  4.62M    906
> ----------  -----  -----  -----  -----  -----  -----
>
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> rpool                     39.8G   874G    72K  /rpool
> rpool/ROOT                35.7G   874G    18K  legacy
> rpool/ROOT/opensolaris    35.6G   874G  35.3G  /
> rpool/ROOT/opensolaris-1  89.9M   874G  2.47G  /tmp/tmp8CN5TR
> rpool/dump                2.00G   874G  2.00G  -
> rpool/export               172M   874G    19K  /export
> rpool/export/home          172M   874G    21K  /export/home
> rpool/swap                2.00G   876G    24K  -
> tank                      1.81T  6.17T  32.2K  /tank
> tank/data                 1.81T  6.17T  1.77T  /data
> tank/public-share         34.9K  6.17T  34.9K  /public-share