Hi!

On Feb 1, 2012, at 7:43 PM, Bob Friesenhahn wrote:

> On Wed, 1 Feb 2012, Jan Hellevik wrote:
>> The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
>> than the other disks in the pool. The output is from a 'zfs receive' after 
>> about 3 hours.
>> The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
>> pool.
> 
> Are all of the disks the same make and model?  What type of chassis are the 
> disks mounted in?  Is it possible that the environment that this disk 
> experiences is somehow different than the others (e.g. due to vibration)?

They are of different makes; I try to pair different brands within each mirror to minimise 
risk.

The disks are in a Rackable Systems enclosure (disk shelf?). 16 disks, all 
SATA. Connected to a SASUC8I controller on the server.

This is a backup server I recently put together to hold backups from my main 
server. I moved in the disks from the old 'backup' pool and have started a 2TB 
zfs send/receive from the main server. So far things look OK; it is just the 
somewhat high values on that one disk that worry me a little.
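
For what it's worth, the transfer is just a full snapshot send piped over ssh, 
roughly along these lines (the dataset and snapshot names below are 
placeholders, not my actual ones):

    # on the main server; 'tank/data@backup1' is a made-up snapshot name
    zfs snapshot tank/data@backup1
    zfs send tank/data@backup1 | ssh master zfs receive -F backup/data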

> 
>> Should I be worried? And what other commands can I use to investigate 
>> further?
> 
> It is difficult to say if you should be worried.
> 
> Be sure to do 'iostat -xe' to see if there are any accumulating errors 
> related to the disk.
> 

Here is the most recent output from iostat. The server has been running the zfs 
receive for more than a day now. No errors. zpool status also reports no errors.


                            extended device statistics       ---- errors --- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    8.1   18.7  142.5  180.4  0.0  0.1    0.1    3.2   0   8   0   0   0   0 c5d0
   10.2   18.7  186.3  180.4  0.0  0.1    0.1    3.3   0   9   0   0   0   0 c5d1
    0.0   36.7    0.0 3595.8  0.0  0.1    0.0    3.2   0   9   0   0   0   0 c6t66d0
    0.0   36.0    0.0 3642.2  0.0  0.1    0.0    3.9   0  12   0   0   0   0 c6t70d0
    0.0   36.1    0.0 3642.2  0.0  0.1    0.0    2.9   0   5   0   0   0   0 c6t74d0
    0.0   39.6    0.0 4071.8  0.0  0.0    0.0    0.7   0   2   0   0   0   0 c6t76d0
    0.2    0.0    0.3    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c6t77d0
    0.2   36.8    0.3 3595.8  0.0  0.1    0.0    1.9   0   4   0   0   0   0 c6t78d0
    0.2    0.0    0.3    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c6t79d0
    0.2   39.6    0.3 4071.6  0.0  0.1    0.0    1.6   0   5   0   0   0   0 c6t80d0
    0.2    0.0    0.3    0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c6t81d0

admin@master:/export/home/admin$ zpool list         
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
backup  4.53T  2.17T  2.36T    47%  1.00x  ONLINE  -

admin@master:/export/home/admin$ zpool status
  pool: backup
 state: ONLINE
 scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:

        NAME         STATE     READ WRITE CKSUM
        backup       ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c6t78d0  ONLINE       0     0     0
            c6t66d0  ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c6t70d0  ONLINE       0     0     0
            c6t74d0  ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            c6t76d0  ONLINE       0     0     0
            c6t80d0  ONLINE       0     0     0

errors: No known data errors
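
If the numbers keep drifting apart I plan to keep watching with something along 
these lines (just a sketch of what I have in mind; the 10-second interval is 
arbitrary):

    # per-vdev I/O statistics for the pool, refreshed every 10 seconds
    zpool iostat -v backup 10

    # per-device latency and error counters, refreshed every 10 seconds
    iostat -xen 10

    # dump any fault management error telemetry logged so far
    fmdump -eV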

