> On Fri, 13 Nov 2009, inouk wrote:
> Your system has very little RAM (512MB).  It is less than is
> recommended for Solaris 10 or for zfs, and if it was a PC, it would
> be barely enough to run Windows XP.  Since zfs likes to use RAM and
> expects that sufficient RAM will be available, it seems likely that
> this system is both paging badly and not succeeding in caching
> enough data to operate efficiently.  Zfs is re-reading from disk
> where normally the data would be cached.
> 
> The simple solution is to install a lot more RAM.  2GB is a good
> starting point.
> 

I don't agree, especially with the comparison to Windows XP.  XP runs a 
windowing system and all sorts of other fancy stuff.  The server I'm talking 
about has nothing on it except system background processes (sendmail, kernel 
threads, and so on).  Finally, swap isn't used at all, so I could say almost 
90% of the RAM is available for zfs operations.
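(For the record, this is roughly how I checked the swap and ARC figures; the 
kstat names may differ slightly between Solaris releases:)

============================================================
# swap allocation and paging activity
swap -s
vmstat 5 3

# current and maximum ZFS ARC size, in bytes
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max
============================================================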

Anyway, I discovered something interesting: while investigating, I "offlined" 
the second disk in the mirrored pool:

============================================================
  pool: rpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0s0  ONLINE       0     0     0
            c0t2d0s0  OFFLINE      0     0     0

errors: No known data errors
============================================================
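
(In case anyone wants to reproduce this, the offline/online was done with the 
standard zpool commands, using the device name from the status output above:)

============================================================
# take the second half of the mirror offline
zpool offline rpool c0t2d0s0

# bring it back later; this triggers a resilver
zpool online rpool c0t2d0s0
============================================================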


Read throughput went from 650KB/s to 1200KB/s (1.2MB/s) according to pfilestat:
============================================================
     STATE   FDNUM      Time Filename
   running       0        5%
   waitcpu       0       12%
      read       0       16% /opt/export/flash_recovery/OVO_2008-02-20.fl
   sleep-r       0       65%

     STATE   FDNUM      KB/s Filename
      read       0      1200 /opt/export/flash_recovery/OVO_2008-02-20.fl

Total event time (ms): 4999   Total Mbytes/sec: 1
============================================================
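
(For anyone who wants to reproduce this: pfilestat is the script from Brendan 
Gregg's DTraceToolkit, pointed at the PID of the reading process.  The process 
name below is just a placeholder.)

============================================================
# pfilestat ships with the DTraceToolkit; it samples the given
# PID (5 seconds per report).  Replace <name> with the actual
# process doing the reads.
./pfilestat `pgrep -n <name>`
============================================================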

Also, for read transfers, the service time dropped to between 80ms and 100ms:

============================================================
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  
dad0      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0  
dad1    168.8    0.0 21608.9    0.0 13.5  1.7   89.9  78  88 
============================================================
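
(That's the extended device statistics from plain "iostat -x", sampled over an 
interval, e.g.:)

============================================================
# extended device statistics every 5 seconds
iostat -x 5
============================================================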

Sounds like a bus bottleneck, as if the two disks can't transfer data over the 
same bus at the same time.  Parallel ATA only allows one device per channel to 
transfer at a time, so two disks sharing one IDE cable as master and slave 
would behave like this.  I don't know the hardware specifications of the 
Netra X1, though...
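
(If someone wants to test the shared-bus theory, the device paths should show 
whether both disks hang off the same IDE channel; I haven't verified this 
layout myself:)

============================================================
# the /devices paths behind the disk nodes show the controller layout
ls -l /dev/dsk/c0t0d0s0 /dev/dsk/c0t2d0s0

# the device tree with driver bindings; look at how the dad
# (IDE disk) instances attach
prtconf -D
============================================================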