I am running a ZFS setup on 3 x 300 GB HDs, and I see the disk activity going crazy
all the time. Is there any reason for it? I have nothing running on this system;
I just set it up for testing purposes. I do replicate data from a different
system once a day through rsync, but that is a quick process, and I am not sure
why I am getting this I/O activity on the system.
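For what it is worth, the replication is nothing fancy: a once-a-day rsync push
from the other box into this pool, roughly along these lines (the module name and
paths here are made up for illustration, not the exact command):

rsync -a --delete /export/data/ thishost::backup/

This is the kind of disk activity I see: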
                              extended device statistics
    r/s    w/s      kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w   %b device
  351.3    0.0   41312.2    0.0   0.0  24.1     0.0    68.5   1   98 c1t1d0
  351.4    0.0   41312.3    0.0   0.0  24.1     0.0    68.5   1   98 c1t1d0s0
  340.3    0.0   41384.7    0.0   0.0  21.4     0.0    62.8   1   85 c1t2d0
  340.3    0.0   41384.8    0.0   0.0  21.4     0.0    62.8   1   85 c1t2d0s0
  355.4    0.0   41825.8    0.0   0.0  24.8     0.0    69.9   1  100 c1t3d0
  355.4    0.0   41825.9    0.0   0.0  24.8     0.0    69.9   1  100 c1t3d0s0
                              extended device statistics
    r/s    w/s      kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w   %b device
  317.9    0.0   38718.9    0.0   0.0  30.8     0.0    97.0   1  100 c1t1d0
  317.9    0.0   38718.9    0.0   0.0  30.8     0.0    97.0   1  100 c1t1d0s0
  410.2    0.0   50768.9    0.0   0.0  25.7     0.0    62.6   1   96 c1t2d0
  410.2    0.0   50768.8    0.0   0.0  25.7     0.0    62.6   1   96 c1t2d0s0
  409.2    0.0   51087.7    0.0   0.0  34.1     0.0    83.3   1  100 c1t3d0
  409.2    0.0   51087.7    0.0   0.0  34.1     0.0    83.3   1  100 c1t3d0s0
                              extended device statistics
    r/s    w/s      kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w   %b device
    0.0    9.0       0.0   32.5   0.0   0.4     0.0    41.3   0   14 c1t0d0
    0.0    9.0       0.0   32.5   0.0   0.4     0.0    41.3   0   14 c1t0d0s5
  432.0    0.0   53225.5    0.0   0.0  27.3     0.0    63.2   1   93 c1t1d0
  432.0    0.0   53225.5    0.0   0.0  27.3     0.0    63.2   1   93 c1t1d0s0
  306.0    0.0   36914.6    0.0   0.0  25.9     0.0    84.7   1   95 c1t2d0
  306.0    0.0   36914.6    0.0   0.0  25.9     0.0    84.7   1   95 c1t2d0s0
  336.0    0.0   40197.2    0.0   0.0  18.6     0.0    55.5   1   82 c1t3d0
  336.0    0.0   40197.2    0.0   0.0  18.6     0.0    55.5   1   82 c1t3d0s0
                              extended device statistics
    r/s    w/s      kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w   %b device
  350.0    0.0   41241.2    0.0   0.0  26.5     0.0    75.8   1   94 c1t1d0
  350.0    0.0   41241.2    0.0   0.0  26.5     0.0    75.8   1   94 c1t1d0s0
  367.0    0.0   43291.3    0.0   0.0  28.4     0.0    77.3   1  100 c1t2d0
  367.0    0.0   43291.4    0.0   0.0  28.4     0.0    77.3   1  100 c1t2d0s0
  363.0    0.0   43679.1    0.0   0.0  26.3     0.0    72.4   1   96 c1t3d0
  363.0    0.0   43679.2    0.0   0.0  26.3     0.0    72.4   1   96 c1t3d0s0
I tried to see what is going on with the disks and killed any processes that
might be touching them (there was an rsync server set up, so I killed
that). Anyway, when I run:
# fuser -c /d
/d:
nothing shows up, so no process seems to have files open on it.
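I was also thinking of tracing which process (or whether it is the kernel itself)
is issuing all the reads with a DTrace one-liner along these lines, counting block
I/O events per process name; I have not dug through the output yet:

# dtrace -n 'io:::start { @[execname] = count(); }'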
[09:33:34] [EMAIL PROTECTED]: /root > zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
d       832G   515G   317G    61%  ONLINE  -
[09:33:47] [EMAIL PROTECTED]: /root > zpool status
  pool: d
 state: ONLINE
 scrub: scrub in progress, 35.37% done, 1h1m to go
config:

        NAME        STATE     READ WRITE CKSUM
        d           ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
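I am keeping an eye on the scrub progress with something along these lines while it runs:

# while true; do zpool status d | grep scrub; sleep 60; done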
[09:34:02] [EMAIL PROTECTED]: /root > zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
d            515G   317G     93      7  7.25M   667K
d            515G   317G    714      0  87.6M      0
d            515G   317G    717      0  87.8M      0
d            515G   317G    671      0  83.5M      0
d            515G   317G    856      0   106M      0
d            515G   317G    699      0  85.5M      0
d            515G   317G    782      0  96.1M      0
d            515G   317G    718      0  88.3M      0
^C
Any idea what is going on here and why there is so much read activity?
Thanks for any help or suggestions.
Chris