Does this reveal anything:

dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
    @[execname, fds[arg0].fi_pathname] = count(); }'
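
That aggregates writes hitting ZFS by process and pathname. One caveat: it only sees application-level writes, so if the stream is generated inside the kernel it won't show up. If it does reveal a writer, a byte-sum variant may be more telling — a sketch restricted to plain write(2), where arg2 is the requested byte count (writev(2) would need its iovecs walked, so it is excluded here):

dtrace -n 'syscall::write:entry /fds[arg0].fi_fs == "zfs"/ {
    @[execname, fds[arg0].fi_pathname] = sum(arg2); }'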


On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote:

> While looking over iostats from various programs, I see that
> my OS HDD is busy writing a roughly 2 MB/sec stream all the time
> (at least while the "dcpool" import/recovery attempts are
> underway, but also now during a mere zdb walk).
> 
> According to "iostat", this load clearly stands out:
>                   extended device statistics
>    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>   25.0    0.0  100.0    0.0  0.0  0.3    0.0   11.6   0  29 c7t0d0
>   10.0    0.0   40.0    0.0  0.0  0.1    0.0    8.4   0   8 c7t1d0
>    2.0    0.0    8.0    0.0  0.0  0.0    0.0   13.6   0   3 c7t2d0
>   32.0    0.0  188.0    0.0  0.0  0.3    0.0    9.8   0  31 c7t3d0
>   14.0    0.0  116.0    0.0  0.0  0.1    0.0   10.3   0  14 c7t4d0
>    2.0    0.0    8.0    0.0  0.0  0.0    0.0   19.0   0   4 c7t5d0
>    0.0  327.0    0.0 2947.6  0.2  0.1    0.5    0.2   5   5 c4t1d0
>   59.0    0.0  125.5    0.0  0.0  0.7    0.0   12.4   0  73 c0t600144F09844CF0000004D8376AE0002d0
> 
> "zpool iostat" confirmed it is rpool and not some other
> partition:
>                capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> rpool       17.1G  2.77G      0    271  1.08K  2.20M
>  c4t1d0s0  17.1G  2.77G      0    271  1.08K  2.20M
> ----------  -----  -----  -----  -----  -----  -----
> 
> For a while I thought it might be some swapping IO (despite
> the fact that vmstat and top show no swap-area usage at the
> moment, and no PI/PO operations in vmstat). I disabled swap
> on the rpool volumes, but the 2 MB/s stream is still there, both
> with TXG sync times bumped to 30 sec and reduced to 1 sec.
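
(For reference: on builds of that era the TXG sync interval is governed
by the zfs_txg_timeout kernel variable, so I assume it was tuned live
with something like:

  # echo zfs_txg_timeout/W 0t30 | mdb -kw

If the stream is unchanged at both 1 s and 30 s, it is probably not
just TXG batching of a small application-level trickle.)
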
> 
> So far I have not found a DTraceToolkit-0.99 utility which
> would show me what this is:
> 
> # /export/home/jim/DTraceToolkit-0.99/rwsnoop  | egrep -v '/proc|/dev|<unkn'
>  UID    PID CMD          D   BYTES FILE
>    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
>    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
>    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
>    0    394 nscd         R   13492 /etc/security/prof_attr
>    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
>    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
>    0    677 utmpd        R       4 /var/adm/wtmpx
>    0   1251 freeram-watc W     156 /var/log/freeram-watchdog.log.1307796483
> 
> These lines appear about once per second, so I gather that
> file-based IO is not heavy. There are many "<unknown>"
> entries which seem to be associated with "grep" and other
> pipes I have running, but those shouldn't go through the disk, should they?
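
Correct — pipe traffic never touches disk. Note also that rwsnoop traces
read/write syscalls, so writes generated inside the kernel (TXG sync of
metadata, spacemaps, ZIL) will never appear there. To see everything at
the VFS layer instead, a sketch using the fsinfo provider may help (I
believe arg1 is the byte count for the write probe):

dtrace -n 'fsinfo:::write { @[execname, args[0]->fi_fs] = sum(arg1); }'
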
> 
> "iosnoop" also shows little detail, except also pointing
> that all writes on the system at this moment go to rpool:
>  UID    PID D    BLOCK   SIZE       COMM PATHNAME
>    0     6 W 32081558  16384 zpool-rpool <none>
>    0     6 W 32092916  16384 zpool-rpool <none>
>    0     6 W 32116495  16384 zpool-rpool <none>
>    0     6 W 32117832  16384 zpool-rpool <none>
>    0     6 W 32125173  16384 zpool-rpool <none>
>    0     0 R 3025883400   4096      sched <none>
>    0     6 W 32156861  16384 zpool-rpool <none>
>    0     6 W 32206221   8192 zpool-rpool <none>
>    0     6 W 16701011  16384 zpool-rpool <none>
>    0     6 W 16702547   4096 zpool-rpool <none>
>    0     6 W  5281714  16384 zpool-rpool <none>
>    0     6 W  5462106   4096 zpool-rpool <none>
>    0     6 W  5251672  16384 zpool-rpool <none>
>    0     6 W  5253790   4096 zpool-rpool <none>
>    0     6 W  5257100  16384 zpool-rpool <none>
>    0     6 W  5408779  16384 zpool-rpool <none>
>    0     6 W  5431113  16384 zpool-rpool <none>
>    0     6 W  5433800  16384 zpool-rpool <none>
>    0     6 W  5438181  16384 zpool-rpool <none>
>    0     6 W  5447201  16384 zpool-rpool <none>
>    0     6 W  5462114   4096 zpool-rpool <none>
>    0     6 W  5503260   2048 zpool-rpool <none>
>    0     6 W  5510618  16384 zpool-rpool <none>
>    0     6 W 16633532   2048 zpool-rpool <none>
>    0     6 W 16640398  16384 zpool-rpool <none>
>    0     6 W 16648096  16384 zpool-rpool <none>
>    0     6 W 16650717  16384 zpool-rpool <none>
>    0     6 W 16651864  16384 zpool-rpool <none>
>    0     6 W 16658841  16384 zpool-rpool <none>
>    0     6 W 16658883  16384 zpool-rpool <none>
>    0     6 W 16662945  16384 zpool-rpool <none>
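
Note that PID 6, "zpool-rpool", is the pool's kernel process: these are
TXG sync writes issued by kernel taskq threads, which is exactly why no
syscall-level tool accounts for them. Aggregating kernel stacks at the
block layer should show which code path issues the writes; a quick
sketch, filtering to non-reads the way iosnoop does:

dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/ { @[stack()] = count(); }'
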
> 
> 
> I have little idea which dataset it could be landing on,
> or which process/task generates such a stream.
> 
> As I wrote above, the system is busy trying to import
> or zdb-walk "dcpool", which resides in a volume on another,
> separate "pool"; the system is otherwise idle, and all other
> processes which write to files account for only a few bytes
> per second on average...
> 
> I had a hunch this might be related to having dedup(=verify)
> on the root pool as well, but disabling it and waiting
> about 5 minutes did not change "iostat" substantially.
> True, for some seconds the writes went down to about
> 80 write IOPS ~ 300-500 KB/sec, but they went back up to
> 200-300 write IOPS ~ 2-3 MB/sec just afterwards.
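
(Worth noting: turning dedup off only affects newly written blocks; the
existing DDT is still updated as deduped blocks are freed, which could
itself be a source of steady metadata writes. The DDT's size can be
inspected with something like:

  # zdb -DD rpool
)
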
> 
> So I'm still wondering... what is being written at such a
> rate, yet does not deplete my pool's free space? ;)
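
On the free-space question: with copy-on-write, every TXG rewrites pool
metadata (spacemaps, the MOS, and the DDT while dedup data is still
referenced) and frees the old copies, so a steady write stream with flat
usage is quite possible. You can verify that usage really is flat with:

  # zfs list -o space rpool
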
> 
> Thanks for hints,
> //Jim

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
