On 20/07/2010 18:41, Johnson Earls wrote:
Hello,

I am hoping that someone on this list can enlighten me about the DTrace io 
provider.  I apparently do not understand where the io provider actually sits 
in the stack.

My understanding, from reading http://wikis.sun.com/display/DTrace/io+Provider, 
is that (discounting NFS for this purpose) the io provider's probes fire when 
I/O goes to a specific disk device - in other words, below the filesystem 
layer.

I am using both the iopattern DTrace script and my own script, modified from 
iopattern, to gather read and write bandwidth statistics for a fibre channel 
SAN disk device.  I do this through the io:genunix::start and io:genunix::done 
probes, filtering on args[1]->dev_statname for the disk device name and 
accumulating the bandwidth statistics from args[0]->b_bcount.
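
In outline, the accumulation is essentially the following (a simplified sketch; 
"ssd0" is just a placeholder for the real statname of the SAN device):

#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Sum bytes from I/O requests issued to the chosen disk. */
io:genunix::start
/args[1]->dev_statname == "ssd0"/
{
  @issued = sum(args[0]->b_bcount);
}

/* Sum bytes from I/O requests that completed on the same disk. */
io:genunix::done
/args[1]->dev_statname == "ssd0"/
{
  @completed = sum(args[0]->b_bcount);
}

/* Report and reset once per second. */
tick-1sec
{
  printa("issued %@d bytes, completed %@d bytes\n", @issued, @completed);
  clear(@issued);
  clear(@completed);
}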

However, I am seeing occasional reports of I/O bandwidth anywhere from 40 to 
100 GB per second on a 4 Gbps fibre channel device.  I am obviously not 
understanding how the io provider works.

My questions:

Do io:genunix::start and io:genunix::done fire *only* for physical device 
access, or will they fire when the request is being served by a Solaris cache?

If they fire on requests that are served by a cache, is there any way to 
determine this in order to filter those results out?

If they fire only on physical device access, what could explain byte counts 
that add up to many times the bandwidth the physical device is capable of?


Check the documentation for args[0]->b_flags; for example:

io:::start
{
  /* B_PHYS set means a raw (physical) I/O request */
  printf("%s\n", args[0]->b_flags & B_PHYS ? "physical" : "logical");
}
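
If you find that both kinds of requests fire the probes, one way to split your 
bandwidth numbers on that flag is something like the following (a rough, 
untested sketch; "ssd0" again stands in for your device's statname):

io:genunix::done
/args[1]->dev_statname == "ssd0"/
{
  /* Key the aggregation on whether B_PHYS is set in the buffer flags. */
  @bytes[args[0]->b_flags & B_PHYS ? "physical" : "logical"] = sum(args[0]->b_bcount);
}

tick-1sec
{
  printa("%-10s %@d bytes/sec\n", @bytes);
  trunc(@bytes);
}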