To get more information on I/O performance problems I created the
script below:
#!/usr/sbin/dtrace -s
/* $1 = target pid, passed on the command line. */
#pragma D option flowindent

syscall::*write*:entry
/pid == $1 && guard++ == 0/
{
        /* Trace only the first write(2)-family call made by the target pid. */
        self->ts = timestamp;
        self->traceme = 1;
        printf("fd: %d", arg0);
}

fbt:::
/self->traceme/
{
        /* Print a timestamp at every kernel function entry/return
           while the write is in flight. */
        printf(" timestamp : %d", timestamp);
}

syscall::*write*:return
/self->traceme/
{
        self->traceme = 0;
        this->elapsed = timestamp - self->ts;
        printf(" timestamp : %d", timestamp);
        printf("\telapsed : %d", this->elapsed);
        exit(0);
}

It gives me the timestamp for every fbt probe fired during a write system call.
A snippet is below:
  8                  <- schedctl_save          timestamp : 1627201334052600
  8                <- savectx                  timestamp : 1627201334053000
  0  -> restorectx                             timestamp : 1627202741110300 
<-------- difference = 1,407,057,300 ns
  0    -> schedctl_restore                     timestamp : 1627202741115100
  0    <- schedctl_restore                     timestamp : 1627202741116100
Visible is that the thread was off CPU for 1.4 s.
Storage is on a SAN with fibre links between the system and the storage.
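
The off-CPU time itself could also be quantified directly with the sched provider instead of being read off individual fbt timestamps; a rough sketch, again assuming the target pid is passed as $1:

#!/usr/sbin/dtrace -s
/* Distribution of off-CPU intervals for the target pid ($1). */
sched:::off-cpu
/pid == $1/
{
        self->off = timestamp;
}

sched:::on-cpu
/self->off/
{
        @["off-cpu (ns)"] = quantize(timestamp - self->off);
        self->off = 0;
}

The quantize() aggregation prints a power-of-two histogram on exit, so long waits like the 1.4 s one above stand out immediately.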

Is it possible to dig deeper with DTrace to see how the HBAs are performing?
Other suggestions are welcome too.
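
One starting point might be the stable io provider, which fires for each physical I/O and so sits much closer to the HBA than the syscall layer; a minimal sketch (device names and latencies will vary per system):

#!/usr/sbin/dtrace -s
/* Per-device I/O latency distribution via the stable io provider. */
io:::start
{
        /* arg0 is the buf pointer; use it to correlate start and done. */
        start_time[arg0] = timestamp;
}

io:::done
/start_time[arg0]/
{
        @latency[args[1]->dev_statname] = quantize(timestamp - start_time[arg0]);
        start_time[arg0] = 0;
}

For the HBA driver itself, fbt probes on the FC stack (e.g. fbt:qlc:: for a QLogic HBA, or fbt:fcp::) could be timed the same way, though those probe points are unstable and driver-specific.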

Regards Hans-Peter
-- 
This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org