[EMAIL PROTECTED] said:
> see in your case too, cpu 12 is getting loaded, it will be interesting to see
> what happens to cpu#12 when  you increase  the load further and increase
> number of NICs.
> 
> you can also use intrstat to get more details of what NIC interrupting  what
> cpu! once you reach >95% on cpu#12 things won't scale well. 

Actually, both #8 _and_ #12 are busy handling interrupts.  The intrstat
logs I have show #12 handling e1000g0, and #8 handling e1000g1.  It happened
that, at the time of this "mpstat" snapshot, the test was finishing up: two
ttcp's were still running on e1000g0 while only one was still running on
e1000g1, hence #12 working harder than #8.
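For anyone who wants to reproduce that observation, something like the
following is how I gathered those logs; the CPU ids are the ones from
this particular box, so adjust to taste:

```shell
# Sample interrupt activity every 5 seconds, restricted to the two CPUs
# that intrstat showed fielding the e1000g interrupts on this system
# (CPUs 8 and 12 here -- purely an example, not a general recipe).
intrstat -c 8,12 5
```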

We definitely can't get more than 90-95MB/sec out of each link, but it
would be interesting to see what happens when all four links are connected.

The two points I want to emphasize are:

(1) Before I enabled the fanout settings, there was only one CPU handling
    interrupts for _both_ interfaces.

(2) Before using "psradm" to limit interrupt-handling to one thread per
    core on the T2000, network performance was noticeably slower.
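To make point (2) concrete, this is the general shape of the psradm
invocation; the thread ids below are illustrative for a core with four
hardware strands, not the exact list I used:

```shell
# Mark the second, third, and fourth strands of a core as "no-intr" so
# only the first strand of that core handles interrupts (example ids).
psradm -i 1 2 3

# To undo, return them to the normal interrupt-eligible state:
psradm -n 1 2 3

# Verify the resulting per-CPU state:
psrinfo
```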


> And I haven't found any successful tuning so far to distribute these
> interrupts among other idle cpus. 

Yes, that is puzzling.  Which version of Solaris 10 (what kernel patch
level) are you running?  And what values did you use for the
"ddi_msix_alloc_limit" and "ip_soft_rings_cnt" tunables?
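For reference, both of those are set in /etc/system; the values below
are only placeholders (the actual values are exactly what I'm asking
you to report), and a reboot is needed for them to take effect:

```shell
# /etc/system fragment -- example values only
set ddi_msix_alloc_limit=8
set ip:ip_soft_rings_cnt=16
```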

Regards,

Marion


_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org