Hi Jim,

Jim M wrote:
Hello.

One of the developers I support is having a performance challenge. He is writing a simple message processing application that receives a small message on a socket and writes a response to that socket, then waits for the next message. The application runs one thread per client connection. On a single-CPU system, a single application thread can process a message and transmit its response in under 1 ms. With two or more processors enabled, still running a single thread, there is a delay of about 40 ms between receipt of the message and transmission of the response. This delay is consistent, not intermittent. Running additional threads causes the delay to become inconsistent, but always in excess of 40 ms. We are running on a 900 MHz V880 with 4 processors under Solaris 9. Timing measurements were made with snoop.
If anyone would care to suggest something we might do to get the latency down 
to a reasonable level on an SMP configuration, I would greatly appreciate it.  
Please let me know if you would find any additional information helpful.
I'd be curious to see what things look like under Solaris 10 or OpenSolaris. Plus, DTrace would sure be handy here for understanding where the time is going.
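For example, on a Solaris 10 box a quick sketch like the following would histogram the time the process spends in read(2). Pid 1234 is just a placeholder for the app's actual pid, and the same pattern works for write(2), poll(2), and friends:

    # run as root; Ctrl-C prints the aggregation
    dtrace -p 1234 -n '
      syscall::read:entry /pid == $target/ { self->ts = timestamp; }
      syscall::read:return /self->ts/ {
        @["read(2) latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
      }'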
How did you enable/disable the system's CPUs for comparison? psradm(1M)?
If so, how does performance look if you:
   - Bind the application to a CPU using pbind(1M)?
   - Put the application in a single-CPU processor set using psrset(1M)?
   - Enable 1, 2, 3, then 4 CPUs using psradm(1M)? Does the overhead increase linearly with the number of CPUs enabled?
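For reference, the invocations for the above might look something like this (again with 1234 standing in for the app's pid, and the CPU ids chosen arbitrarily):

    psradm -f 1 2 3        # take CPUs 1-3 offline, leaving CPU 0 (undo with psradm -n 1 2 3)
    psrinfo                # confirm which CPUs are on-line

    pbind -b 0 1234        # bind pid 1234 to CPU 0

    psrset -c 1            # create a processor set containing CPU 1 (prints the new set id)
    psrset -b 1 1234       # bind pid 1234 to processor set 1
    psrset -d 1            # tear the set down when finished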

Data from the above might be telling...

Thanks,
-Eric