Thanks, all, for your answers. 

I am working on a project in which 10k clients connect to one server. The 
server was originally implemented using select(). Since that consumed a lot of 
CPU, I replaced select() with port_getn(). Unfortunately, the CPU consumption 
has not been reduced at all. Here is my attempt to analyze the reason: 

To use select(), you have to keep all the sockets of interest in an array or 
list. After select() returns, you check each element of the array/list with 
FD_ISSET() to see whether it is ready to read/write. 
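
Roughly, the select() path looks like this (only a minimal sketch, not my 
actual code; fds[] and nfds are placeholder names for the socket array): 

#include <sys/select.h>
#include <unistd.h>

void
poll_with_select(int fds[], int nfds)
{
    fd_set readset;
    int i, maxfd = -1;

    FD_ZERO(&readset);
    for (i = 0; i < nfds; i++) {        /* O(n): rebuild the set every pass */
        FD_SET(fds[i], &readset);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }

    if (select(maxfd + 1, &readset, NULL, NULL, NULL) > 0) {
        for (i = 0; i < nfds; i++) {    /* O(n) again: test every fd */
            if (FD_ISSET(fds[i], &readset)) {
                char buf[4096];
                (void) read(fds[i], buf, sizeof (buf));
            }
        }
    }
}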

With port_getn(), the event list you get back already contains only the 
sockets that are ready to read/write. You just read/write data from/to the 
sockets in the event list (event.portev_object); you do not need to scan the 
whole array/list to find them. 
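
For comparison, the port_getn() path is roughly the following (again just a 
sketch; it assumes the port came from port_create() and every socket was 
registered with port_associate()): 

#include <port.h>
#include <unistd.h>

#define MAX_EVENTS 64

void
poll_with_port(int port)
{
    port_event_t events[MAX_EVENTS];
    uint_t nget = 1;    /* block until at least one event is available */
    uint_t i;

    if (port_getn(port, events, MAX_EVENTS, &nget, NULL) == 0) {
        for (i = 0; i < nget; i++) {    /* only the ready sockets */
            int fd = (int)events[i].portev_object;
            char buf[4096];

            (void) read(fd, buf, sizeof (buf));
            /* the fd must be re-associated before it can fire again */
        }
    }
}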

However, in my case, each time I get the event list from port_getn(), I still 
need to search the whole array/list to retrieve other client information based 
on the socket ID taken from the event list. Because of this, I did not get any 
benefit from port_getn() and the CPU usage was not reduced. Is my analysis 
correct? 
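
To make that extra cost concrete, the per-event lookup I am describing is 
basically this (client_t, clients[] and nclients are just placeholder names 
for my per-client state): 

typedef struct client {
    int fd;
    /* ... other per-client information ... */
} client_t;

static client_t *
find_client(client_t clients[], int nclients, int fd)
{
    int i;

    for (i = 0; i < nclients; i++) {    /* O(n) scan per ready socket */
        if (clients[i].fd == fd)
            return (&clients[i]);
    }
    return (NULL);
}

(For what it is worth, I know port_associate() also takes a user pointer that 
comes back in portev_user, so per-client state could in principle be attached 
there instead of being looked up.) 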

I still do not understand why, in my case, port_getn() does not improve CPU 
usage at all. It should be at least a little better than the select() 
approach, if not significantly better, because, as you agreed, the cost of 
select() plus the FD_ISSET() scan is O(FD_SETSIZE) per pass, which is much 
worse than port_getn(). Is it possible that although port_getn() itself saves 
some CPU, the re-association step (port_associate() after each event) consumes 
enough CPU to cancel out that benefit? BTW, my application only has data 
available on a few sockets at a time.
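
To be clear about the re-association cost I am asking about: with 
PORT_SOURCE_FD an association is delivered only once, so the handler has to 
re-arm each socket, roughly like this (a sketch; port and the event come from 
the earlier loop): 

#include <port.h>
#include <poll.h>
#include <unistd.h>

void
handle_event(int port, port_event_t *ev)
{
    int fd = (int)ev->portev_object;
    char buf[4096];

    (void) read(fd, buf, sizeof (buf));

    /* re-arm the socket; one extra system call per handled event */
    (void) port_associate(port, PORT_SOURCE_FD, (uintptr_t)fd,
        POLLIN, ev->portev_user);
}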
 
 