On Wed 13 Aug 2014 03:23:21 PM EDT, Grant Edwards wrote:
> On 2014-08-13, Alec Ten Harmsel <a...@alectenharmsel.com> wrote:
>> 2014-08-13 12:21 GMT-05:00 Grant Edwards <grant.b.edwa...@gmail.com>:
>
>> Without knowing what you're doing, this sounds like a bad idea; if
>> you *need* to synchronize threads, why aren't they running in the
>> same process?
>
> I'm trying to decouple different portions of a system as much as
> possible.  Currently, the different parts communicate via Unix domain
> sockets.  That works OK, but for a few of the high-frequency
> operations I'm trying to find a way to eliminate the overhead that's
> involved in sockets (system calls, context switches, copying data from
> userspace to kernel space and back to user space).

Decoupling == great. Is it possible that you could do something like 
this (rough Python sketch after the outline):

Thread 'a' in process 'a':
1. Event in thread 'a' is generated or whatever
2. if low-frequency event, send a message over a socket to a different 
thread/process
3. if high-frequency event, push event into shared memory thread-safe 
queue

Thread 'b' in process 'a':
1. infinite loop reading from shared memory queue and processing events
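
Here's a rough sketch of what I mean by a shared-memory queue, assuming 
Python 3.8+ for multiprocessing.shared_memory. The fixed-slot ring 
buffer layout (two 64-bit head/tail counters, then 8-byte event slots) 
is just something I made up for illustration, not anything from your 
setup:

    import struct
    from multiprocessing import Lock, Process
    from multiprocessing.shared_memory import SharedMemory

    SLOTS = 1024       # capacity of the ring
    SLOT_SIZE = 8      # one 64-bit event payload per slot
    HDR = 16           # head and tail stored as two 64-bit ints

    def push(shm, lock, value):
        """Producer: append one event; returns False if the ring is
        full (caller decides whether to retry or drop)."""
        with lock:
            head, tail = struct.unpack_from("qq", shm.buf, 0)
            if head - tail >= SLOTS:
                return False
            off = HDR + (head % SLOTS) * SLOT_SIZE
            struct.pack_into("q", shm.buf, off, value)
            struct.pack_into("q", shm.buf, 0, head + 1)
            return True

    def pop(shm, lock):
        """Consumer: remove one event, or return None if empty."""
        with lock:
            head, tail = struct.unpack_from("qq", shm.buf, 0)
            if tail == head:
                return None
            off = HDR + (tail % SLOTS) * SLOT_SIZE
            (value,) = struct.unpack_from("q", shm.buf, off)
            struct.pack_into("q", shm.buf, 8, tail + 1)
            return value

    def consumer(name, lock):
        shm = SharedMemory(name=name)
        seen = 0
        while seen < 100:          # thread 'b': busy-polls the queue
            if pop(shm, lock) is not None:
                seen += 1
        shm.close()

    if __name__ == "__main__":
        shm = SharedMemory(create=True, size=HDR + SLOTS * SLOT_SIZE)
        lock = Lock()
        p = Process(target=consumer, args=(shm.name, lock))
        p.start()
        for i in range(100):       # thread 'a': high-frequency path
            while not push(shm, lock, i):
                pass               # ring full; spin until space frees
        p.join()
        shm.close()
        shm.unlink()

Note the consumer busy-polls, which is exactly the block-vs-poll 
trade-off you raised below; a real version would want some way to 
sleep when the queue is empty.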

Ideally, high-frequency events should be handled in the same thread if 
possible, failing that in the same process, and only as a last resort 
in a different process. While decoupling is great, it seems like you're 
losing the benefits of it by tightly coupling all of your "decoupled" 
threads and processes.

> I may have to stick with sockets when I want to block until some event
> happens.

To be clear, do you want to block or sleep/yield until an event happens?
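
If it's blocking you're after, sockets aren't the only way to get it. 
For example (my suggestion, not something from your setup), a 
multiprocessing.Event lets one process sleep in the kernel until 
another wakes it, no socket involved:

    import time
    from multiprocessing import Event, Process

    def blocking_consumer(ev):
        ev.wait()          # sleeps until the producer calls ev.set()
        print("woke up")

    if __name__ == "__main__":
        ev = Event()
        p = Process(target=blocking_consumer, args=(ev,))
        p.start()
        time.sleep(0.1)    # producer does other work...
        ev.set()           # ...then wakes the blocked consumer
        p.join()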

I'm sorry for not being too helpful. Just one last question: Can you 
describe what exactly your code is supposed to do, or is it something 
that you can't talk about because it's a work thing? I don't care 
either way, but I'm just curious because it seems you need to optimize 
quite a bit.

Alec
