Write the data to the memory-mapped file/shared memory. Keep track of the last 
written byte as new_length.

Then publish it with atomic.StoreUint64(&header.length, new_length).
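To make that concrete, here is a minimal sketch of the writer side in Go. The 
layout (an 8-byte uint64 length at offset 0, payload after it), the package and 
function names, and the use of golang.org/x/sys/unix.Mmap for the mapping are 
assumptions on my part; error and bounds handling are omitted.

package shmbuf

import (
    "sync/atomic"
    "unsafe"
)

const headerSize = 8 // assumed layout: uint64 length at offset 0, payload after it

// appendRecord writes the payload first and only then publishes the new length
// with an atomic store, so a reader can never observe a length that covers
// unwritten bytes. mem is the MAP_SHARED []byte returned by e.g.
// golang.org/x/sys/unix.Mmap.
func appendRecord(mem []byte, data []byte) {
    lenPtr := (*uint64)(unsafe.Pointer(&mem[0]))
    oldLen := atomic.LoadUint64(lenPtr) // single writer: this is just our own last published value
    copy(mem[headerSize+oldLen:], data)                  // 1. write the data
    atomic.StoreUint64(lenPtr, oldLen+uint64(len(data))) // 2. publish new_length
}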

Readers read header.length atomically to determine the last valid byte (using 
whatever facilities their language has).

A reader then knows that bytes up to header.length are valid to consume.
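Continuing the same sketch (same assumed layout, package and imports), the 
reader side could be:

// readAvailable returns the bytes published since this reader's own consumed
// offset (both counted from the start of the payload). The atomic load pairs
// with the writer's atomic store above.
func readAvailable(mem []byte, consumed uint64) []byte {
    lenPtr := (*uint64)(unsafe.Pointer(&mem[0]))
    published := atomic.LoadUint64(lenPtr)
    if published == consumed {
        return nil // nothing new yet
    }
    return mem[headerSize+consumed : headerSize+published]
}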

This assumes you are always appending to the buffer - never reusing earlier 
buffer space. If you do want to reuse space, it becomes much more complicated, 
because the writer must determine that all readers have consumed the data 
before it overwrites it.
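Purely to illustrate that bookkeeping - none of this is part of the scheme 
above - one option is for each reader to atomically publish a "consumed up to" 
cursor somewhere the writer can see, and for the writer to only recycle bytes 
below the minimum of those cursors:

// safeToReuseBelow returns the lowest offset any reader still needs; the writer
// may only recycle payload bytes below it. Where the cursors actually live
// (extra header slots, a separate mapping, ...) is left open here.
// Needs "math" added to the imports of the sketch above.
func safeToReuseBelow(readerCursors []*uint64) uint64 {
    min := uint64(math.MaxUint64)
    for _, c := range readerCursors {
        if v := atomic.LoadUint64(c); v < min {
            min = v
        }
    }
    return min
}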

The above relies on the atomics establishing a happens-before relationship: all 
writes performed before the atomic store must be visible to a reader that sees 
the updated value in the header.
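In the sketches above, the atomic.StoreUint64 in appendRecord and the 
atomic.LoadUint64 in readAvailable are that store/load pair; the copy into the 
buffer happens before the store, so, per the reasoning above, it is visible to 
any reader whose load returns the new length.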


> On Jan 22, 2023, at 12:53 PM, Peter Rabbitson <ribasu...@gmail.com> wrote:
> 
> 
> 
> On Sun, Jan 22, 2023 at 7:39 PM robert engels <reng...@ix.netcom.com> wrote:
> The atomic store will force a memory barrier - as long as the reader (in the 
> other process) atomically reads the “new value”, all other writes prior will 
> also be visible.
> 
> Could you translate this to specific go code? What would constitute what you 
> called "the atomic store" in the playground example I gave?  
>  
> BUT you can still have an inter-process race condition if you are updating 
> the same memory mapped file regions - and you need an OS mutex to protect 
> against this
> 
> Correct. This particular system is multiple-reader single-threaded-writer, 
> enforced by a Fcntl POSIX advisory lock. Therefore as long as I make the 
> specific writer consistent - I am done.
>  
> You can look at projects like https://github.com/OpenHFT/Chronicle-Queue for 
> ideas.
> 
> Still, large-scale shared memory systems are usually not required. I would 
> use a highly efficient message system like Nats.io <http://nats.io/> and not 
> reinvent the wheel. Messaging systems are also far more flexible.
> 
> 
> Nod, the example you linked is vaguely in line with what I want. You are also 
> correct that reinventing a wheel is bad form, and is to be avoided at all 
> costs. Yet the latency sensitivity of the particular IPC unfortunately does 
> call for an even rounder wheel. My problem isn't about "what to do" nor "is 
> there another way", but rather "how do I do this from within the confines of 
> go". 
>  
> 
