So it seems that reentrant locks are a hard requirement after all. The reason is that an event handler can itself trigger an event that is handled synchronously by a similar callback (basically, callback composition). If the first event handler locks the UI tree, the nested synchronous callback will also attempt to take the lock, and without lock reentrancy that's a deadlock.
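To make that concrete, here is a minimal sketch of the failure mode with a plain (non-reentrant) sync.Mutex; the handler registry, event names and the `uiMu` lock are made up for illustration:

```go
package main

import "sync"

// uiMu guards the (hypothetical) UI tree.
var uiMu sync.Mutex

// dispatch runs every handler registered for an event, synchronously,
// on the current goroutine.
func dispatch(handlers map[string][]func(), event string) {
	for _, h := range handlers[event] {
		h()
	}
}

func main() {
	handlers := map[string][]func(){}

	handlers["child"] = append(handlers["child"], func() {
		uiMu.Lock() // blocks forever: the lock is held by the "parent" handler below
		defer uiMu.Unlock()
		// mutate the UI tree...
	})

	handlers["parent"] = append(handlers["parent"], func() {
		uiMu.Lock()
		defer uiMu.Unlock()
		// Mutating the tree triggers a second event that is handled
		// synchronously before this handler returns:
		dispatch(handlers, "child")
	})

	dispatch(handlers, "parent") // never returns
}
```

Running this ends with the runtime's "all goroutines are asleep - deadlock!" error: the inner handler is waiting on a lock that its own caller holds.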
This is the same scenario as described here (which doesn't seem to have a satisfying answer): https://stackoverflow.com/questions/30469808/recursive-critical-sections-in-go

For now, I'm going to take advantage of the fact that there is no goroutine preemption in wasm to make locking in event-handling callbacks optional (using TryLock; a sketch is at the end of this message, below the quoted thread). That's an ugly hack, but there is no way around it for now.

***And yes, I have thought about sending mutating functions over a channel to a goroutine dedicated to calling them (basically a UI thread), but that would make the API too cumbersome for end users, as they would have to remember to create closures and put them on a channel. A tedious DX only because we need to be concurrency-aware, and concurrency isn't even required all the time.***

On Wednesday, July 6, 2022 at 3:33:50 PM UTC+2 atd...@gmail.com wrote:

> Well, in my case, only the writing goroutine should be able to have access to the data structure (to keep the invariants), so I think a mutex should do the trick.
>
> Indeed, it looks like transactions, or a reimplementation of a UI thread/event loop.
>
> A concurrent hash-array mapped trie might indeed help if reads had to happen concurrently with writes. In my experience though, this is allocation heavy, and for one of my current targets it might be impractical: wasm in the browser is highly sensitive to allocations, and those concurrent data structures rely heavily on copy-on-write.
>
> For now, my main idea is to deal with establishing critical sections in the main goroutine (which is framework-handled, so should be fine) and, unfortunately, document that framework users spawning their own goroutines should establish a critical section where all tree access/mutation happens, by taking the global lock.
>
> One alternative would be storing operations into a queue as suggested and then passing it to a tree-modifying goroutine. Might be better, although it might add some API surface. Need to think about it.
>
> On Wednesday, July 6, 2022 at 1:58:01 PM UTC+2 ren...@ix.netcom.com wrote:
>
>> Depends on whether you need “concurrent” readers or serialized, interlaced ones.
>>
>> It depends on your transaction semantics, e.g. dirty reads, etc.
>>
>> It is fairly easy for readers to grab the read lock and for the writer thread (since the ops were added to a queue) to grab the write lock when it has work to do.
>>
>> Like I said, a copy-on-write, partitioned structure can be highly effective.
>>
>> Review the ConcurrentHashMap source for an efficient implementation of a complex data structure that is highly concurrent for both reading and writing.
>>
>> If you have a tree, you would most likely partition by branches.
>>
>> The idea is to mutate only a portion of the structure and efficiently swap it in when ready.
>>
>> On Jul 6, 2022, at 5:59 AM, Brian Candler <b.ca...@pobox.com> wrote:
>>
>> On Wednesday, 6 July 2022 at 11:33:29 UTC+1 ren...@ix.netcom.com wrote:
>>
>>> As I said, make the mutating op simply record the desired op on a queue and process them serially on another thread. You are essentially implementing transactions.
>>
>> But how do you protect tree *readers* from having mutations take place under their feet? You could serialize all reads as well, and copy the results. But I don't think that's what the OP was intending.
>>
>> You could have an RWMutex, but then it's up to the API caller to obtain that mutex *for as long as necessary* (e.g. while traversing part of the tree).
>> Depending on the use case, a "functional" approach might be worth considering:
>>
>> - nodes are strictly immutable
>> - if you need to modify a node, create a new node instead
>> - recursively repeat for every other node that holds a pointer to this node
>> - atomically update the "tree root" pointer
>>
>> Then each reader sees a snapshot of the tree at a point in time, and garbage collection should take care of the rest.
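Below is a rough sketch of the "functional"/snapshot approach quoted just above, assuming Go 1.19+ for `atomic.Pointer` and with made-up `Node`/`root` names:

```go
package main

import "sync/atomic"

// Node is never modified after it is published; "mutation" builds new nodes.
type Node struct {
	Value    string
	Children []*Node
}

// root holds the current snapshot of the tree.
var root atomic.Pointer[Node]

// withChild returns a copy of parent whose i-th child is replaced; the
// original node and its untouched children are shared, not copied.
func withChild(parent *Node, i int, child *Node) *Node {
	cp := *parent
	cp.Children = append([]*Node(nil), parent.Children...)
	cp.Children[i] = child
	return &cp
}

// snapshot is all a reader needs: no lock, just a consistent tree
// that no writer will ever touch again.
func snapshot() *Node { return root.Load() }

func main() {
	root.Store(&Node{Value: "root", Children: []*Node{{Value: "old"}}})

	// "Mutate" by rebuilding the path from the changed node up to the root,
	// then publishing the new root atomically.
	oldRoot := root.Load()
	newRoot := withChild(oldRoot, 0, &Node{Value: "new"})
	root.Store(newRoot)

	_ = snapshot()
}
```

With a single writer goroutine (as in my case) a plain `Store` is enough to publish; if several goroutines could write, the read-modify-publish step would need a `CompareAndSwap` loop or its own mutex.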
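For comparison, here is a sketch of the RWMutex variant mentioned earlier in the quoted thread, where the caller holds the read lock for the whole traversal (the `Tree`/`Node` types are again hypothetical):

```go
package tree

import "sync"

type Node struct {
	Value    string
	Children []*Node
}

// Tree guards all access to its nodes with a single RWMutex.
type Tree struct {
	mu   sync.RWMutex
	root *Node
}

// Walk holds the read lock for the entire traversal, so the caller sees a
// consistent tree; releasing it between nodes would let writers interleave.
func (t *Tree) Walk(visit func(*Node)) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	var rec func(*Node)
	rec = func(n *Node) {
		if n == nil {
			return
		}
		visit(n)
		for _, c := range n.Children {
			rec(c)
		}
	}
	rec(t.root)
}

// SetValue takes the write lock; it blocks until all readers have finished.
func (t *Tree) SetValue(n *Node, v string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	n.Value = v
}
```

The catch is that `visit` must not call `SetValue`: sync.RWMutex cannot be upgraded or re-acquired, which is exactly the reentrancy problem this thread started with.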
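Finally, roughly what the TryLock workaround mentioned at the top of this message looks like; it leans entirely on the assumption that js/wasm runs on a single thread and that handlers never yield while the lock is held:

```go
package ui

import "sync"

// uiMu is the global UI-tree lock.
var uiMu sync.Mutex

// withUILock runs fn with the lock held if it can acquire it, and without
// it otherwise. On js/wasm there is a single thread and a handler is not
// preempted, so a lock that is already taken can only belong to an
// enclosing handler further up the same call stack.
func withUILock(fn func()) {
	if uiMu.TryLock() { // sync.Mutex.TryLock needs Go 1.18+
		defer uiMu.Unlock()
	}
	fn()
}
```

If a handler ever blocked (channel send, sleep, syscall/js callback) while holding `uiMu`, another goroutine could see TryLock fail without being nested inside the lock holder, which is why this is only an ugly stopgap.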