You can look at github.com/robaho/keydb and its LSM trees as an alternative 
approach to concurrency. You can also take the low-level tree.go and wrap it 
in an RW mutex.
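
A minimal sketch of that simple wrapping, with a placeholder Tree interface 
standing in for the low-level tree (this is not keydb's actual API):

package cache

import "sync"

// Tree is a stand-in for the non-thread-safe tree being wrapped.
type Tree interface {
    Get(key string) (string, bool)
    Put(key, value string)
}

// SafeTree guards every operation on the underlying tree with an RW mutex.
type SafeTree struct {
    mu   sync.RWMutex
    tree Tree
}

// Get takes the read lock, so any number of readers proceed in parallel.
func (s *SafeTree) Get(key string) (string, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    return s.tree.Get(key)
}

// Put takes the write lock, excluding all readers and other writers.
func (s *SafeTree) Put(key, value string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.tree.Put(key, value)
}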

> On Jan 6, 2021, at 9:28 AM, ksbh...@gmail.com <ksbhas...@gmail.com> wrote:
> 
> 
> https://pkg.go.dev/lang.yottadb.com/go/yottadb gives you B*-trees with a 
> hierarchical key-value model that you can experiment with in a Docker 
> container (or, of course, a virtual or real machine). When you get to needing 
> persistence and concurrency control, you can also get ACID transactions 
> (i.e., for linearizability purposes, each transaction “occurs” at commit 
> time).
> 
> Regards
> – Bhaskar
> 
>> On Wednesday, January 6, 2021 at 1:19:01 AM UTC-5 ren...@ix.netcom.com wrote:
>> I think you have to go a bit further and use an RW mutex to ensure memory 
>> consistency (for the simple solution). 
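>> 
>> For example (a contrived sketch, not your code): if only the writer takes 
>> the lock, an unguarded concurrent read is still a data race, and running 
>> with go run -race will flag it:
>> 
>> package main
>> 
>> import "sync"
>> 
>> func main() {
>>     var mu sync.Mutex
>>     m := map[string]int{}
>> 
>>     done := make(chan struct{})
>>     go func() {
>>         mu.Lock() // the writer locks...
>>         m["k"] = 1
>>         mu.Unlock()
>>         close(done)
>>     }()
>> 
>>     _ = m["k"] // ...but this unsynchronized read still races with it
>>     <-done
>> }
>> 
>> Swapping in a sync.RWMutex and taking RLock around the read removes the 
>> race while still letting readers run in parallel.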
>> 
>>> On Jan 5, 2021, at 8:52 PM, joseph.p...@gmail.com <joseph.p...@gmail.com> 
>>> wrote:
>>> 
>>> Well, I think I only need to lock on writes, and it'll be easier if I just 
>>> lock the entire tree on writes. Reads will be the majority of the 
>>> operations by far. This is for a bit of caching before we go to a K/V 
>>> database like Redis, etc.
>>>>> On Tuesday, January 5, 2021 at 5:16:36 PM UTC-8 k.alex...@gmail.com wrote:
>>>>>> On Tue, Jan 5, 2021, 6:59 PM Nathan Fisher <nfi...@junctionbox.ca> wrote:
>>>>> 
>>>>>> Does write-only locking provide read correctness? I would've thought, 
>>>>>> based on the memory model, that it could cause issues:
>>>>>> 
>>>>>> https://golang.org/ref/mem#tmp_2
>>>>> 
>>>>> It depends on your notion of "read correctness", specifically when you 
>>>>> consider each read to have occurred with respect to its concurrent 
>>>>> writes. Linearizability may be a weaker guarantee than you want, and 
>>>>> that's okay.
>>>>> 
>>>>> Linearizability requires that, for each operation, you can pick some 
>>>>> point between its start and its end at which it can be said to have 
>>>>> "occurred". When you consider all the operations in that order, the 
>>>>> results you see must be the same as those of a sequential execution.
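>>>>> 
>>>>> As a toy illustration with an atomic variable rather than a tree: a 
>>>>> read that overlaps the write below may return either 0 or 1, and both 
>>>>> results are linearizable, because we are free to place the read's point 
>>>>> before or after the write's point.
>>>>> 
>>>>> package main
>>>>> 
>>>>> import (
>>>>>     "fmt"
>>>>>     "sync/atomic"
>>>>> )
>>>>> 
>>>>> func main() {
>>>>>     var x atomic.Int64 // starts at 0
>>>>> 
>>>>>     done := make(chan struct{})
>>>>>     go func() {
>>>>>         x.Store(1) // the write "occurs" at some point during Store
>>>>>         close(done)
>>>>>     }()
>>>>> 
>>>>>     fmt.Println(x.Load()) // overlapping read: 0 or 1, both linearizable
>>>>>     <-done
>>>>> }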
>>>>> 
>>>>> In the case I have described, we can pick a linearization point for reads 
>>>>> just before the last write which they passed on their way down the tree. 
>>>>> The reads should then see all the writes which happened prior to this 
>>>>> point.
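>>>>> 
>>>>> A minimal sketch of that idea, with a sorted linked list standing in 
>>>>> for a path down the tree (this is not the actual tree code): links are 
>>>>> published with atomic.Pointer, so a lock-free reader never observes a 
>>>>> half-built node, and its linearization point sits just before the last 
>>>>> link it loaded.
>>>>> 
>>>>> package list
>>>>> 
>>>>> import (
>>>>>     "sync"
>>>>>     "sync/atomic"
>>>>> )
>>>>> 
>>>>> type node struct {
>>>>>     key  int
>>>>>     next atomic.Pointer[node]
>>>>> }
>>>>> 
>>>>> type List struct {
>>>>>     mu   sync.Mutex // held by writers only
>>>>>     head atomic.Pointer[node]
>>>>> }
>>>>> 
>>>>> // Contains walks the list lock-free with atomic loads.
>>>>> func (l *List) Contains(key int) bool {
>>>>>     for n := l.head.Load(); n != nil && n.key <= key; n = n.next.Load() {
>>>>>         if n.key == key {
>>>>>             return true
>>>>>         }
>>>>>     }
>>>>>     return false
>>>>> }
>>>>> 
>>>>> // Insert fully builds the new node, then publishes it with a single
>>>>> // atomic store; a concurrent reader sees either the old link or the
>>>>> // completed new node, and either view is linearizable.
>>>>> func (l *List) Insert(key int) {
>>>>>     l.mu.Lock()
>>>>>     defer l.mu.Unlock()
>>>>>     n := &node{key: key}
>>>>>     prev := &l.head
>>>>>     for cur := prev.Load(); cur != nil && cur.key < key; cur = prev.Load() {
>>>>>         prev = &cur.next
>>>>>     }
>>>>>     n.next.Store(prev.Load())
>>>>>     prev.Store(n)
>>>>> }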
>>>>> 
>>>>> This isn't necessarily the order in which the operations entered the 
>>>>> root, but linearizability doesn't care: it has no opinion on when 
>>>>> overlapping operations "occur" with respect to one another.
>>>>> 
>>>>> I don't think using a happens-before relation for the program order seen 
>>>>> by each goroutine is going to cause a problem with respect to choosing 
>>>>> these linearization points, but maybe I'm missing something.
>>>>> 
>>>>> Maybe there is also a standardized notion of read correctness that 
>>>>> you're referring to which I'm not aware of.
