Good point; simple examples are almost never enough. I was hoping we'd end 
up with 3-5 examples of a really simple case. Maybe I'll come up with a 
more complex one.

I'm loading ~5K files in parallel and possibly making an HTTP request for 
each. I like the slice solution, but I need the files to be accessible 
later by path (hence the map key). This is a command-line utility. Adding 
parallelism already cut the run time from 1-2 minutes down to about 5 
seconds, so I'm not concerned about map contention. So my example does 
capture the core algorithm, but it doesn't explain it very well.
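For concreteness, here's a stripped-down sketch of the shape of it (the 
names are made up, and the error handling and the per-file HTTP call are 
elided):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"sync"
)

// loadAll reads each path concurrently and stores the contents in a map
// keyed by path. A plain mutex guards the map; with only ~5K writes the
// lock is nowhere near the bottleneck next to the file/network I/O.
func loadAll(paths []string) map[string][]byte {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		files = make(map[string][]byte, len(paths))
	)
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			data, err := ioutil.ReadFile(p)
			if err != nil {
				return // the real code collects errors; omitted here
			}
			// ... the optional HTTP request per file would go here ...
			mu.Lock()
			files[p] = data
			mu.Unlock()
		}(p)
	}
	wg.Wait()
	return files
}

func main() {
	files := loadAll(os.Args[1:])
	fmt.Println("loaded", len(files), "files")
}

The lock is only held for the map insert, so the goroutines spend almost 
all their time in I/O.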


On Monday, October 16, 2017 at 9:42:18 AM UTC-7, Bryan Mills wrote:
>
> On Sunday, October 15, 2017 at 1:45:06 PM UTC-4, Alex Buchanan wrote:
>>
>> Show me the code! :)
>>
>> Here's mine: https://play.golang.org/p/ZwAlu5VuYr
>>
>
> The problem that sync.Map is intended to address is cache contention.
> Unfortunately, it doesn't currently address that problem well for stores 
> of disjoint keys (see https://golang.org/issue/21035).
>
> That said, if you're spawning a goroutine per write, you've already got 
> enough contention (on the scheduler) that any other synchronization 
> overhead is unlikely to matter.
>
> As you say: show me the code! The example you gave is simple enough that 
> it's not even clear why you need a map — a slice or channel would suffice.
> (Personally, I'd be inclined to use a slice: 
> https://play.golang.org/p/jpS06KFNbv)
>
> So a more realistic use case would help: what do you want to do with this 
> map when you're done with it?
>
