> Maybe I should apologize for the origin of this idea: 
https://github.com/go101/go101/wiki/How-to-perfectly-clone-a-slice%3F
No worries, I've learned a lot from the articles you've posted on go101.org, 
and I appreciate it.

> My current opinion is that it is best to let the Go runtime specialize 
zero-capacity slicing: 
https://github.com/golang/go/issues/68488#issuecomment-2267179883

Does this mean that both of the following will point to *runtime.zerobase*?
    s := make([]int, 0)
    s := x[:0:0]

If that's the case, then if someone intentionally keeps a slice's underlying 
array alive via s[:0:0], wouldn't that program's behavior be affected? I 
don't know whether there is a real-world case for this, though.
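
For example, here is a minimal sketch of what I mean (the finalizer is only 
there to observe reachability; with today's behavior the array stays 
reachable through the zero-capacity slice):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        arr := new([1 << 20]byte)
        runtime.SetFinalizer(arr, func(*[1 << 20]byte) {
            fmt.Println("array collected")
        })

        s := arr[:0:0] // len 0, cap 0, but the data pointer still refers to arr
        arr = nil

        runtime.GC()
        runtime.GC()
        // "array collected" is not printed here, because s still keeps the
        // array reachable. If s pointed to runtime.zerobase instead, the
        // array could be collected at this point.

        runtime.KeepAlive(s)
    }
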
On Friday 27 September 2024 at 14:16:36 UTC+7 tapi...@gmail.com wrote:

> Maybe I should apologize for the origin of this idea: 
> https://github.com/go101/go101/wiki/How-to-perfectly-clone-a-slice%3F
>
> When I posted that wiki article, I was not aware of this (maybe tiny) 
> drawback. 
>
> My current opinion is that it is best to let the Go runtime specialize 
> zero-capacity slicing: 
> https://github.com/golang/go/issues/68488#issuecomment-2267179883
>
> On Thursday, September 26, 2024 at 8:29:59 PM UTC+8 Hikmatulloh Hari Mukti 
> (Hari) wrote:
>
>> Hi gophers, I want to know the reason behind the decision to use 
>> *append(s[:0:0], s...)* over the previous code, since the two return 
>> different slices when dealing with a zero-length slice. The previous code 
>> returned a brand-new zero-length slice, while the current code returns an 
>> empty slice that still points to the previous array. Also, why not use 
>> *append(S(nil), s...)* instead? It returns nil when dealing with a 
>> zero-length slice, but what potential problem would that cause?
>>
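>> For a zero-length slice s, the difference looks roughly like this (just a 
>> small sketch, using []int in place of S):
>>
>> package main
>>
>> import "fmt"
>>
>> func main() {
>>     s := make([]int, 0, 8) // len 0, non-nil, backed by an 8-element array
>>
>>     a := append(s[:0:0], s...)    // current code: len 0, cap 0, but the
>>                                   // data pointer still refers to s's array
>>     b := append([]int(nil), s...) // append(S(nil), s...): stays nil, since
>>                                   // nothing is appended to the nil slice
>>
>>     fmt.Println(a == nil, b == nil) // false true
>> }
>>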
>> I don't know if this can be considered a problem, but here is my concern 
>> about the current code, *append(s[:0:0], s...)*:
>>
>> If we create slices from an array pool to reduce allocations by using 
>> append, and many of our slices turn out to have zero length, slices.Clone 
>> will return slices that still point to arrays in the pool. If we create 
>> many of them concurrently, then (if I understand correctly) the pool may 
>> have to allocate many array objects, because the objects retrieved with 
>> Get may not have been Put back into the pool yet. Those array objects can 
>> only be garbage-collected after the cloned slices are no longer used / 
>> reachable, and if each one is an array of a big struct, couldn't that put 
>> pressure on memory?
>>
>> Here is some pseudo-code for illustration only. I think the arrays 
>> created by the pool can only be garbage-collected once *ch* is consumed 
>> and the cloned slices are no longer used:
>>
>> var pool = sync.Pool{New: func() any { return &[255]bigstruct{} }}
>> var ch = make(chan []bigstruct, 1000)
>>
>> for i := 0; i < 1000; i++ {
>>     go func() {
>>         arr := pool.Get().(*[255]bigstruct)
>>         defer pool.Put(arr)
>>         s := arr[:0]
>>         ch <- slices.Clone(s) // slice points to arr
>>     }()
>> }
>>
>>
>> CMIIW and thank you!
>>
