Hi there,

We have a large-scale recommendation system serving millions of users, built 
in Go. It has worked well until recently, when we tried to enlarge our index 
(candidate pool) by 10x; as a result, the number of candidate objects created 
to serve each user request also grows by 5~10x. This huge number of objects 
allocated on the heap causes a big jump in the CPU spent on GC itself and thus 
significantly reduces the system's throughput.

We have tried different ways to reduce GC cost, such as setting a soft memory 
limit 
<https://weaviate.io/blog/gomemlimit-a-game-changer-for-high-memory-applications>
and dynamically tuning the value of GOGC, similar to what is described here 
<https://www.uber.com/blog/how-we-saved-70k-cores-across-30-mission-critical-services/>.
These indeed helped, but they don't reduce the intrinsic cost of GC, because 
the huge number of objects on the heap still has to be collected eventually.
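For reference, this is roughly how we apply those two settings at process 
startup (a minimal sketch; the limit and GOGC values below are placeholders 
for illustration, not our production settings):

    package main

    import "runtime/debug"

    func main() {
        // Soft memory limit (Go 1.19+), same effect as the GOMEMLIMIT
        // env var. The 8 GiB value is only a placeholder.
        debug.SetMemoryLimit(8 << 30)

        // Adjust GOGC at runtime, same effect as the GOGC env var.
        // In our service this is tuned dynamically; 200 is just an example.
        debug.SetGCPercent(200)

        // ... start serving requests ...
    }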

I'm wondering if you have any suggestions on how to reduce object allocations 
during request serving?

Thanks!
Best
