Hi Josh,

Did you ever elaborate further on this topic? Especially with the rise of 
multi-schedulers and mixed workloads in Kubernetes (kubelet), I'm looking 
for advice and best practices on how to handle these things without 
negatively affecting the Go runtime of each container.
The JVM has received cgroup-related improvements so that the GC is not 
negatively affected. Some users have written packages that tweak GOMAXPROCS 
before starting up containers.
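
A minimal sketch of that approach (my assumptions: cgroup v1 with the CFS 
control files at the usual /sys/fs/cgroup paths; the quota is rounded up so 
a fractional CPU limit still gets a whole P):

    package main

    import (
        "fmt"
        "io/ioutil"
        "math"
        "runtime"
        "strconv"
        "strings"
    )

    // readInt parses a single integer from a cgroup control file.
    func readInt(path string) (int64, error) {
        b, err := ioutil.ReadFile(path)
        if err != nil {
            return 0, err
        }
        return strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
    }

    func main() {
        // cgroup v1 CFS settings; quota is -1 when the container is unlimited.
        quota, qerr := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
        period, perr := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
        if qerr == nil && perr == nil && quota > 0 && period > 0 {
            // Round up so e.g. a 9.6-CPU quota still yields GOMAXPROCS=10.
            runtime.GOMAXPROCS(int(math.Ceil(float64(quota) / float64(period))))
        }
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }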

Especially on larger NUMA systems, I'm afraid of performance issues due to 
imbalanced NUMA layouts when overcommitting resources.

I went through all the articles related to cgroups on this list and on the 
golang-dev ML. I'm still not sure whether this really is a problem, or 
whether it's too early and most people simply don't face it yet.
Any feedback is appreciated.

On Wednesday, April 19, 2017 at 22:06:08 UTC+2, Josh Roppo wrote:
>
> Hi y'all,
>
> I have a question about improving the overall CPU utilization efficiency 
> of our CPU-intensive distributed workloads. I've also posted this on the 
> Gophers Slack, but as Dave Cheney pointed out recently, it's not the best 
> medium for long-winded questions. I haven't been able to find much written 
> on this topic, so I appreciate any advice!
>
> I'd like to improve the CPU utilization efficiency of Go processes by 
> over-scheduling pods on Kubernetes nodes. The issue is that this workload 
> is variable and heterogeneous, so it's very rare that all processes 
> consume all the available resources. Allowing specific processes/pods to 
> burst would improve compute resource efficiency without impairing our 
> processing performance.
>
> e.g.:
> Turn 4x Go processes running on 16-CPU VMs at an average of 60% CPU 
> utilization into 1x 64-CPU Kubernetes node running 6 pods at ~96% CPU 
> utilization.
> Theoretically this can be achieved by setting each pod's scheduler request 
> to 9600 milliCPU and its cgroups limit to 16000 milliCPU (via the pod 
> specification).
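>
> For concreteness, here's a rough sketch of those per-container values 
> expressed with the Kubernetes Go client types (the import paths are my 
> assumption; the same numbers can go directly into the pod YAML):
>
>     package main
>
>     import (
>         "fmt"
>
>         corev1 "k8s.io/api/core/v1"
>         "k8s.io/apimachinery/pkg/api/resource"
>     )
>
>     func main() {
>         // Request 9.6 CPUs so the scheduler packs 6 pods onto a 64-CPU
>         // node, while the cgroups limit lets each pod burst to 16 CPUs.
>         res := corev1.ResourceRequirements{
>             Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("9600m")},
>             Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("16000m")},
>         }
>         fmt.Println("request:", res.Requests.Cpu(), "limit:", res.Limits.Cpu())
>     }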
>
> My question is: what is a safe setting of GOMAXPROCS for each pod that 
> won't degrade Go runtime performance? If it were set to 10, I don't think 
> a pod would be able to burst and take advantage of the extra cgroups 
> limit.
> Conversely, would setting GOMAXPROCS to 16 cause runtime instability if 
> all 6 pods bursted and attempted to utilize their full cgroups limit? Or 
> would the kernel scheduler simply throttle the performance (and 
> over-threading) in a safe manner?
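>
> One crude way to answer that empirically from inside the target cgroup is 
> to compare throughput under both candidate settings (the spin workload 
> and durations below are placeholders I made up):
>
>     package main
>
>     import (
>         "fmt"
>         "runtime"
>         "sync"
>         "sync/atomic"
>         "time"
>     )
>
>     // burn spins a CPU-bound loop until the deadline and counts iterations.
>     func burn(deadline time.Time, total *uint64) {
>         var n uint64
>         for time.Now().Before(deadline) {
>             n++
>         }
>         atomic.AddUint64(total, n)
>     }
>
>     func main() {
>         for _, procs := range []int{10, 16} {
>             runtime.GOMAXPROCS(procs)
>             var total uint64
>             var wg sync.WaitGroup
>             deadline := time.Now().Add(5 * time.Second)
>             for i := 0; i < procs; i++ {
>                 wg.Add(1)
>                 go func() {
>                     defer wg.Done()
>                     burn(deadline, &total)
>                 }()
>             }
>             wg.Wait()
>             fmt.Printf("GOMAXPROCS=%d: %d iterations\n", procs, total)
>         }
>     }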
>
> It seems Google is working on dynamic thread changes 
> <https://groups.google.com/forum/#!msg/golang-nuts/Jle66C2iECs/h6wp7nj5CgAJ> 
> for their runtimes, but that's more advanced than what we're ready for 
> right now.
>
> Stating why this is a horrible idea is also a helpful response.
>
> Thanks for any advice,
> Josh Roppo
>
> <3 Go Community 
>
