On the hypervisor, memory is the main limitation; depending on the workload, CPU load average might be as well.

For server and storage capacity, we tend to start the procurement process of adding 
hardware at around 50% utilization, or based on known upcoming workload.
The red line is to never go below N+1 on any resource.
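As a rough illustration of those two rules (a hypothetical sketch, not something we actually run; the numbers and function names are made up):

```python
# Illustrative sketch of the "procure at ~50%" and "never below N+1" rules.
# Capacities are in GiB; all figures here are hypothetical.

def n_plus_one_ok(host_capacities, allocated):
    """True if the workload still fits after losing the largest host (N+1)."""
    remaining = sum(host_capacities) - max(host_capacities)
    return allocated <= remaining

def should_procure(host_capacities, allocated, threshold=0.5):
    """True once utilization crosses the ~50% procurement trigger."""
    return allocated / sum(host_capacities) >= threshold

hosts = [512, 512, 512, 512]        # four hosts, 512 GiB RAM each
used = 900                          # GiB currently allocated

print(n_plus_one_ok(hosts, used))   # True: 900 fits on the remaining 1536
print(should_procure(hosts, used))  # False: 900/2048 ~ 44%, below the trigger
```

The same check applies per resource (memory, CPU, storage); whichever crosses its line first drives the procurement.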

Hope this helps.


From: S.Fuller <[email protected]>
Date: Saturday, August 19, 2023 at 12:03
To: [email protected] <[email protected]>
Subject: Capacity Planning

I'm curious what datapoints people are using for capacity planning within
their CloudStack environment, and what benchmarks are you setting to decide
that you need to add additional capacity to your cluster? To this point,
I've been tracking % used memory per node, as well as 5m/15m load per core,
along with 5m average CPU.



--
Steve Fuller
[email protected]
