Something like the Software Freedom Conservancy is exactly what I was hoping
existed - I don't know if anyone else has ever heard of them, but I bet
they could help.
Yeah, I was looking at mac1.metal instances, which are surprisingly cheap
for Macs, but still pretty expensive.
—Mark
Software Freedom Conservancy exists largely to help FLOSS orgs do this
sort of thing safely and conveniently, while retaining independent
governance. I believe Homebrew had a good experience with them, and
Buildbot itself is a member. Was that one of the options considered when
this question came up?
Yeah, I was thinking of the US as well, and I meant non-profit, which
doesn't have tax-deductible donations but is assumed to not make money. The
problem is there is a lot of work around becoming a legal entity and
accepting donations or whatever. I honestly have no idea how much work
exactly - but
On Mon, May 17, 2021 at 7:54 PM Christopher Nielsen <
masc...@rochester.rr.com> wrote:
>
> Pinning our buildbot VMs to specific NUMA nodes can result in starvation,
> when multiple VMs assigned to a given node are all busy. That would also
> result in underutilization of the other node, if VMs ass
Well, my recommendation for our setup is to avoid pinning.
Why? For several reasons:
* Out-of-the-box, ESX/ESXi makes a best-effort attempt to schedule all vCPUs
for a given VM on a single NUMA node.
* Even when that’s not possible, the hypervisor schedules VM vCPUs to
hyperthread pairs.
Pinn
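To put rough numbers on the starvation point, here is a back-of-the-envelope
sketch in Python; the cores-per-node and VM sizes are hypothetical placeholders,
not a reading from our actual Xserve topology:

# Illustrative only: hypothetical topology, not a reading from our hosts.
CORES_PER_NODE = 8   # physical cores per NUMA node (two-socket box assumed)
VCPUS_PER_VM = 8

def node_load(busy_vms_per_node):
    # For each NUMA node: (vCPU demand from busy VMs, physical core supply).
    return [(busy * VCPUS_PER_VM, CORES_PER_NODE) for busy in busy_vms_per_node]

# Pinned: two busy VMs stuck on node 0 while node 1 sits idle.
print("pinned:  ", node_load([2, 0]))   # [(16, 8), (0, 8)] -> node 0 starved
# Unpinned: the scheduler is free to place one busy VM per node.
print("unpinned:", node_load([1, 1]))   # [(8, 8), (8, 8)] -> balanced

The only point of the sketch is that pinning removes the scheduler's freedom to
rebalance when the busy VMs happen to land on the same node.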
>
> We don’t want any type of pinning, as that will further exacerbate the
> situation.
>
Why would that be? Do the virtual servers have a low number of physical
cores or something?
--
Jason Liu
On Mon, May 17, 2021 at 6:26 PM Christopher Nielsen <
masc...@rochester.rr.com> wrote:
> We don’t
If the guests on a virtual server are exerting a heavy enough load that the
virtual host is not able to obtain the resources it needs, then the entire
system's performance, both physical and virtual, can be affected. I'm not
claiming to be familiar enough with the specifics of the situation to clai
Hmmm, perhaps I responded a bit too quickly, as there can be some performance
benefit to pinning to a specific NUMA node (CPU socket). Particularly if our
VMs were running with only four vCPUs each.
So to qualify my statement: Given the configuration we’re running - VMs with
eight vCPUs/ea
Sounds good.
And perhaps that’s a good long-term plan: If we can install additional memory
in the Xserves, such that all buildbot VMs can be allocated 9 or 10 GB each,
that should help across the board.
Combine that with an upgrade to six-core Westmere Xeons, and our build times
should improve as well.
On May 17, 2021, at 17:42, Christopher Nielsen wrote:
> Also, I see two buildbot VMs with 9 GB of memory allocated:
>
> OS X El Capitan v10.11.6 (15G22010)
> Xcode v8.2.1 (8C1002)
> Apple LLVM version 8.0.0 (clang-800.0.42.1)
> Architecture: x86_64
> C++ library: libc++
> CPU: 8 ⨉ 2.15 GHz
> RAM:
Also, I see two buildbot VMs with 9 GB of memory allocated:
OS X El Capitan v10.11.6 (15G22010)
Xcode v8.2.1 (8C1002)
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Architecture: x86_64
C++ library: libc++
CPU: 8 ⨉ 2.15 GHz
RAM: 9 GB
Boot date: 2021-05-01T21:54:00Z
macOS Mojave v10.14.6 (18G9028)
Xc
On May 17, 2021, at 17:30, Christopher Nielsen wrote:
> If you total up the memory allocated across all VMs, is there at least one or
> two GB free for the hypervisor?
The hypervisor reserves 4.x GB for itself.
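For anyone who wants to do the bookkeeping, the headroom check is simple
arithmetic. A minimal Python sketch, using placeholder numbers rather than our
real host inventory:

# Placeholder figures; substitute the real per-host values.
physical_ram_gb = 48
hypervisor_reserve_gb = 4.5          # the "4.x GB" ESXi keeps for itself
vm_allocations_gb = [9, 9, 8, 8, 8]  # memory allocated to each buildbot VM

committed = sum(vm_allocations_gb) + hypervisor_reserve_gb
headroom = physical_ram_gb - committed
print("committed:", committed, "GB; headroom:", headroom, "GB")
if headroom < 0:
    print("overcommitted -> expect ballooning/swapping inside the guests")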
If you total up the memory allocated across all VMs, is there at least one or
two GB free for the hypervisor?
> On 2021-05-17-M, at 18:26, Ryan Schmidt wrote:
>
>> On May 17, 2021, at 07:36, Christopher Nielsen wrote:
>>
>> As for overcommitment: I’m simply suggesting that we reduce the number
We don’t want any type of pinning, as that will further exacerbate the
situation.
> On 2021-05-17-M, at 18:24, Ryan Schmidt wrote:
>
>> On May 17, 2021, at 13:13, Jason Liu wrote:
>>
>> Regarding CPU overcommitment: Are the virtual hosts doing any sort of CPU
>> pinning? Many virtualization p
On May 17, 2021, at 07:36, Christopher Nielsen wrote:
> As for overcommitment: I’m simply suggesting that we reduce the number of
> vCPUs per builder, from eight to six.
And I'm suggesting that doing so will slow things down in those situations when
only one or two VMs on a host are busy.
I
On May 17, 2021, at 13:13, Jason Liu wrote:
> Regarding CPU overcommitment: Are the virtual hosts doing any sort of CPU
> pinning? Many virtualization products have the ability to specify which of
> the pCPU cores a guest is allowed to use. As far as I can remember, products
> like KVM and ESXi
On Sun, May 16, 2021 at 3:38 PM Ryan Schmidt
wrote:
>
>
> On May 16, 2021, at 09:48, Christopher Nielsen wrote:
>
>> Upgrading them to six-core Xeons would absolutely help, for sure. But I’m
>> quite certain that we could also improve the situation, by reducing the
>> level of CPU overcommitment
It used to be that you had to pay for vCenter to get deep insights into VM
performance. But 'esxtop' is certainly a great starting point. And perhaps it
does provide enough info to get an idea of where time is being spent.
> On 2021-05-17-M, at 02:55, Daniel J. Luke wrote:
>
> I'm not an ESX
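For what it's worth, esxtop can also be run non-interactively and post-processed
offline. A rough Python sketch of that idea; it assumes esxtop's batch mode takes
-b/-d/-n, that the host is reachable over SSH as shown, and that per-VM CPU ready
time shows up in CSV columns whose headers contain "Group Cpu" and "% Ready" -
worth double-checking all three against our ESXi version before relying on it:

import csv
import subprocess

# Assumption: batch mode is `esxtop -b -d <seconds> -n <samples>`; the host
# name here is a placeholder.
subprocess.run(
    "ssh root@xserve-host 'esxtop -b -d 5 -n 60' > esxtop.csv",
    shell=True, check=True)

with open("esxtop.csv", newline="") as f:
    rows = list(csv.reader(f))

header, samples = rows[0], rows[1:]
# Assumption: per-VM CPU ready time lives in columns named like
# "...\Group Cpu(<id>:<vmname>)\% Ready". A high average means vCPUs are
# waiting on the hypervisor scheduler rather than doing real work.
ready_cols = [i for i, name in enumerate(header)
              if "Group Cpu" in name and "% Ready" in name]
for i in ready_cols:
    values = [float(row[i]) for row in samples if len(row) > i and row[i]]
    if values:
        print(header[i], "avg % ready:", sum(values) / len(values))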
On Mon, 17 May 2021 at 10:39, Ruben Di Battista wrote:
>
> Just as a side note, here in France I just created a non-profit association
> for a project I'm working on related to the organization of an event, and the
> process is almost free and reasonably fast. In a matter of a few weeks we had
> t
Just as a side note, here in France I just created a non-profit association
for a project I'm working on related to the organization of an event, and
the process is almost free and reasonably fast. In a matter of a few weeks we
had the association published in the official governmental gazette and a
On May 17, 2021, at 2:03 AM, Ryan Schmidt wrote:
> On May 16, 2021, at 17:57, Daniel J. Luke wrote:
>> On May 16, 2021, at 10:48 AM, Christopher Nielsen wrote:
>>> I’d bet the hypervisor is spending more time on scheduling and pre-emption,
>>> than actual processing time.
>>
>> This is something
On May 16, 2021, at 17:57, Daniel J. Luke wrote:
> On May 16, 2021, at 10:48 AM, Christopher Nielsen wrote:
>> I’d bet the hypervisor is spending more time on scheduling and pre-emption,
>> than actual processing time.
>
> This is something we could actually measure, though, right? Then we do
On May 16, 2021, at 14:46, Mark Anderson wrote:
> I keep wondering whether, if we became a not-for-profit, we could get someone
> like MacStadium or Amazon or something to donate server time to us. Or accept
> donations from GitHub sponsorship. I could look into what that would take,
> although i
On May 16, 2021, at 10:48 AM, Christopher Nielsen
wrote:
> I’d bet the hypervisor is spending more time on scheduling and pre-emption,
> than actual processing time.
This is something we could actually measure, though, right? Then we don't have
to just speculate (and if we do determine that a
I keep wondering whether, if we became a not-for-profit, we could get someone
like MacStadium or Amazon or something to donate server time to us. Or
accept donations from GitHub sponsorship. I could look into what that would
take, although it might be way more trouble than it's worth. I think my
curr
On May 16, 2021, at 09:48, Christopher Nielsen wrote:
> In terms of the ratio of vCPUs to GB of RAM, 1:1 isn’t totally unreasonable.
> However, we should also reserve 2 GB of RAM for the OS, including the disk
> cache. So perhaps 6 vCPUs would be a better choice.
MacPorts base hasn't ever co
In terms of the ratio of vCPUs to GB of RAM, 1:1 isn’t totally unreasonable.
However, we should also reserve 2 GB of RAM for the OS, including the disk
cache. So perhaps 6 vCPUs would be a better choice.
As for the total physical CPUs available on our Xserves, here’s the rub: While
hyperthreadi
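Spelled out, the rule of thumb above (one vCPU per GB of RAM, after holding back
2 GB for the OS and disk cache) is just:

def suggested_vcpus(vm_ram_gb, os_reserve_gb=2):
    # 1:1 vCPU-to-GB ratio, applied to the RAM left after the OS/disk-cache reserve.
    return max(1, vm_ram_gb - os_reserve_gb)

print(suggested_vcpus(8))   # 8 GB VM  -> 6 vCPUs
print(suggested_vcpus(10))  # 10 GB VM -> 8 vCPUs

For an 8 GB builder that lands on the six vCPUs suggested above.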
On May 14, 2021, at 07:12, Christopher Nielsen wrote:
> Since we’re overcommitting on CPU, I’m wondering if it would make sense to
> reduce the vCPUs in each VM to 4? In addition to reducing any swapping, that
> might also reduce the hypervisor context-switching overhead, and improve
> build ti
Ryan,
Thanks for the detailed info, it’s great to have a better idea of our buildbot
setup!
Since we’re overcommitting on CPU, I’m wondering if it would make sense to
reduce the vCPUs in each VM to 4? In addition to reducing any swapping, that
might also reduce the hypervisor context-switching
On May 12, 2021, at 07:41, Christopher Nielsen wrote:
>
> On 2021-05-12-W, at 08:32, Christopher Nielsen wrote:
>
>> Looking at the build times for various ports, they vary significantly.
>>
>> I was curious, are we overcommitting virtual CPUs vs. the number of
>> available physical cores on ou
On a semi-related note, relative to port build times in-general…
While building Mame locally numerous times last year, I noticed that link times
are excruciatingly slow: For a standard non-debug build, link times were on the
order of 10+ minutes. And for a debug build, it ballooned up to 50+ minut
To clarify my question about overcommitment: Is the total number of virtual
CPUs for the buildbot VMs running on a given Xserve greater than the number of
physical CPU cores available?
> On 2021-05-12-W, at 08:32, Christopher Nielsen
> wrote:
>
> Looking at the build times for various ports
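That check is just a ratio; a minimal sketch with placeholder numbers (the
per-VM vCPU counts and core count below are not a reading from our Xserves):

# Placeholder inventory for one Xserve; fill in the real values.
vcpus_per_vm = [8, 8, 8, 8]   # vCPUs allocated to each buildbot VM on the host
physical_cores = 8            # physical cores (not hyperthreads)

total_vcpus = sum(vcpus_per_vm)
ratio = total_vcpus / physical_cores
print(total_vcpus, "vCPUs on", physical_cores, "cores ->", ratio, "x overcommit")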
Looking at the build times for various ports, they vary significantly.
I was curious, are we overcommitting virtual CPUs vs. the number of available
physical cores on our Xserves? And is disk swapping coming into play within
the VMs themselves?