On 3/19/21 10:41 AM, Paolo Bonzini wrote:
> On 19/03/21 10:33, Stefan Hajnoczi wrote:
>> On Thu, Mar 18, 2021 at 09:30:41PM +0100, Paolo Bonzini wrote:
>>> On 18/03/21 20:46, Stefan Hajnoczi wrote:
>>>> The QEMU Project has 50,000 minutes of GitLab CI quota. Let's enable
>>>> GitLab Merge Requests so that anyone can submit a merge request and get
>>>> CI coverage.
>>>
>>> Each merge request consumes about 2500.  That won't last long.
>>
>> Yikes, that is 41 hours per CI run. I wonder if GitLab's CI minutes are
>> on slow machines or if we'll hit the same issue with dedicated runners.
>> It seems like CI optimization will be necessary...
> 
> Shared runners are 1 vCPU, so it's really 41 CPU hours per CI run.
> That's a lot but not unheard of.
> 
> Almost every 2-socket server these days will have at least 50 CPUs; with
> some optimization we probably can get it down to half an hour of real
> time, on a single server running 3-4 runners with 16 vCPUs.

Yesterday I tried to add my wife's computer, which she uses at home,
to my gitlab namespace to test Laurent's latest series.

Specs:

- Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz
- SSD 256GB
- 16GB RAM

So 1 runner with 4 vCPUs.

With 9 failed jobs, and 2 not run (due to a previous stage failure),
the pipeline summary is:

130 jobs for m68k-iotests in 623 minutes and 49 seconds (queued for 31
seconds)

Network bandwidth/latency isn't an issue, I have a decent connection
IMO.

# du -chs /var/lib/docker
67G     /var/lib/docker

^ This is a lot (on a fresh Docker install)

This matches your "41 CPU hours per CI run" comment.
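As a quick sanity check on that figure (assuming the 4 vCPUs were kept
fully busy for the whole pipeline, which is an approximation):

```python
# Convert the pipeline's wall-clock duration into CPU hours,
# assuming all 4 vCPUs were saturated for the full run.
wall_minutes = 623 + 49 / 60            # 623 min 49 s of wall time
vcpus = 4                               # one runner with 4 vCPUs
cpu_hours = wall_minutes / 60 * vcpus   # wall hours * vCPU count
print(round(cpu_hours, 1))              # ~41.6 CPU hours
```

So the 10.4 wall-clock hours on 4 vCPUs lands right around the 41 CPU
hours estimated for the shared 1-vCPU runners.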

Regards,

Phil.
