We're already using ccache, but it is used intra-run (i.e., for arm-01, sim,
etc.). We tried persisting it across runs (from different PRs) but it seems
there were issues with that.
What I suggest means we have one cache per board/config. The scripting
shouldn't be too difficult but it requires mu
On Wed, Mar 31, 2021 at 1:13 PM Matias N. wrote:
> My reasoning with the per board+config ccache is that it should more or
> less detect something like this without hardcoding any rules: only files
> that are impacted by any changes in the current PR would be rebuilt; any
> other file would be taken from the cache.
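Roughly what I have in mind for the scripting part, as an untested sketch (the
cache directory layout, size cap, and build commands below are placeholders,
not what the CI scripts do today):

    # Sketch only: give each board:config its own ccache directory so that
    # caches from different configurations never collide.
    import os
    import subprocess

    def build_with_ccache(board, config, cache_root="/tmp/ccache"):
        cache_dir = os.path.join(cache_root, f"{board}-{config}")
        os.makedirs(cache_dir, exist_ok=True)
        env = dict(os.environ,
                   CCACHE_DIR=cache_dir,    # one cache per board:config
                   CCACHE_MAXSIZE="100M")   # keep each individual cache small
        # Placeholder build steps; the real CI script would drive these.
        subprocess.run(["./tools/configure.sh", f"{board}:{config}"],
                       check=True, env=env)
        subprocess.run(["make", f"-j{os.cpu_count()}"], check=True, env=env)

Whether those per-config caches are then restored/saved with actions/cache
keyed on board:config, or kept on storage we control, is a separate question.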
On Wed, Mar 31, 2021, at 11:26, Nathan Hartman wrote:
> I wonder if we need the tests to be a bit smarter in restricting what they
> test.
>
> For example, if the change only affects files in arch/arm/stm32h7, then only
> stm32h7 configs will be tested.
That can be difficult to do. A header might be used by many configurations
outside the directory that changed.
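For what it's worth, a rough sketch of what that path-based filtering could
look like, including the fallback for exactly the header case above (the
patterns and list names are purely illustrative, not our actual test lists):

    # Sketch only: map changed paths to the test lists that need to run.
    # Anything unmatched (headers, common code, tooling) falls back to the
    # full matrix, which is the hard part mentioned above.
    import fnmatch

    PATH_TO_LISTS = {                 # illustrative patterns -> test lists
        "arch/arm/src/stm32h7/*": {"arm-stm32h7"},
        "arch/risc-v/*": {"risc-v"},
        "arch/xtensa/*": {"xtensa"},
        "boards/sim/*": {"sim"},
    }
    ALL_LISTS = {"arm-01", "arm-stm32h7", "risc-v", "xtensa", "sim", "other"}

    def lists_for_changes(changed_files):
        selected = set()
        for path in changed_files:
            for pattern, lists in PATH_TO_LISTS.items():
                if fnmatch.fnmatch(path, pattern):
                    selected.update(lists)
                    break
            else:
                # e.g. a header under include/nuttx/ touches everything
                return ALL_LISTS
        return selected or ALL_LISTS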
On Wed, Mar 31, 2021 at 9:05 AM Matias N. wrote:
> The situation is a bit better now, since we did not continue to submit as
> many PRs. However, the builds are still lagging (there are ones from 20 hrs
> ago running). I see many of the new "cancelling duplicates" jobs queued but
> they have not yet run.
> Anyway, I see the "other" job includes a very long list of very different
> platforms. Maybe splitting it into avr, risc-v, xtensa could help?
When Xtensa was merged with "others" it had only 3 configs: nsh,
ostest and smp. Now it contains around 30, and surely more to come...
I agree that we should split it.
On Tue, Mar 30, 2021, at 21:44, Brennan Ashton wrote:
> We were sharing cache but ran into some strange issues with collisions and
> disabled it unfortunately.
Do you remember the exact issue?
What if we had one ccache cache per board:config, shared across runs?
(not sure if that would be too much storage)
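Back-of-the-envelope on the storage question: if each board:config cache were
capped at, say, 100 MB and the matrix has on the order of a few hundred
configurations, that is already tens of GB, which I believe is well beyond the
cache quota a repository gets on the GitHub-hosted runners (a handful of GB,
if I remember correctly). So we would probably need much smaller per-config
caches, one cache per job (arm-01, sim, ...) rather than per config, or
storage we control ourselves.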
On Wed, Mar 31, 2021, at 09:33, Abdelatif Guettouche wrote:
> > Also, I agree about simplifying the macOS build if it will help while we
> > get to a better situation.
>
> We can have a special test list for macOS that includes just a few
> configs from the simulator and other chips.
I think just dr
> Also, I agree about simplifying the macOS build if it will help while we get
> to a better situation.
We can have a special test list for macOS that includes just a few
configs from the simulator and other chips.
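To make that concrete, such a macOS list could be as small as a handful of
representative entries, e.g. (purely illustrative) sim:nsh and sim:ostest plus
one or two board configs per toolchain we care about on macOS, which should
cut the macOS job time considerably.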
> There has been some talk about supporting non-hosted runners, but there are
> some security concerns.
The situation is a bit better now, since we did not continue to submit as many
PRs. However, the builds are still lagging (there are ones from 20 hrs ago
running). I see many of the new "cancelling duplicates" jobs queued but they
have not yet run.
I have cancelled a few myself. In case I cancel
Part of the issue here is also limits across the organization. This has
been discussed for a couple of months now on the Apache build mailing lists,
and GitHub has been part of those discussions, trying to figure out a smart
path forward.
We were sharing cache but ran into some strange issues with collisions and
disabled it unfortunately.
Most likely a single very powerful machine could actually be quite a bit faster
than GH, since we could parallelize much harder and have all the resources to
ourselves.
The problem is that rolling our own can be quite a pain to maintain IMHO
(unless someone has access to some powerful high-availability spare
Looks like Apache's runners are having issues. Other projects using
GitHub Actions have stuck queues as well.
On Tue, Mar 30, 2021 at 10:04 PM Nathan Hartman wrote:
>
> On Tue, Mar 30, 2021 at 3:30 PM Matias N. wrote:
> >
> > It appears we overwhelmed CI. There are a couple of running jobs (not
We definitely need a better server to support the CI; it doesn't have
enough processing power to run the CI when there are more than 5 PRs.
It doesn't scale well.
Also, I think we could keep only one test for macOS because it is too
slow! Normally the macOS run takes more than 2 hrs to complete.
Maybe we coul
On Tue, Mar 30, 2021 at 3:30 PM Matias N. wrote:
>
> It appears we overwhelmed CI. There are a couple of running jobs (notably one
> is a macOS run which is taking about 2 hrs as of now) but they are for PRs
> from at least 12 hrs ago. There is a multitude of queued runs for many recent
> PRs. Th
It appears we overwhelmed CI. There are a couple of running jobs (notably one
is a macOS run which is taking about 2 hrs as of now) but they are for PRs from
at least 12 hrs ago. There is a multitude of queued runs for many recent PRs.
The problem is that new runs (from force pushes) do not cancel the ones already
queued for the same PR.
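In case it helps whoever ends up scripting the duplicate cancellation, a
minimal sketch of the idea against the Actions REST API (untested; the repo
slug, token handling, and branch filtering are assumptions):

    # Sketch only: cancel queued/in-progress runs that were superseded by a
    # newer run on the same head branch (e.g. after a force push).
    import os
    import requests

    REPO = "apache/incubator-nuttx"   # assumed repository slug
    API = f"https://api.github.com/repos/{REPO}/actions/runs"
    HEADERS = {
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github.v3+json",
    }

    def cancel_superseded(branch):
        resp = requests.get(API, headers=HEADERS,
                            params={"branch": branch, "event": "pull_request"})
        runs = sorted(resp.json().get("workflow_runs", []),
                      key=lambda r: r["created_at"], reverse=True)
        for old in runs[1:]:          # keep only the newest run
            if old["status"] in ("queued", "in_progress"):
                requests.post(f"{API}/{old['id']}/cancel", headers=HEADERS)

(If GitHub ever gives us a built-in way to auto-cancel superseded runs, that
would obviously be preferable to scripting it ourselves.)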
Hi,
Does anyone know why the GitHub PR prechecks don't seem to be running?
In particular, it seems that these four:
* Build / Fetch-Source (pull_request)
* Build Documentation / build-html (pull_request)
* Check / check (pull_request)
* Lint / YAML (pull_request)
are all stuck in "Queued -- Waiting".