Re: [Twisted-Python] GitHub Actions parallelism limit increase

2021-04-06 Thread Kyle Altendorf



On 2021-03-30 17:14, Kyle Altendorf wrote:

On 2021-03-30 15:24, Glyph wrote:

On Mar 30, 2021, at 7:57 AM, Kyle Altendorf  wrote:

Hi All,

Has anyone contacted GitHub to see if they would be willing to 
increase the parallelism limit in Actions?  My understanding is that 
we maintain two CI systems (GitHub Actions and Azure Pipelines) for 
the sake of more parallelism.  While perhaps worthwhile, this doesn't 
seem fun.  Maybe GitHub would be willing to help out.


Not as far as I know. Do you want to give it a shot?


That was the plan.  Submitted.  I'll let you all know.


From GitHub:

Hi Kyle,

Thanks for taking the time to write in.

You would need to upgrade the twisted organization account to Enterprise
Cloud for a higher concurrent jobs limit. However, the discount on the
twisted organization account is only for GitHub Team, so an upgrade
would require payment.

https://docs.github.com/en/actions/reference/usage-limits-billing-and-administration#usage-limits

All the best,
Jimmy


https://docs.github.com/en/github/getting-started-with-github/githubs-products#github-enterprise
https://docs.github.com/en/github/getting-started-with-github/githubs-products#github-team

So it seems we already have 60x parallelism and a discount (GitHub Team, 
normally $4/month per user, for free), and at least a few of us didn't 
realize it?  Admittedly, macOS remains at 5x until you get to the 
Enterprise level.


So, case closed on that, I suppose.  I'm not sure where the rest of the 
discussion about just-GitHub-for-CI stands, but that can be separated 
from this thread.


Cheers,
-kyle

___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
https://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] GitHub Actions parallelism limit increase

2021-04-06 Thread Kyle Altendorf

On 2021-04-01 09:36, Adi Roiban wrote:

For example, I don't understand why you need to create an sdist and 
from sdist to create a wheel and then install that wheel inside tox for 
each tox test environment.


Why not create the wheel once and install the same wheel in all 
environments.


Are you referring to the use of tox-wheel?  Note that the workflow I 
set up in towncrier has a single build job that creates an sdist, builds 
a wheel from it once, and uploads both as artifacts in the workflow; 
every other test job and the publish job then use those exact 
pre-created files.  So, I think we agree here.
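For anyone following along, the build-once pattern I'm describing looks roughly like this.  This is a hedged sketch, not the actual towncrier workflow: the job names, artifact name, and action versions are illustrative.

```yaml
# Sketch of the build-once / install-everywhere pattern (illustrative only).
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install build
      # Build the sdist first, then build the wheel FROM that sdist,
      # so every environment tests the exact files that would be published.
      - run: python -m build --sdist --outdir dist/
      - run: pip wheel --no-deps --wheel-dir dist/ dist/*.tar.gz
      - uses: actions/upload-artifact@v2
        with:
          name: dist
          path: dist/

  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: dist
          path: dist/
      # Install the pre-built wheel instead of rebuilding per environment.
      - run: pip install dist/*.whl
```

The point is that the sdist and wheel are produced exactly once, and every downstream job consumes the same artifacts.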


I'll let Grainger weigh in on their present take on tox-wheel if they 
want.



There is an extra 1 minute required for each job to generate the
coverage XML file. We can download all the coverage artifacts and
generate the XML only once.


I did coverage collection from all jobs for pymodbus.  Looks like I
still had the XML report generating in each job, but that could
presumably be shifted.  Anyways, I'm sure several of us could do this
off the top of our heads, but in case it is of any interest, here's what
I did.  Or maybe you'll just point out something silly I did there that
I should learn from.  :]

https://github.com/riptideio/pymodbus/pull/592/files


That is one big matrix :) ... I don't understand why you need test and 
check and coverage, each with a separate matrix.


I don't think it really is as big as you think.  The test and coverage 
matrices are separated (and the coverage "matrix" is kind of not a 
matrix, what with having one job) because that's the level at which 
GitHub allows you to describe dependencies.  If you want to collect 
coverage during the test runs and then combine all the results before 
uploading to codecov etc. a single time, I think this structure is just 
mandatory.  The coverage job must be separate so it can depend on the 
test matrix.  Having checks be a separate matrix is neither here nor 
there on this topic, I think.  And the All job is compensation for a 
missing GitHub Actions feature.
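To illustrate why the separation is mandatory: `needs` operates on whole jobs, so a job can depend on an entire matrix but individual matrix entries can't depend on each other.  A hypothetical sketch (job names and axes are made up, not from the pymodbus workflow):

```yaml
# Sketch of the dependency structure (illustrative names and axes).
jobs:
  test:
    strategy:
      matrix:
        python-version: ["3.8", "3.9"]  # example axis
    runs-on: ubuntu-latest
    steps:
      - run: echo "run tests with coverage, upload raw coverage data"

  coverage:
    # needs targets the whole "test" job, i.e. it waits for every
    # matrix entry to finish -- this is the only dependency granularity
    # GitHub Actions offers, hence the separate job.
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "download all coverage artifacts, combine, upload once"

  all:
    # The "All" job: a single required status check standing in for the
    # missing "require the whole matrix" branch-protection feature.
    needs: [test, coverage]
    runs-on: ubuntu-latest
    steps:
      - run: echo "everything passed"
```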



I am thinking of something like this:

* A matrix to run jobs with coverage on each environment (os + py 
combination + extra stuff: noipv6, nodeps, etc.)
* Each job from the matrix will run an initial combine to merge 
sub-process coverage.
* Each job will upload one raw Python coverage data file.
* A single job will download those coverage files, combine them once 
again to generate an XML file to be pushed to codecov.io
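The final combine job you describe could look something like this.  Another hedged sketch: the artifact name, action versions, and codecov step are assumptions, not taken from any existing workflow.

```yaml
# Sketch of the single combine-and-report job (illustrative only).
  coverage:
    needs: test
    runs-on: ubuntu-latest
    steps:
      # Each matrix job uploaded its raw .coverage.* data file
      # under a shared artifact name (assumed here).
      - uses: actions/download-artifact@v2
        with:
          name: coverage-data
      - run: pip install coverage
      # Combine the raw data files from all jobs into one .coverage file,
      # then generate the XML report exactly once.
      - run: coverage combine
      - run: coverage xml
      - uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml
```

This keeps the per-job cost to uploading a small raw data file and pays the XML-generation minute only once.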


Sounds good.  Sounds like exactly what I suggested.  :]  Except maybe 
for a misunderstanding about what level of separation is needed for the 
inter-job dependencies.


Hopefully this clarifies a bit.  I think we actually intend the same 
thing.  Unless I've missed something else?


Cheers,
-kyle
