Hello devs, I am in the process of creating the 1.19.3 and 1.20.2 patch releases, and when I arrived at the step of preparing the PyFlink wheel files, I was surprised to find that the suggested way is to use the Azure Pipeline [1], which points to a deprecated doc on how to deploy a pipeline, etc. Anyway, I do not have an Azure account, and registering one requires giving a lot of personal info (name, address, credit card) to Microsoft, so IMO it is not reasonable to expect every committer to set up an Azure account and use it to produce wheel builds.
I checked what exactly we do under the hood here, and I see that for macOS we already use `cibuildwheel`. So my question is: do we have anything against simply setting up a GitHub workflow that uses `cibuildwheel` for both OSes? That workflow can be manually triggered by the release manager on their fork, so it would still be isolated from the upstream repo. The only difference I found is that `cibuildwheel` cannot build for `manylinux1` and uses `manylinux2014` instead, but AFAIK that does not matter. I already set up a GH workflow [2] and also produced wheel files with it [3]. I would like to propose transitioning to this model for building wheel files, because it is a lot simpler and does not depend on anything other than GitHub. WDYT?

Best,
Ferenc

[1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73631092#CreatingaFlinkRelease-BuildandstageJavaandPythonartifacts
[2] https://github.com/ferenc-csaky/flink/commit/cea118f948e1eec435d6827a6eee0bafe6bb71d2
[3] https://github.com/ferenc-csaky/flink/actions/runs/15281370107
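P.S. For illustration, a manually triggered workflow along these lines could look roughly like the sketch below. This is a hypothetical outline, not the exact workflow from [2]: the workflow/job names, action versions, and the `flink-python` package directory are illustrative assumptions.

```yaml
# Hypothetical sketch of a manually triggered wheel-build workflow.
# Names and versions are placeholders, not the exact setup referenced in [2].
name: Build PyFlink wheels

on:
  workflow_dispatch:  # triggered manually by the release manager on their fork

jobs:
  build-wheels:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]  # covers both Linux and macOS wheels
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - name: Build wheels with cibuildwheel
        uses: pypa/cibuildwheel@v2.23
        with:
          package-dir: flink-python  # assumed location of the Python package
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ matrix.os }}
          path: wheelhouse/*.whl  # cibuildwheel's default output directory
```

On Linux, `cibuildwheel` builds inside `manylinux2014` images by default, which is where the `manylinux1` vs. `manylinux2014` difference mentioned above comes from.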