On Fri, 25 Oct 2024 15:16:30 GMT, Aleksey Shipilev <sh...@openjdk.org> wrote:

> > Can we trust the cache that much? I mean, up to now it's only been a 
> > performance hack, now it will become a necessary part of the pipeline.
> 
> Yeah, I guess that's the risk. I can redo this to use the same 
> upload/download-artifact we use to transfer bundles between build and test 
> jobs. Probably next week.

Actually, as I read ["Caching dependencies to speed up workflows"](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows), the risk is not as high as it seems: "GitHub will remove any cache entries that
have not been accessed in over 7 days. There is no limit on the number of 
caches you can store, but the total size of all caches in a repository is 
limited to 10 GB. Once a repository has reached its maximum cache storage, the 
cache eviction policy will create space by deleting the oldest caches in the 
repository."

I don't think we are anywhere close to the 10 GB limit for a single PR. I guess we can see some thrashing when there are multiple PRs in flight, though. Using artifacts would likely be more bullet-proof, as sketched below.
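
A hedged sketch of the artifact-based alternative, with the same placeholder names; artifacts are scoped to the workflow run and are not subject to the cache eviction policy:

```yaml
# Build job: upload the bundle as a run-scoped artifact.
- name: Upload bundle
  uses: actions/upload-artifact@v4
  with:
    name: jdk-bundle
    path: bundles/

# Test job: download it back. Unlike a cache miss, a missing
# artifact fails the step loudly instead of proceeding without it.
- name: Download bundle
  uses: actions/download-artifact@v4
  with:
    name: jdk-bundle
    path: bundles/
```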

-------------

PR Comment: https://git.openjdk.org/jdk/pull/21692#issuecomment-2438177353
