Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
merging.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the fea
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
@StephanEwen yes I will take care of this.
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/4293
@zentol Do you want to merge and validate this as part of your ongoing
build optimization project?
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
Ok, now I agree that we should enable the cache as well.
Github user greghogan commented on the issue:
https://github.com/apache/flink/pull/4293
Not only timeouts but we're likely downloading multiple gigabytes across
the 12 builds per TravisCI job.
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4293
Once we are able to delete the travis cache on our own, we may experiment a
bit more and try without the cleanup task, or run it only for failed jobs.
For now, though, @StephanEwen has a poi
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
Given that enabling the cache doesn't appear to improve build times, I
wouldn't enable it, especially since it requires more effort for cleaning
up corrupted caches. It appears there are only down
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4293
Sounds like a good plan, but in the past it wasn't clear where the caches
got corrupted, i.e. by travis during cache restore, or by mvn during the build.
If the former is still possible, we could only f
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
We only have to check for corrupted artifacts if the build actually failed;
if it succeeded we implicitly know that the artifacts are OK. This gives us the
debugging benefits, without increasing the b
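A failure-only integrity check along these lines could be sketched in shell. The `unzip -t` approach, the function name, and the paths are assumptions for illustration, not taken from the PR:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: validate the jars in the local Maven repository,
# meant to run only after a failed build (e.g. from a Travis `after_failure`
# step), so successful builds pay no extra cost.
# A jar is a zip archive, so `unzip -t` can test its integrity.

check_jars() {
  repo="$1"
  find "$repo" -name '*.jar' 2>/dev/null | while read -r jar; do
    if ! unzip -t "$jar" >/dev/null 2>&1; then
      echo "corrupt: $jar"
      # Remove the enclosing directory so Maven re-downloads the artifact
      # on the next build instead of reusing the broken cache entry.
      rm -rf "$(dirname "$jar")"
    fi
  done
}

# Wired into .travis.yml this might look like:
#   after_failure:
#     - check_jars "$HOME/.m2/repository"
```

Running this only on failure keeps the common (green) path free of the extra scanning cost discussed above.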
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/4293
I kind of like having automatic checks for corrupt caches. The 30s should
not be a big issue.
One thing that this may give us is more stability against failed dependency
downloads, which
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4293
Looks like with the cache it is only downloading flink artifacts from
earlier versions for API comparisons (I excluded
`$HOME/.m2/repository/org/apache/flink/` from the cache).
Also, it seems there
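For reference, this kind of exclusion is typically wired in `.travis.yml` by caching `$HOME/.m2` and deleting the excluded path in a `before_cache` step, which Travis runs just before the cache archive is uploaded. A sketch, not the exact configuration from this PR:

```yaml
cache:
  directories:
    - $HOME/.m2
before_cache:
  # Drop Flink's own artifacts so each build resolves them fresh
  # and stale snapshot modules cannot poison the shared cache.
  - rm -rf $HOME/.m2/repository/org/apache/flink/
```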
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
yeah I disabled the logging of downloaded libraries, feel free to remove it
temporarily.
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4293
to be honest, I think the cache was used in the second build of the PR as
indicated by these lines (of the first build profile):
```
attempting to download cache archive - 3.41s
fetching
PR
```
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
Seems like this ain't working for a Pull Request. I also checked your local
builds (https://travis-ci.org/NicoK/flink/builds/251943625 &
https://travis-ci.org/NicoK/flink/builds/252022358) but it's har
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4293
That's actually hard to know in advance - I had only 3 builds that went
under the limit by chance, so I was able to use the cache henceforth in those
three, which were consistently below the 49min. This PR
Github user zentol commented on the issue:
https://github.com/apache/flink/pull/4293
Forks also benefit from this.
It would be cool if we knew how much this affects the build times given
that we're also adding more work (for the scanning of the cache).
Note that the t