t away.
Thanks,
--Gautham
-Original Message-
From: Ayush Saxena
Sent: Tuesday, May 14, 2024 11:58 PM
To: gautham.bangal...@gmail.com
Cc: common-dev@hadoop.apache.org
Subject: Re: Hadoop Windows Build
Hi Gautham,
I think this Windows build has some issues, first is for hadoop-common maven
sit
I'll try to cache the docker image for the precommit pipeline, it might save us
an hour.
Thanks,
--Gautham
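A minimal sketch of the kind of image caching meant here, as a Jenkinsfile
stage; the image name, registry, and Dockerfile path below are assumptions,
not the actual Hadoop setup:

pipeline {
    agent { label 'Windows' }
    stages {
        stage('Prepare build image') {
            steps {
                // Reuse the prebuilt dev image when it is already in the
                // node's local Docker cache; only fall back to the expensive
                // docker build if the pull fails.
                bat 'docker pull example.registry/hadoop-win-dev:latest || docker build -t example.registry/hadoop-win-dev:latest dev-support\\docker'
            }
        }
    }
}

Once the image is cached on the node, later precommit runs skip the rebuild,
which is where the hour of savings would come from.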
-Original Message-
From: Allen Wittenauer
Sent: Saturday, May 4, 2024 10:20 AM
To: bui...@apache.org
Cc: common-dev@hadoop.apache.org
Subject: Re: Hadoop Windows Build
> > that it doesn't run for all the
> > open PRs. So, it would be great if we could run it by ourselves instead
> > of reaching out to you folks.
> >
> > Thanks,
> > --Gautham
> >
> > -Original Message-
> > From: Ayush Saxena
> > Sent: Monday, April 29, 2024 7:29 PM
> > To: Chris Thistlethwaite
> > Cc: common-dev@hadoop.apache.org
> > Subject: Re: Hadoop Windows Build
> On May 3, 2024, at 9:04 AM, Gavin McDonald wrote:
>
> Build times are in the order of days, not hours, how is the caching helping
> here?
It won’t help for full builds but for PRs where it only does parts of
the tree it can be dramatic. (Remember: this is running Yetus which will
On Fri, May 3, 2024 at 5:56 PM Allen Wittenauer wrote:
>
> > On Apr 26, 2024, at 9:42 AM, Cesar Hernandez wrote:
> >
> > My two cents is to use cleanWs() instead of deleteDir() as
> > documented in: https://plugins.jenkins.io/ws-cleanup/
>
> If this was a generic, run of the mill build, that could be an option.
> Definitely don’t want to do that for Hadoop.
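For reference, a minimal sketch of the cleanWs() usage suggested above, in a
declarative pipeline post block (the agent label and build step here are
placeholders):

pipeline {
    agent { label 'Windows' }
    stages {
        stage('Build') {
            steps {
                bat 'mvn clean install -DskipTests'
            }
        }
    }
    post {
        always {
            // cleanWs() comes from the ws-cleanup plugin linked above and,
            // unlike the built-in deleteDir(), supports options such as
            // pattern-based excludes and deleting nested directories.
            cleanWs(deleteDirs: true)
        }
    }
}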
Thanx Chris, that would be great
-Ayush
On Mon, 29 Apr 2024 at 19:07, Chris Thistlethwaite
wrote:
I'm following along on lists.a.o. I can cancel all the Windows jobs in queue,
we have a groovy script for that.
-Chris T.
#asfinfra
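For anyone curious, a rough sketch of what such a script could look like in
the Jenkins script console; matching on a 'Windows' label is an assumption,
and infra's actual script may differ:

import jenkins.model.Jenkins

def queue = Jenkins.instance.queue
queue.items.each { item ->
    // Cancel every queued build that is waiting for a Windows executor.
    if (item.assignedLabel?.toString()?.contains('Windows')) {
        println "Cancelling: ${item.task.fullDisplayName}"
        queue.cancel(item)
    }
}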
On 2024/04/28 17:35:21 Gautham Banasandra wrote:
Yeah, I just noticed that. May I know how I can abort all the jobs at once?
I only saw that I can cancel the jobs one-by-one.
Thanks,
--Gautham
On 2024/04/28 15:19:13 Ayush Saxena wrote:
Thanx Gautham for chasing this.
I think there are still some 119 in the build queue, if you see on the left
here [1](Search for Build Queue). They are all stuck on "Waiting for next
available executor on Windows"
If you aborted all previously & they showed up now again, then something is
still me
Hi folks,
I apologize for the inconvenience caused. I've now applied the mitigation
described in [3].
Unfortunately, there are only 12 Windows nodes in the whole swarm of Jenkins
build nodes.
Thus, this caused a starvation of the Windows nodes for other projects.
I had reached out to the infra
Found this on dev@hadoop -> Moving to common-dev (the ML we use)
I think there was some initiative to enable Windows Pre-Commit for every PR
and that seems to have gone wild, either the number of PRs raised are way
more than the capacity the nodes can handle or something got misconfigured
in the j