On 5 January 2018 at 15:04, Steve Ebersole wrote:
> TBH, I'm ok with just dropping the TODO collection as a part of the Jenkins
> jobs.
Even better, that will bring down the times from 15m to 12m :)
Doing that now.
On 5 January 2018 at 14:07, Steve Ebersole wrote:
> I have no idea what GitBlamer is. Never heard of it
I figured it out; it's implicitly (by default) invoked by the job's task of
finding "TODO"s and similar markers in the project, to add "blame"
information to the final report.
So for each and
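For context, here is a rough sketch of what that "collect TODO markers, then blame each one"
step boils down to. This is not the Jenkins plugin's actual code, and the file glob and
marker list are assumptions; it mainly shows why the step costs real time (one git blame
call per marker found), which is presumably where the 15m -> 12m difference mentioned
above comes from.

import re
import subprocess
from pathlib import Path

# Tags the open-tasks report would collect; the exact list is an assumption.
MARKERS = re.compile(r"\b(TODO|FIXME|XXX)\b")

def collect_tasks(root="."):
    # Scan Java sources for open-task markers.
    for path in Path(root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if MARKERS.search(line):
                yield path, lineno, line.strip()

def blame_author(path, lineno):
    # Ask git who last touched that single line; this per-hit call is the slow part.
    out = subprocess.run(
        ["git", "blame", "-L", f"{lineno},{lineno}", "--porcelain", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("author "):
            return line[len("author "):]
    return "unknown"

if __name__ == "__main__":
    for path, lineno, text in collect_tasks():
        print(f"{path}:{lineno} [{blame_author(path, lineno)}] {text}")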
On 5 January 2018 at 12:28, Steve Ebersole wrote:
> FWIW... I do not know the rules about how these slaves spin up, but in the
> 10+ minutes since I kicked off that job it is still waiting in queue.
When there are no slaves it might take some extra minutes; on top of
that I was manually killing s
FWIW... I do not know the rules about how these slaves spin up, but in the
10+ minutes since I kicked off that job it is still waiting in queue. And
there is actually a job (Debezium Deploy Snapshots) in front of it that has
been waiting over 3.5 hours
I went to manually kick off the main ORM job, but saw that you already had
- however it had failed with GC/memory problems[1]. I kicked off a new
run...
[1] http://ci.hibernate.org/job/hibernate-orm-master-h2-main/951/console
On 4 January 2018 at 23:52, Steve Ebersole wrote:
> Awesome Sanne! Great work.
>
> Anything you need us to do to our jobs?
No changes *should* be needed. It would help me if you could all
manually trigger the jobs you consider important and highlight
suspicious problems so that we get awareness
Great, thanks for all the work!
Now that we have on-demand slave spawning, maybe we could get rid of our
"hack" consisting in assigning 5 slots to each slave and a weight of 3 to
each job? I would expect the website and release jobs to rarely wait in the
queue, and if they do we can always set up
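As an aside, the arithmetic behind that slots/weights hack, with the numbers from the
message above (the scheduling itself is done by Jenkins; this sketch only illustrates why
a heavy build never shares a slave while small jobs can still be squeezed in, assuming
the website/release jobs keep the default weight of 1):

# 5 executor slots per slave, heavy builds weigh 3, small jobs assumed to weigh 1.
SLOTS_PER_SLAVE = 5
HEAVY = 3
LIGHT = 1  # assumed default weight for website/release jobs

def fits(used, weight, capacity=SLOTS_PER_SLAVE):
    # A job can be scheduled on a slave only if its weight still fits in the free slots.
    return used + weight <= capacity

used = HEAVY                       # one heavy build running: 3 of 5 slots taken
print(fits(used, HEAVY))           # False -> a second heavy build waits or goes elsewhere
print(fits(used, LIGHT))           # True  -> a website/release job can still run alongside
print(fits(used + LIGHT, LIGHT))   # True  -> even two of them (3 + 1 + 1 = 5)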
Hi all,
we're having shiny new boxes running CI: more secure, way faster and
less "out of disk space" prolems I hope.
# Slaves
Slaves have been rebuilt from scratch:
- from Fedora 25 to Fedora 27
- NVMe disks for all storage, including databases, JDKs, dependency
stores, indexes and journals