Re: [hibernate-dev] ORM CI jobs - erroneous github triggers

2018-01-04 Thread Sanne Grinovero
Also these jobs were configured to build automatically every 5 hours:
 - hibernate-orm-4.2-h2
 - hibernate-orm-4.3-h2

I removed the schedule, they will be built when (and only when)
anything is committed to their respective branches.


On 3 January 2018 at 19:15, Steve Ebersole  wrote:
> Nice!  Glad you found something.  Thanks for making the changes.
>
>
>
> On Wed, Jan 3, 2018 at 12:16 PM Sanne Grinovero  wrote:
>>
>> I've made the change on:
>>- hibernate-orm-5.0-h2
>>- hibernate-orm-5.1-h2
>>- hibernate-orm-master-h2-main
>>
>> Let's see if it helps, then we can figure out a way to check all jobs
>> are using this workaround.
>>
>>
>> On 3 January 2018 at 18:12, Sanne Grinovero  wrote:
>> > Hi Steve,
>> >
>> > this rings a bell, we had this bug in the past and apparently it's
>> > regressed again :(
>> >
>> > The latest Jenkins bug seems to be:
>> >  - https://issues.jenkins-ci.org/browse/JENKINS-42161
>> >
>> > I'll try the suggested workaround, i.e. enabling SCM polling without any
>> > frequency.
>> >
>> > Thanks,
>> > Sanne
>> >
>> >
>> > On 3 January 2018 at 17:35, Steve Ebersole  wrote:
>> >> So I just pushed to the ORM master branch, which has caused the
>> >> following
>> >> jobs to be queued up:
>> >>
>> >>
>> >>- hibernate-orm-5.0-h2
>> >>- hibernate-orm-5.1-h2
>> >>- hibernate-orm-master-h2-main
>> >>
>> >> Only one of those jobs is configured to "watch" master.  So why do
>> >> these
>> >> other jobs keep getting triggered?
>> >>
>> >> I see the same exact thing on my personal fork as well.  At the same
>> >> time I
>> >> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be
>> >> queued.
>> >>
>> >>
>> >>
>> >> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole 
>> >> wrote:
>> >>
>> >>> The legacy ORM jobs (5.1-based ones at least) are getting triggered
>> >>> when
>> >>> they should not be.  Generally they all show that the run is triggered
>> >>> by a
>> >>> "SCM change", but it does not show any changes.  The underlying
>> >>> problem
>> >>> (although I am at a loss as to why) is that there has indeed been SCM
>> >>> changes pushed to Github, but against completely different branches.
>> >>> As
>> >>> far as I can tell these jobs' Github settings are correct.  Any ideas
>> >>> what
>> >>> is going on?
>> >>>
>> >>> This would not be such a big deal if the CI environment did not
>> >>> throttle
>> >>> all waiting jobs down to one active job.  So the jobs I am actually
>> >>> interested in are forced to wait (sometimes over an hour) for these
>> >>> jobs
>> >>> that should not even be running.
>> >>>
>> >>>
>> >>>
>> >> ___
>> >> hibernate-dev mailing list
>> >> hibernate-dev@lists.jboss.org
>> >> https://lists.jboss.org/mailman/listinfo/hibernate-dev
___
hibernate-dev mailing list
hibernate-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev


Re: [hibernate-dev] ORM CI jobs - erroneous github triggers

2018-01-04 Thread Steve Ebersole
Lol.  No idea why these are even built period.  Thanks Sanne

On Thu, Jan 4, 2018 at 6:13 AM Sanne Grinovero  wrote:

> Also these jobs were configured to build automatically every 5 hours:
>  - hibernate-orm-4.2-h2
>  - hibernate-orm-4.3-h2
>
> I removed the schedule, they will be built when (and only when)
> anything is committed to their respective branches.
>
>
> On 3 January 2018 at 19:15, Steve Ebersole  wrote:
> > Nice!  Glad you found something.  Thanks for making the changes.
> >
> >
> >
> > On Wed, Jan 3, 2018 at 12:16 PM Sanne Grinovero 
> wrote:
> >>
> >> I've made the change on:
> >>- hibernate-orm-5.0-h2
> >>- hibernate-orm-5.1-h2
> >>- hibernate-orm-master-h2-main
> >>
> >> Let's see if it helps, then we can figure out a way to check all jobs
> >> are using this workaround.
> >>
> >>
> >> On 3 January 2018 at 18:12, Sanne Grinovero 
> wrote:
> >> > Hi Steve,
> >> >
> >> > this rings a bell, we had this bug in the past and apparently it's
> >> > regressed again :(
> >> >
> >> > The latest Jenkins bug seems to be:
> >> >  - https://issues.jenkins-ci.org/browse/JENKINS-42161
> >> >
> >> > I'll try the suggested workaround, i.e. enabling SCM polling without any
> >> > frequency.
> >> >
> >> > Thanks,
> >> > Sanne
> >> >
> >> >
> >> > On 3 January 2018 at 17:35, Steve Ebersole 
> wrote:
> >> >> So I just pushed to the ORM master branch, which has caused the
> >> >> following
> >> >> jobs to be queued up:
> >> >>
> >> >>
> >> >>- hibernate-orm-5.0-h2
> >> >>- hibernate-orm-5.1-h2
> >> >>- hibernate-orm-master-h2-main
> >> >>
> >> >> Only one of those jobs is configured to "watch" master.  So why do
> >> >> these
> >> >> other jobs keep getting triggered?
> >> >>
> >> >> I see the same exact thing on my personal fork as well.  At the same
> >> >> time I
> >> >> pushed to my fork's 5.3 branch, which triggered the 6.0 job to be
> >> >> queued.
> >> >>
> >> >>
> >> >>
> >> >> On Tue, Jan 2, 2018 at 1:54 PM Steve Ebersole 
> >> >> wrote:
> >> >>
> >> >>> The legacy ORM jobs (5.1-based ones at least) are getting triggered
> >> >>> when
> >> >>> they should not be.  Generally they all show that the run is
> triggered
> >> >>> by a
> >> >>> "SCM change", but it does not show any changes.  The underlying
> >> >>> problem
> >> >>> (although I am at a loss as to why) is that there has indeed been
> SCM
> >> >>> changes pushed to Github, but against completely different branches.
> >> >>> As
> >> >>> far as I can tell these jobs' Github settings are correct.  Any ideas
> >> >>> what
> >> >>> is going on?
> >> >>>
> >> >>> This would not be such a big deal if the CI environment did not
> >> >>> throttle
> >> >>> all waiting jobs down to one active job.  So the jobs I am actually
> >> >>> interested in are forced to wait (sometimes over an hour) for these
> >> >>> jobs
> >> >>> that should not even be running.
> >> >>>
> >> >>>
> >> >>>


Re: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope

2018-01-04 Thread Scott Marlow
I can arrange to keep access to the specific
ExtendedBeanManager/LifecycleListener,
that is not difficult.

What changes do we need from the CDI implementation?
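As a thought experiment, here is a plain-Java sketch of the per-SessionFactory listener wiring being discussed. The interface shape loosely mirrors org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager, but the beforeBeanManagerDestroyed callback and all class names below are assumptions for illustration, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class ExtendedBeanManagerSketch {
    // Mirrors the general shape of Hibernate's ExtendedBeanManager contract;
    // the "beforeBeanManagerDestroyed" pre-shutdown hook is the piece under
    // discussion and is an assumption here, not confirmed API.
    interface LifecycleListener {
        void beanManagerInitialized(Object beanManager);
        void beforeBeanManagerDestroyed(Object beanManager);
    }

    static class ExtendedBeanManagerImpl {
        private final List<LifecycleListener> listeners = new ArrayList<>();
        void registerLifecycleListener(LifecycleListener l) { listeners.add(l); }
        // Called by the container (e.g. WildFly) once CDI is actually usable.
        void notifyInitialized(Object bm) {
            listeners.forEach(l -> l.beanManagerInitialized(bm));
        }
        // Called by the container just before the BeanManager is torn down.
        void notifyBeforeDestroyed(Object bm) {
            listeners.forEach(l -> l.beforeBeanManagerDestroyed(bm));
        }
    }

    // One listener instance per SessionFactory keeps the callback contextual:
    // the listener holds a direct reference back to "its" factory.
    static class SessionFactoryStub implements LifecycleListener {
        final List<String> events = new ArrayList<>();
        @Override public void beanManagerInitialized(Object bm) {
            events.add("acquire CDI beans");
        }
        @Override public void beforeBeanManagerDestroyed(Object bm) {
            events.add("release CDI beans"); // while scopes are still active
        }
    }

    public static void main(String[] args) {
        ExtendedBeanManagerImpl ebm = new ExtendedBeanManagerImpl();
        SessionFactoryStub factory = new SessionFactoryStub();
        ebm.registerLifecycleListener(factory);
        Object beanManager = new Object(); // stand-in for a CDI BeanManager
        ebm.notifyInitialized(beanManager);
        ebm.notifyBeforeDestroyed(beanManager);
        System.out.println(factory.events);
    }
}
```

Because each SessionFactory registers its own listener, the shutdown notification arrives already scoped to the right factory, which is the property Steve describes below.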

On Jan 3, 2018 4:36 PM, "Steve Ebersole"  wrote:

If you have access to the specific ExtendedBeanManager/LifecycleListener,
that should already be enough.  Those things are already properly scoped to
the SessionFactory, unless you are passing the same instance to multiple
SessionFactory instances.

On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow  wrote:

> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole 
> wrote:
>
>> Scott, how would we register a listener for this event?
>>
>
> If we want a standard solution, we could ask for an earlier CDI
> pre-destroy listener.
>
> The problem we have had with most CDI "listeners" so far is that they are
>> non-contextual, meaning there has been no way to link that back to a
>> specific SessionFactory.  If I can register this listener with a reference
>> back to the SessionFactory, this should actually be fine.
>>
>
> I could pass the EMF to the org.hibernate.jpa.event.spi.jpa.
> ExtendedBeanManager.LifecycleListener, if that helps.
>
>
>>
>> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow  wrote:
>>
>>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero 
>>> wrote:
>>>
>>> > Any dependency injection framework will have some capability to define
>>> > the graph of dependencies across components, and such graph could be
>>> > very complex, with details only known to the framework.
>>> >
>>> > I don't think we can solve the integration by having "before all
>>> > others" / "after all others" phases as that's too coarse grained to
>>> > define a full graph; we need to find a way to have the DI framework
>>> > take into consideration our additional components both in terms of DI
>>> > consumers and providers - then let the framework wire up things in the
>>> > order it prefers. This is also to allow the DI engine to print
>>> > appropriate warnings for un-resolvable situations with its native
>>> > error handling, which would result in more familiar error messages.
>>> >
>>> > If that's not doable *or a priority* then all we can do is try to make
>>> > it clear enough that there will be limitations and hopefully describe
>>> > these clearly. Some of such limitations might be puzzling as you
>>> > describe.
>>> >
>>> >
>>> >
>>> > On 20 December 2017 at 12:50, Yoann Rodiere 
>>> wrote:
>>> > > Hello all,
>>> > >
>>> > > TL;DR: Application-scoped beans cannot be used as part of the
>>> @PreDestroy
>>> > > method of ORM-instantiated CDI beans, and it's a bit odd because
>>> they can
>>> > > be used as part of the @PostConstruct method.
>>> > >
>>> > > I've been testing the CDI integration in Hibernate ORM for the past
>>> few
>>> > > days, trying to integrate it into Search. I think I've discovered
>>> > something
>>> > > odd: when CDI-managed beans are destroyed, they cannot access other
>>> > > Application-scoped CDI beans anymore. Not sure whether this is a
>>> problem
>>> > or
>>> > > not, so maybe we should discuss it a bit before going forward with
>>> the
>>> > > current behavior.
>>> > >
>>> > > Short reminder: scopes define when CDI beans are created and
>>> destroyed.
>>> > > @ApplicationScoped is pretty self-explanatory: created when the
>>> > application
>>> > > starts and destroyed when it stops. Some other scopes are a bit more
>>> > > convoluted: @Singleton basically means created *before* the
>>> application
>>> > > starts and destroyed *after* the application stops (and also means
>>> "this
>>> > > bean shall not be proxied"), @Dependent means created when an
>>> instance is
>>> > > requested and destroyed when the instance is released, etc.
>>> > >
>>> > > The thing is, Hibernate ORM is typically started very early and shut
>>> down
>>> > > very late in the CDI lifecycle - at least within WildFly. So when
>>> > Hibernate
>>> > > starts, CDI Application-scoped beans haven't been instantiated yet,
>>> and
>>> > it
>>> > > turns out that when Hibernate ORM shuts down, CDI has already
>>> destroyed
>>> > > Application-scoped beans.
>>> > >
>>> > > Regarding startup, Steve and Scott solved the problem by delaying
>>> bean
>>> > > instantiation to some point in the future when the Application scope
>>> is
>>> > > active (and thus Application-scoped beans are available). This makes
>>> it
>>> > > possible to use Application-scoped beans within ORM-instantiated
>>> beans as
>>> > > soon as the latter are constructed (i.e. within their @PostConstruct
>>> > > methods).
>>> > > However, when Hibernate ORM shuts down, the Application scope has
>>> already
>>> > > been terminated. So when ORM destroys the beans it instantiated,
>>> those
>>> > > ORM-instantiated beans cannot call a method on referenced
>>> > > Application-scoped beans (CDI proxies will throw an exception).
>>> > >
>>> > > All in all, the only type of beans we can currently use in a
>>> @PreDestroy
>>> > > method of an ORM-instantiated bean is @Dependent 
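The lifecycle ordering Yoann describes can be reduced to a small plain-Java simulation. There is no real CDI container here; the proxy and bean classes are illustrative stand-ins (a real CDI client proxy would throw ContextNotActiveException), not Hibernate or CDI API:

```java
public class ScopeOrderingDemo {
    // Stand-in for an @ApplicationScoped bean behind a CDI client proxy.
    static class AppScopedProxy {
        private boolean scopeActive = true;
        void call() {
            if (!scopeActive) {
                // Real CDI proxies throw ContextNotActiveException here.
                throw new IllegalStateException("Application scope is not active");
            }
        }
        void terminateScope() { scopeActive = false; }
    }

    // Stand-in for an ORM-instantiated bean referencing the proxy.
    static class OrmManagedBean {
        private final AppScopedProxy dependency;
        OrmManagedBean(AppScopedProxy dependency) {
            this.dependency = dependency;
            dependency.call(); // @PostConstruct time: scope is active, works
        }
        void preDestroy() {
            dependency.call(); // @PreDestroy time: scope already gone, fails
        }
    }

    public static void main(String[] args) {
        AppScopedProxy appBean = new AppScopedProxy();
        OrmManagedBean ormBean = new OrmManagedBean(appBean); // startup: fine
        appBean.terminateScope();   // CDI tears down the application scope...
        try {
            ormBean.preDestroy();   // ...then ORM destroys its beans: too late
        } catch (IllegalStateException e) {
            System.out.println("preDestroy failed: " + e.getMessage());
        }
    }
}
```

Running this prints "preDestroy failed: Application scope is not active", mirroring the asymmetry: the delayed-instantiation trick fixes @PostConstruct, but nothing equivalent protects @PreDestroy.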

Re: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope

2018-01-04 Thread Steve Ebersole
Well there seems to be some disagreement about that.  I personally think we
do not need anything other than a pre-shutdown hook so that we can release
our CDI references.  Sanne seemed to think we needed something more
"integrated".  I think we should start with the simple and add deeper
integration (which requires actual CDI changes) only if we see that is
necessary.  Sanne?

On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow  wrote:

> I can arrange to keep access to the specific
> ExtendedBeanManager/LifecycleListener, that is not difficult.
>
> What changes do we need from the CDI implementation?
>
>
> On Jan 3, 2018 4:36 PM, "Steve Ebersole"  wrote:
>
> If you have access to the specific ExtendedBeanManager/LifecycleListener,
> that should already be enough.  Those things are already properly scoped to
> the SessionFactory, unless you are passing the same instance to multiple
> SessionFactory instances.
>
> On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow  wrote:
>
>> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole 
>> wrote:
>>
>>> Scott, how would we register a listener for this event?
>>>
>>
>> If we want a standard solution, we could ask for an earlier CDI
>> pre-destroy listener.
>>
>> The problem we have had with most CDI "listeners" so far is that they are
>>> non-contextual, meaning there has been no way to link that back to a
> >>> specific SessionFactory.  If I can register this listener with a reference
> >>> back to the SessionFactory, this should actually be fine.
>>>
>>
>> I could pass the EMF to the 
>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener,
>> if that helps.
>>
>>
>>>
>>> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow  wrote:
>>>
 On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero 
 wrote:

 > Any dependency injection framework will have some capability to define
 > the graph of dependencies across components, and such graph could be
 > very complex, with details only known to the framework.
 >
 > I don't think we can solve the integration by having "before all
 > others" / "after all others" phases as that's too coarse grained to
 > define a full graph; we need to find a way to have the DI framework
 > take into consideration our additional components both in terms of DI
 > consumers and providers - then let the framework wire up things in the
 > order it prefers. This is also to allow the DI engine to print
 > appropriate warnings for un-resolvable situations with its native
 > error handling, which would result in more familiar error messages.
 >
 > If that's not doable *or a priority* then all we can do is try to make
 > it clear enough that there will be limitations and hopefully describe
 > these clearly. Some of such limitations might be puzzling as you
 > describe.
 >
 >
 >
 > On 20 December 2017 at 12:50, Yoann Rodiere 
 wrote:
 > > Hello all,
 > >
 > > TL;DR: Application-scoped beans cannot be used as part of the
 @PreDestroy
 > > method of ORM-instantiated CDI beans, and it's a bit odd because
 they can
 > > be used as part of the @PostConstruct method.
 > >
 > > I've been testing the CDI integration in Hibernate ORM for the past
 few
 > > days, trying to integrate it into Search. I think I've discovered
 > something
 > > odd: when CDI-managed beans are destroyed, they cannot access other
 > > Application-scoped CDI beans anymore. Not sure whether this is a
 problem
 > or
 > > not, so maybe we should discuss it a bit before going forward with
 the
 > > current behavior.
 > >
 > > Short reminder: scopes define when CDI beans are created and
 destroyed.
 > > @ApplicationScoped is pretty self-explanatory: created when the
 > application
 > > starts and destroyed when it stops. Some other scopes are a bit more
 > > convoluted: @Singleton basically means created *before* the
 application
 > > starts and destroyed *after* the application stops (and also means
 "this
 > > bean shall not be proxied"), @Dependent means created when an
 instance is
 > > requested and destroyed when the instance is released, etc.
 > >
 > > The thing is, Hibernate ORM is typically started very early and
 shut down
 > > very late in the CDI lifecycle - at least within WildFly. So when
 > Hibernate
 > > starts, CDI Application-scoped beans haven't been instantiated yet,
 and
 > it
 > > turns out that when Hibernate ORM shuts down, CDI has already
 destroyed
 > > Application-scoped beans.
 > >
 > > Regarding startup, Steve and Scott solved the problem by delaying
 bean
 > > instantiation to some point in the future when the Application
 scope is
 > > active (and thus Application-scoped beans are available). This
 makes it
 > > possible to use Application-scoped beans within ORM-instantiated
 b

Re: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope

2018-01-04 Thread Sanne Grinovero
On 4 January 2018 at 16:39, Sanne Grinovero  wrote:
> On 4 January 2018 at 14:19, Steve Ebersole  wrote:
>> Well there seems to be some disagreement about that.  I personally think we
>> do not need anything other than a pre-shutdown hook so that we can release
>> our CDI references.  Sanne seemed to think we needed something more
>> "integrated".  I think we should start with the simple and add deeper
>> integration (which requires actual CDI changes) only if we see that is
>> necessary.  Sanne?
>
> I guess it's totally possible that the current solution you all have
> been working on covers most practical use cases and most immediate
> user's needs, so that's great, but I wonder if we can clearly document
> the limitations which I'm assuming we have (I can't).
>
> I don't believe we can handle all complex dependency graphs that a CDI
> user might expect with before & after phases, however I had no time to
> prove this with a meaningful example.
>
> If someone with more CDI experience could experiment with complex
> dependency graphs then we should be able to better document the
> limitations - which I strongly suspect exist - and make a good case to
> need the JPA/CDI integration deeper at spec level, however "make it
> work as users expect" might not be worth a spec update, one
> could say it's the implementation's job so essentially a problem in
> how we deal with integration details.
>
> It's possible that there's no practical need for such a deeper
> integration but it makes me a bit nervous to not be able to specify
> the limitations to users.
>
> More concrete example: Steve mentions having a "PRE-shutdown hook" to
> release our references to managed beans; what if some other beans
> depend on these? What if these other beans have wider scopes, like app
> scope? Clearly the CDI engine is in the position to figure this out
> and might want to initiate a cascade shutdown of such other beans
> (which we don't manage directly) so this is essentially initiating a
> whole-shutdown (not just a PRE-shutdown).
>
> Vice-versa, same situation can arise during initialization; I'm afraid
> this would get hairy quickly, while supposedly any CDI implementation
> should have the means to handle ordering details appropriately, so I'd
> hope we delegate it all to it to happen during its normal phases
> rather than layering outer/inner phases around.
>
> I'm not sure who to ask for a better opinion; I'll add Stuart in CC as
> he's the only smart person I know with deep expertise in both
> Hibernate and CDI, with some luck he'll say I'm wrong and we're good
> :)

Lol, re-reading "Hibernate + CDI expertise" it's hilarious I forgot
the most obvious expert name :)
Not sure if Gavin is interested but I'll add him too.

Thanks,
Sanne

>
> Thanks,
> Sanne
>
>
>>
>> On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow  wrote:
>>>
>>> I can arrange to keep access to the specific
>>> ExtendedBeanManager/LifecycleListener, that is not difficult.
>>>
>>> What changes do we need from the CDI implementation?
>>>
>>>
>>> On Jan 3, 2018 4:36 PM, "Steve Ebersole"  wrote:
>>>
>>> If you have access to the specific ExtendedBeanManager/LifecycleListener,
>>> that should already be enough.  Those things are already properly scoped to
>>> the SessionFactory, unless you are passing the same instance to multiple
>>> SessionFactory instances.
>>>
>>> On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow  wrote:

 On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole 
 wrote:
>
> Scott, how would we register a listener for this event?


 If we want a standard solution, we could ask for an earlier CDI
 pre-destroy listener.

> The problem we have had with most CDI "listeners" so far is that they
> are non-contextual, meaning there has been no way to link that back to a
> specific SessionFactory.  If I can register this listener with a
> reference
> back to the SessionFactory, this should actually be fine.


 I could pass the EMF to the
 org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, if
 that helps.

>
>
> On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow  wrote:
>>
>> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero 
>> wrote:
>>
>> > Any dependency injection framework will have some capability to
>> > define
>> > the graph of dependencies across components, and such graph could be
>> > very complex, with details only known to the framework.
>> >
>> > I don't think we can solve the integration by having "before all
>> > others" / "after all others" phases as that's too coarse grained to
>> > define a full graph; we need to find a way to have the DI framework
>> > take into consideration our additional components both in terms of DI
>> > consumers and providers - then let the framework wire up things in
>> > the
>> > order it prefers. This is also to allow the DI engine to print
>> 

Re: [hibernate-dev] CDI integration in Hibernate ORM and the Application scope

2018-01-04 Thread Sanne Grinovero
On 4 January 2018 at 14:19, Steve Ebersole  wrote:
> Well there seems to be some disagreement about that.  I personally think we
> do not need anything other than a pre-shutdown hook so that we can release
> our CDI references.  Sanne seemed to think we needed something more
> "integrated".  I think we should start with the simple and add deeper
> integration (which requires actual CDI changes) only if we see that is
> necessary.  Sanne?

I guess it's totally possible that the current solution you all have
been working on covers most practical use cases and most immediate
user's needs, so that's great, but I wonder if we can clearly document
the limitations which I'm assuming we have (I can't).

I don't believe we can handle all complex dependency graphs that a CDI
user might expect with before & after phases, however I had no time to
prove this with a meaningful example.

If someone with more CDI experience could experiment with complex
dependency graphs then we should be able to better document the
limitations - which I strongly suspect exist - and make a good case to
need the JPA/CDI integration deeper at spec level, however "make it
work as users expect" might not be worth a spec update, one
could say it's the implementation's job so essentially a problem in
how we deal with integration details.

It's possible that there's no practical need for such a deeper
integration but it makes me a bit nervous to not be able to specify
the limitations to users.

More concrete example: Steve mentions having a "PRE-shutdown hook" to
release our references to managed beans; what if some other beans
depend on these? What if these other beans have wider scopes, like app
scope? Clearly the CDI engine is in the position to figure this out
and might want to initiate a cascade shutdown of such other beans
(which we don't manage directly) so this is essentially initiating a
whole-shutdown (not just a PRE-shutdown).

Vice-versa, same situation can arise during initialization; I'm afraid
this would get hairy quickly, while supposedly any CDI implementation
should have the means to handle ordering details appropriately, so I'd
hope we delegate it all to it to happen during its normal phases
rather than layering outer/inner phases around.

I'm not sure who to ask for a better opinion; I'll add Stuart in CC as
he's the only smart person I know with deep expertise in both
Hibernate and CDI, with some luck he'll say I'm wrong and we're good
:)

Thanks,
Sanne


>
> On Thu, Jan 4, 2018 at 7:58 AM Scott Marlow  wrote:
>>
>> I can arrange to keep access to the specific
>> ExtendedBeanManager/LifecycleListener, that is not difficult.
>>
>> What changes do we need from the CDI implementation?
>>
>>
>> On Jan 3, 2018 4:36 PM, "Steve Ebersole"  wrote:
>>
>> If you have access to the specific ExtendedBeanManager/LifecycleListener,
>> that should already be enough.  Those things are already properly scoped to
>> the SessionFactory, unless you are passing the same instance to multiple
>> SessionFactory instances.
>>
>> On Wed, Jan 3, 2018 at 10:09 AM Scott Marlow  wrote:
>>>
>>> On Tue, Jan 2, 2018 at 2:42 PM, Steve Ebersole 
>>> wrote:

 Scott, how would we register a listener for this event?
>>>
>>>
>>> If we want a standard solution, we could ask for an earlier CDI
>>> pre-destroy listener.
>>>
 The problem we have had with most CDI "listeners" so far is that they
 are non-contextual, meaning there has been no way to link that back to a
 specific SessionFactory.  If I can register this listener with a reference
 back to the SessionFactory, this should actually be fine.
>>>
>>>
>>> I could pass the EMF to the
>>> org.hibernate.jpa.event.spi.jpa.ExtendedBeanManager.LifecycleListener, if
>>> that helps.
>>>


 On Tue, Jan 2, 2018 at 1:39 PM Scott Marlow  wrote:
>
> On Wed, Dec 20, 2017 at 9:48 AM, Sanne Grinovero 
> wrote:
>
> > Any dependency injection framework will have some capability to
> > define
> > the graph of dependencies across components, and such graph could be
> > very complex, with details only known to the framework.
> >
> > I don't think we can solve the integration by having "before all
> > others" / "after all others" phases as that's too coarse grained to
> > define a full graph; we need to find a way to have the DI framework
> > take into consideration our additional components both in terms of DI
> > consumers and providers - then let the framework wire up things in
> > the
> > order it prefers. This is also to allow the DI engine to print
> > appropriate warnings for un-resolvable situations with its native
> > error handling, which would result in more familiar error messages.
> >
> > If that's not doable *or a priority* then all we can do is try to
> > make
> > it clear enough that there will be limitations and hopefully describe
> > these clearly. Some of such limitations might be 

[hibernate-dev] New CI slaves now available!

2018-01-04 Thread Sanne Grinovero
Hi all,

we're having shiny new boxes running CI: more secure, way faster and
less "out of disk space" problems I hope.

# Slaves

Slaves have been rebuilt from scratch:
 - from Fedora 25 to Fedora 27
 - NVMe disks for all storage, including databases, JDKs, dependency
stores, indexes and journals
 - Now using C5 instances to benefit from Amazon's new "Nitro" engines [1]
 - hardware offloading of network operations by enabling ENA [2]
 - NVMe drives also using provisioned IO

This took a bit of unexpected low-level work: Fedora images don't
support ENA yet, so I had to create a custom Fedora re-distribution AMI
first; it wasn't possible to simply compile the kernel modules for the
standard Fedora images. These features are expected to come in future
Fedora Cloud images but I didn't want to wait so made our own :) [3]

# Cloud scaling

Idle slaves will self-terminate after some timeout (currently 30m).
When there are many jobs queueing up, more slaves (up to 5) will
automatically start.

If you're the first to trigger a build you'll have to be patient, as
it's possible after some quiet time (after the night?) all slaves are
gone; the system will boot up new ones automatically ASAP but this
initial boot takes an extra couple of minutes.

# Master node

Well, security patching mostly, but also finally figured out how to
work around the bugs which were preventing us from upgrading Jenkins.

So now Jenkins is upgraded to latest, including *all plugins*. It
seems to work but let's keep an eye on it, those plugins are not all
maintained at the quality one would expect.

In particular attempting to change EC2 configuration properties will
now trigger a super annoying NPE [4]; either don't make further
changes or resort to XML editing of the configuration.

# Next

I'm not entirely done; eventually I'd like to convert our master node
to ENA/C5/NVMe as well - especially to be able to move all master and
slaves into the same physical cluster - but I'll stop now and get back
to Java so you all get a chance to identify problems caused by the new
slaves before I cause more trouble.

Thanks,
Sanne

1 - 
https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/
2 - 
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
3 - https://pagure.io/atomic-wg/issue/271
4 - https://issues.jenkins-ci.org/browse/JENKINS-46856


Re: [hibernate-dev] New CI slaves now available!

2018-01-04 Thread Steve Ebersole
Awesome Sanne!  Great work.

Anything you need us to do to our jobs?

On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero  wrote:

> Hi all,
>
> we're having shiny new boxes running CI: more secure, way faster and
> less "out of disk space" problems I hope.
>
> # Slaves
>
> Slaves have been rebuilt from scratch:
>  - from Fedora 25 to Fedora 27
>  - NVMe disks for all storage, including databases, JDKs, dependency
> stores, indexes and journals
>  - Now using C5 instances to benefit from Amazon's new "Nitro" engines [1]
>  - hardware offloading of network operations by enabling ENA [2]
>  - NVMe drives also using provisioned IO
>
> This took a bit of unexpected low-level work: Fedora images don't
> support ENA yet, so I had to create a custom Fedora re-distribution AMI
> first; it wasn't possible to simply compile the kernel modules for the
> standard Fedora images. These features are expected to come in future
> Fedora Cloud images but I didn't want to wait so made our own :) [3]
>
> # Cloud scaling
>
> Idle slaves will self-terminate after some timeout (currently 30m).
> When there are many jobs queueing up, more slaves (up to 5) will
> automatically start.
>
> If you're the first to trigger a build you'll have to be patient, as
> it's possible after some quiet time (after the night?) all slaves are
> gone; the system will boot up new ones automatically ASAP but this
> initial boot takes an extra couple of minutes.
>
> # Master node
>
> Well, security patching mostly, but also finally figured out how to
> work around the bugs which were preventing us from upgrading Jenkins.
>
> So now Jenkins is upgraded to latest, including *all plugins*. It
> seems to work but let's keep an eye on it, those plugins are not all
> maintained at the quality one would expect.
>
> In particular attempting to change EC2 configuration properties will
> now trigger a super annoying NPE [4]; either don't make further
> changes or resort to XML editing of the configuration.
>
> # Next
>
> I'm not entirely done; eventually I'd like to convert our master node
> to ENA/C5/NVMe as well - especially to be able to move all master and
> slaves into the same physical cluster - but I'll stop now and get back
> to Java so you all get a chance to identify problems caused by the new
> slaves before I cause more trouble.
>
> Thanks,
> Sanne
>
> 1 -
> https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/
> 2 -
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
> 3 - https://pagure.io/atomic-wg/issue/271
> 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856


[hibernate-dev] Plans to release 5.2.13?

2018-01-04 Thread Gail Badner
We discussed stopping 5.2 releases at the F2F, but I can't remember what
was decided.

I see that there is a 5.2 branch. Should we be backporting to the 5.2 branch?

Thanks,
Gail
___
hibernate-dev mailing list
hibernate-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev


Re: [hibernate-dev] New CI slaves now available!

2018-01-04 Thread Yoann Rodiere
Great, thanks for all the work!

Now that we have on-demand slave spawning, maybe we could get rid of our
"hack" of assigning 5 slots to each slave and a weight of 3 to
each job? I would expect the website and release jobs to rarely wait in the
queue, and if they do we can always set up a specific "priority queue" for
those jobs, with a dedicated slave pool.
Just asking for this because last time I checked, it was not possible to
assign weight to jobs defined as Jenkins pipelines. So these jobs ended up
with a weight of 1, and we ended up running multiple instances of those on
the same slave... which is obviously not good.
I can do the boring editing work on each and every job; I'm just asking
if it seems OK to you?
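
For context, the "weight" discussed above comes from the Heavy Job
plugin, which freestyle jobs carry in their config.xml roughly like this
(a sketch; the element name is approximated from the plugin's class
name, and the value reflects our convention). Pipeline jobs have no
equivalent property, which is why they default to a weight of 1:

```xml
<properties>
  <!-- Heavy Job plugin: this job occupies 3 of the slave's 5 executor slots -->
  <hudson.plugins.heavy__job.HeavyJobProperty>
    <weight>3</weight>
  </hudson.plugins.heavy__job.HeavyJobProperty>
</properties>
```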

Yoann Rodière
Hibernate NoORM Team
yo...@hibernate.org

On 5 January 2018 at 00:52, Steve Ebersole  wrote:

> Awesome Sanne!  Great work.
>
> Anything you need us to do to our jobs?
>
> On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero  wrote:
>
> > Hi all,
> >
> > we're having shiny new boxes running CI: more secure, way faster, and
> > with fewer "out of disk space" problems, I hope.
> >
> > # Slaves
> >
> > Slaves have been rebuilt from scratch:
> >  - from Fedora 25 to Fedora 27
> >  - NVMe disks for all storage, including databases, JDKs, dependency
> > stores, indexes and journals
> >  - Now using C5 instances to benefit from Amazon's new "Nitro" engines [1]
> >  - hardware offloading of network operations by enabling ENA [2]
> >  - NVMe drives also using provisioned IO
> >
> > This took a bit of unexpected low-level work, as Fedora images don't
> > support ENA yet, so I had to create a custom Fedora re-distribution AMI
> > first; it wasn't possible to simply compile the kernel modules for the
> > standard Fedora images. These features are expected in future Fedora
> > Cloud images, but I didn't want to wait, so I made our own :) [3]
> >
> > # Cloud scaling
> >
> > Idle slaves will self-terminate after some timeout (currently 30m).
> > When there are many jobs queueing up, more slaves (up to 5) will
> > automatically start.
> >
> > If you're the first to trigger a build you'll have to be patient: after
> > some quiet time (overnight, for example) all slaves may be gone; the
> > system will boot new ones automatically, but this initial boot takes an
> > extra couple of minutes.
> >
> > # Master node
> >
> > Well, security patching mostly, but I also finally figured out how to
> > work around the bugs that were preventing us from upgrading Jenkins.
> >
> > So now Jenkins is upgraded to the latest version, including *all
> > plugins*. It seems to work, but let's keep an eye on it; not all of
> > those plugins are maintained at the quality one would expect.
> >
> > In particular, attempting to change EC2 configuration properties will
> > now trigger a super annoying NPE [4]; either don't make further
> > changes, or resort to editing the configuration XML directly.
> >
> > # Next
> >
> > I'm not entirely done; eventually I'd like to convert our master node
> > to ENA/C5/NVMe as well - especially to be able to move the master and
> > all slaves into the same physical cluster - but I'll stop now and get
> > back to Java, so you all get a chance to identify problems caused by
> > the new slaves before I cause more trouble.
> >
> > Thanks,
> > Sanne
> >
> > 1 - https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/
> > 2 - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
> > 3 - https://pagure.io/atomic-wg/issue/271
> > 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856
___
hibernate-dev mailing list
hibernate-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev