>From kostik...@gmail.com Thu Aug 27 18:22:37 2015
>
>On Thu, Aug 27, 2015 at 01:12:16PM +0100, Anton Shterenlikht wrote:
>> ia64 stable/10 r286315 boots, but 
>> r286316 hangs at "Entering /boot/kernel/kernel".
>> 
>> Please advise
>
>To state the obvious: the commit you pointed to changes code that is
>not executed at that early stage of kernel boot.  That revision cannot
>cause the consequences you described.

yes, I'm surprised too.

>I think you either have a build-environment issue that pops up randomly,
>or there is some other sporadic boot-time issue.  The only suggestion I
>have is to try many boots with kernels that look either good or bad; I
>would not be surprised if the statistics turned out to be quite
>different from a binary good/bad outcome.
>
>Otherwise, I have no idea.
>

I doubt it's a random or sporadic issue.
I did a bisection, as suggested, during which
I built world/kernel at 7 revisions, and then,
once I had narrowed the range down to fewer than
50 revisions, a further 4 kernels.
All kernels <= r286315 boot; all kernels >= r286316
do not.  If this were something random, I don't think
the picture would be so clear-cut.
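
For reference, what I did at each step of the bisection was roughly
the following (r286315 is just an example here, and my exact
invocations may have differed slightly):

# svn update -r 286315 /usr/src
# cd /usr/src
# make buildworld
# make buildkernel
# make installkernel
# shutdown -r now

and after each reboot I noted whether the machine got past
"Entering /boot/kernel/kernel" or hung there.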

Could it be something in my loader.conf?

# cat /boot/loader.conf 
zfs_load="YES"
# soft limits
kern.dfldsiz=536748032  # default soft limit for process data
kern.dflssiz=536748032  # default soft limit for stack
# hard limits
kern.maxdsiz=536748032  # hard limit for process data
kern.maxssiz=536748032  # hard limit for stack
kern.maxtsiz=536748032  # hard limit for text size
                        # processes may not exceed these limits.
# 
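
For what it's worth, these are loader tunables, but I believe the same
values are also visible read-only via sysctl once the system is up, so
they can be cross-checked after boot:

# sysctl kern.maxdsiz kern.maxssiz kern.maxtsiz
# sysctl kern.dfldsiz kern.dflssiz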

My memory:

real memory  = 8589934592 (8192 MB)
avail memory = 8387649536 (7999 MB)

I'll try disabling all these settings in loader.conf
and see if it makes a difference.
But these settings have been there for a few years
with no problems.
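
If commenting them out of /boot/loader.conf isn't convenient, I believe
the same tunables can also be cleared for a single boot at the loader
prompt (this is from memory, so the exact command names may be off):

OK unset kern.maxdsiz
OK unset kern.maxssiz
OK unset kern.maxtsiz
OK unset kern.dfldsiz
OK unset kern.dflssiz
OK boot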

Anton
