Regarding Harry Schmalzbauer's message of 16.05.2017 18:26 (localtime):
> B
…
The issue is that the current UEFI implementation uses 64MB of staging
memory for loading the kernel, modules, and files. When the boot is
called, the relocation code will put the bits from the staging area
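The staging-then-relocate scheme described above boils down to the following pattern. This is a minimal, self-contained C sketch with invented names; it is not the actual loader code, which at the time lived around sys/boot/efi/loader/copy.c:

/*
 * Sketch of the 64MB staging scheme described above. All names here
 * are illustrative, not real loader symbols.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STAGING_SIZE	(64UL * 1024 * 1024)	/* the 64MB staging area */

static uint8_t *staging;

/* Load phase: the loader drops kernel/module/file bits into staging. */
static void
stage_copyin(const void *src, uintptr_t off, size_t len)
{
	memcpy(staging + off, src, len);
}

/*
 * Boot phase: just before control passes to the kernel, everything is
 * moved from staging to the addresses the kernel was linked for; this
 * is the relocation step mentioned above.
 */
static void
stage_finish(void *dest, size_t used)
{
	memmove(dest, staging, used);
}

int
main(void)
{
	char dest[32];

	/* The real loader allocates this via EFI boot services; malloc stands in. */
	staging = malloc(STAGING_SIZE);
	if (staging == NULL)
		return (1);
	stage_copyin("kernel bits", 0, sizeof("kernel bits"));
	stage_finish(dest, sizeof("kernel bits"));
	printf("%s\n", dest);
	free(staging);
	return (0);
}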
> On 29 June 2017, at 11:24, Harry Schmalzbauer wrote:
>
> Regarding Harry Schmalzbauer's message of 16.05.2017 18:26 (localtime):
>> B
> …
> The issue is that the current UEFI implementation uses 64MB of staging
> memory for loading the kernel, modules, and files. When the boot is
On 28 June 2017 22:38:52 GMT+08:00, Mark Millard wrote:
>A primary test is building lang/gcc5-devel under release/11.0.1
>and then using it under stable/11 or some draft of release/11.1.0 .
Thank you, Mark. Let me know how it went. In the meantime I'll prepare the
change for gcc5 itself.
>I
On 2017-Jun-29, at 3:10 AM, Gerald Pfeifer wrote:
> On 28 June 2017 22:38:52 GMT+08:00, Mark Millard wrote:
>> A primary test is building lang/gcc5-devel under release/11.0.1
>> and then using it under stable/11 or some draft of release/11.1.0 .
>
> Thank you, Mark. Let me know
Hi.
Say I have a server that traps more and more often (different
panics: ZFS panics, GPFs, fatal traps while in kernel mode, etc.), and
then I realize it has tons of permanent errors on all of its pools
that scrub is unable to heal. Does this situation mean it's a bad
memory case? Unfo
Hi,
On 29.06.2017 16:37, Eugene M. Zheganin wrote:
Hi.
Say I have a server that traps more and more often (different
panics: ZFS panics, GPFs, fatal traps while in kernel mode, etc.), and
then I realize it has tons of permanent errors on all of its pools
that scrub is unable to heal. Do
On Thu, Jun 29, 2017 at 6:04 AM, Eugene M. Zheganin wrote:
> Hi,
>
> On 29.06.2017 16:37, Eugene M. Zheganin wrote:
>>
>> Hi.
>>
>>
>> Say I have a server that traps more and more often (different panics:
>> ZFS panics, GPFs, fatal traps while in kernel mode, etc.), and then I realize
>> it has
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903
--- Comment #35 from Mateusz Guzik ---
Hi there, sorry for the late reply. This somehow fell through the cracks.
First of all, there is no kernel bug per se that I can see, rather a bug in the
Atom CPU which started manifesting itself. There we
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903
--- Comment #36 from Franco Fichtner ---
You can use https://github.com/opnsense/src/commit/6b79b52c.patch on stable/10;
it was verified working on 11.0.
Cheers,
Franco
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903
--- Comment #37 from Cassiano Peixoto ---
(In reply to Mateusz Guzik from comment #35)
Mateusz,
Yes, I realized many changes have been made on 11-STABLE related to this issue. I
think it could be fixed as well. Anyway, I have a server running
I am trying to attach a brand new disk to an Azure VM, and
what I see is the disk attaching and detaching immediately, like this:
da2 at storvsc3 bus 0 scbus5 target 0 lun 0
da2: Fixed Direct Access SPC-2 SCSI device
da2: 300.000MB/s transfers
da2: Command Queueing e
Hi, folks
any pointer to an explanation would be nice;
there seems to be no zfs(4) manpage ...
Reason for asking: I have a piece of software
that makes 14,000 ioctl() calls to that device during
one execution, and I'm asking myself what it tries
to do.
Thanks!
Patrick
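For what it's worth, that kind of ioctl traffic on /dev/zfs normally comes from libzfs: every libzfs operation is translated into one or more ZFS_IOC_* ioctls on that device, so a tool that walks pools and datasets racks up calls quickly. Below is a minimal sketch of such a consumer; the libzfs calls are the real API, but build details vary (roughly cc ... -lzfs -lnvpair on FreeBSD), and running it under truss(1) makes the ioctl stream visible:

#include <stdio.h>
#include <libzfs.h>

/* Callback for zpool_iter(); invoked once per imported pool. */
static int
print_pool(zpool_handle_t *zhp, void *arg)
{
	(void)arg;
	printf("pool: %s\n", zpool_get_name(zhp));
	zpool_close(zhp);
	return (0);
}

int
main(void)
{
	libzfs_handle_t *hdl;

	hdl = libzfs_init();			/* opens /dev/zfs */
	if (hdl == NULL)
		return (1);
	zpool_iter(hdl, print_pool, NULL);	/* each step issues ioctl()s on /dev/zfs */
	libzfs_fini(hdl);
	return (0);
}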
On Thu, Jun 29, 2017 at 8:28 AM, Patrick M. Hausen wrote:
> Hi, folks
>
> any pointer to an explanation would be nice;
> there seems to be no zfs(4) manpage ...
>
> Reason for asking: I have a piece of software
> that makes 14,000 ioctl() calls to that device during
> one execution, and I'm asking m
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903
--- Comment #38 from Chris Collins ---
(In reply to Mateusz Guzik from comment #35)
Just to let you know my pfsense unit affected by this issue does not have an
atom cpu.
It has a celeron N3150 cpu.
Hi,
I've got a dual-boot system at home, booting into Windows 10 for
games. I've noticed that since I updated to 11.1-BETA3, a reboot from
Windows into FreeBSD results in an endless reboot cycle. In order to
reboot FreeBSD, I have to cold-boot. The endless reboot cycle appears
to be triggered when
On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
> Hi,
>
> I've got a dual-boot system at home, booting into Windows 10 for
> games. I've noticed that since I updated to 11.1-BETA3, a reboot from
> Windows into FreeBSD results in an endless reboot cycle. In order to
> reboot FreeBSD,
On 30 June 2017 at 10:27, Glen Barber wrote:
> On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
>> Hi,
>>
>> I've got a dual-boot system at home, booting into Windows 10 for
>> games. I've noticed that since I updated to 11.1-BETA3, a reboot from
>> Windows into FreeBSD results in a
On Fri, Jun 30, 2017 at 10:32:23AM +1200, Jonathan Chen wrote:
> On 30 June 2017 at 10:27, Glen Barber wrote:
> > On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
> >> Hi,
> >>
> >> I've got a dual-boot system at home, booting into Windows 10 for
> >> games. I've noticed that since I
On 29 June 2017 18:55:59 GMT+08:00, Mark Millard wrote:
>I'm not currently set up to run more than head on
>any of amd64, powerpc64, powerpc, aarch64, or armv6/7
>(which are all I target). And I'm in the middle of
>attempting a fairly large jump to head -r320458 on
>those.
Oh, then I had misu