Hi Marco and Adam,
Thanks for the responses. Answers to your questions are inline.
On Sat, Jul 20, 2019 at 06:56:19PM +0200, Marco Steinbach wrote:
> I've outfitted all of them with 4-port Intel PRO/1000 PCIe driven by
> igb(4), and am not using the onboard re(4) NICs.
We use the onboard re(4) NICs.
I have a set of J1900 hosts running 11.0-RELEASE-p1 that experience
seemingly random panics. The panics are all basically the same:
Fatal trap 12: page fault while in kernel mode
fault code = supervisor read data, page not present
Adding workloads to the hosts seems to increase panic frequency, b
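If it helps, here is a minimal sketch of how a crash dump could be captured for the next panic, assuming swap is at least as large as RAM and a stock 11.x layout:

    # /etc/rc.conf -- let the kernel pick a dump device automatically
    dumpdev="AUTO"

    # After the next panic, savecore(8) writes the dump to /var/crash on
    # reboot; a backtrace can then be pulled out of it with kgdb, e.g.:
    #   kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.0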
Thanks Warner...
On Tue, Mar 19, 2019 at 05:14:15PM -0600, Warner Losh wrote:
> You need to set the NFS mount point properly.
I think I have? The 12.0 environment was basically a copy of a
functioning 11.0 PXE environment, and 12 worked fine with 11's pxeboot.
Regardless, turns out neither 12 n
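For reference, pxeboot normally gets its NFS root from the DHCP root-path option; a sketch of that stanza as it would look in dhcpd.conf, with invented addresses and paths:

    # dhcpd.conf fragment (addresses and paths are illustrative only)
    next-server 192.0.2.10;                         # TFTP server holding pxeboot
    filename "pxeboot";
    option root-path "192.0.2.10:/pxe/FreeBSD-12";  # NFS root pxeboot mounts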
Hello -stable,
We have a PXE environment that builds FreeBSD-11 boxes. We've started
to dip our toes into the 12.x waters, but have had trouble getting
FreeBSD-12 to pxeboot. It would crash and burn like so:
Startup error in /boot/lua/loader.lua:
LUA ERROR: cannot open /boot/lua/loader.lua
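The error suggests the 12.0 pxeboot is looking for the lua loader scripts under /boot in whatever it mounts as root, so copying them over from a 12.0 machine is the obvious thing to try; NFSROOT below is just a stand-in for whatever root-path points at:

    # Illustrative only: NFSROOT is whatever the DHCP root-path points at.
    cp -R /boot/lua ${NFSROOT}/boot/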
On Wed, Sep 23, 2015 at 01:37:30PM +0100, Matt Smith wrote:
> If this type of thing is being done on the base system sshd it would
> also be useful to look at the port version of ssh as well? I use the
> port and it has always annoyed me that I get constant "connection
> refused" whilst I'm wai
On Fri, Jul 20, 2012 at 03:46:21PM -0700, Doug Barton wrote:
>
> You probably know this already, but just in case ... Software memory
> tests cannot tell you conclusively that memory is good, only that it's
> bad.
I may have known that in a past life, but I certainly wasn't thinking about
it until now.
On Fri, Jul 20, 2012 at 04:09:28PM +0100, Dr Josef Karthauser wrote:
> Take care though, my system which had been working fine for about
> a year when I noticed the ZFS rot (which all appears to be recent
> in time). I ran memcheck+ on it for 8 hours or so, and it showed no
> errors at all. However
On Thu, Jul 19, 2012 at 06:05:32PM +0100, Dr Joe Karthauser wrote:
> Hi James,
>
> It's almost definitely a memory problem. I'd change it ASAP if I were
> you.
>
> I lost about 70mb from my zfs pool for this very reason just a few
> weeks ago. Luckily I had enough snapshots from before the rot set in.
I have a ZFS server on which I've seen periodic checksum errors on
almost every drive. While scrubbing the pool last night, it began to
report unrecoverable data errors on a single file.
I compared an md5 of the supposedly corrupted file to an md5 of the
original copy, stored on different media. T
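For the curious, that check amounts to nothing fancier than the following; the pool and file names here are stand-ins, not the real ones:

    # See which file the pool thinks is damaged.
    zpool status -v tank

    # Compare the allegedly corrupt copy against the copy kept on other media.
    md5 /tank/some/file /backup/some/file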
On Wed, Feb 23, 2005 at 12:58:52AM +0100, Pawel Jakub Dawidek wrote:
>
> ...and metadata at the beginning of the provider still doesn't fix 'c'
> partition problem and 'a' partition which starts at sector 0, which is
> the default start offset in sysinstall.
Is this the 'c' partition problem you're referring to?
I apologize in advance for the lack of details here. This break occurred
while I was rushing between locations and I didn't have an opportunity
to properly copy down the details of the error.
I have a 5.3 system with a geom_raid3 volume that was running -RELEASE
until this morning. I updated to -STABLE.
I built the world instead of just sshd and my problem went away. I guess
I won't build bits and pieces of the freshly cvsup'd world in the
future. :)
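For completeness, "built the world" here just means the stock sequence from the handbook, roughly the following, rather than anything exotic:

    # Rebuild and install everything instead of just sshd.
    cd /usr/src
    make buildworld
    make installworld
    mergemaster    # merge updated files into /etc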
-Snow
On Thu, Mar 22, 2001 at 11:56:33AM -0500, James Snow wrote:
> Looking at my cvsup from last night I figured the official