>
> My next plan would be reporting the problem with sufficient
> information so the bug can be fixed.
>
> Destroying the dataset or the whole pool seems like papering over the
> real issue to me, and you could still do it if the PR gets ignored for
> too long or a developer agrees that this is the
> It's probably a long shot, but you may try removing the bad file using an
> illumos (ex-OpenSolaris) system (or LiveCD). In the past it used to be
> a little more robust than FreeBSD when it came to dealing with
> filesystem corruption. In one case I had illumos print out a message
> about corrupt
>
> Does:
>
> cat /dev/null > bad.file
>
> Cause a kernel panic?
>
>
>
Ah, sadly that does cause a kernel panic. I hadn't tried it before; thanks
for the suggestion.
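Since the plan upthread is to file a PR with sufficient information, a sketch of the usual FreeBSD crash-dump setup may help. These are the standard rc.conf(5) knobs, not something from this thread; the sketch writes them to a temp file standing in for /etc/rc.conf so it is safe to run anywhere:

```shell
# Standard crash-dump knobs (see rc.conf(5)); a temp file stands in
# for /etc/rc.conf here so this sketch has no side effects.
rc=$(mktemp)
printf 'dumpdev="AUTO"\n'       >> "$rc"  # reserve swap-backed dump space
printf 'dumpdir="/var/crash"\n' >> "$rc"  # where savecore(8) puts vmcore.*
cat "$rc"
rm -f "$rc"
```

After the next panic and reboot, savecore(8) leaves a vmcore in dumpdir; that core plus the transcribed panic string are the useful attachments for the PR.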
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/fre
On Dec 29, 2012 at 3:35 AM, Artem Belevich wrote:
>
> On Fri, Dec 28, 2012 at 12:46 PM, Greg Bonett wrote:
>
>> However, I can't figure out how to destroy the /tank filesystem without
>> destroying /tank/tempfs (and the other /tank children). Is it possible to
>> de
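For what it's worth, ZFS refuses to destroy a dataset that still has children, and a pool's root dataset cannot be destroyed at all short of destroying the pool. One hedged way to preserve the children first is to replicate them elsewhere; the loop below only prints the commands rather than running them, and the child names and the "backup" pool are made-up placeholders:

```shell
# Dry-run sketch: replicate each child dataset to another pool before
# touching the parent. "backup" and the child names are hypothetical.
for child in tempfs home media; do
  echo "zfs snapshot tank/${child}@migrate"
  echo "zfs send tank/${child}@migrate | zfs recv backup/${child}"
done
```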
Many months ago, I believe some *very bad hardware* caused corruption of a
file on one of my zfs file systems. I've isolated the corrupted file and
can reliably induce a kernel panic with "touch bad.file", "rm bad.file", or
"ls -l" in bad.file's directory (ls in bad.file's dir doesn't cause
pa
> > I'm experiencing a kernel panic that appears to be caused by zfs.
> >
> > No errors are making it into /var/log/messages, but here is the
> > error message that appears on my screen after panic (transcribed):
> >
> > panic: solaris assert: BSWAP_32(sa_hdr_phys->sa_magic) == SA_MAGIC,
> > file:
>
Hello,
I'm experiencing a kernel panic that appears to be caused by zfs.
No errors are making it into /var/log/messages, but here is the error
message that appears on my screen after panic (transcribed):
panic: solaris assert: BSWAP_32(sa_hdr_phys->sa_magic) == SA_MAGIC,
file:
/src/sys/modules/zf
I wanted to send a quick message to resolve this thread. With the help
of a friend, I was able to recover the data in question. Since the file
was significantly smaller than the ZFS block size, and compression was
not enabled (or at least was not enabled at the time the file was
written), we were a
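The recovery trick hinted at above can be sketched as follows: with compression off and the file smaller than the recordsize, the file's bytes sit contiguously on disk, so a raw scan for a known unique string can locate them. In this hedged demo a scratch file stands in for the disk device, and the marker string is made up; on a real system the input would be the vdev (or decrypted provider) itself:

```shell
# Create a scratch "device" containing the lost data among junk bytes.
dev=$(mktemp)
printf 'junk-junk-UNIQUE-MARKER the lost file contents junk' > "$dev"

# Locate the byte offset of the unique string
# (-a: treat binary as text, -b: print byte offset, -o: print only the match).
off=$(grep -a -b -o 'UNIQUE-MARKER' "$dev" | head -n1 | cut -d: -f1)

# Carve out a window starting at the hit.
dd if="$dev" bs=1 skip="$off" count=40 2>/dev/null
echo
rm -f "$dev"
```

On real hardware you would widen the dd window to a full record around the hit and trim by eye.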
On Thu, 2011-06-09 at 14:57 -0700, Artem Belevich wrote:
On Thu, Jun 9, 2011 at 1:00 PM, Greg Bonett wrote:
> > Hi all,
> > I know this is a long shot, but I figure it's worth asking. Is there
> > any way to recover a file from a zfs snapshot which was destroyed? I know
Hi all,
I know this is a long shot, but I figure it's worth asking. Is there
any way to recover a file from a zfs snapshot which was destroyed? I know
the name of the file and a unique string that should be in it. The zfs
pool is on geli devices so I can't dd the raw device and look for it.
Any sugg
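One hedged observation on the geli point: geli only prevents scanning the raw, encrypted disk. Once the provider is attached, the decrypted .eli node exposes plaintext and can be searched the same way as a bare disk. Printed here as a dry run; the device name is hypothetical:

```shell
# Dry run: the decrypted geli provider, not the raw disk, is what you scan.
echo "grep -a -b -o 'unique string from the file' /dev/ada0.eli"
```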
> If the eSATA port is on the motherboard backplane (e.g. a port that's
> soldered to the motherboard), then you're fine. Be aware that the eSATA
> port may be connected to the JMicron controller, however, which I've
> already said is of questionable quality to begin with. :-)
I wanted to send
Thanks for all the help. I've learned some new things, but haven't fixed
the problem yet.
> 1) Re-enable both CPU cores; I can't see this being responsible for the
> problem. I do understand the concern over added power draw, but see
> recommendation (4a) below.
I re-enabled all cores but exper
OK, I think you're right: there is more than one problem with this
system, but I think I'm starting to isolate them and make some
progress.
> # Debugging options
> options BREAK_TO_DEBUGGER # Sending a serial BREAK drops to DDB
> options KDB                     # Enable kernel debugger support
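For reference, the quoted fragment matches the stock debugging block in the FreeBSD NOTES/GENERIC configs; a typical full set (the exact options vary by release, so treat this as a sketch) looks like:

```
# Debugging options
options KDB                     # Enable kernel debugger support
options DDB                     # Support DDB
options GDB                     # Support remote GDB
options BREAK_TO_DEBUGGER       # Sending a serial BREAK drops to DDB
```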
dmesg.log.
On Mon, 2011-02-07 at 21:52 -0800, Jeremy Chadwick wrote:
> On Mon, Feb 07, 2011 at 09:34:36PM -0800, Greg Bonett wrote:
> > Thank you for the help. I've implemented your
> > suggested /boot/loader.conf and /etc/sysctl.conf tunings.
> > Unfortunately, after
Jeremy Chadwick wrote:
> On Sun, Feb 06, 2011 at 11:50:41PM -0800, Greg Bonett wrote:
> > Thanks for the response.
> > I have no tunings in /boot/loader.conf
> > according to http://wiki.freebsd.org/ZFSTuningGuide for amd64
> > "FreeBSD 7.2+ has improved kernel me
_testing
500 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute
delay.
On Sun, 2011-02-06 at 20:55 -0800, Jeremy Chadwick wrote:
> On Sun,
Hi all,
I am experiencing a hard lockup when running 8.1-RELEASE amd64. The last
two times it has happened I was running a zpool scrub (high cpu and io
load). /var/log/messages has some errors looking like:
kernel: ad0: FAILURE - READ_DMA4
but these didn't seem to correspond to the exact time of t
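Given the ad0 READ_DMA failures in the log, a common first step is to query the drive's SMART state; sketched here as a dry run (smartctl comes from sysutils/smartmontools, and the device name is taken from the log line above):

```shell
# Dry run: print the SMART commands one would run against the failing disk.
echo "smartctl -a /dev/ad0"        # full SMART attributes plus error log
echo "smartctl -t short /dev/ad0"  # kick off a short offline self-test
```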