Hello
It seems I have the same problem after a ZFS boot installation (following this
setup on an snv_69 release:
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The outputs
from the requested commands are similar to the outputs posted by dev2006.
Reading this page, I found no so
I managed to create a link in a ZFS directory that I can't remove. Session as
follows:
# ls
bayes.lock.router.3981 bayes_journal user_prefs
# ls -li bayes.lock.router.3981
bayes.lock.router.3981: No such file or directory
# ls
bayes.lock.router.3981 bayes_journal user_prefs
Forgot to specify some details:
In my setup I do not install the UFS root.
I have 2 disks:
- c0d0 for the UFS install
- c1d0s0, which is the ZFS root I want to use
My idea is to remove the c0d0 disk once the system is OK.
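(For reference, creating the root pool on that second disk amounts to something like this; the pool name is my own placeholder, not from the guide:)
# zpool create -f rootpool c1d0s0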
Roger Fujii wrote:
> I managed to create a link in a ZFS directory that I can't remove. Session
> as follows:
>
> # ls
> bayes.lock.router.3981 bayes_journal user_prefs
> # ls -li bayes.lock.router.3981
> bayes.lock.router.3981: No such file or directory
> # ls
> bayes.lock.router.3981 bayes_journal user_prefs
Prompted by a recent /. article on atime vs relatime ranting by some
Linux kernel hackers (Linus included), I went back and looked at the
mount_ufs(1M) man page, because I was sure that OpenSolaris had more than
just atime,noatime. Yep, sure enough, UFS has dfratime.
So that got me wondering: does
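(If anyone wants to try it, deferred access-time updates are just a mount option; the device and mount point below are made-up examples:)
# mount -F ufs -o dfratime /dev/dsk/c0t0d0s7 /export/home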
Yannick Robert wrote:
> Hello
>
> It seems I have the same problem after a ZFS boot installation (following this
> setup on an snv_69 release:
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The
> outputs from the requested commands are similar to the outputs posted by
> dev2006.
I guess I should have included this output too:
# ls -al
total 124
drwx------     2 rmf    other        5 Aug  9 05:26 .
drwx--x--x   148 rmf    other      283 Aug  9 05:40 ..
-rw-------     1 rmf    other    26616 Apr 16 00:17 bayes_journal
-rw-------     1 rmf    other     1938 Apr 15 04
Hi all.
I've just encountered a SunFire V240 which panics whenever a zpool
scrub is done, or whenever two of the filesystems are accessed.
After some rummaging around I came across bug report 6537415 from
July this year, which seems to be an exact replica of the panic msgbuf I see.
I'm won
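(For the record, the trigger is nothing more exotic than the scrub itself; the pool name is a placeholder:)
# zpool scrub <pool>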
> In my setup I do not install the UFS root.
>
> I have 2 disks:
> - c0d0 for the UFS install
> - c1d0s0, which is the ZFS root I want to use
>
> My idea is to remove the c0d0 disk once the system is OK.
Btw, if you're trying to pull the UFS disk c0d0 from the system and
physically move the
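(For context, the tail end of that manual setup is roughly the following; the pool and dataset names here are my own illustration, not necessarily what the guide uses:)
# zpool set bootfs=rootpool/rootfs rootpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0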
> It seems I have the same problem after a ZFS boot
> installation (following this setup on an snv_69 release:
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ).
Hmm, in step 4, wouldn't it be better to use ufsdump / ufsrestore
instead of find / cpio to clone the UFS root into the ZFS root?
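Something along these lines, I mean (source slice and target mountpoint are placeholders):
# ufsdump 0f - /dev/rdsk/c0d0s0 | (cd /zfsroot; ufsrestore rf -)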
> I managed to create a link in a ZFS directory that I can't remove.
>
> # find . -print
> .
> ./bayes_journal
> find: stat() error ./bayes.lock.router.3981: No such file or directory
> ./user_prefs
> #
>
>
> ZFS scrub shows no problems in the pool. Now, this
> was probably caused when I was
This is on a sol10u3 box. I could boot snv temporarily on this box if it
would accomplish something.
> Maybe a kernel with a zfs compiled as debug bits would print
> some extra error messages or maybe panic the machine when
> that broken file is accessed?
Panic? That's rather draconian
Hi!
I'm having a hard time finding out whether it's possible to force ditto
blocks onto different devices.
This mode has many benefits, not the least being that it practically
creates a fully dynamic mode of mirroring (replacing raid1 and raid10
variants), especially when combined with the upcoming vdev remove and
defrag/rebalance features.
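(For reference, per-dataset ditto blocks for user data are requested via the copies property; the pool/dataset names below are made up:)
# zfs set copies=2 tank/home
# zfs get copies tank/home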
> This mode has many benefits, not the least being that it practically
> creates a fully dynamic mode of mirroring (replacing raid1 and raid10
> variants), especially when combined with the upcoming vdev remove and
> defrag/rebalance features.
Vdev remove, that's a sure thing. I've heard about def
>
> Actually, ZFS is already supposed to try to write the ditto copies of a
> block on different vdevs if multiple are available.
>
*TRY* being the keyword here.
What I'm looking for is a "disk full" error if the ditto copies cannot
be written to different disks. This would guarantee that a mirror is written
>> Actually, ZFS is already supposed to try to write the ditto copies of a
>> block on different vdevs if multiple are available.
>
> *TRY* being the keyword here.
>
> What I'm looking for is a "disk full" error if the ditto copies cannot
> be written to different disks. This would guarantee that a mirror is written
Tuomas Leikola wrote:
>> Actually, ZFS is already supposed to try to write the ditto copies of a
>> block on different vdevs if multiple are available.
>
> *TRY* being the keyword here.
>
> What I'm looking for is a "disk full" error if the ditto copies cannot
> be written to different disks. This would guarantee that a mirror is written
Roger,
Could you send us (off-list is fine) the output of "truss ls -l <file>"?
And also, the output of "zdb -vvv <pool>"? (which will compress well with
gzip if it's huge.)
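(e.g., captured along these lines; the file and pool names are placeholders:)
# truss -o /tmp/ls.truss ls -l <file>
# zdb -vvv <pool> | gzip > /tmp/zdb.out.gz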
thanks,
--matt
Roger Fujii wrote:
> This is on a sol10u3 box. I could boot snv temporarily on this box if it
> would accomplish something.
Hello Gino,
Wednesday, April 11, 2007, 10:43:17 AM, you wrote:
>> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
>> >
>> > That's only one cause of panics.
>> >
>> > At least two of Gino's panics appear due to corrupted space maps,
>> > for instance. I think there may also
Darren J Moffat wrote:
> Yannick Robert wrote:
>
>> Hello
>>
>> It seems I have the same problem after a ZFS boot installation (following this
>> setup on an snv_69 release:
>> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The
>> outputs from the requested commands are similar to the outputs posted by
>> dev2006.