Re: [zfs-discuss] ZFS boot: 3 smaller glitches with console,

2007-08-09 Thread Yannick Robert
Hello

It seems I have the same problem after a ZFS boot installation (following this 
setup on an snv_69 release: 
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The outputs 
from the requested commands are similar to the outputs posted by dev2006.

Reading that page, I found no solution for the /dev/random problem. Is there a 
procedure somewhere to repair my install?

Thanks in advance
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Roger Fujii
I managed to create a link in a ZFS directory that I can't remove.  Session as 
follows:

# ls
bayes.lock.router.3981  bayes_journal   user_prefs
# ls -li bayes.lock.router.3981
bayes.lock.router.3981: No such file or directory
# ls
bayes.lock.router.3981  bayes_journal   user_prefs
# /usr/sbin/unlink bayes.lock.router.3981
unlink: No such file or directory
# find . -print
.
./bayes_journal
find: stat() error ./bayes.lock.router.3981: No such file or directory
./user_prefs
#


ZFS scrub shows no problems in the pool.  Now, this was probably caused when I 
was doing some driver work, so I'm not too surprised, BUT it would be nice if 
there were a way to clean this up without having to copy the filesystem to a new 
ZFS filesystem and destroy the current one.  Any suggestions, anyone?
Thanks.

-r
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot: 3 smaller glitches with console,

2007-08-09 Thread Yannick Robert
Forgot to specify some details:

In my setup I do not install the ufsroot.

I have 2 disks:
- c0d0 for the UFS install
- c1d0s0, which is the ZFS root I want to use

My idea is to remove the c0d0 disk once the system is OK.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Michael Schuster
Roger Fujii wrote:
> I managed to create a link in a ZFS directory that I can't remove.  Session 
> as follows:
> 
> # ls
> bayes.lock.router.3981  bayes_journal   user_prefs
> # ls -li bayes.lock.router.3981
> bayes.lock.router.3981: No such file or directory
> # ls
> bayes.lock.router.3981  bayes_journal   user_prefs
> # /usr/sbin/unlink bayes.lock.router.3981
> unlink: No such file or directory
> # find . -print
> .
> ./bayes_journal
> find: stat() error ./bayes.lock.router.3981: No such file or directory
> ./user_prefs
> #

make sure you have no unprintable characters in the file name (e.g. with a 
command like
    ls -las | od -c
or some such)
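
For what it's worth, a couple of ways to make any stray bytes in a name visible 
(command forms are illustrative only):

    ls -b           # Solaris ls prints non-printable characters as \ooo octal escapes
    ls | od -c      # dump the raw bytes of every name in the directory

If the name looks clean in both, it's probably not an encoding problem.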

HTH
Michael
-- 
Michael SchusterSun Microsystems, Inc.
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] dfratime on zfs

2007-08-09 Thread Darren J Moffat
Prompted by a recent /. article on atime vs. relatime ranting by some 
Linux kernel hackers (Linus included), I went back and looked at the 
mount_ufs(1M) man page, because I was sure that OpenSolaris had more than 
just atime/noatime.  Sure enough, UFS has dfratime.

So that got me wondering: does ZFS need dfratime, or is it just not a 
problem because ZFS works in a different way?  If ZFS did have dfratime, 
how would it impact the "always consistent on disk" requirement?  One 
thought was that the ZIL would need to be used to ensure that the writes 
got to disk eventually, but then that would mean we were still writing, 
just to the ZIL instead of the dataset itself.

If this is already covered somewhere, please point me to the docs, since I 
couldn't see it mentioned in anything I've read.
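
For reference, the knobs that exist today look like this (device and dataset 
names below are placeholders, not from any particular system):

    # UFS: defer access-time writes until the disk is touched for another reason
    mount -F ufs -o dfratime /dev/dsk/c0t0d0s7 /export

    # ZFS currently only offers the coarse per-dataset on/off switch
    zfs set atime=off tank/home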

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS boot and swrand (Was Re: ZFS boot: 3 smaller glitches with console, )

2007-08-09 Thread Darren J Moffat
Yannick Robert wrote:
> Hello
> 
> it seems i have the same problem after zfs boot installation (following this 
> setup on a snv_69 release 
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The 
> outputs from the requested command are similar to the outputs posted by 
> dev2006.
> 
> Reading this page, i found no solution concerning the /dev/random problem. Is 
> there somewhere a procedure to repair my install ?

I've been thinking about the /dev/random problem recently and I think I 
know what the problem is, but not yet how to fix it.  I've cc'd the rest 
of the crypto team.

With a UFS boot there is no attempted use of randomness until after 
svc://system/cryptosvc has run.  That service does two important things: 
first, it starts kcfd to put in place the kernel thread pool for async 
crypto (not relevant to randomness); second, it runs 'cryptoadm 
refresh', which pushes the (private) /etc/crypto/kcf.conf into the kernel.
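
On a fully booted system the end state is easy to check (ordinary commands, 
nothing specific to this issue):

    svcs cryptosvc      # the service should be online
    cryptoadm list      # swrand should be listed among the kernel software providers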

When /dev/random was initially integrated it was monolithic, that is, the 
randomness pool and the entropy gatherer were combined.  Later on, when KCF 
came along, we split apart the pool (drv/random) from the software 
entropy provider (crypto/swrand).

Unlike a UFS boot, a ZFS boot does use the in-kernel interface to 
/dev/random (random_get_bytes) before svc://system/cryptosvc has run. 
The message you are seeing is from KCF saying that it has a random pool 
but nothing providing entropy to it.  This is because swrand hasn't yet 
registered with KCF.

Now, this was all done prior to newboot and SMF, and part of the goal of 
having KCF work this way with software providers was to ensure no boot-time 
performance regressions, by loading on demand rather than forcing 
the loading of all modules at boot time.  With newboot on x86, and soon 
on SPARC, the swrand module will be in the boot archive anyway.


-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Roger Fujii
I guess I should have included this output too:

# ls -al
total 124
drwx------     2 rmf   other      5 Aug  9 05:26 .
drwx--x--x   148 rmf   other    283 Aug  9 05:40 ..
-rw-------     1 rmf   other  26616 Apr 16 00:17 bayes_journal
-rw-------     1 rmf   other   1938 Apr 15 04:03 user_prefs

This isn't an unprintable-character thing - find would not report errors 
because of that (I would never have seen this if find hadn't spewed out the 
error).  It's definitely a directory entry that doesn't point to anything.   
It's really in this funny state where some things think something is there, 
and others don't think anything is there at all.

# ls
bayes.lock.router.3981  bayes_journal   user_prefs
# touch bayes.lock.router.3981
touch: bayes.lock.router.3981 cannot create
# rm bayes.lock.router.3981
bayes.lock.router.3981: No such file or directory
# touch test
# ln test bayes.lock.router.3981
ln: cannot create link bayes.lock.router.3981: File exists
# pwd
/home/rmf/.spamassassin
# cd ..
# rm -r .spamassassin
bayes.lock.router.3981: No such file or directory
rm: Unable to remove directory .spamassassin: File exists

The filesystem gods do taunt me... :)

-r
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS-related panic...

2007-08-09 Thread Noel Nihill
Hi all. 

I've just encountered a SunFire V240 which panics whenever a zpool scrub 
is done, or whenever two of the filesystems are accessed.

After some rummaging around I came across bug report 6537415 from
July this year, which seems to be an exact replica of the panic msgbuf I see. 

I'm wondering if there was a patch or something released for this, or was
it put down to cosmic radiation?  We have a good many systems here on
Solaris 10 6/06 and ZFS, all of which are running nicely except this one, 
which seems to have gotten itself into a right old state.

Thanks for any tips.


Some info: 


# uname -a 
SunOS cashel 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-V240 
# cat /etc/release 
   Solaris 10 6/06 s10s_u2wos_09a SPARC 
   Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved. 
Use is subject to license terms. 
 Assembled 09 June 2006 
# zpool status 
  pool: apps-storage 
 state: ONLINE 
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        apps-storage  ONLINE       0     0     0
          c0t0d0s4    ONLINE       0     0     0
          c0t1d0      ONLINE       0     0     0
          c0t2d0      ONLINE       0     0     0
          c0t3d0      ONLINE       0     0     0

errors: 0 data errors, use '-v' for a list 
# zpool list 
NAME           SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
apps-storage   254G   5.69G    248G     2%  ONLINE  -
# zfs list 
NAME   USED  AVAIL  REFER  MOUNTPOINT 
apps-storage  5.69G   244G  24.5K  /apps-storage 
apps-storage/appl 5.66G   244G  5.66G  /appl 
apps-storage/cache24.5K   244G  24.5K  /data/cache 
apps-storage/data 30.5K   244G  30.5K  /data 
apps-storage/download1  24.5K   244G  24.5K  /data/download1 
apps-storage/download2  24.5K   244G  24.5K  /data/download2 
apps-storage/home 27.5M   244G  27.5M  /export/home 
apps-storage/oradata01  24.5K   244G  24.5K  /oradata01 
apps-storage/oradata02  24.5K   244G  24.5K  /oradata02 
apps-storage/oradata03  24.5K   244G  24.5K  /oradata03 
apps-storage/oradata04  24.5K   244G  24.5K  /oradata04 
apps-storage/oradump  24.5K   244G  24.5K  /oradump 
apps-storage/oralogs1  24.5K   244G  24.5K  /oralogs1 
apps-storage/oralogs2  24.5K   244G  24.5K  /oralogs2 
apps-storage/trace_archive1  24.5K   244G  24.5K  /data/trace_archive1 
apps-storage/trace_log1  24.5K   244G  24.5K  /data/trace_log1 
# 


 


errors: The following persistent errors have been detected: 

  DATASET              OBJECT  RANGE
  mos                  116     4096-8192
  17                   20      lvl=0 blkid=0
  17                   23      lvl=0 blkid=0
  17                   36      lvl=0 blkid=0
  ..
  ..
  apps-storage/appl    846     0-512
  apps-storage/appl    848     0-512
  apps-storage/appl    850     0-512
  apps-storage/appl    866     0-131072
  ..
  ..
  apps-storage/home    216     131072-262144
  apps-storage/home    216     262144-393216
  apps-storage/home    217     0-131072


 


# pwd 
/var/crash/cashel 
# ls 
bounds      unix.0      vmcore.0
# adb -P "adb: " -k ./unix.0 ./vmcore.0 
physmem fe547 
adb: $C 
02a100a0e521 vpanic(11eb430, 7bb701a0, 5, 7bb701e0, 0, 7bb701e8)
02a100a0e5d1 assfail3+0x94(7bb701a0, 5, 7bb701e0, 0, 7bb701e8, 133)
02a100a0e691 space_map_load+0x1a4(600034903b8, 6000b356000, 1000, 60003490088, 4000, 1)
02a100a0e761 metaslab_activate+0x3c(60003490080, 8000, c000, 7f0eafc4, 60003490080, c000)
02a100a0e811 metaslab_group_alloc+0x1c0(3fff, 600, 8000, 222d5, 60003459240, )
02a100a0e8f1 metaslab_alloc_dva+0x114(0, 222d5, 60003459240, 600, 60001238b00, 24cbaf)
02a100a0e9c1 metaslab_alloc+0x2c(0, 600, 60003459240, 3, 24cbaf, 0)
02a100a0ea71 zio_dva_allocate+0x4c(6000b119d40, 7bb537ac, 60003459240, 703584a0, 70358400, 20001)
02a100a0eb21 zio_write_compress+0x1ec(6000b119d40, 23e20b, 23e000, 1f001f, 3, 60003459240)
02a100a0ebf1 arc_write+0xe4(6000b119d40, 6000131ad80, 7, 3, 3, 24cbaf)
02a100a0ed01 dbuf_sync+0x6d8(6000393f630, 6000afb2ac0, 119, 3, 7, 24cbaf)
02a100a0ee21 dnode_sync+0x35c(1, 1, 6000afb2ac0, 60001349c40, 0, 2)
02a100a0eee1 dmu_objset_sync_dnodes+0x6c(60001a86f80, 60001a870c0, 60001349c40, 600035c4310, 600032b5be0, 0)
02a100a0ef91 dmu_objset_sync+0x54(60001a86f80, 60001349c40, 3, 3, 60004d3ef38, 24cbaf)
02a100a0f0a1 dsl_pool_sync+0xc4(30ad540, 60001a87
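
For anyone retracing this, the same information can also be pulled from the 
dump with mdb(1); a minimal sketch using the same file names:

# mdb ./unix.0 ./vmcore.0
> ::status          # panic string and basic dump details
> $C                # stack backtrace, as in the adb session above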

Re: [zfs-discuss] ZFS boot: 3 smaller glitches with console,

2007-08-09 Thread Jürgen Keil
> in my setup i do not install the ufsroot.
> 
> i have 2 disks 
> -c0d0 for the ufs install 
> -c1d0s0 which is my zfs root i want to exploit
> 
> my idea is to remove the c0d0 disk when the system will be ok

Btw., if you're trying to pull the UFS disk c0d0 from the system, and
physically move the ZFS root disk from c1d0 -> c0d0 and use that as
the only disk (= boot disk) in the system, you'll probably run into the
problem that the ZFS root becomes unbootable, because the
/etc/zfs/zpool.cache file still records the c1d0 name for the
zpool containing the rootfs.

To fix it you probably have to boot a failsafe kernel from somewhere,
zpool import the pool from the disk's new location, copy the
updated /etc/zfs/zpool.cache into the ZFS root filesystem, and build
new boot archives there...
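
Roughly along these lines (a sketch only -- the pool and dataset names are 
placeholders, and the root dataset is assumed to use a legacy mountpoint as in 
the manual setup):

# from the failsafe environment (or another install):
zpool import -f rootpool                  # the pool is re-found at its new device path
mount -F zfs rootpool/rootfs /a           # mount the zfs root under /a
cp /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache
bootadm update-archive -R /a              # rebuild the boot archive in the zfs root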
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot: 3 smaller glitches with console,

2007-08-09 Thread Jürgen Keil
> it seems i have the same problem after zfs boot
> installation (following this setup on a snv_69 release
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ).

Hmm, in step 4, wouldn't it be better to use ufsdump / ufsrestore
instead of find / cpio to clone the UFS root into the ZFS root pool?

cd /zfsroot
ufsdump 0f - / | ufsrestore -xf -


Advantages:

- it copies the mountpoint for the /etc/dfs/dfstab filesystem
  (and all the other mountpoints, like /tmp, /proc, /etc/mnttab, ...)


- it does not mess up the /lib/libc.so.1 shared library

  I think the procedure at the above url could copy the wrong
  version of the shared libc.so.1 into the zfsroot /lib/libc.so.1;
  this might explain bugs like 6423745,
  Synopsis: zfs root pool created while booted 64 bit can not be booted 32 bit

  
- the files hidden by the /devices mount are copied, too


> The outputs from the requested command
> are similar to the outputs posted by dev2006.
> 
> Reading this page, i found no solution concerning the
> /dev/random problem. Is there somewhere a procedure
> to repair my install ?


AFAICT, there's nothing you can do to avoid the
"WARNING: No randomness provider enabled for /dev/random."
message with zfs root at this time.  It seems that zfs mountroot
needs some random numbers for mounting the zfs root filesystem,
and at that point early during the bootstrap there isn't a fully initialized
random device available.  This fact is remembered by the random
device and is reported later on, when the system is fully booted.

I think when the system is fully booted from zfs root, the random
device should work just fine.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Jürgen Keil
> I managed to create a link in a ZFS directory that I can't remove.  
>
> # find . -print
> .
> ./bayes_journal
> find: stat() error ./bayes.lock.router.3981: No such
> file or directory
> ./user_prefs
> #
> 
> 
> ZFS scrub shows no problems in the pool.  Now, this
> was probably cause when I was doing some driver work
> so I'm not too surprised, BUT it would be nice if
> there was a way to clean this up without having to
> copy the filesystem to a new zfs filesystem and
> destroying the current one.

Are you running OpenSolaris with release or debug kernel bits?

Maybe a kernel with ZFS compiled as debug bits would print
some extra error messages, or maybe panic the machine, when
that broken file is accessed?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Roger Fujii
This is on a sol10u3 box.   I could boot snv temporarily on this box if it 
would accomplish something.  

> Maybe a kernel with a zfs compiled as debug bits would print
> some extra error messages or maybe panic the machine when
> that broken file is accessed? 

Panic? That's rather draconian  

Ok...  ran zdb on the containing directory, and I see:

    Object  lvl   iblk   dblk  lsize  asize  type
    110733    1    16K  2.50K  2.50K     1K  ZFS directory
                                 264  bonus  ZFS znode
        path            /.spamassassin
        atime           Thu Aug  9 06:10:16 2007
        mtime           Thu Aug  9 06:07:39 2007
        ctime           Thu Aug  9 06:07:39 2007
        crtime          Fri Oct  6 09:37:52 2006
        gen             25595
        mode            40700
        size            3
        parent          3
        links           2
        xattr           0
        rdev            0x
        microzap: 2560 bytes, 1 entries

                bayes.lock.router.3981 = 8659
Indirect blocks:
               0 L0 0:134d8b6e00:200 a00L/200P F=1 B=16034915

                segment [, 0a00) size 2.50K

and there is no entry for 8659. (wish zdb was documented somewhere).
I suppose I could just create a gazillion files until it reuses the unused 
slot...
(assuming ZFS reuses object #s)   :)
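
In case it helps anyone poking at the same thing: zdb isn't formally 
documented, but it can usually dump individual objects like this (a sketch 
only; substitute your own pool/filesystem name):

    zdb -dddd <pool>/<filesystem> 110733   # the directory object shown above
    zdb -dddd <pool>/<filesystem> 8659     # does the dangling object number resolve?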

-r
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Tuomas Leikola
Hi!

I'm having a hard time finding out whether it's possible to force ditto
blocks onto different devices.

This mode has many benefits, not the least being that it practically
creates a fully dynamic form of mirroring (replacing RAID-1 and RAID-10
variants), especially when combined with the upcoming vdev remove and
defrag/rebalance features.

Is this already available? Is it scheduled? Why not?
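
For context, the per-dataset knob that exists today is the copies property; 
the extra copies are spread across top-level vdevs on a best-effort basis only 
(the dataset name below is a placeholder):

    zfs set copies=2 tank/important
    zfs get copies tank/important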

- Tuomas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Mario Goebbels
> This mode has many benefits, the least not being that is practically
> creates a fully dynamic mode of mirroring (replacing raid1 and raid10
> variants), especially when combined with the upcoming vdev remove and
> defrag/rebalance features.

Vdev remove, that's a sure thing. I've heard about defrag before, but
when I asked, no one confirmed it.

The same goes for that mention of single-disk "RAID", which I think is
supposed to write one parity block for every n data blocks, so that disk
errors can be healed without having a truly redundant setup.

> Is this already available? Is it scheduled? Whyt not?

Actually, ZFS is already supposed to try to write the ditto copies of a
block on different vdevs if multiple are available.

As far as finding out goes, I suppose if you use a simple JBOD, in
theory, you could try by offlining one disk. But I think in a
non-redundant setup, the pool refuses to start if a disk is missing (I
think that should be changed, to allow evacuation of properly dittoed data).

-mg



signature.asc
Description: OpenPGP digital signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Tuomas Leikola
>
> Actually, ZFS is already supposed to try to write the ditto copies of a
> block on different vdevs if multiple are available.
>

*TRY*  being the keyword here.

What I'm looking for is a disk-full error if the ditto copy cannot be written
to a different disk. This would guarantee that a mirror is written on a
separate disk - and the entire filesystem can be salvaged after a full
disk failure.

Think about the classic case of 50M, 100M and 200M disks: only
150M can really be mirrored, and the remaining 50M can only be used
non-redundantly.

> ...But I think in a
> non-redundant setup, the pool refuses to start if a disk is missing (I
> think that should be changed, to allow evacuation of properly dittoed data).

IIRC this is already considered a bug.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Mario Goebbels
>> Actually, ZFS is already supposed to try to write the ditto copies of a
>> block on different vdevs if multiple are available.
> 
> *TRY*  being the keyword here.
> 
> What I'm looking for is a disk full error if ditto cannot be written
> to different disks. This would guarantee that a mirror is written on a
> separate disk - and the entire filesystem can be salvaged from a full
> disk failure.

If you're that bent on having maximum redundancy, I think you should
consider implementing real redundancy. I'm also biting the bullet and
going with mirrors (cheaper than RAID-Z for home, fewer disks needed to start
with).

The problem here is that the filesystem, especially with a considerable
fill factor, can't guarantee the necessary allocation balance across the
vdevs (that is, maintaining the necessary free space) to spread the ditto
blocks as optimally as you'd like. Implementing the required code would
increase the overhead a lot. Not to mention that ZFS might have to defragment
on the fly more often than not to make sure the ditto spread stays
balanced.

And then snapshots on top of that, which are supposed to be physically
and logically immovable (unless you execute commands affecting the pool,
like a vdev remove, I suppose), just add to the existing complexity
into which all of that would have to be hammered.

My 2c.

-mg



signature.asc
Description: OpenPGP digital signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Richard Elling
Tuomas Leikola wrote:
>> Actually, ZFS is already supposed to try to write the ditto copies of a
>> block on different vdevs if multiple are available.
> 
> *TRY*  being the keyword here.
> 
> What I'm looking for is a disk full error if ditto cannot be written
> to different disks. This would guarantee that a mirror is written on a
> separate disk - and the entire filesystem can be salvaged from a full
> disk failure.

We call that a "mirror" :-)

> Think about having the classic case of 50M, 100M and 200M disks. only
> 150M can be really mirrored and the remaining 50M can only be used
> non-redundantly.

Assuming full disk failure mode, yes.
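
A minimal sketch of what that mirror looks like in practice (device names are 
placeholders):

   zpool create tank mirror c1t0d0 c1t1d0    # every block lands on both disks
   zpool add tank mirror c1t2d0 c1t3d0       # grow the pool with another mirrored pair
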
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Matthew Ahrens
Roger,

Could you send us (off-list is fine) the output of "truss ls -l "?  And 
also, the output of "zdb -vvv "?  (which will compress 
well with gzip if it's huge.)

thanks,
--matt


Roger Fujii wrote:
> This is on a sol10u3 box.   I could boot snv temporarily on this box if it 
> would accomplish something.  
> 
>> Maybe a kernel with a zfs compiled as debug bits would print
>> some extra error messages or maybe panic the machine when
>> that broken file is accessed? 
> 
> Panic? That's rather draconian  
> 
> Ok...  ran zdb on the containing directory, and I see:
> 
>     Object  lvl   iblk   dblk  lsize  asize  type
>     110733    1    16K  2.50K  2.50K     1K  ZFS directory
>                                  264  bonus  ZFS znode
>         path            /.spamassassin
>         atime           Thu Aug  9 06:10:16 2007
>         mtime           Thu Aug  9 06:07:39 2007
>         ctime           Thu Aug  9 06:07:39 2007
>         crtime          Fri Oct  6 09:37:52 2006
>         gen             25595
>         mode            40700
>         size            3
>         parent          3
>         links           2
>         xattr           0
>         rdev            0x
>         microzap: 2560 bytes, 1 entries
> 
>                 bayes.lock.router.3981 = 8659
> Indirect blocks:
>                0 L0 0:134d8b6e00:200 a00L/200P F=1 B=16034915
> 
>                 segment [, 0a00) size 2.50K
> 
> and there is no entry for 8659. (wish zdb was documented somewhere).
> I suppose I could just create a gazillion files until it reuses the unused 
> slot...
> (assuming ZFS reuses object #s)   :)
> 
> -r
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS improvements

2007-08-09 Thread Robert Milkowski
Hello Gino,

Wednesday, April 11, 2007, 10:43:17 AM, you wrote:

>> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B.
>> Rang wrote:
>> > 
>> > That's only one cause of panics.
>> > 
>> > At least two of gino's panics appear due to
>> corrupted space maps, for
>> > instance. I think there may also still be a case
>> where a failure to
>> > read metadata during a transaction commit leads to
>> a panic, too. Maybe
>> > that one's been fixed, or maybe it will be handled
>> by the above bug.
>> 
>> The space map bugs should have been fixed as part of:
>> 
>> 6458218 assertion failed: ss == NULL
>> 
>> Which went into Nevada build 60.  There are several
>> different
>> pathologies that can result from this bug, and I
>> don't know if the
>> panics are from before or after this fix. 

G> If that can help you, we are able to corrupt a zpool on snv_60 doing the 
following a few times:

G> -create a raid10 zpool   (dual path luns)
G> -making a high writing load on that zpool
G> -disabling fc ports on both the fc switches

G> Each time we get a kernel panic, probably because of 6322646, and
G> sometimes we get a corrupted zpool.

Is that still the case? Was the pool-corruption problem addressed
and hopefully solved?



-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [crypto-discuss] ZFS boot and swrand (Was Re: ZFS boot: 3 smaller glitches with console, )

2007-08-09 Thread Krishna Yenduri
Darren J Moffat wrote:
> Yannick Robert wrote:
>   
>> Hello
>>
>> it seems i have the same problem after zfs boot installation (following this 
>> setup on a snv_69 release 
>> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ). The 
>> outputs from the requested command are similar to the outputs posted by 
>> dev2006.
>>
>> Reading this page, i found no solution concerning the /dev/random problem. 
>> Is there somewhere a procedure to repair my install ?
>> 

 To answer Yannick's question: the /dev/random warning message does not
 indicate any problem with the install and can be ignored.

> ...
>
> Unlike UFS when we do a ZFS boot we do use the in kernel interface to 
> /dev/random (random_get_bytes) before svc://system/cryptosvc has run.
>   

 To be exact, the API used by the ZFS kernel module is
 random_get_pseudo_bytes().

> The message you are seeing is from KCF saying that it has a random pool 
> but nothing providing entropy to it.  This is because swrand hasn't yet 
> registered with kcf.
>   

 We had a similar issue with SCTP, wherein it uses the kernel API
 random_get_pseudo_bytes() before swrand could register.

 The solution we had there was to load swrand directly.  From
uts/sparc/ip/Makefile:

78  #
79  # Depends on md5 and swrand (for SCTP). SCTP needs to depend on
80  # swrand as it needs random numbers early on during boot before
81  # kCF subsystem can load swrand.
82  #
83  LDFLAGS += -dy -Nmisc/md5 -Ncrypto/swrand -Nmisc/hook -Nmisc/neti


 I think we can do a similar thing here.  The zfs (or is it zfs-root?)
kernel module can have crypto/swrand as a dependency.  I see that
uts/sparc/zfs/Makefile lists drv/random as a dependency.  This is not
needed, because the API is in modstubs now and is not implemented in
drv/random any more.  That can be replaced with crypto/swrand.
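
 In Makefile terms the suggested change would look roughly like this (a sketch
 of the idea only, not a quote of the actual uts/sparc/zfs/Makefile; other
 existing dependencies would stay as they are):

# depend on crypto/swrand instead of drv/random, so entropy is available
# before the kCF service has had a chance to load providers on demand
LDFLAGS += -dy -Ncrypto/swrand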

 swrand does not need any crypto signature verification, so it can
safely be loaded early on during boot.

> Now this was all done prior to newboot and SMF and part of the goal of 
> why KCF works this way with software providers is was to ensure no boot 
> time performance regressions by doing load on demand rather than forcing 
> the loading of all modules at boot time.
 
Yes. This requirement added a lot of complexity to KCF.

> With newboot on x86, and soon 
> on SPARC, the swrand module will be in the boot archive anyway.
>   

 That would be great. It is cleaner and will remove the need for ad hoc
 solutions like above.

-Krishna


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss