Re: [zfs-discuss] [mdb-discuss] onnv_142 - vfs_mountroot: cannot mount root

2010-09-13 Thread Gavin Maltby

On 09/07/10 23:26, Piotr Jasiukajtis wrote:

Hi,

After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system.
Here is what I get.

Any idea why it's not able to import rpool?

I saw this issue also on older builds on different machines.


This sounds (based on the presence of cpqary) not unlike:

6972328 Installation of snv_139+ on HP BL685c G5 fails due to panic during auto 
install process

which was introduced into onnv_139 by the fix for this

6927876 For 4k sector support, ZFS needs to use DKIOCGMEDIAINFOEXT

The fix came in onnv_148, after the external push switch-off, via

6967658 sd_send_scsi_READ_CAPACITY_16() needs to handle SBC-2 and SBC-3 
response formats

I experienced this on data pools rather than the rpool, but I suspect that on the
rpool you'd get the vfs_mountroot panic you see when the rpool import fails.  My
workaround was to compile zfs with the fix for 6927876 changed to force the default
physical block size of 512, and drop that into the BE before booting to it.
There was no simpler workaround available.

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RFE: allow zfs to interpret '.' as a dataset?

2008-08-31 Thread Gavin Maltby
Hi,

I'd like to be able to utter cmdlines such as

$ zfs set readonly=on .
$ zfs snapshot [EMAIL PROTECTED]

with '.' interpreted to mean the dataset corresponding to
the current working directory.

This would shorten what I find to be a very common operation -
that of discovering your current (working directory) dataset
and performing some operation on it.  I usually do this
with df and some cut and paste:

([EMAIL PROTECTED]:fx-review/fmaxvm-review2/usr/src/uts )-> df -h .
Filesystem size   used  avail capacity  Mounted on
tank/scratch/gavinm/fx-review/fmaxvm-review2
1.0T15G   287G 5%
/tank/scratch/gavinm/fx-review/fmaxvm-review2

([EMAIL PROTECTED]:fx-review/fmaxvm-review2/usr/src/uts )-> zfs set readonly=on 
tank/scratch/gavinm/fx-review/fmaxvm-review2

I know I could script this, but I'm thinking of general ease-of-use.
The failure semantics where . is not a zfs filesystem are clear;
perhaps one concern would be that it would be all too easy to
target the wrong dataset with something like 'zfs destroy .' - I'd
be happy to restrict the usage to non-destructive operations only.
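
In the meantime a rough interim hack does the job (a sketch only - it parses df
output the same way as the cut-and-paste above, and the 'ds' helper name is made up):

ds() { df -h . | awk 'NR==2 {print $1}'; }   # dataset backing the cwd

$ zfs set readonly=on "$(ds)"
$ zfs snapshot "$(ds)@mysnap"

It only works where '.' is on a zfs filesystem, which is the same failure mode
the built-in would have.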

Cheers

Gavin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel Panic

2008-11-18 Thread Gavin Maltby


Richard Elling wrote:
> Chris Gerhard wrote:
>> My home server running snv_94 is tripping the same assertion when 
>> someone lists a particular file:
>>   
> 
> Failed assertions indicate software bugs.  Please file one.

We learn something new every day!

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fmd dying in zfs shutdown?

2009-02-16 Thread Gavin Maltby

Hi,

James Litchfield wrote:

known issue? I've seen this 5 times over the past few days. I think
these were, for the most part BFUs on top of B107. x86.


Yes, Dan Price reported this happening after the fix for 6802281.
Not sure there is a CR number as yet;  Steve has a proposed
fix which you could try testing with, since his test box
was unable to reproduce the problem.

Gavin


# pstack fmd.733
core 'fmd.733' of 733:/usr/lib/fm/fmd/fmd
-----------------  lwp# 1 / thread# 1  -----------------
fe8c3347 libzfs_fini (0, fed9e000, 8047d08, fed74964) + 17
fed74979 zfs_fini (84701d0, fed9e000, 8047d38, fed7ac40) + 21
fed75adb bltin_fini (84701d0, 4, fed8a340, fed7a9f0) + 1b
fed7aa0f topo_mod_stop (84701d0, fed9e000, 8047d78, fed7b17e) + 2b
fed7b1ba topo_modhash_unload_all (84abe88, 84939a8, 8047dc8, fed803d2) + 4a
fed804b6 topo_close (84abe88, 84fdd70) + f2
0807c75f fmd_topo_fini (807305c, feef12d5, 0, 0, 8047e60, 8047e30) + 37
0806003d fmd_destroy (809a6d8, 4, 8047e78, 8072f67) + 281
08073075 main (1, 8047ea4, 8047eac, 805f2ef) + 365
0805f34d _start   (1, 8047f30, 0, 8047f44, 8047f5c, 8047f7d) + 7d

# mdb fmd.733
Loading modules: [ fmd libumem.so.1 libnvpair.so.1 libtopo.so.1 
libuutil.so.1 libavl.so.1 libsysevent.so.1 ld.so.1 ]

 > $c
libzfs.so.1`libzfs_fini+0x17(0, fed9e000, 8047d08, fed74964)
libtopo.so.1`zfs_fini+0x21(84701d0, fed9e000, 8047d38, fed7ac40)
libtopo.so.1`bltin_fini+0x1b(84701d0, 4, fed8a340, fed7a9f0)
libtopo.so.1`topo_mod_stop+0x2b(84701d0, fed9e000, 8047d78, fed7b17e)
libtopo.so.1`topo_modhash_unload_all+0x4a(84abe88, 84939a8, 8047dc8, 
fed803d2)

libtopo.so.1`topo_close+0xf2(84abe88, 84fdd70)
fmd_topo_fini+0x37(807305c, feef12d5, 0, 0, 8047e60, 8047e30)
fmd_destroy+0x281(809a6d8, 4, 8047e78, 8072f67)
main+0x365(1, 8047ea4, 8047eac, 805f2ef)
_start+0x7d(1, 8047f30, 0, 8047f44, 8047f5c, 8047f7d)
 > libzfs.so.1`libzfs_fini+0x17/i
libzfs.so.1`libzfs_fini+0x17:   pushl  0x4(%esi)
 > $r
%cs = 0x0043    %eax = 0xfed74958 libtopo.so.1`zfs_fini
%ds = 0x004b    %ebx = 0xfe934000
%ss = 0x004b    %ecx = 0x084701d0
%es = 0x004b    %edx = 0xfee12a00
%fs = 0x        %esi = 0x
%gs = 0x01c3    %edi = 0x084701d0

%eip = 0xfe8c3347 libzfs.so.1`libzfs_fini+0x17
%ebp = 0x08047cd8
%kesp = 0x

%eflags = 0x00010212
id=0 vip=0 vif=0 ac=0 vm=0 rf=1 nt=0 iopl=0x0
status=

 %esp = 0x08047cc4
%trapno = 0xe
 %err = 0x4

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



[zfs-discuss] usedby* properties for datasets created before v13

2009-03-10 Thread Gavin Maltby

Hi,

The manpage says

 Specifically,  used  =  usedbychildren + usedbydataset +
 usedbyrefreservation + usedbysnapshots.  These  proper-
 ties  are  only  available for datasets created on zpool
 "version 13" pools.

.. and I now realize that "created" at v13 is the important bit,
rather than "created pre v13 and upgraded", and I
see that datasets created on a version prior to 13
show "-" for these properties (it might be nice to note that
in the manpage - I took "-" to mean zero for a while).
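
(For reference, on a dataset that was created at v13 the identity is easy to check
directly - a sketch, with an illustrative dataset name:

$ zfs get -H -p -o property,value \
    used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots tank/fs

and the four usedby* values should sum to used.)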

Anyway, is there any way to retrospectively populate these
statistics (avoiding dataset reconstruction, that is)?
No chance a scrub would/could do it?

Thanks

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why did my zvol shrink ?

2009-03-11 Thread Gavin Maltby



Brian H. Nelson wrote:
I'm doing a little testing and I hit a strange point. Here is a zvol 
(clone)


pool1/volclone  type volume -
pool1/volclone  origin   pool1/v...@diff1   -
pool1/volclone  reservation  none   default
pool1/volclone  volsize  191G   -
pool1/volclone  volblocksize 8K -

The zvol has UFS on it. It has always been 191G and we've never 
attempted to resize it. However, if I just try to grow it, it gives me 
an error:


-bash-3.00# growfs /dev/zvol/rdsk/pool1/volclone
400555998 sectors < current size of 400556032 sectors


Is the zvol somehow smaller than it was originally? How/why?


I think ufs requires an integer number of cylinder groups and I'm
guessing the volume size you have presented it with is somewhere
in between - so it has rounded down to the largest cylinder
group boundary less than or equal to the device size.
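
For what it's worth, the numbers are consistent with rounding of only a handful of
sectors rather than any real shrink: 191 GiB = 191 x 1024^3 / 512 = 400,556,032
sectors, exactly the "current size" growfs reports, and the 400,555,998 figure is
just 34 sectors (17 KB) lower.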

Gavin

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Gavin Maltby

dick hoogendijk wrote:


true. Furthermore, much so-called consumer hardware is very good these
days. My guess is ZFS should work quite reliably on that hardware.
(i.e. non ECC memory should work fine!) / mirroring is a -must- !


No, ECC memory is a must too.  ZFS checksumming verifies and corrects
data read back from a disk, but once it is read from disk it is stashed
in memory for your application to use - without ECC you erode confidence that
what you read from memory is correct.

Gavin

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Gavin Maltby

Hi,

David Magda wrote:

On Jul 19, 2009, at 20:13, Gavin Maltby wrote:


No, ECC memory is a must too.  ZFS checksumming verifies and corrects
data read back from a disk, but once it is read from disk it is stashed
in memory for your application to use - without ECC you erode confidence that
what you read from memory is correct.


Right, because once (say) Apple incorporates ZFS into Mac OS X they'll 
also start shipping MacBooks and iMacs with ECC. 


If customers were committing valuable business data to MacBooks and iMacs
then ECC would be a requirement.  I don't know of terribly many
customers running their business off a laptop.

If it's so necessary we 
might as well have any kernel that has ZFS in it only allow 'zpool 
create' to be run if the kernel detects ECC modules.


Come on.



It's a nice-to-have, but at some point we're getting into the tinfoil 
hat-equivalent of data protection.


On a laptop zfs is a huge amount safer than other filesystems, and still has
all the great usability features etc - but zfs does not magically turn
your laptop into a server-grade system.  What you refer to as a tinfoil hat
is an essential component of any server that is housing business-vital
data;  obviously it is just a nice-to-have on a laptop, but recognise
what you're losing.

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Export ZFS over NFS ?

2007-01-31 Thread Gavin Maltby

On 01/30/07 17:59, Neal Pollack wrote:


I am assuming that one single command;
# zfs set sharenfs=ro bigpool
would share /export as a read-only NFS point?


It will share /export as read-only.  The property will also
be inherited by all filesystems below export, so they
too will be shared read-only.  You can override and
choose different sharenfs property values for particular
filesystems that you want to share differently.  Note that
a subdirectory of a directory that is already shared
cannot itself be shared unless it's in a different
filesystem (I think that summarizes the behaviour).
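
A minimal illustration of the inherit-and-override behaviour (dataset names are just
examples):

# zfs set sharenfs=ro tank/src
# zfs set sharenfs=rw tank/src/Codemgr_wsdata_rw
# zfs get -r -o name,property,value,source sharenfs tank/src

The source column shows 'local' where you overrode the property and 'inherited from
tank/src' everywhere else.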

I have arranged filesystems into hierarchies that will
share most property values, including sharenfs.  For
example I have the following filesystems:

tank
tank/scratch
tank/scratch/
tank/src
tank/src/Codemgr_wsdata_rw
tank/tools
tank/tools/ON
tank/tools/ON/on10
tank/tools/ON/on28
tank/tools/ON/on297
tank/tools/ON/on81
tank/tools/ON/on998
tank/tools/ON/onnv
tank/tools/ON/onnv/i386
tank/tools/ON/onnv/sparc
tank/tools/rootimages
tank/u
tank/u/

tank itself has sharenfs=off.

tank/src has sharenfs=ro so all source is read-only

but tank/src/Codemgr_wsdata_rw is read-write, since Teamware needs read-write
access to Codemgr_wsdata for bringover;  so each workspace (exported ro)
symlinks to a directory under /net/host/tank/src/Codemgr_wsdata_rw for a
writable dir.

Similarly tank/scratch is shared with root access (for nightly builds before
Solaris 10).

For seeing where you have specified local overrides, the -s option to
zfs get is great.  So here are all my sharenfs etc. properties that
are not inherited (I excluded quotas to reduce output and used -o
to try to make it format for email):

([EMAIL PROTECTED]: ~ )-> zfs get -s local -o name,property,value all | grep -v quota
NAME                        PROPERTY               VALUE
tank                        com.sun.cte.eu:backup  no
tank/scratch                sharenfs               anon=0,sec=sys,rw,root=pod3:pod4
tank/src                    sharenfs               ro
tank/src                    compression            on
tank/src                    com.sun.cte.eu:backup  yes
tank/src/Codemgr_wsdata_rw  mountpoint             /export/src/Codemgr_wsdata_rw
tank/src/Codemgr_wsdata_rw  sharenfs               rw
tank/src/Codemgr_wsdata_rw  compression            on
tank/tools                  com.sun.cte.eu:backup  yes
tank/tools/ON               sharenfs               ro
tank/tools/cluster          sharenfs               ro
tank/tools/rootimages       sharenfs               ro,anon=0,root=pod3,root=pod4
tank/tools/www              sharenfs               ro
tank/u                      sharenfs               rw
tank/u                      com.sun.cte.eu:backup  yes
tank/u/localsrc             mountpoint             /u/localsrc
tank/u/localsrc             sharenfs               on
tank/u/nightly              sharenfs               rw,root=pod3:pod4,anon=0
tank/u/nightly              com.sun.cte.eu:backup  no
tank/u/nightly/system       mountpoint             /u/nightly/system

The com.sun.cte.eu:backup is a local property that determines whether a filesystem
is backed up.  A script generates the list of filesystems and that gets sucked
into Networker.  Grouping by functionality helps keep this simple, as
most filesystems inherit their backup property from the parent and I just
override at the top of branches that I want to back up (and possibly
exclude some bits further down).
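
The script amounts to little more than this (a sketch, not the real thing - the
pool name and the "yes"/"no" convention are as described above):

#!/bin/ksh
# emit the mountpoint of every dataset whose com.sun.cte.eu:backup property is "yes"
zfs get -r -H -o name,value com.sun.cte.eu:backup tank |
while read name value; do
        [ "$value" = "yes" ] && zfs list -H -o mountpoint "$name"
done

and the resulting list of mountpoints is what gets fed to Networker.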

Hope that helps

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs submounts and permissions with autofs

2007-04-24 Thread Gavin Maltby

Hi,

Is it expected that if I have filesystems tank/foo and tank/foo/bar
(mounted under /tank), then in order to be able to browse via
/net down into tank/foo/bar I need to have group/other permissions
on /tank/foo open?

# zfs create tank/foo
# zfs create tank/foo/bar
# chown gavinm /tank/foo /tank/foo/bar
# zfs set sharenfs=rw tank/foo

# ls -laR /tank/foo
/tank/foo:
total 9
drwxr-xr-x   3 gavinm   sys3 Apr 24 00:24 .
drwxr-xr-x   9 root sys9 Apr 24 00:23 ..
drwxr-xr-x   2 gavinm   sys2 Apr 24 00:24 bar

/tank/foo/bar:
total 6
drwxr-xr-x   2 gavinm   sys2 Apr 24 00:24 .
drwxr-xr-x   3 gavinm   sys3 Apr 24 00:24 ..

Note that the perms on /tank/foo are 755 at this point.  Now
browse via /net down to the 'bar' level from some nfs client:

([EMAIL PROTECTED]:~ )-> cd /net/TB3.UK.SUN.COM
([EMAIL PROTECTED]:/net/TB3.UK.SUN.COM )-> cd tank/foo
([EMAIL PROTECTED]:/net/TB3.UK.SUN.COM/tank/foo )-> df -h .
Filesystem size   used  avail capacity  Mounted on
TB3.UK.SUN.COM:/tank/foo
   401G25K   401G 1%/net/TB3.UK.SUN.COM/tank/foo
([EMAIL PROTECTED]:/net/TB3.UK.SUN.COM/tank/foo )-> cd bar

([EMAIL PROTECTED]:/net/TB3.UK.SUN.COM/tank/foo/bar )-> df -h .
Filesystem size   used  avail capacity  Mounted on
TB3.UK.SUN.COM:/tank/foo/bar
   401G24K   401G 1%
/net/TB3.UK.SUN.COM/tank/foo/bar

So I am, as expected, in the tank/foo/bar filesystem.

But now change permissions on /tank/foo so that only I can access it:

# chmod 700 /tank/foo

# ls -laR /tank/foo
/tank/foo:
total 9
drwx--   3 gavinm   sys3 Apr 24 00:24 .
drwxr-xr-x   9 root sys9 Apr 24 00:23 ..
drwxr-xr-x   2 gavinm   sys2 Apr 24 00:24 bar

/tank/foo/bar:
total 6
drwxr-xr-x   2 gavinm   sys2 Apr 24 00:24 .
drwx--   3 gavinm   sys3 Apr 24 00:24 ..

And now I cannot browse into filesystem tank/foo/bar, only into
the mountpoint directory (different capitalisation below to
trigger new automounts under /net):

([EMAIL PROTECTED]:/net/TB3.UK.SUN.COM/tank/foo/bar )-> cd /net/TB3.uk.SUN.COM
([EMAIL PROTECTED]:/net/TB3.uk.SUN.COM )-> cd tank/foo
([EMAIL PROTECTED]:/net/TB3.uk.SUN.COM/tank/foo/bar )-> df -h .
Filesystem size   used  avail capacity  Mounted on
TB3.uk.SUN.COM:/tank/foo
   401G25K   401G 1%/net/TB3.uk.SUN.COM/tank/foo

Thanks

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot: Dividing up the name space

2007-04-26 Thread Gavin Maltby



On 04/24/07 17:30, Darren J Moffat wrote:

Richard Elling wrote:


/var/tm Similar to the /var/log rationale.


[assuming /var/tmp]


I intended to type /var/fm not /var/tm or /var/tmp.  The FMA state data 
is I believe something that you would want to share between all boot 
environments on a given bit of hardware, right ?


Yes, under normal production circumstances that is what you'd want.
I guess under some test circumstances you may want different state
for different BEs.

I'd also like to have compression turned on by default for /var/fm.
It will cost nothing in terms of cpu time, since additions to that
tree happen at a very low rate and only in small chunks of data at a time;
but the small chunks can add up on a system suffering solid errors
if the ereports are not throttled in some way, and they're eminently
compressible.
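
If /var/fm does end up as its own dataset that is a one-liner to arrange (a sketch -
the dataset name and root-pool layout are hypothetical):

# zfs create -o mountpoint=/var/fm -o compression=on rpool/varfm
# zfs get compression,compressratio rpool/varfm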

There are a couple of CRs logged for this somewhere.

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [nfs-discuss] Multi-tera, small-file filesystems

2007-04-26 Thread Gavin Maltby



On 04/24/07 01:37, Richard Elling wrote:

Leon Koll wrote:
My guess that Yaniv assumes that 8 pools with 62.5 million files each 
have significantly less chances to be corrupted/cause the data loss 
than 1 pool with 500 million files in it.

Do you agree with this?


I do not agree with this statement.  The probability is the same,
regardless of the number of files.  By analogy, if I have 100 people
and the risk of heart attack is 0.1%/year/person, then dividing those
people into groups does not change their risk of heart attack.


Is that not because heart attacks in different people are (under normal
circumstances!) independent events?  8 filesystems backed by a single
pool are not independent;  8 filesystems from 8 distinct pools are a lot
more independent.
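
To put rough numbers on it (a back-of-the-envelope sketch, assuming each *pool* is
lost with some small independent probability p per year): with one 500-million-file
pool a loss event takes everything at once, while with 8 pools the chance that at
least one is hit rises to 1 - (1-p)^8, roughly 8p, but each event takes only ~62.5
million files.  The expected loss is about the same either way; what changes is how
correlated the losses are, i.e. how much you stand to lose in a single event.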

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs panic on boot

2007-09-29 Thread Gavin Maltby

Hi,

Our zfs nfs build server running snv_73 (pool created back before
zfs integrated into ON) panicked, I guess from zfs, the first time,
and now panics on every attempted boot as below.  Is this
a known issue and, more importantly (2TB of data in the pool),
are there any suggestions on how to recover (other than from backup)?

panic[cpu0]/thread=ff003cc8dc80: zfs: allocating allocated 
segment(offset=24872013824 size=4096)
ff003cc8d3c0 genunix:vcmn_err+28 ()
ff003cc8d4b0 zfs:zfs_panic_recover+b6 ()
ff003cc8d540 zfs:space_map_add+db ()
ff003cc8d5e0 zfs:space_map_load+1f4 ()
ff003cc8d620 zfs:metaslab_activate+66 ()
ff003cc8d6e0 zfs:metaslab_group_alloc+24e ()
ff003cc8d7b0 zfs:metaslab_alloc_dva+192 ()
ff003cc8d850 zfs:metaslab_alloc+82 ()
ff003cc8d8a0 zfs:zio_dva_allocate+68 ()
ff003cc8d8c0 zfs:zio_next_stage+b3 ()
ff003cc8d8f0 zfs:zio_checksum_generate+6e ()
ff003cc8d910 zfs:zio_next_stage+b3 ()
ff003cc8d980 zfs:zio_write_compress+239 ()
ff003cc8d9a0 zfs:zio_next_stage+b3 ()
ff003cc8d9f0 zfs:zio_wait_for_children+5d ()
ff003cc8da10 zfs:zio_wait_children_ready+20 ()
ff003cc8da30 zfs:zio_next_stage_async+bb ()
ff003cc8da50 zfs:zio_nowait+11 ()
ff003cc8dad0 zfs:dmu_objset_sync+172 ()
ff003cc8db40 zfs:dsl_pool_sync+199 ()
ff003cc8dbd0 zfs:spa_sync+1c5 ()
ff003cc8dc60 zfs:txg_sync_thread+19a ()
ff003cc8dc70 unix:thread_start+8 ()

In case it matters this is an X4600 M2.  There is about
1.5TB in use out of a 2TB pool.  The IO devices are
nothing exciting but adequate for building - 2 x T3b.
The pool was created under sparc on the old nfs server.

Thanks

Gavin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs panic on boot

2007-10-01 Thread Gavin Maltby

Hi,

On 09/29/07 22:00, Gavin Maltby wrote:

Hi,

Our zfs nfs build server running snv_73 (pool created back before
zfs integrated into ON) panicked, I guess from zfs, the first time,
and now panics on every attempted boot as below.  Is this
a known issue and, more importantly (2TB of data in the pool),
are there any suggestions on how to recover (other than from backup)?

panic[cpu0]/thread=ff003cc8dc80: zfs: allocating allocated 
segment(offset=24872013824 size=4096)


So in desperation I set 'zfs_recover', which just produced an
assertion failure moments after the original panic location;
also setting 'aok' to blast through assertions has allowed
me to import the pool again (I had booted -m milestone=none
and blown away /etc/zfs/zpool.cache to be able to boot at
all).
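
For the record, those knobs are typically set via /etc/system (a sketch - strictly
a temporary measure for recovering the pool, to be removed again afterwards):

* /etc/system - temporary, for pool recovery only
set zfs:zfs_recover = 1
set aok = 1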

Luckily just the single corruption is apparent at the moment, i.e.
just a single assertion caught after running for half a day like this:

Sep 30 17:01:53 tb3 genunix: [ID 415322 kern.warning] WARNING: zfs:
allocating allocated segment(offset=24872013824 size=4096)
Sep 30 17:01:53 tb3 genunix: [ID 411747 kern.notice] ASSERTION CAUGHT:
sm->sm_space == space (0xc4896c00 == 0xc4897c00), file:
../../common/fs/zfs/space_map.c, line: 355

What I'd really like to know is whether/how I can map from that
assertion at the pool level back down to a single filesystem
or even file using this segment - perhaps I can recycle that file
to free the segment and set the world straight again?

A scrub is only 20% complete, but has found no errors thus far.  I checked
the T3 pair and there are no complaints there either - I did reboot them just for
luck (last reboot was 2 years ago, apparently!).

Gavin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs panic on boot

2007-10-01 Thread Gavin Maltby

On 10/01/07 17:01, Richard Elling wrote:

T3 comment below...

[cut]

A scrub is only 20% complete, but has found no errors thus far.  I checked
the T3 pair and there are no complaints there either - I did reboot them just for
luck (last reboot was 2 years ago, apparently!).


Living on the edge...
The T3 has a 2 year battery life (time is counted).  When it decides the
batteries are too old, it will shut down the nonvolatile write cache.
You'll want to make sure you have fresh batteries soon.


Thanks - we have replaced the batteries in that time - there is no need to shut down
during battery replacement.

Gavin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Preferred backup s/w

2008-02-21 Thread Gavin Maltby

On 02/21/08 16:31, Rich Teer wrote:


What is the current preferred method for backing up ZFS data pools,
preferably using free ($0.00) software, and assuming that access to
individual files (a la ufsbackup/ufsrestore) is required?


For home use I am making very successful use of zfs incremental send
and receive.  A script decides which filesystems to back up (based
on a user property retrieved by zfs get) and snapshots the filesystem;
it then looks for the last snapshot that the pool I'm backing
up and the pool I'm backing up to have in common, and
does a zfs send -i | zfs receive over that.  Backups are pretty
quick since there is not a huge amount of churn in the filesystems,
and on my backup disks I have browsable access to a snapshot of
my data from every backup I have run.
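
Stripped to its essentials the script does something like this (a sketch only -
pool, dataset and snapshot names are illustrative, and a first run with no common
snapshot would need a full send instead of -i):

#!/bin/ksh
src=tank/home                 # filesystem being backed up
dst=backup/home               # its counterpart on the backup pool
snap=backup-$(date +%Y%m%d)

zfs snapshot $src@$snap

# newest snapshot present on both the source and the backup
common=$(zfs list -r -H -o name -t snapshot $dst | sed 's/.*@//' |
    while read s; do
        zfs list $src@$s >/dev/null 2>&1 && echo $s
    done | tail -1)

zfs send -i $src@$common $src@$snap | zfs receive -F $dst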

Gavin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Gavin Maltby

On 06/02/06 10:09, Rainer Orth wrote:

Robert Milkowski <[EMAIL PROTECTED]> writes:


So it can look like:

[...]

   c0t2d0s1c0t2d0s1  SVM mirror, SWAP SWAP/s1 size =
   sizeof(/ + /var + 
/opt)


You can avoid this by swapping to a zvol, though at the moment this
requires a fix for CR 6405330.  Unfortunately, since one cannot yet dump to
a zvol, one needs a dedicated dump device in this case ;-(


Dedicated dump devices are *always* best, so this is no loss.  Dumping
through filesystem code when it may be that code itself which caused the
panic is badness.
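
In command terms that boils down to something like (a sketch - device and volume
names are placeholders, and the zvol swap part needs the CR fix Rainer mentions):

# zfs create -V 4G tank/swapvol
# swap -a /dev/zvol/dsk/tank/swapvol
# dumpadm -d /dev/dsk/c0t2d0s1        (dedicated raw dump slice, not a zvol)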

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs list -o usage info missing 'name'

2006-06-26 Thread Gavin Maltby

Hi

Probably been reported a while back, but 'zfs list -o' does not
list the rather useful (and obvious) 'name' property, and nor does the manpage
at a quick read.  snv_42.

# zfs list -o
missing argument for 'o' option
usage:
list [-rH] [-o property[,property]...] [-t type[,type]...]
[filesystem|volume|snapshot] ...

The following properties are supported:

PROPERTY       EDIT  INHERIT   VALUES

type             NO       NO   filesystem | volume | snapshot
creation         NO       NO   <date>
used             NO       NO   <size>
available        NO       NO   <size>
referenced       NO       NO   <size>
compressratio    NO       NO   <1.00x or higher if compressed>
mounted          NO       NO   yes | no | -
origin           NO       NO   <snapshot>
quota           YES       NO   <size> | none
reservation     YES       NO   <size> | none
volsize         YES       NO   <size>
volblocksize     NO       NO   512 to 128k, power of 2
recordsize      YES      YES   512 to 128k, power of 2
mountpoint      YES      YES   <path> | legacy | none
sharenfs        YES      YES   on | off | share(1M) options
checksum        YES      YES   on | off | fletcher2 | fletcher4 | sha256
compression     YES      YES   on | off | lzjb
atime           YES      YES   on | off
devices         YES      YES   on | off
exec            YES      YES   on | off
setuid          YES      YES   on | off
readonly        YES      YES   on | off
zoned           YES      YES   on | off
snapdir         YES      YES   hidden | visible
aclmode         YES      YES   discard | groupmask | passthrough
aclinherit      YES      YES   discard | noallow | secure | passthrough
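
The property does work when you name it explicitly, e.g. (illustrative pool name):

# zfs list -o name,used,available,mountpoint tank

it is just missing from the usage message above, and easy to miss in the manpage.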

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs assertion failure

2006-09-08 Thread Gavin Maltby

Hi,

My desktop paniced last night during a zfs receive operation.  This
is a dual opteron system running snv_47 and bfu'd to DEBUG project bits that
are in sync with the onnv gate as of two days ago.  The project bits
are for Opteron FMA and don't appear at all active in the panic.
I'll log a bug unless someone reconises this as a known issue:

> ::status
debugging crash dump vmcore.0 (64-bit) from enogas
operating system: 5.11 onnv-dev (i86pc)
panic message:
assertion failed: ((&dnp->dn_blkptr[0])->blk_birth == 0) || list_head(&dn->dn_dirty_dbufs[txgoff]) 
!= 0L || dn->dn_next_blksz[txgoff] >> 9 == dnp->dn_datablkszsec, file: ../../common/fs/zfs/dnode_syn

dump content: kernel pages only

> $c
vpanic()
assfail+0x7e(f06daa80, f06daa58, 220)
dnode_sync+0x5ef(8e0ce3f8, 0, 8e0c81c0, 8adde1c0)
dmu_objset_sync_dnodes+0xa4(8be25340, 8be25480, 
8adde1c0)
dmu_objset_sync+0xfd(8be25340, 8adde1c0)
dsl_dataset_sync+0x4a(8e2286c0, 8adde1c0)
dsl_pool_sync+0xa7(89ef3900, 248bbb)
spa_sync+0x1d5(82ea2700, 248bbb)
txg_sync_thread+0x221(89ef3900)
thread_start+8()

dnode_sync(dnode_t *dn, int level, zio_t *zio, dmu_tx_t *tx)
{
free_range_t *rp;
int txgoff = tx->tx_txg & TXG_MASK;
dnode_phys_t *dnp = dn->dn_phys;
...
if (dn->dn_next_blksz[txgoff]) {
ASSERT(P2PHASE(dn->dn_next_blksz[txgoff],
SPA_MINBLOCKSIZE) == 0);
ASSERT(BP_IS_HOLE(&dnp->dn_blkptr[0]) ||
list_head(&dn->dn_dirty_dbufs[txgoff]) != NULL ||
dn->dn_next_blksz[txgoff] >> SPA_MINBLOCKSHIFT ==
dnp->dn_datablkszsec);
...
}
...
}


We get

txgoff = 0x248bbb & 0x3 = 0x3
dnp = 0xfe80e648b400

> 0xfe80e648b400::print dnode_phys_t
{
dn_type = 0x16
dn_indblkshift = 0xe
dn_nlevels = 0x1
dn_nblkptr = 0x3
dn_bonustype = 0
dn_checksum = 0
dn_compress = 0
dn_flags = 0x1
dn_datablkszsec = 0x1c
dn_bonuslen = 0
dn_pad2 = [ 0, 0, 0, 0 ]
dn_maxblkid = 0
dn_used = 0x800
dn_pad3 = [ 0, 0, 0, 0 ]
dn_blkptr = [
{
blk_dva = [
{
dva_word = [ 0x2, 0x3015472 ]
}
{
dva_word = [ 0x2, 0x4613b32 ]
}
{
dva_word = [ 0, 0 ]
}
]
blk_prop = 0x801607030001001b
blk_pad = [ 0, 0, 0 ]
blk_birth = 0x221478
blk_fill = 0x1
blk_cksum = {
zc_word = [ 0x4b4b88c4e6, 0x39c18ca2a5a1, 0x16ea3555d00431,
0x640a1f2b2c8b322 ]
}
}
]
dn_bonus = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... ]
}

So regarding the assertion we have (&dnp->dn_blkptr[0])->blk_birth == 0x221478

> 8e0ce3f8::print -at dnode_t dn_dirty_dbufs[3]
{
8e0ce510 size_t dn_dirty_dbufs[3].list_size = 0x198
8e0ce518 size_t dn_dirty_dbufs[3].list_offset = 0x120
8e0ce520 struct list_node dn_dirty_dbufs[3].list_head = {
8e0ce520 struct list_node *list_next = 0x8e0ce520
8e0ce528 struct list_node *list_prev = 0x8e0ce520
}
}

So we have list_empty() for that list (list_next above points to list_head)
and list_head() will have returned NULL.  So we're relying on the
3rd component of the assertion to pass:

> 8e0ce3f8::print dnode_t dn_next_blksz
dn_next_blksz = [ 0, 0, 0, 0x4a00 ]

We're using the 0x4a00 from that.  0x4a00 >> 9 = 0x25; from the
dnode_phys_t above we have dnp->dn_datablkszsec of 0x1c.  Boom.

Sun folks can login to enogas.uk and /var/crash/enogas/*,0 is
accessible.

Gavin







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs assertion failure

2006-09-08 Thread Gavin Maltby

On 09/08/06 15:20, Mark Maybee wrote:

Gavin,

Please file a bug on this.



I filed 6468748.  Attaching the core now.

Cheers

Gavin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss