[zfs-discuss] snapshot incremental diff file list?

2007-07-25 Thread asa
Hello all,
I am interested in getting a list of the changed files between two
snapshots in a fast and ZFS-y way. I know that ZFS knows all about
which blocks have changed, but can one map that to a file list? I
know this could be solved with some rsync or star or (some other app)
love, but those would involve whole-tree crawls (kind of slow on a
multi-TB filesystem).
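
(For reference, the brute-force version I'm trying to avoid is a tree walk
over the two snapshot directories, roughly like this with rsync in dry-run
mode; the dataset and snapshot names here are made up:)

# walk both snapshot trees and itemize what differs -- slow on a big filesystem
rsync -rni --delete \
    /tank/myfs/.zfs/snapshot/snap1/ \
    /tank/myfs/.zfs/snapshot/snap2/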

Possible?

Asa



[zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-10 Thread asa
Hello all. I am working on an NFS failover scenario between two
servers. I am getting stale file handle errors on my (Linux)
client, which point to a mismatch in the fsids of my two
filesystems when the failover occurs.
I understand that the fsid_guid attribute, which is then used as the
fsid of an NFS share, is created at zfs create time, but I would like
to see and modify that value on any particular ZFS filesystem after
creation.

More details were discussed at
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg03662.html,
but that thread was about the same filesystem sitting on a SAN failing
over between two nodes.

On a Linux NFS server one can specify "fsid=num" in the export
options, where num can be an arbitrary number; that would seem to fix
this issue for me, but it appears to be unsupported on Solaris.
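
(For reference, the Linux-side export entry I have in mind looks roughly
like this; the path and the number are made up:)

# /etc/exports on a Linux NFS server: pin the filesystem id for this export
/tank/home  *(rw,sync,fsid=1234)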

Any thoughts on workarounds to this issue?

Thank you kind sirs and ladies.

Asa
-hack

On Nov 10, 2007, at 10:18 AM, Jonathan Edwards wrote:

> Hey Bill:
>
> what's an object here? or do we have a mapping between "objects" and
> block pointers?
>
> for example a zdb -bb might show:
> th37 # zdb -bb rz-7
>
> Traversing all blocks to verify nothing leaked ...
>
>  No leaks (block sum matches space maps exactly)
>
>  bp count:              47
>  bp logical:        518656    avg:  11035
>  bp physical:        64512    avg:   1372    compression:   8.04
>  bp allocated:      249856    avg:   5316    compression:   2.08
>  SPA allocated:     249856    used:  0.00%
>
> but do we maintain any sort of mapping between the object
> instantiation and how many block pointers an "object" or file might
> consume on disk?
>
> ---
> .je
>
> On Nov 9, 2007, at 15:18, Bill Moore wrote:
>
>> You can just do something like this:
>>
>> # zfs list tank/home/billm
>> NAME               USED  AVAIL  REFER  MOUNTPOINT
>> tank/home/billm   83.9G  5.56T  74.1G  /export/home/billm
>> # zdb tank/home/billm
>> Dataset tank/home/billm [ZPL], ID 83, cr_txg 541, 74.1G, 111066 objects
>>
>> Let me know if that causes any trouble.
>>
>>
>> --Bill
>>
>> On Fri, Nov 09, 2007 at 12:14:07PM -0700, Jason J. W. Williams wrote:
>>> Hi Guys,
>>>
>>> Someone asked me how to count the number of inodes/objects in a ZFS
>>> filesystem and I wasn't exactly sure. "zdb -dv <filesystem>" seems
>>> like a likely candidate but I wanted to find out for sure. As to why
>>> you'd want to know this, I don't know their reasoning, but I assume it
>>> has to do with the maximum number of files a ZFS filesystem can
>>> support (2^48, no?). Thank you in advance for your help.
>>>
>>> Best Regards,
>>> Jason



Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-10 Thread asa

On Nov 10, 2007, at 3:49 PM, Mattias Pantzare wrote:

> 2007/11/10, asa <[EMAIL PROTECTED]>:
>> Hello all. I am working on an NFS failover scenario between two
>> servers. I am getting stale file handle errors on my (Linux)
>> client, which point to a mismatch in the fsids of my two
>> filesystems when the failover occurs.
>> I understand that the fsid_guid attribute, which is then used as the
>> fsid of an NFS share, is created at zfs create time, but I would like
>> to see and modify that value on any particular ZFS filesystem after
>> creation.
>>
>> More details were discussed at
>> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg03662.html,
>> but that thread was about the same filesystem sitting on a SAN
>> failing over between two nodes.
>>
>> On a Linux NFS server one can specify "fsid=num" in the export
>> options, where num can be an arbitrary number; that would seem to fix
>> this issue for me, but it appears to be unsupported on Solaris.
>
> As the fsid is created when the file system is created it will be the
> same when you mount it on a different NFS server. Why change it?


> Or are you trying to match two different file systems? Then you also
> have to match all inode-numbers on your files. That is not possible at
> all.
I am trying to match two different file systems. I have the two file
systems being replicated via zfs send|recv for a near-realtime mirror,
so they are the same filesystem in my head. There may well be ZFS
goodness going on under the hood which makes the fsid different even
though they seem like they should be the same, since they originated
from the same filesystem via zfs send/recv. Perhaps what happens when
zfs recv receives a stream is that it creates a totally new filesystem
at the new location.

I found an ID parameter on the datasets with:
 > zdb -d tank/test -lv
Dataset tank/test [ZPL], ID 37406, cr_txg 2410348, 593M, 21 objects

It is different on each machine. Is this the GUID, or something
else? Is there some hack way to set it?
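
(For what it's worth, newer ZFS bits expose a read-only per-dataset guid
property; assuming a build that has it, it can be read with something like
the following, which prints just the value with no headers:)

zfs get -H -o value guid tank/test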

I don't know enough about inodes and ZFS to know whether what I am
asking is silly, and once I get past this FSID issue I will hit the
next stumbling block of inode and file ID differences, which will trip
up the NFS failover.

I would like all my NFS clients to hang during the failover, then
pick up trucking on the new filesystem, perhaps visibly failing their
writes back to the apps that are doing the writing. Naive?

Asa



Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-20 Thread asa
Well then, this is probably the wrong list to be hounding...

I am looking for something like
http://blog.wpkg.org/2007/10/26/stale-nfs-file-handle/
where, when fileserver A dies, fileserver B can come up, grab the same
IP address via some mechanism (in this case I am using Sun Cluster),
and keep on trucking without the lovely stale file handle errors I am
encountering.

My clients are Linux; the servers are Solaris 10u4.

It seems that it is impossible to change the fsid on Solaris. Can you
point me towards the appropriate NFS client behavior lingo if you have
a minute? (Just the terminology would be great; there are a ton of
confusing options in the land of NFS: client recovery, failover,
replicas, etc.)
I am unable to use block-based replication (AVS) underneath the ZFS
layer because I would like to run different zpool schemes on each
server (a fast primary server, and a slower, larger failover server to
be used only during downtime on the main server).

The worst-case scenario here seems to be that I would have to forcibly
unmount and remount all my client mounts.
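
(On the Linux clients that would amount to something like the following
per mount; the mount point and server name are made up:)

# force the stale mount off, then remount from the surviving server
umount -f -l /mnt/tank
mount -t nfs failover-server:/tank /mnt/tank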

I'll start bugging the nfs-discuss people.

Thank you.

Asa

On Nov 12, 2007, at 1:21 PM, Darren J Moffat wrote:

> asa wrote:
>> I would like all my NFS clients to hang during the failover,
>> then pick up trucking on the new filesystem, perhaps visibly
>> failing their writes back to the apps that are doing the
>> writing. Naive?
>
> The OpenSolaris NFS client does this already - has done since IIRC  
> around Solaris 2.6.  The knowledge is in the NFS client code.
>
> For NFSv4 this functionality is part of the standard.
>
> -- 
> Darren J Moffat



Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-20 Thread asa
I am "rolling my own" replication using zfs send|recv through the
cluster agent framework and a custom set of HA shared local storage
scripts (similar to http://www.posix.brte.com.br/blog/?p=75 but
without AVS). I am not using ZFS off of shared storage in the
supported way, so this is a bit of a lonely area. =)
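
(The replication step itself boils down to something like this, run
periodically from the scripts; the dataset, snapshot, and host names are
made up:)

# take a new snapshot and ship the delta since the previous one to the standby
zfs snapshot tank/data@rep-1200
zfs send -i tank/data@rep-1100 tank/data@rep-1200 | \
    ssh standby zfs recv -F backup/data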

As these are two different ZFS filesystems on different zpools with
differing underlying vdev topologies, it appears they are not sharing
the same fsid and so are presumably presenting different file handles
from each other.

I have the cluster parts out of the way (mostly =)); now I need to
solve the NFS side of things at the point of failover.

I have taken ZFS out of the equation: I get the same stale file
handle errors if I try to share an arbitrary UFS directory to the
client through the cluster interface.

Yeah I am a hack.

Asa

On Nov 20, 2007, at 7:27 PM, Richard Elling wrote:

> asa wrote:
>> Well then, this is probably the wrong list to be hounding...
>>
>> I am looking for something like
>> http://blog.wpkg.org/2007/10/26/stale-nfs-file-handle/
>> where, when fileserver A dies, fileserver B can come up, grab the
>> same IP address via some mechanism (in this case I am using Sun
>> Cluster), and keep on trucking without the lovely stale file handle
>> errors I am encountering.
>>
>
> If you are getting stale file handles, then the Solaris cluster is  
> misconfigured.
> Please double check the NFS installation guide for Solaris Cluster and
> verify that the paths are correct.
> -- richard
>



[zfs-discuss] zfs error listings

2007-12-17 Thread asa
Hello all, I'm looking for the master list of all the error codes/
messages I could get back from doing bad things in ZFS.

I am wrapping the zfs command in Python and want to be able to
correctly pick up on errors returned from certain operations.
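
(What the wrapper has to key off is the exit status and the stderr text of
the zfs command, roughly like this at the shell level; the dataset name is
made up:)

# zfs prints its error message on stderr and exits nonzero on failure
msg=$(zfs create tank/no/such/parent 2>&1 >/dev/null)
status=$?
echo "exit=$status message=$msg"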

I did a source code search on opensolaris.org for the text of some of  
the errors I know about, with no luck.  Are these scattered about or  
is there some errors.c file I don't know about?

Thanks in advance.

Asa


[zfs-discuss] zfs filesystem metadata checksum

2008-04-08 Thread asa
Hello all. I am looking to verify my ZFS backups in the most minimal
way, i.e. without having to md5 the whole volume.

Is there a way to get a checksum for a snapshot and compare it to
another ZFS filesystem containing all the same blocks, to verify they
contain the same information, even after I destroy the snapshot on the
source?

kind of like:

zfs create tank/myfs
dd if=/dev/urandom bs=128k count=1000 of=/tank/myfs/TESTFILE
zfs snapshot tank/myfs@snap1
zfs send tank/myfs@snap1 | zfs recv tank/myfs_BACKUP

zfs destroy tank/myfs@snap1

zfs snapshot tank/myfs@snap2


someCheckSumVodooFunc(tank/myfs)
someCheckSumVodooFunc(tank/myfs_BACKUP)

Is there some zdb hackery which results in a metadata checksum usable
in this scenario?
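
(The fallback I'm trying to avoid is hashing every file's contents on both
sides and comparing the summaries, roughly like this, assuming GNU md5sum
is available -- on Solaris, digest -a md5 could stand in:)

# slow fallback: checksum file contents under each tree and compare the results
(cd /tank/myfs && find . -type f -exec md5sum {} + | sort -k 2 | md5sum)
(cd /tank/myfs_BACKUP && find . -type f -exec md5sum {} + | sort -k 2 | md5sum)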

Thank you all!

Asa
zfs worshiper
Berkeley, CA


Re: [zfs-discuss] zfs filesystem metadata checksum

2008-04-20 Thread asa
Thank you, this is exactly what I was looking for.
This is for remote replication, so it looks like I am out of luck.
Bummer.

Asa

On Apr 14, 2008, at 4:09 PM, Jeff Bonwick wrote:

> Not at present, but it's a good RFE.  Unfortunately it won't be
> quite as simple as just adding an ioctl to report the dnode checksum.
> To see why, consider a file with one level of indirection: that is,
> it consists of a dnode, a single indirect block, and several data  
> blocks.
> The indirect block contains the checksums of all the data blocks --  
> handy.
> The dnode contains the checksum of the indirect block -- but that's  
> not
> so handy, because the indirect block contains more than just  
> checksums;
> it also contains pointers to blocks, which are specific to the  
> physical
> layout of the data on your machine.  If you did remote replication  
> using
> zfs send | ssh elsewhere zfs recv, the dnode checksum on 'elsewhere'
> would not be the same.
>
> Jeff
>
> On Tue, Apr 08, 2008 at 01:45:16PM -0700, asa wrote:
>> Hello all. I am looking to verify my ZFS backups in the most minimal
>> way, i.e. without having to md5 the whole volume.
>>
>> Is there a way to get a checksum for a snapshot and compare it to
>> another ZFS filesystem containing all the same blocks, to verify they
>> contain the same information, even after I destroy the snapshot on
>> the source?
>>
>> kind of like:
>>
>> zfs create tank/myfs
>> dd if=/dev/urandom bs=128k count=1000 of=/tank/myfs/TESTFILE
>> zfs snapshot tank/myfs@snap1
>> zfs send tank/myfs@snap1 | zfs recv tank/myfs_BACKUP
>>
>> zfs destroy tank/myfs@snap1
>>
>> zfs snapshot tank/myfs@snap2
>>
>>
>> someCheckSumVodooFunc(tank/myfs)
>> someCheckSumVodooFunc(tank/myfs_BACKUP)
>>
>> is there some zdb hackery which results in a metadata checksum usable
>> in this scenario?
>>
>> Thank you all!
>>
>> Asa
>> zfs worshiper
>> Berkeley, CA



Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-11-22 Thread Asa Durkee
My Supermicro H8DA3-2's onboard 1068E SAS chip isn't recognized in OpenSolaris,
and I'd like to keep this particular system "all Supermicro," so the L8i it is.
I know there have been issues with Supermicro-branded 1068E controllers, so I
just wanted to verify that the stock mpt driver supports it.
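
(On a system where the controller is recognized, one rough sanity check is
to see whether the mpt driver is bound to the 1068E; for example, on
OpenSolaris:)

# list devices with their bound drivers and look for the LSI 1068E / mpt
prtconf -D | grep -i mpt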


Re: [zfs-discuss] cross fs/same pool mv

2007-07-02 Thread asa hammond
I have had some success using zfs send recv into a child of a  
compressed filesystem to do this although you have the disadvantage  
of losing your settings.

basically:
zfs create tank/foo
mv a bunch of files into foo
zfs create tank/bar
zfs set compression=on tank/bar
zfs snapshot tank/foo@snap
zfs send tank/foo@snap | zfs recv tank/bar/foosmall
zfs destroy -r tank/foo
zfs set compression=on tank/bar/foosmall
zfs rename tank/bar/foosmall tank/foo


Kinda clunky, and you have to have twice as much space available, and
there are probably other issues with it as I am not a pro ZFS user,
but it worked for me. =)

Asa


On Jul 2, 2007, at 5:32 AM, Carson Gaspar wrote:

> roland wrote:
>
>> is there a reliable method of re-compressing a whole zfs volume
>> after turning on compression or changing compression scheme?
>
> It would be slow, and the file system would need to be idle to avoid
> race conditions, but you _could_ do the following (POSIX shell
> syntax). I haven't tested this, so it could have typos or other
> problems:
>
> # re-write every file in place so it picks up the current compression setting
> find . -type f -print | while IFS= read -r n; do
>     TF="$(mktemp "${n%/*}/.tmpXXXXXX")"
>     if cp -p "$n" "$TF"; then
>         if ! mv "$TF" "$n"; then
>             echo "failed to re-write $n in mv"
>             rm "$TF"
>         fi
>     else
>         echo "failed to re-write $n in cp"
>         rm "$TF"
>     fi
> done
>
> -- 
> Carson




