Re: [zfs-discuss] odd behavior from zpool replace.

2007-10-14 Thread MC
"One or more devices could not be opened"?  I wonder if this has anything to do 
with our problems here...: 
http://www.opensolaris.org/jive/thread.jspa?messageID=160589
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-14 Thread me
> I haven't heard from any other core contributors, but this sounds like a
> worthy project to me.  Someone from the ZFS team should follow through
> to create the project on os.org[1]
>
> It sounds like Domingos and Roland might constitute the initial
> "project team".

In my opinion, the project should also include an effort to get LZO into
ZFS, as an additional variant that is fast yet still efficient.

For that matter, if it were up to me, there would also be an effort to
modularize the ZFS compression algorithms into loadable kernel modules,
making it easy to add new algorithms. I suppose the same should apply to
other components where possible, e.g. the space map allocator discussed on
this list. But I'm a mere C# coder, so I can't really help with that.
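
Purely as a hypothetical illustration of the admin side of such a design
(the module name, its path, and the "lzo" property value are all made up,
nothing like this exists today):

---8<---
# load an imaginary per-algorithm compression module...
modload /kernel/misc/zcomp_lzo
# ...then enable that algorithm on a dataset
zfs set compression=lzo export/ws
---8<---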

-mg

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS recovery tool, interested?

2007-10-14 Thread Samuel Borgman
Hi,

Having my 700GB single-disk ZFS pool crash on me created an acute need for a
recovery tool.

So I spent the weekend creating a tool that lets you list directories and copy
files from any pool on a single-disk ZFS filesystem, even when, for example,
the Solaris kernel keeps panicking.

Is there any interest in it being released to the public?

Regards,

/Samuel
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS recovery tool, interested?

2007-10-14 Thread Tim Spriggs

Yeah, that would have saved me several weeks ago.

Samuel Borgman wrote:
> Hi,
>
> Having my 700GB single-disk ZFS pool crash on me created an acute need for
> a recovery tool.
>
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a single-disk ZFS filesystem, even when, for
> example, the Solaris kernel keeps panicking.
>
> Is there any interest in it being released to the public?
>
> Regards,
>
> /Samuel
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS recovery tool, interested?

2007-10-14 Thread Sean Sprague
Samuel,

> Having my 700GB single-disk ZFS pool crash on me created an acute need for
> a recovery tool.
>
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a single-disk ZFS filesystem, even when, for
> example, the Solaris kernel keeps panicking.
>
> Is there any interest in it being released to the public?

Yes indeed. Please put the source up here - I am sure that you will receive 
interesting feedback.

Thanks and regards... Sean.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] strange zfs receive behavior

2007-10-14 Thread Edward Pilatowicz
hey all,
so i'm trying to mirror the contents of one zpool to another
using zfs send / receive while maintaining all snapshots and clones.

essentially i'm taking a recursive snapshot.  then i'm mirroring
the oldest snapshots first and working my way forward.  to deal
with clones i have a hack that uses zfs promote.  i've scripted it
and things seem to work...  except of course for one thing.  ;)
there's one snapshot on my system that i can't seem to transfer.
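
for reference, the core of my script is roughly the following.  this is a
simplified, untested sketch (dataset names are just examples); the real
script also handles clones via zfs promote:

---8<---
#!/bin/ksh
# replay every snapshot of a dataset onto a second pool, oldest first
src=export/ws/xen-1     # example source dataset
dst=export2             # destination pool
prev=""
for snap in $(zfs list -H -t snapshot -o name -s creation | grep "^$src@"); do
        if [ -z "$prev" ]; then
                # first snapshot: send a full stream
                zfs send "$snap" | zfs receive -v -d "$dst"
        else
                # later snapshots: send incrementally from the previous one
                zfs send -i "$prev" "$snap" | zfs receive -v -d "$dst"
        fi
        prev="$snap"
done
---8<---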

here's the problem:
---8<---
[EMAIL PROTECTED] zfs send export/ws/[EMAIL PROTECTED] | zfs receive -v -d export2
receiving full stream of export/ws/[EMAIL PROTECTED] into export2/ws/[EMAIL PROTECTED]
received 134MB stream in 28 seconds (4.77MB/sec)
[EMAIL PROTECTED]
[EMAIL PROTECTED] zfs send -i 070221 export/ws/[EMAIL PROTECTED] | zfs receive -v -d export2
receiving incremental stream of export/ws/[EMAIL PROTECTED] into export2/ws/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot
---8<---

as far as i know, there's nothing special about these two snapshots.
---8<---
[EMAIL PROTECTED] zfs list | grep export/ws/xen-1
export/ws/xen-1   105M  3.09G   104M  /export/ws/xen-1
export/ws/[EMAIL PROTECTED]  570K  -   103M  -
export/ws/[EMAIL PROTECTED] 0  -   104M  -
[EMAIL PROTECTED] zfs get -Hp -o value creation export/ws/[EMAIL PROTECTED]
1172088367
[EMAIL PROTECTED] zfs get -Hp -o value creation export/ws/[EMAIL PROTECTED]
1192301172
---8<---

any idea what might be wrong here?  it seems that the problem is
on the receive side.  i've even tried doing:
zfs rollback export2/ws/[EMAIL PROTECTED]
before doing the second send, but that didn't make any difference.

i'm currently running snv_74.  both pools are currently at zfs v8,
but the source pool has seen lots of zfs upgrades and live upgrades.

ed
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] strange zfs receive behavior

2007-10-14 Thread Matthew Ahrens
Edward Pilatowicz wrote:
> hey all,
> so i'm trying to mirror the contents of one zpool to another
> using zfs send / receive while maintaining all snapshots and clones.

You will enjoy the upcoming "zfs send -R" feature, which will make your 
script unnecessary.
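
Once it's available, the whole job should collapse to something like the
following (presumed syntax, illustrative dataset names):

---8<---
# take a recursive snapshot, then send the dataset tree, with all of
# its snapshots and clones, as a single replication stream
zfs snapshot -r export/ws@mirror
zfs send -R export/ws@mirror | zfs receive -d export2
---8<---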

> [EMAIL PROTECTED] zfs send -i 070221 export/ws/[EMAIL PROTECTED] | zfs receive -v -d export2
> receiving incremental stream of export/ws/[EMAIL PROTECTED] into export2/ws/[EMAIL PROTECTED]
> cannot receive: destination has been modified since most recent snapshot

You may be hitting 6343779 "ZPL's delete queue causes 'zfs restore' to fail".
To work around it, use "zfs recv -F".
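
For example (untested, reusing the failing command from your mail):

---8<---
# -F rolls the destination back to its most recent snapshot before
# the incremental stream is applied
zfs send -i 070221 export/ws/[EMAIL PROTECTED] | zfs receive -F -v -d export2
---8<---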

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss