Re: [zfs-discuss] Cluster File System Use Cases

2007-07-13 Thread Richard L. Hamilton
> Bringing this back towards ZFS-land, I think that there are some clever things we can do with snapshots and clones. But the age-old problem of arbitration rears its ugly head. I think I could write an option to expose ZFS snapshots to read-only clients. But in doing so, I don't ...
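
Since the post above talks about exposing ZFS snapshots to read-only clients, here is a rough sketch of how that is commonly done today via the .zfs/snapshot directory and a read-only NFS share; the pool, filesystem, and snapshot names are made up for illustration:

    # zfs snapshot tank/home@tuesday
    # zfs set sharenfs=ro tank/home             # export the filesystem read-only over NFS
    # ls /tank/home/.zfs/snapshot/tuesday       # clients browse the snapshot contents here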

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-13 Thread Darren J Moffat
Bart Smaalders wrote: > For those of us who've been swapping to zvols for some time, can you describe the failure modes? I can swap fine. I can't dump. LU gets confused about them and I have to re-add it. It is slower than swapping directly to a slice. I've never needed to snapshot my swap ...
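
For readers following the thread, swapping to a zvol looks roughly like the sketch below (pool and volume names are assumed, not taken from the post); per the reply above, it is the dump side, not swap itself, that was reported as broken:

    # zfs create -V 2g tank/swapvol
    # swap -a /dev/zvol/dsk/tank/swapvol     # add the zvol as a swap device
    # swap -l                                # verify it is in use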

Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-13 Thread Mario Goebbels
> While the original reason for this was swap, I have a sneaky suspicion that others may wish for this as well, or perhaps something else. Thoughts? (database folks, jump in :-) Lower overhead storage for my QEMU volumes. I figure other filesystems running within a ZVOL may cause a little bit ...
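
As background for the QEMU use case mentioned above, pointing a guest at a zvol instead of a file-backed image looks roughly like this; the volume name, size, and guest media are invented for the example:

    # zfs create -V 10g tank/qemu-disk0
    # qemu -hda /dev/zvol/dsk/tank/qemu-disk0 -cdrom install.iso -boot d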

Re: [zfs-discuss] ZFS and IBM's TSM

2007-07-13 Thread John
> John wrote: >> Our main problem with TSM and ZFS is currently that there seems to be no efficient way to do a disaster restore when the backup resides on tape - due to the large number of filesystems/TSM filespaces. The graphical client (dsmj) does not work at all ...

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-13 Thread Daniel Carosone
>> (for lack of a better term. I'm open to suggestions.) a "pseudo-zvol". It's meant to be a low overhead way to emulate a slice within a pool. So no COW or related zfs features > Are these a zslice? zbart - "Don't have a CoW, man!"

Re: [zfs-discuss] ZFS and IBM's TSM

2007-07-13 Thread Hans-Juergen Schnitzer
John wrote: > Our main problem with TSM and ZFS is currently that there seems to be no efficient way to do a disaster restore when the backup resides on tape - due to the large number of filesystems/TSM filespaces. The graphical client (dsmj) does not work at all and with dsmc one ...
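
One way people script around the many-filespaces problem during a bare-metal restore is to recreate the datasets from a saved list and then drive one restore per filesystem. This is only a sketch: the dataset names are placeholders and the dsmc invocation is indicative only; check the TSM client documentation for the exact restore syntax:

    # restore each ZFS filesystem as its own TSM filespace
    for fs in tank/home tank/home/alice tank/home/bob; do
        zfs create -p "$fs"
        dsmc restore "/$fs/*" -subdir=yes
    done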

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
No pools available to import, yet when I do 'zpool list' it shows my pool with health UNKNOWN.

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
pool: pool state: UNKNOWN scrub: none requested config: NAME STATE READ WRITE CKSUM pool UNKNOWN 0 0 0 c0d0s5 UNKNOWN 0 0 0 c0d0s6 UNKNOWN 0 0 0 c0d0s4 UNKNOWN 0 0 0

[zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs. It's now been reconfigured to use powerpath instead of mpxio. My pr

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: > NAME STATE READ WRITE CKSUM > pool UNKNOWN 0 0 0 > c0d0s5 UNKNOWN 0 0 0 > c0d0s6 UNKNOWN 0 0 0 > c0d0s4 UNKNOWN 0 0 0 >

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Wade . Stuart
Can you post the output of "powermt display dev=all", "zpool status", and "format"? [EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM: > How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, ...

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
AVAILABLE DISK SELECTIONS: 0. c0d0 /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 Specify disk (enter its number): 0 selecting c0d0 Controller working list found [disk formatted, defect list found] Warning: Current Disk has mounted partitions. /dev

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-13 Thread Darren J Moffat
Daniel Carosone wrote: >>> (for lack of a better term. I'm open to suggestions.) a "pseudo-zvol". It's meant to be a low overhead way to emulate a slice within a pool. So no COW or related zfs features >> Are these a zslice? > zbart - "Don't have a CoW, man!" but we already have /...

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: > zpool import pool (my pool is named 'pool') returns cannot import 'pool': no such pool available. What does 'zpool import' by itself show you? It should give you a list of available pools to import. Regards, markm

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
zpool import pool (my pool is named 'pool') returns "cannot import 'pool': no such pool available". zpool list shows that a pool named 'pool' exists with "Unknown" health. All I did was "zpool upgrade" because that's what it asked me to do... nothing more. Anyone have any idea?
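
For reference, the upgrade commands being discussed look roughly like this (the pool name is taken from the thread); 'zpool upgrade -v' lists the supported on-disk versions and 'zpool upgrade <pool>' moves a pool to the latest one:

    # zpool upgrade -v
    # zpool upgrade pool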

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Mark J Musante
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote: > zpool list ... it shows my pool with health UNKNOWN. That means it's already imported. What's the output of 'zpool status'? Regards, markm

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > Can you post a "powermt display dev=all", a zpool status and format command? Sure. There are no pools to give status on because I can't import them. For the others: # powermt display dev=all Pseudo name=emcpower0a CLARiiON ID=APM0004...

Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-13 Thread Lori Alt
Bill Sommerfeld wrote: > On Thu, 2007-07-12 at 16:27 -0700, Richard Elling wrote: >> I think we should up-level this and extend to the community for comments. The proposal, as I see it, is to create a simple, > yes >> contiguous (?) > as I understand the proposal, n...

Re: [zfs-discuss] Pseudo file system access to snapshots?

2007-07-13 Thread Mike Gerdts
On 7/11/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote: >> This "restore problem" is my key worry in deploying ZFS in the area where I see it as most beneficial. Another solution that would deal with the same problem is block-level deduplication. So far my queries in this area have been ...

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Sean McGrath - Sun Microsystems Ireland
Kwang-Hyun Baek stated: < AVAILABLE DISK SELECTIONS: <0. c0d0 < /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 < Specify disk (enter its number): 0 < selecting c0d0 < Controller working list found < [disk formatted, defect list found] < Warning:

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-13 Thread Lori Alt
Torrey McMahon wrote: > I really don't want to bring this up but ... Why do we still tell people to use swap volumes? Jeff Bonwick has suggested a fix to 6528296 (system hang while zvol swap space shorted). If we can get that fixed, then it may become safe to use true zvols for swap. I'll up...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
You wouldn't happen to be running this on a SPARC would you? I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump when creating a zpool. I filed a bug report, though it doesn't appear to be in the database (not sure if that means it was rejected or I didn't submit it correctly).

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
[EMAIL PROTECTED]:/# prtvtoc /dev/rdsk/c0d0s0 * /dev/rdsk/c0d0s0 partition map * * Dimensions: * 512 bytes/sector * 63 sectors/track * 255 tracks/cylinder * 16065 sectors/cylinder * 6336 cylinders * 6334 accessible cylinders * * Flags: * 1: unmountable * 10: read-only * * U...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote: > You wouldn't happen to be running this on a SPARC would you? That I would. > I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump when creating a zpool. I filed a bug report, though it doesn't appear to be in the database ...

Re: [zfs-discuss] zfs "no dataset available"

2007-07-13 Thread Kwang-Hyun Baek
Okay. Now it says the pool cannot be imported. :*( Is there anything I can do to fix it? # zpool import pool: pool id: 3508905099046791975 state: UNKNOWN action: The pool cannot be imported due to damaged devices or data. config: pool UNKNOWN c0d0s5 UNKNOWN

Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-13 Thread Darren J Moffat
Mario Goebbels wrote: >> While the original reason for this was swap, I have a sneaky suspicion that others may wish for this as well, or perhaps something else. Thoughts? (database folks, jump in :-) > Lower overhead storage for my QEMU volumes. I figure other filesystems running with...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
There was a Sun Forums post that I referenced in that other thread that mentioned something about mpxio working but powerpath not working. Of course I don't know how valid those statements are/were, and I don't recall much detail given. -- Sean

Re: [zfs-discuss] Cluster File System Use Cases

2007-07-13 Thread Spencer Shepler
On Jul 13, 2007, at 2:20 AM, Richard L. Hamilton wrote: >> Bringing this back towards ZFS-land, I think that there are some clever things we can do with snapshots and clones. But the age-old problem of arbitration rears its ugly head. I think I could write an option to expose ...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Brian Wilson
Hmm. Odd. I've got PowerPath working fine with ZFS with both Symmetrix and Clariion back ends. PowerPath version is 4.5.0, running on leadville qlogic drivers. Sparc hardware. (if it matters) I ran one of our test databases on ZFS on the DMX via PowerPath for a couple months until we switc...

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-13 Thread Richard Elling
Lori Alt wrote: > Torrey McMahon wrote: >> I really don't want to bring this up but ... Why do we still tell people to use swap volumes? > Jeff Bonwick has suggested a fix to 6528296 (system hang while zvol swap space shorted). If we can get that fixed, then it may become safe to use...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
I wonder what kind of card Peter is using and whether there is a potential linkage there. We've got the Sun-branded Emulex cards in our sparcs. I also wonder whether, if Peter were able to allocate an additional LUN to his system, he'd be able to create a pool on that new LUN. I'm not sure why ex...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread eric kustarz
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote: > Hmm. Odd. I've got PowerPath working fine with ZFS with both Symmetrix and Clariion back ends. PowerPath version is 4.5.0, running on leadville qlogic drivers. Sparc hardware. (if it matters) I ran one of our test databases on ZFS...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Brian Wilson
On Jul 13, 2007, at 1:15 PM, Alderman, Sean wrote: I wonder what kind of card Peter's using and if there is a potential linkage there. We've got the Sun branded Emulex cards in our sparcs. I also wonder if Peter were able to allocate an additional LUN to his system whether or not he'd be...

Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-13 Thread Eric
Thanks for suggesting a broader discussion about the needs and possible uses for specialized storage objects within a pool. In doing so, part of that discussion should include the effect upon overall complexity and manageability as well as conceptual coherence. In his blog post on ZFS layering

[zfs-discuss] do we support zonepath on UFS formated ZFS volume

2007-07-13 Thread Hans Qiao
Hi, ZFS experts, From the ZFS release notes: "Solaris 10 6/06 and Solaris 10 11/06: Do Not Place the Root File System of a Non-Global Zone on ZFS. The zonepath of a non-global zone should not reside on ZFS for this release. This action might result in patching problems and possibly prevent the system ...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Brian Wilson <[EMAIL PROTECTED]> wrote: > Hm. How many devices/LUNS can the server see? I don't know how import finds the pools on the disk, but it sounds like it's not happy somehow. Is there any possibility it's seeing a Clariion mirror copy of the disks in the pool as we...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote: > I wonder what kind of card Peter's using and if there is a potential linkage there. We've got the Sun branded Emulex cards in our sparcs. I also wonder if Peter were able to allocate an additional LUN to his system whether or not he'd...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Manoj Joseph
Peter Tribble wrote: > I've not got that far. During an import, ZFS just pokes around - there doesn't seem to be an explicit way to tell it which particular devices or SAN paths to use. You can't tell it which devices to use in a straightforward manner. But you can tell it which directories ...
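
The directory-based approach referred to above is 'zpool import -d'. A commonly suggested (but untested here) workaround is to populate a scratch directory with symlinks to just the device nodes you want ZFS to consider, for example the PowerPath pseudo device; the pool and device names below are illustrative:

    # mkdir /tmp/zdevs
    # ln -s /dev/dsk/emcpower0a /tmp/zdevs/
    # zpool import -d /tmp/zdevs mypool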

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM: > Peter Tribble wrote: >> I've not got that far. During an import, ZFS just pokes around - there doesn't seem to be an explicit way to tell it which particular devices or SAN paths to use. > You can't tell it which devices to use ...

Re: [zfs-discuss] do we support zonepath on UFS formated ZFS volume

2007-07-13 Thread Darren Dunham
> From the ZFS release notes: "Solaris 10 6/06 and Solaris 10 11/06: Do Not Place the Root File System of a Non-Global Zone on ZFS. The zonepath of a non-global zone should not reside on ZFS for this release. This action might result in patching problems and possibly prevent the system ...
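
For context, a "UFS formatted ZFS volume" as a zonepath means roughly the following; the pool, volume, and zone names are placeholders, and whether this sidesteps the patching caveat is exactly the open question:

    # zfs create -V 8g tank/zonevol
    # newfs /dev/zvol/rdsk/tank/zonevol
    # mkdir /zones/myzone
    # mount /dev/zvol/dsk/tank/zonevol /zones/myzone
    # chmod 700 /zones/myzone      # zoneadm requires the zonepath to be mode 700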

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
Doesn't that then create dependence on the cxtxdxsx device name to be available? /dev/dsk/c2t500601601020813Ed0s0 = path1 /dev/dsk/c2t500601681020813Ed0s0 = path2 /dev/dsk/emcpower0a = pseudo device pointing to both paths. So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0 and that pa

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
Peter Tribble wrote: > On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote: >> I wonder what kind of card Peter's using and if there is a potential linkage there. We've got the Sun branded Emulex cards in our sparcs. I also wonder if Peter were able to allocate an additional LUN to hi...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
[EMAIL PROTECTED] wrote: > [EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM: >> Peter Tribble wrote: >>> I've not got that far. During an import, ZFS just pokes around - there doesn't seem to be an explicit way to tell it which particular devices or SAN paths to use ...

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Darren Dunham
> Doesn't that then create dependence on the cxtxdxsx device name to be available? /dev/dsk/c2t500601601020813Ed0s0 = path1, /dev/dsk/c2t500601681020813Ed0s0 = path2, /dev/dsk/emcpower0a = pseudo device pointing to both paths. So if you've got a zpool on /dev/dsk/c2t500601601020813E...
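
The usual intent with a multipathing pseudo-driver is to build the pool on the pseudo device rather than on one underlying path, so the pool does not depend on a single cXtXdX name surviving. As a sketch only, using the device name quoted above; whether this behaves on a given PowerPath/Solaris combination is what this thread is wrestling with:

    # zpool create tank emcpower0a
    # zpool status tank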

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Carisdad
Peter Tribble wrote: > # powermt display dev=all > Pseudo name=emcpower0a > CLARiiON ID=APM00043600837 [] > Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46] > state=alive; policy=CLAROpt; priority=0; queued-IOs=0 > Owner: default=SP B, current=SP B > =

[zfs-discuss] ZFS and SE99x0 Array Best Practices

2007-07-13 Thread Thomas McPhillips
Does anyone have a best practice for utilizing ZFS with Hitachi SE99x0 arrays? I'm curious about what type of parity-groups work best with ZFS for various application uses. Examples: OLTP, warehousing, NFS, ... Thanks!

[zfs-discuss] zfs list hangs if zfs send is killed (leaving zfs receive process)

2007-07-13 Thread David Smith
I was in the process of doing a large zfs send | zfs receive when I decided that I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs. Here is the tail end of the truss output from a "truss zfs list"...
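
For context, the kind of pipeline being described looks like this (dataset and snapshot names are invented); if only the sending side is killed, the receiving zfs process can be left behind and may need to be found and killed separately:

    # zfs send tank/data@migrate | zfs receive backup/data
    # pgrep -fl 'zfs receive'     # locate a leftover receive process after the sender is killed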

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-07-13 Thread David Smith
I don't believe LUN expansion is quite yet possible under Solaris 10 (11/06). I believe this might make it into the next update but I'm not sure on that. Someone from Sun would need to comment on when this will make it into the production release of Solaris. I know this because I was working

Re: [zfs-discuss] zfs list hangs if zfs send is killed (leaving zfs receive process)

2007-07-13 Thread David Smith
Well, the zfs receive process finally died, and now my zfs list works just fine. If there is a better way to capture what is going on, please let me know and I can duplicate the hang. David

[zfs-discuss] zfs under "/var/log"?

2007-07-13 Thread Jesus Cea
While waiting for ZFS root/boot I need to migrate some of my files to ZFS. I've already migrated directories like "/usr/local" or email pools, but I haven't touched system directories like "/var/sadm" or "/var/log". Can I safely move "/var/log" to ZFS?
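
A minimal sketch of moving /var/log onto its own dataset with a legacy mountpoint, so it gets mounted from /etc/vfstab early in boot; the pool and dataset names are assumed, and whether this is actually safe for /var/log is the question being asked:

    # zfs create -o mountpoint=legacy tank/varlog
    # mount -F zfs tank/varlog /mnt
    # cd /var/log && tar cf - . | (cd /mnt && tar xpf -)
    # umount /mnt
    # echo "tank/varlog - /var/log zfs - yes -" >> /etc/vfstab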