Re: [zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-09 Thread Nathan Kroenert
Hm - This caused me to ask the question: Who keeps the capabilities in sync? Is there a programmatic way we can have bash (or other shells) interrogate zpool and zfs to find out what its capabilities are? I'm thinking something like having bash spawn a zfs command to see what options are avai
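
To illustrate the idea (this is not anything zfs supports officially), a completion script could scrape the usage message that zfs prints when run with no arguments; the filtering below is a rough sketch and assumes the subcommand lines in that usage text start with a tab, so adjust it to whatever your build actually prints:

    # zfs with no arguments prints its usage (subcommand list) on stderr;
    # grab the first word of each indented subcommand line.
    zfs 2>&1 | awk '/^\t/ { print $1 }' | sort -u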

Re: [zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-09 Thread Boyd Adamson
Alex Peng <[EMAIL PROTECTED]> writes: > Wouldn't it be nice to have autocomplete in the zpool or zfs commands? > > For instance - > > "zfs cr 'Tab key' " will become "zfs create" > "zfs clone 'Tab key' " will show me the available snapshots > "zfs set 'Tab key' " will show me the available properties,

Re: [zfs-discuss] more ZFS recovery

2008-10-09 Thread Ross
Victor, thanks for posting that. It really is interesting to see exactly what happened, and to read about how zfs pools can be recovered. Your work on these forums has done much to re-assure me that ZFS is stable enough for us to be using on a live server, and I look forward to seeing automate

[zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-09 Thread Alex Peng
Wouldn't it be nice to have autocomplete in the zpool or zfs commands? For instance - "zfs cr 'Tab key' " will become "zfs create" "zfs clone 'Tab key' " will show me the available snapshots "zfs set 'Tab key' " will show me the available properties, then "zfs set com 'Tab key'" will become "zfs se
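
A bare-bones sketch of what such a completion could look like in bash (nothing official; the subcommand list is abbreviated and hard-coded, and the snapshot lookup is just one way to do it):

    # Minimal, unofficial bash completion sketch for the zfs command.
    _zfs() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        local prev=${COMP_WORDS[COMP_CWORD-1]}
        case "$prev" in
            zfs)
                # abbreviated subcommand list
                COMPREPLY=( $(compgen -W "create destroy snapshot rollback clone promote rename list set get send receive" -- "$cur") )
                ;;
            clone|rollback)
                # offer existing snapshot names
                COMPREPLY=( $(compgen -W "$(zfs list -H -o name -t snapshot 2>/dev/null)" -- "$cur") )
                ;;
        esac
    }
    complete -F _zfs zfs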

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Daryl Doami
Hi, Maybe this might be an option too? http://blogs.sun.com/storage/entry/mike_shapiro_and_steve_o Original Message Subject: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0Servers? From: Solaris <[EMAIL PROTECTED]> To: zfs-discuss@opensolaris.org Date: T

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Chris Greer
If you are having trouble booting to the mirrored drive, the following is what we had to do to correctly boot off the mirrored drive in a Thumper mirrored with disksuite. The root drive is c5t0d0 and the mirror is c5t4d0. The BIOS will try those 2 drives. Just a note, if it ever switches to c5t
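
For anyone following along, the usual extra steps with an SVM-mirrored root on x86 are to put grub on the second disk yourself and record an alternate boot path; something along these lines (the disk and slice names follow Chris's example, so check them against your own layout):

    # put grub on the second half of the mirror so the BIOS can boot from
    # it if c5t0d0 dies
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

    # record the alternate boot path (the /devices path the mirror's
    # /dev/dsk entry points to)
    ls -l /dev/dsk/c5t4d0s0
    eeprom altbootpath='<the /devices path shown above>'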

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote: >> Nevada isn't production code. For real ZFS testing, you must use a >> production release, currently Solaris 10 (update 5, soon to be update 6). > > I m

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-09 Thread Keith Bierman
On Oct 8, 2008, at 4:27 PM 10/8/, Jim Dunham wrote: > , a single Solaris node can not be both > the primary and secondary node. > > If one wants this type of mirror functionality on a single node, use > host based or controller based mirroring software. If one is running multiple zones, couldn

Re: [zfs-discuss] 200805 Grub problems

2008-10-09 Thread Mike Aldred
I seem to be having the same problem as well. Has anyone found out what the cause is, and how to fix it?

[zfs-discuss] "zfs set sharenfs" takes a long time to return.

2008-10-09 Thread James Neal
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05 pkg upgraded to snv_91 with ~3200 filesystems (and ~27429 datasets, including snapshots). I've been encountering some pretty big slow-downs on this system when running certain zfs commands. The one causing me the most pain at
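
For what it's worth, a quick way to quantify the slowdown is just to count what the sharing code has to walk and time a single property change (the dataset name below is made up):

    # how many filesystems the NFS share handling potentially touches
    zfs list -H -o name -t filesystem | wc -l

    # time one property change; on a lightly loaded box this should be
    # near-instant, so minutes here points at per-share overhead
    ptime zfs set sharenfs=on tank/somefs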

Re: [zfs-discuss] Segmentation fault / core dump with recursive

2008-10-09 Thread BJ Quinn
Yeah -F should probably work fine (I'm trying it as we speak, but it takes a little while), but it makes me a bit nervous. I mean, it should only be necessary if (as the error message suggests) something HAS actually changed, right? So, here's what I tried - first of all, I set the backup FS t
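
For reference, the combination being discussed is roughly the following (pool and dataset names are invented for the example):

    # keep the backup side read-only so nothing "changes" between receives
    zfs set readonly=on backup/mydata

    # then the incremental receive; -F rolls the target back to its most
    # recent snapshot first, discarding any stray local changes
    zfs send -i datapool/mydata@mon datapool/mydata@tue | \
        zfs recv -F backup/mydata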

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Solaris
Perhaps a better solution would be to front a J4500 with a pair of X4100s with Sun Cluster? Hrrm... On Thu, Oct 9, 2008 at 4:30 PM, Glaser, David <[EMAIL PROTECTED]> wrote: > As shipped, our x4500s have 8 raidz pools with 6 disks each in them. If > spaced right, you can lose 6(?) disks wit

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Glaser, David
As shipped, our x4500s have 8 raidz pools with 6 disks each in them. If spaced right, you can lose 6(?) disks without the pool dying. The root disk is mirrored, so if one dies it's not the end of the world. With the exception that grub is thoroughly fraked up in that if the 0 disk dies, yo

Re: [zfs-discuss] Pros/Cons of multiple zpools?

2008-10-09 Thread Joseph Mocker
Thanks for the information... In my case, I do not have a root pool, it's still UFS. The configuration is essentially that I have two arrays. The system was initially built with one array. A zfs pool was created from the whole disks on the array. The pool is more or less used for general storage

Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Tim
On Thu, Oct 9, 2008 at 3:09 PM, Solaris <[EMAIL PROTECTED]> wrote: > I have been leading the charge in my IT department to evaluate the Sun > Fire X45x0 as a commodity storage platform, in order to leverage > capacity and cost against our current NAS solution which is backed by > EMC Fiberchannel

[zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Solaris
I have been leading the charge in my IT department to evaluate the Sun Fire X45x0 as a commodity storage platform, in order to leverage capacity and cost against our current NAS solution which is backed by EMC Fiberchannel SAN. For our corporate environments, it would seem like a single machine wo

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-09 Thread mike
There are plenty of 8-port boards, either a full 8 or 6+2 combinations, etc. Anyway I went with a Supermicro PDSME+ which appears to work well according to the HCL, and bought two of the AOC-SAT2-MV8's and will just use those. It's actually being delivered today... On Thu, Oct 9, 2008 at 9:44 AM, Joe S <[EMA

Re: [zfs-discuss] more ZFS recovery

2008-10-09 Thread Victor Latushkin
Borys Saulyak wrote: >> As a follow up to the whole story, with the fantastic help of >> Victor, the failed pool is now imported and functional thanks to >> the redundancy in the meta data. > It would be really useful if you could publish the steps to recover > the pools. Here it is: Executive s

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Bob Friesenhahn
On Thu, 9 Oct 2008, Miles Nordin wrote: > > catastrophically. If this is really the situation, then ZFS needs to > give the sysadmin a way to isolate and fix the problems > deterministically before filling the pool with data, not just blame > the sysadmin based on nebulous speculatory hindsight gr

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Miles Nordin
> "gs" == Greg Shaw <[EMAIL PROTECTED]> writes: gs> Nevada isn't production code. For real ZFS testing, you must gs> use a production release, currently Solaris 10 (update 5, soon gs> to be update 6). based on list feedback, my impression is that the results of a ``test'' confine

Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-09 Thread Stephen Quintero
Thanks for all of your input. I installed 2008.11 build 98 as an HVM guest under Xen: If you make a block-level copy of the boot pool and attach it as a disk on the original VM, "zpool import" does not recognize it. If you attach a non-root pool as a disk, "zpool import" does recognize it. S
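
If anyone wants to poke at this, the on-disk labels of the copied disk are probably the place to look; a rough way to compare what the import scan sees (the device names below are placeholders):

    # dump the ZFS labels from the copied boot-pool disk and from an
    # ordinary data-pool disk, then compare the two
    zdb -l /dev/rdsk/c0d1s0     # copied root pool (placeholder device)
    zdb -l /dev/rdsk/c0d2s0     # non-root data pool (placeholder device)

    # the import scan can also be pointed at a directory of device nodes
    zpool import -d /dev/dsk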

Re: [zfs-discuss] Pros/Cons of multiple zpools?

2008-10-09 Thread Johan Hartzenberg
On Wed, Oct 8, 2008 at 9:29 PM, Joseph Mocker <[EMAIL PROTECTED]> wrote: > Hello, > > I haven't seen this discussed before. Any pointers would be appreciated. > > I'm curious, if I have a set of disks in a system, is there any benefit > or disadvantage to breaking the disks int

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-09 Thread Joe S
You may need an add-on SATA card. I haven't come across any 8 port motherboards. As far as chipsets are concerned, take a look at something with the Intel X38 chipset. It's the only one of the desktop chipsets that supports ECC ram. Coincidentally, it's also the chipset used in the Sun Ultra 24 wo

Re: [zfs-discuss] ZFS Replication Question

2008-10-09 Thread Richard Elling
Paul Pilcher wrote: > All; > > I have a question about ZFS and how it protects data integrity in the > context of a replication scenario. > > First, ZFS is designed such that all data on disk is in a consistent > state. Likewise, all data in a ZFS snapshot on disk is in a consistent > state. F

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-10-09 Thread Ron Halstead
Jeff, Sorry this is so late. Thanks for the labelfix binary. I would like to have one compiled for sparc. I tried compiling your source code but it threw up with many errors. I'm not a programmer and reading the source code means absolutely nothing to me. One error was: cc labelfix.c "labelfix.

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote: > Nevada isn't production code. For real ZFS testing, you must use a > production release, currently Solaris 10 (update 5, soon to be update 6). I misstated before in my LDoms case. The corrupted pool was on Solaris 10, with L

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Greg Shaw
Perhaps I misunderstand, but the below issues are all based on Nevada, not Solaris 10. Nevada isn't production code. For real ZFS testing, you must use a production release, currently Solaris 10 (update 5, soon to be update 6). In the last 2 years, I've stored everything in my environment (

Re: [zfs-discuss] Pros/Cons of multiple zpools?

2008-10-09 Thread Thomas Maier-Komor
Joseph Mocker wrote: > Hello, > > I haven't seen this discussed before. Any pointers would be appreciated. > > I'm curious, if I have a set of disks in a system, is there any benefit > or disadvantage to breaking the disks into multiple pools instead of a > single pool? > > Does multiple poo

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Timh Bergström
Unfortunately I can only agree with the doubts about running ZFS in production environments; I've lost ditto blocks, I've gotten corrupted pools and a bunch of other failures even in mirror/raidz/raidz2 setups with or without hardware mirrors/raid5/6. Plus the insecurity of a sudden crash/reboot will

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 7:44 AM, Ahmed Kamal <[EMAIL PROTECTED]> wrote: > >> >>In the past year I've lost more ZFS file systems than I have any other >>type of file system in the past 5 years. With other file systems I >>can almost always get some data back. With ZFS I can't get an

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Ahmed Kamal
> >In the past year I've lost more ZFS file systems than I have any other >type of file system in the past 5 years. With other file systems I >can almost always get some data back. With ZFS I can't get any back. That's scary to hear! > > I am really scared now! I was the one trying to

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Wilkinson, Alex
On Thu, Oct 09, 2008 at 06:37:23AM -0500, Mike Gerdts wrote: >FWIW, I believe that I have hit the same type of bug as the OP in the >following combinations: > >- T2000, LDoms 1.0, various builds of Nevada in control and guest > domains. >- Laptop, VirtualBox 1.6.2, Wi

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 4:53 AM, . <[EMAIL PROTECTED]> wrote: > While it's clearly my own fault for taking the risks I did, it's > still pretty frustrating knowing that all my data is likely still > intact and nicely checksummed on the disk but that none of it is > accessible due to some tiny filesy

[zfs-discuss] ZFS Replication Question

2008-10-09 Thread Paul Pilcher
All; I have a question about ZFS and how it protects data integrity in the context of a replication scenario. First, ZFS is designed such that all data on disk is in a consistent state. Likewise, all data in a ZFS snapshot on disk is in a consistent state. Further, ZFS, by virtue of its 256
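
The replication scenario described here usually boils down to snapshot plus send/receive; a bare-bones sketch, with host and dataset names made up for the example:

    # take a consistent, point-in-time snapshot on the primary
    zfs snapshot tank/data@2008-10-09

    # ship it to the replica; an incremental (-i) stream carries only the
    # blocks changed since the previous snapshot
    zfs send -i tank/data@2008-10-08 tank/data@2008-10-09 | \
        ssh replica zfs recv -F tank/data

    # checksums are verified on every read; a scrub forces that end to end
    ssh replica zpool scrub tank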

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread .
> His explanation: he invalidated the incorrect > uberblocks and forced zfs to revert to an earlier > state that was consistent. Would someone be willing to document the steps required in order to do this please? I have a disk in a similar state: # zpool import pool: tank id: 132344393378
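
While waiting for a proper write-up of the recovery steps, it is worth at least capturing the current on-disk state read-only before experimenting; something like the following (the device name is a placeholder for one of the pool's disks):

    # dump the four ZFS labels from one of the pool's devices -- purely
    # read-only, kept for later comparison
    zdb -l /dev/rdsk/c1t0d0s0 > /var/tmp/tank-labels.txt

    # and keep the import-time view of the pool as well
    zpool import > /var/tmp/tank-import.txt 2>&1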

Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-10-09 Thread Christian Heßmann
On 09.10.2008, at 09:17, Brent Jones wrote: > Correct, the other side should be set Read Only, that way nothing at > all is modified when the other hosts tries to zfs send. Since I use the receiving side for backup purposes only, which means that any change would be accidental - shouldn't a rec
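
Whether or not the readonly property is set, an accidental change on the receiving side can also be undone by hand before the next receive (the names below are invented):

    # throw away anything that snuck in on the receive side by rolling
    # back to the last snapshot both sides have in common; -r also
    # destroys any snapshots taken after it
    zfs rollback -r backup/mydata@lastgood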

Re: [zfs-discuss] Segmentation fault / core dump with recursive send/recv

2008-10-09 Thread Brent Jones
On Wed, Oct 8, 2008 at 10:49 PM, BJ Quinn <[EMAIL PROTECTED]> wrote: > Oh and I had been doing this remotely, so I didn't notice the following error > before - > > receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL > PROTECTED] > cannot receive incremental stream: desti