Hm -
This caused me to ask the question: Who keeps the capabilities in sync?
Is there a programmatic way we can have bash (or other shells)
interrogate zpool and zfs to find out what its capabilities are?
I'm thinking something like having bash spawn a zfs command to see what
options are avai
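Something along these lines might be a start -- it just scrapes the usage
text that zfs prints when run with no arguments, so it's only a rough,
untested sketch and depends on that output keeping its current shape:

    # list zfs subcommands by parsing the usage message (sketch only)
    zfs_subcommands() {
        zfs 2>&1 | awk '$1 ~ /^[a-z]+$/ && $1 != "where" { print $1 }' | sort -u
    }

The same trick should work for zpool, and I think "zfs get" with no
arguments dumps the property table, which could be scraped the same way.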
Alex Peng <[EMAIL PROTECTED]> writes:
> Wouldn't it be fun to have autocompletion for the zpool or zfs commands?
>
> For instance -
>
> "zfs cr 'Tab key' " will become "zfs create"
> "zfs clone 'Tab key' " will show me the available snapshots
> "zfs set 'Tab key' " will show me the available properties,
Victor, thanks for posting that. It really is interesting to see exactly what
happened, and to read about how zfs pools can be recovered.
Your work on these forums has done much to reassure me that ZFS is stable
enough for us to use on a live server, and I look forward to seeing
automate
Wouldn't it be fun to have autocompletion for the zpool or zfs commands?
For instance -
"zfs cr 'Tab key' " will become "zfs create"
"zfs clone 'Tab key' " will show me the available snapshots
"zfs set 'Tab key' " will show me the available properties, then "zfs set
com 'Tab key'" will become "zfs se
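A rough, untested sketch of what such a completion function could look
like in bash (the property list here is hard-coded just to show the idea):

    _zfs_complete() {
        local cur prev subcmds props
        cur=${COMP_WORDS[COMP_CWORD]}
        prev=${COMP_WORDS[COMP_CWORD-1]}
        subcmds="create destroy snapshot rollback clone promote rename list set get inherit mount unmount send receive"
        # a few common properties, just as an example
        props="compression atime quota reservation recordsize readonly mountpoint sharenfs"
        case "$prev" in
            zfs)   COMPREPLY=( $(compgen -W "$subcmds" -- "$cur") ) ;;
            clone) COMPREPLY=( $(compgen -W "$(zfs list -H -t snapshot -o name 2>/dev/null)" -- "$cur") ) ;;
            set)   COMPREPLY=( $(compgen -W "$props" -- "$cur") ) ;;
        esac
    }
    complete -F _zfs_complete zfs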
Hi,
Maybe this might be an option too?
http://blogs.sun.com/storage/entry/mike_shapiro_and_steve_o
Original Message
Subject: [zfs-discuss] Strategies to avoid single point of failure w/
X45x0 Servers?
From: Solaris <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
Date: T
If you are having trouble booting to the mirrored drive, the following is what
we had to do to correctly boot off the mirrored drive in a Thumper mirrored
with disksuite. The root drive is c5t0d0 and the mirror is c5t4d0. The BIOS
will try those 2 drives.
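(The usual first step for this -- not necessarily everything we did, and
double-check the device before running anything -- is making sure grub is
actually installed on the mirror half:

    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

plus the normal metaroot/vfstab setup so root points at the metadevice
rather than the physical slice.)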
Just a note, if it ever switches to c5t
On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
>> Nevada isn't production code. For real ZFS testing, you must use a
>> production release, currently Solaris 10 (update 5, soon to be update 6).
>
> I m
On Oct 8, 2008, at 4:27 PM, Jim Dunham wrote:
> ... a single Solaris node cannot be both
> the primary and secondary node.
>
> If one wants this type of mirror functionality on a single node, use
> host based or controller based mirroring software.
If one is running multiple zones, couldn
I seem to be having the same problem as well. Has anyone found out what the
cause is, and how to fix it?
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05 pkg
upgraded to snv_91 with ~3200 filesystems (and ~27429 datasets, including
snapshots).
I've been encountering some pretty big slow-downs on this system when running
certain zfs commands. The one causing me the most pain at
Yeah -F should probably work fine (I'm trying it as we speak, but it takes a
little while), but it makes me a bit nervous. I mean, it should only be
necessary if (as the error message suggests) something HAS actually changed,
right?
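As far as I understand it, all -F does on the receiving side is roll the
destination back to its most recent snapshot before applying the stream,
i.e. roughly this (snapshot names made up):

    zfs send -i datapool/myfs@snap1 datapool/myfs@snap2 | zfs receive -F backup/myfs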
So, here's what I tried - first of all, I set the backup FS t
Perhaps a better solution would be to front a J4500 with a pair of
X4100s with Sun Cluster? Hrrm...
On Thu, Oct 9, 2008 at 4:30 PM, Glaser, David <[EMAIL PROTECTED]> wrote:
> As shipped, our x4500s have 8 raidz pools with 6 disks each. If
> spaced right, you can lose 6(?) disks wit
As shipped, our x4500s have 8 raidz pools with 6 disks each. If
spaced right, you can lose 6(?) disks without the pool dying. The root disk is
mirrored, so if one dies it's not the end of the world. With the exception that
grub is thoroughly fraked up in that if the 0 disk dies, yo
Thanks for the information...
In my case, I do not have a root pool, it's still UFS.
The configuration is essentially that I have two arrays. The system was
initially built with one array. A zfs pool was created from the whole
disks on the array. The pool is more or less used for general storage
On Thu, Oct 9, 2008 at 3:09 PM, Solaris <[EMAIL PROTECTED]> wrote:
> I have been leading the charge in my IT department to evaluate the Sun
> Fire X45x0 as a commodity storage platform, in order to leverage
> capacity and cost against our current NAS solution which is backed by
> EMC Fiberchannel
I have been leading the charge in my IT department to evaluate the Sun
Fire X45x0 as a commodity storage platform, in order to leverage
capacity and cost against our current NAS solution which is backed by
EMC Fiberchannel SAN. For our corporate environments, it would seem
like a single machine wo
There are plenty of 8-port boards, either full 8 or 6+2 combinations, etc.
Anyway I went with a Supermicro PDSME+ which appears to work well
according to the HCL, and bought two of the AOC-SAT2-MV8's and will
just use those. It's actually being delivered today...
On Thu, Oct 9, 2008 at 9:44 AM, Joe S <[EMA
Borys Saulyak wrote:
>> As a follow up to the whole story, with the fantastic help of
>> Victor, the failed pool is now imported and functional thanks to
>> the redundancy in the meta data.
> It would be really useful if you could publish the steps to recover
> the pools.
Here it is:
Executive s
On Thu, 9 Oct 2008, Miles Nordin wrote:
>
> catastrophically. If this is really the situation, then ZFS needs to
> give the sysadmin a way to isolate and fix the problems
> deterministically before filling the pool with data, not just blame
> the sysadmin based on nebulous speculatory hindsight gr
> "gs" == Greg Shaw <[EMAIL PROTECTED]> writes:
gs> Nevada isn't production code. For real ZFS testing, you must
gs> use a production release, currently Solaris 10 (update 5, soon
gs> to be update 6).
based on list feedback, my impression is that the results of a
``test'' confine
Thanks for all of your input. I installed 2008.11 build 98 as an HVM guest
under Xen:
If you make a block-level copy of the boot pool and attach it as a disk on the
original VM, "zpool import" does not recognize it. If you attach a non-root
pool as a disk, "zpool import" does recognize it. S
On Wed, Oct 8, 2008 at 9:29 PM, Joseph Mocker <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I haven't seen this discussed before. Any pointers would be appreciated.
>
> I'm curious, if I have a set of disks in a system, is there any benefit
> or disadvantage to breaking the disks int
You may need an add-on SATA card. I haven't come across any 8-port motherboards.
As far as chipsets are concerned, take a look at something with the
Intel X38 chipset. It's the only one of the desktop chipsets that
supports ECC ram. Coincidentally, it's also the chipset used in the
Sun Ultra 24 wo
Paul Pilcher wrote:
> All;
>
> I have a question about ZFS and how it protects data integrity in the
> context of a replication scenario.
>
> First, ZFS is designed such that all data on disk is in a consistent
> state. Likewise, all data in a ZFS snapshot on disk is in a consistent
> state. F
Jeff, Sorry this is so late. Thanks for the labelfix binary. I would like to
have one compiled for sparc. I tried compiling your source code but it threw up
with many errors. I'm not a programmer and reading the source code means
absolutely nothing to me. One error was:
cc labelfix.c
"labelfix.
On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote:
> Nevada isn't production code. For real ZFS testing, you must use a
> production release, currently Solaris 10 (update 5, soon to be update 6).
I misstated before in my LDoms case. The corrupted pool was on
Solaris 10, with L
Perhaps I misunderstand, but the issues below are all based on Nevada,
not Solaris 10.
Nevada isn't production code. For real ZFS testing, you must use a
production release, currently Solaris 10 (update 5, soon to be update 6).
In the last 2 years, I've stored everything in my environment (
Joseph Mocker wrote:
> Hello,
>
> I haven't seen this discussed before. Any pointers would be appreciated.
>
> I'm curious, if I have a set of disks in a system, is there any benefit
> or disadvantage to breaking the disks into multiple pools instead of a
> single pool?
>
> Does multiple poo
Unfortunately I can only agree with the doubts about running ZFS in
production environments. I've lost ditto blocks, I've gotten
corrupted pools, and I've hit a bunch of other failures, even in
mirror/raidz/raidz2 setups, with or without hardware mirrors/RAID 5/6.
Plus there's the insecurity that a sudden crash/reboot will
On Thu, Oct 9, 2008 at 7:44 AM, Ahmed Kamal
<[EMAIL PROTECTED]> wrote:
>
>>
>>In the past year I've lost more ZFS file systems than I have any other
>>type of file system in the past 5 years. With other file systems I
>>can almost always get some data back. With ZFS I can't get an
>
>In the past year I've lost more ZFS file systems than I have any other
>type of file system in the past 5 years. With other file systems I
>can almost always get some data back. With ZFS I can't get any back.
That's scary to hear!
>
>
I am really scared now! I was the one trying to
On Thu, Oct 09, 2008 at 06:37:23AM -0500, Mike Gerdts wrote:
>FWIW, I believe that I have hit the same type of bug as the OP in the
>following combinations:
>
>- T2000, LDoms 1.0, various builds of Nevada in control and guest
> domains.
>- Laptop, VirtualBox 1.6.2, Wi
On Thu, Oct 9, 2008 at 4:53 AM, . <[EMAIL PROTECTED]> wrote:
> While it's clearly my own fault for taking the risks I did, it's
> still pretty frustrating knowing that all my data is likely still
> intact and nicely checksummed on the disk but that none of it is
> accessible due to some tiny filesy
All;
I have a question about ZFS and how it protects data integrity in the
context of a replication scenario.
First, ZFS is designed such that all data on disk is in a consistent
state. Likewise, all data in a ZFS snapshot on disk is in a consistent
state. Further, ZFS, by virtue of its 256
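For example, if the replication in question is the usual snapshot-based
send/receive (dataset names here are just placeholders):

    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh replica zfs receive backup/data

then the unit being replicated is itself a consistent snapshot.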
> His explanation: he invalidated the incorrect
> uberblocks and forced zfs to revert to an earlier
> state that was consistent.
Would someone be willing to document the steps required in order to do this
please?
I have a disk in a similar state:
# zpool import
pool: tank
id: 132344393378
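For what it's worth, the only poking around I know how to do so far is
dumping the labels with zdb (the disk path is just an example):

    zdb -l /dev/rdsk/c1t0d0s0

but I have no idea how to get from there to invalidating the bad uberblocks.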
On 09.10.2008, at 09:17, Brent Jones wrote:
> Correct, the other side should be set Read Only, that way nothing at
> all is modified when the other host tries to zfs send.
Since I use the receiving side for backup purposes only, which means
that any change would be accidental - shouldn't a rec
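(By read-only I mean literally setting the property on the receiving
filesystem, e.g. -- name made up:

    zfs set readonly=on tank/backup

so nothing can drift between receives.)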
On Wed, Oct 8, 2008 at 10:49 PM, BJ Quinn <[EMAIL PROTECTED]> wrote:
> Oh and I had been doing this remotely, so I didn't notice the following error
> before -
>
> receiving incremental stream of datapool/[EMAIL PROTECTED] into backup/[EMAIL
> PROTECTED]
> cannot receive incremental stream: desti
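If that's the usual "destination has been modified since most recent
snapshot" complaint, the options (dataset and snapshot names below are
placeholders) are basically to roll the destination back by hand:

    zfs rollback backup/myfs@last-received

or to let the receive do the rollback for you with -F:

    zfs send -i datapool/myfs@last-received datapool/myfs@new | zfs receive -F backup/myfs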