Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Chris Ridd
On 11 Mar 2010, at 04:17, Erik Trimble wrote: > Matt Cowger wrote: >> On Mar 10, 2010, at 6:30 PM, Ian Collins wrote: >>> Yes, noting the warning. >> Is it safe to execute on a live, active pool? >> --m > Yes. No reboot necessary. > The Warning only applies to this

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Mattias Pantzare
> These days I am a fan of forward-check access lists, because anyone who > owns a DNS server can claim that IPAddressX resolves back to aserver.google.com. > They cannot set the forward lookup outside of their domain, but they can > set up a reverse lookup. The other advantage is forward-looking access
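To see which name the server will actually compare against an access list, it helps to check both directions of the lookup; a quick sketch (the names and address are illustrative):

    # forward: name -> address
    getent hosts aserver.google.com
    # reverse: address -> name (often what the access-list match uses)
    getent hosts 1.1.1.1

If the two disagree, the hosts file, NIS map, or DNS zone is the first place to look.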

Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Erik Trimble
Matt Cowger wrote: On Mar 10, 2010, at 6:30 PM, Ian Collins wrote: Yes, noting the warning. Is it safe to execute on a live, active pool? --m Yes. No reboot necessary. The Warning only applies to this circumstance: if you've upgraded from an older build, then upgrading the z

Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Matt Cowger
On Mar 10, 2010, at 6:30 PM, Ian Collins wrote: > Yes, noting the warning. Is it safe to execute on a live, active pool? --m

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Damon Atkins
In /etc/hosts the format is IP FQDN Alias..., which would mean "1.1.1.1 aserver.google.com aserver aserver-le0". I have seen a lot of sysadmins do the following: "1.1.1.1 aserver aserver.google.com", which means the hosts file (or NIS) does not match DNS. As the first entry is the FQDN it is then "nam

Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Ian Collins
On 03/11/10 03:21 PM, Harry Putnam wrote: Running b133 When you see this line in a `zpool status' report: status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. Is it safe and effective to heed the advice given i

[zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Harry Putnam
Running b133 When you see this line in a `zpool status' report: status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. Is it safe and effective to heed the advice given in the next line: action: Upgrade the pool using
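For anyone following along, a minimal sketch of the sequence (assuming the pool in question is named z3; substitute your own pool name):

    # show which pools are still on an older on-disk version
    zpool upgrade
    # list the versions this build supports
    zpool upgrade -v
    # upgrade one pool to the newest version the running build supports
    zpool upgrade z3
    # note: once upgraded, older builds can no longer import the pool

Per the replies in this thread, the pool stays online throughout; no reboot or export is needed.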

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Kyle McDonald
On 3/10/2010 3:27 PM, Robert Thurlow wrote: As said earlier, it's the string returned from the reverse DNS lookup that needs to be matched. So, to make a long story short, if you log into the server from the client and do "who am i", you will get the host name you need for the share. Anothe

[zfs-discuss] zpool iostat / how to tell if your iop bound

2010-03-10 Thread Chris Banal
What is the best way to tell if you're bound by the number of individual operations per second / random I/O? "zpool iostat" has an "operations" column, but this doesn't really tell me if my disks are saturated. Traditional "iostat" doesn't seem to be the greatest place to look when utilizing zfs. Than
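One way to approach this, sketched under the assumption of a pool named tank: compare per-device service times from iostat with the per-vdev operation counts from zpool iostat.

    # per-vdev read/write operation counts, sampled every 5 seconds
    zpool iostat -v tank 5
    # per-device latency and utilization; sustained high actv and %b
    # with small average transfer sizes usually points at an IOPS limit
    iostat -xn 5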

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-10 Thread R.G. Keen
I did some reading on DDRn ram and controller chips and how they do ECC. Sorry, but I was moderately incorrect. Here's closer to what happens. DDRn memory has no ECC logic on the DIMMs. What it has is an additional eight bits of memory for each 64 bit read/write operation. That is, for ECC DIM
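For reference, the arithmetic behind those eight bits: a SECDED (single-error-correct, double-error-detect) Hamming code over m data bits needs k check bits satisfying 2^k >= m + k + 1. For m = 64 data bits, k = 7 suffices (2^7 = 128 >= 72), and one extra overall-parity bit for double-error detection brings the total to 8, hence the 72-bit width of ECC DIMMs.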

Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen
Hey list, Grant says his system is hanging after the zpool replace on a v240, running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots. No errors from zpool replace so it sounds like the disk was physically replaced successfully. If anyone else can comment or help Grant diagnose this

Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Well, this system is Solaris 05/09, with patches from November. No snapshots running and no internal controllers. It's a file server attached to an HDS disk array. Help, and please respond ASAP as this is production! Even an IM would be helpful. --- On Wed, 3/10/10, Cindy Swearingen wrote:

Re: [zfs-discuss] ZFS and FM(A)

2010-03-10 Thread Matthew R. Wilson
Not sure about lighting up the drive tray light, but for automated email notification of faults I use a script that I found here: http://www.prefetch.net/code/fmadmnotifier -Matthew On Wed, Mar 10, 2010 at 2:04 PM, Matt wrote: > Working on my ZFS Bui
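If you'd rather not depend on an external script, a bare-bones cron job along these lines approximates it (a sketch only; the recipient address is a placeholder, and fmadm needs sufficient privileges):

    #!/bin/sh
    # mail the current FMA fault list, if any (run periodically from cron)
    FAULTS=`fmadm faulty 2>/dev/null`
    if [ -n "$FAULTS" ]; then
        echo "$FAULTS" | mailx -s "FMA faults on `hostname`" admin@example.com
    fi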

Re: [zfs-discuss] Does OpenSolaris mpt driver support LSI 2008 controller

2010-03-10 Thread norm.tallant
So I did manage to get everything to work after switching to the Dev repository and doing a pkg image-update, but what happens when 2010.$spring comes out? Should I wait a week or so after release and then change my repository back to standard and then image-update again? I'm new to Osol; sorr

[zfs-discuss] ZFS and FM(A)

2010-03-10 Thread Matt
Working on my ZFS Build, using a SuperMicro 846E1 chassis and an LSI 1068e SAS controller, I'm wondering how well FM works in OpenSolaris 2009.06. I'm hoping that if ZFS detects an error with a drive, that it'll light up the fault light on the corresponding hot-swap drive in my enclosure and any

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Miles Nordin
> "dc" == Dennis Clarke writes: dc> zfs set dc> sharenfs=nosub\,nosuid\,rw\=hostname1\:hostname2\,root\=hostname2 dc> zpoolname/zfsname/pathname >> wth? Commas and colons are not special characters. This is >> silly. dc> Works real well. I said it was silly, not

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins
On 03/11/10 09:27 AM, Robert Thurlow wrote: Ian Collins wrote: On 03/11/10 05:42 AM, Andrew Daugherity wrote: I've found that when using hostnames in the sharenfs line, I had to use the FQDN; the short hostname did not work, even though both client and server were in the same DNS domain and t

Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen
Hi Grant, I don't have a v240 to test but I think you might need to unconfigure the disk first on this system. So I would follow the more complex steps. If this is a root pool, then yes, you would need to use the slice identifier, and make sure it has an SMI disk label. After the zpool replace
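Roughly, the more complex path Cindy mentions looks like this (a sketch only; the attachment-point name, controller/target numbers, and slice must match your system):

    # find the attachment point for the failing disk
    cfgadm -al
    # unconfigure it before physically swapping the drive
    cfgadm -c unconfigure c2::dsk/c2t1d0
    # ...swap the disk (SMI label, slice 0 sized for the pool), then:
    cfgadm -c configure c2::dsk/c2t1d0
    zpool replace rpool c2t1d0s0
    # on SPARC, reinstall the boot block on the new root-pool disk
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0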

Re: [zfs-discuss] Recover rpool

2010-03-10 Thread D. Pinnock
So I was back on it again today and I was following this thread http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15 and got the following error when I ran this command: zdb -e -bb rpool Traversing all blocks to verify nothing leaked ... Assertion failed: c < SPA_MAXBLOCKSIZE >> SPA_M

Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread David Dyer-Bennet
On Wed, March 10, 2010 13:32, Matt wrote: > That is exactly what I meant. Sorry for my newbie terminology. I'm so > used to traditional RAID that it's hard to shake. No apology required; it's natural that your questions will occur in the terminology you're familiar with. It's certainly hard to

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Dennis Clarke
>> "ea" == erik ableson writes: >> "dc" == Dennis Clarke writes: > > >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client > >> can do the write without error. > > ea> I' ve found that the NFS host based settings required the > ea> FQDN, and that the reverse

[zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Please help me out here. I've got a V240 with the root drive, c2t0d0 mirrored to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure to pull the mirrored drive. I see in various googling: zpool replace rpool c2t1d0 c2t1d0 or I've seen simply: zpool replace rpool c2t1d0

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Robert Thurlow
Ian Collins wrote: On 03/11/10 05:42 AM, Andrew Daugherity wrote: I've found that when using hostnames in the sharenfs line, I had to use the FQDN; the short hostname did not work, even though both client and server were in the same DNS domain and that domain is in the search path, and nsswitc

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Miles Nordin
> "ea" == erik ableson writes: > "dc" == Dennis Clarke writes: >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client >> can do the write without error. ea> I' ve found that the NFS host based settings required the ea> FQDN, and that the reverse lookup must

Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread David Magda
On Wed, March 10, 2010 14:47, Svein Skogen wrote: > On 10.03.2010 18:18, Edward Ned Harvey wrote: >> The advantage of the tapes is an official support channel, and much greater archive life. The advantage of the removable disks is that you need no special software to do a restore, and yo

Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Gregory Durham
Hey Ed, Thanks for the comment. I have been thinking along the same lines; I am going to continue trying to use Bacula, but we will see. Out of curiosity, what version of NetBackup are you using? I would love to feel pretty well covered haha. Thanks a lot! Greg On Wed, Mar 10, 2010 a

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread Wes Felter
Svein Skogen wrote: Are there any good options for encapsulating/decapsulating a zfs send stream inside FEC (Forward Error Correction)? This could prove very useful both for backup purposes, and for long-haul transmissions. http://www.s.netic.de/gfiala/dvbackup.html http://planete-bcast.inrialp

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Ian Collins
On 03/11/10 05:42 AM, Andrew Daugherity wrote: On Tue, 2010-03-09 at 20:47 -0800, mingli wrote: And I updated the sharenfs option with "rw,root=@100.198.100.0/24"; it works fine, and the NFS client can do the write without error. Thanks. I've found that when using hostnames in the sh

Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Svein Skogen
On 10.03.2010 18:18, Edward Ned Harvey wrote: > The advantage of the tapes is an official support channel, and much greater archive life. The advantage of the removable disks is that you need no special software to do a restore, and you could just

Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread Matt
That is exactly what I meant. Sorry for my newbie terminology. I'm so used to traditional RAID that it's hard to shake. That's great to know. Time to soldier on with the build!

Re: [zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread David Dyer-Bennet
On Wed, March 10, 2010 12:49, Matt wrote: > So I'm working up my SAN build, and I want to make sure it's going to > behave the way I expect when I go to expand it. > > Currently I'm running 10 - 500GB Seagate Barracuda ES.2 drives as two > drive mirrors added to my tank pool. > > I'm going to be u

[zfs-discuss] ZFS and Striped Mirror behavior with fixed size virtual disks

2010-03-10 Thread Matt
So I'm working up my SAN build, and I want to make sure it's going to behave the way I expect when I go to expand it. Currently I'm running ten 500GB Seagate Barracuda ES.2 drives as two-drive mirrors added to my tank pool. I'm going to be using this for virtual machine storage, and have creat
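For reference, expanding a pool of mirrors is just a matter of adding more mirror vdevs; a sketch (the device names are made up):

    # add another two-way mirror; ZFS stripes new writes across all vdevs
    zpool add tank mirror c3t0d0 c3t1d0
    zpool status tank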

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Harry Putnam
"Andrew Daugherity" writes: >> And I update the sharenfs option with "rw,ro...@100.198.100.0/24", >> it works fine, and the NFS client can do the write without error. >> >> Thanks. > > I've found that when using hostnames in the sharenfs line, I had to use > the FQDN; the short hostname did not

Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-10 Thread Harry Putnam
David Dyer-Bennet writes: > On 3/9/2010 4:57 PM, Harry Putnam wrote: >> Also - it appears `zpool scrub -s z3' doesn't really do anything. >> The status report above is taken immediately after a scrub command. >> >> The `scrub -s' command just returns the prompt... no output and >> apparently no sc
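For what it's worth, -s is the documented way to stop a scrub in progress, and it is silent on success; zpool status is where the result shows up:

    zpool scrub z3       # start a scrub
    zpool scrub -s z3    # stop it; prints nothing when it succeeds
    zpool status z3      # should report the scrub as stopped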

Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Edward Ned Harvey
> In my case where I reboot the server I cannot get the pool to come > back up. It shows UNAVAIL. I have tried to export before reboot and > reimport it and have not been successful, and I don't like this in > case a power issue of some sort happens. My other option was to mount > using lofiadm h
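For the reimport side of that report, one knob worth knowing is pointing import at the directory holding the devices; a sketch, assuming lofi-backed devices and a pool named backuppool:

    zpool export backuppool
    # search a specific directory for the pool's devices on import
    zpool import -d /dev/lofi backuppool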

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Andrew Daugherity
On Tue, 2010-03-09 at 20:47 -0800, mingli wrote: > And I updated the sharenfs option with "rw,root=@100.198.100.0/24"; it works > fine, and the NFS client can do the write without error. > > Thanks. I've found that when using hostnames in the sharenfs line, I had to use the FQDN; the short hostna

Re: [zfs-discuss] Should ZFS write data out when disk are idle

2010-03-10 Thread Damon Atkins
>> For a RaidZ, when data is written to a disk, are individual 32k writes to the same disk joined together and written out as a single I/O to the disk? > I/Os can be coalesced, but there is no restriction as to what can be coalesced. > In other words, subsequent writes can also be coalesced i

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread David Dyer-Bennet
On Wed, March 10, 2010 07:54, Svein Skogen wrote: > Are there any good options for encapsulating/decapsulating a zfs send > stream inside FEC (Forward Error Correction)? This could prove very > useful both for backup purposes, and for long-haul transmissions. I don't know of anything that would

[zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-10 Thread Svein Skogen
Are there any good options for encapsulating/decapsulating a zfs send stream inside FEC (Forward Error Correction)? This could prove very useful both for backup purposes, and for long-haul transmissions. If there are any good options for simply piping
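Lacking a purpose-built FEC wrapper, one improvised approach is to land the stream in a file and generate Reed-Solomon recovery data next to it with par2 (a sketch; par2 is a separate install, the dataset/paths are placeholders, and the 10% redundancy figure is arbitrary):

    # capture the stream, then create parity able to repair ~10% damage
    zfs send tank/fs@backup > /backup/stream.zfs
    par2 create -r10 /backup/stream.zfs.par2 /backup/stream.zfs
    # later: verify/repair the file before feeding it to zfs receive
    par2 repair /backup/stream.zfs.par2

The obvious drawback is that this protects a file at rest, not a live pipe.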

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-10 Thread Günther
hello what i'm thinking about is: keep it simple 1. i'm really happy to throw away all sorts of tapes. when you need them, they are not working, are too slow, or capacity is too small. use hdds instead. they are much faster, bigger, cheaper and data are much safer on them. for example an external 2g
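In ZFS terms the external-disk approach can be as simple as a pool on the removable drive plus send/receive; a sketch (pool, dataset, and device names are made up):

    # one-time: create a backup pool on the external disk
    zpool create backup c4t0d0
    # replicate a snapshot to it
    zfs snapshot tank/data@2010-03-10
    zfs send tank/data@2010-03-10 | zfs receive backup/data
    # detach the disk cleanly when done
    zpool export backup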

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread erik.ableson
I've found that the NFS host based settings required the FQDN, and that the reverse lookup must be available in your DNS. Try "rw,root=host1.mydomain.net" Cheers, Erik On 10 mars 2010, at 05:47, mingli wrote: > And I updated the sharenfs option with "rw,root=@100.198.100.0/24"; it works > fin
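Pulling the thread's advice together, a sketch (host and dataset names are placeholders; some shells may want the option string quoted or escaped, as Dennis showed earlier):

    # use the FQDN that the server's reverse lookup actually returns
    zfs set sharenfs=rw=host1.mydomain.net,root=host1.mydomain.net tank/export
    # confirm what is actually being shared
    zfs get sharenfs tank/export
    share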