Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go If blocks that have already been visited are freed a

[zfs-discuss] lazy zfs destroy

2010-03-17 Thread Chris Paul
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actual
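
A minimal sketch of the operation under discussion, with placeholder pool, filesystem and snapshot names:

    # destroying one large snapshot can generate a burst of heavy pool I/O
    zfs destroy tank/data@oldsnap

    # watch how hard the pool is being driven while the destroy runs
    zpool iostat tank 5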

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Damon Atkins
I vote for zfs needing a backup and restore command against a snapshot. backup command should output on stderr at least Full_Filename SizeBytes Modification_Date_1970secSigned so backup software can build indexes and stdout contains the data. The advantage of zfs providing the command is that as
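
As a purely hypothetical illustration of the proposal (no such zfs subcommand exists today; the subcommand name and field layout below are invented only to show the idea of an index on stderr and data on stdout):

    # hypothetical: data stream on stdout, per-file index on stderr
    zfs backup tank/home@monday > /backup/home-monday.str 2> /backup/home-monday.idx

    # hypothetical stderr line: Full_Filename SizeBytes Modification_Date_1970secSigned
    # /export/home/alice/report.odt 1048576 1268870400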

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-17 Thread David Dyer-Bennet
On 3/17/2010 21:07, Ian Collins wrote: I have a couple of x4540s which use ZFS send/receive to replicate each other hourly. Each box has about 4TB of data, with maybe 10G of changes per hour. I have run the replication every 15 minutes, but hourly is good enough for us. What software ver
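
The hourly replication Ian describes can be driven by incremental send/receive from cron; a minimal sketch, assuming timestamp-named recursive snapshots and ssh between the two hosts (host and dataset names are placeholders):

    # take a new recursive snapshot on the source
    zfs snapshot -r tank@2010-03-18-1100

    # send only what changed since the previous snapshot to the peer box
    zfs send -R -i tank@2010-03-18-1000 tank@2010-03-18-1100 | \
        ssh peer-x4540 zfs receive -F -d tank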

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 08:43:13PM -0500, David Dyer-Bennet wrote: > My own stuff is intended to be backed up by a short-cut combination -- > zfs send/receive to an external drive, which I then rotate off-site (I > have three of a suitable size). However, the only way that actually > works s
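
A minimal sketch of the rotate-to-external-disk pattern being discussed, assuming the external drive carries its own pool that is imported, updated incrementally, and exported before going off-site (pool and snapshot names are placeholders):

    zpool import backup01      # attach this rotation's external disk
    zfs send -R -i tank@lastweek tank@thisweek | zfs receive -F -d backup01
    zpool export backup01      # detach cleanly and take it off-site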

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-17 Thread Ian Collins
On 03/18/10 01:03 PM, Matt wrote: Shipping the iSCSI and SAS questions... Later on, I would like to add a second lower spec box to continuously (or near-continuously) mirror the data (using a gig crossover cable, maybe). I have seen lots of ways of mirroring data to other boxes which has left

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread David Dyer-Bennet
On 3/17/2010 17:53, Ian Collins wrote: On 03/18/10 03:53 AM, David Dyer-Bennet wrote: Also, snapshots. For my purposes, I find snapshots at some level a very important part of the backup process. My old scheme was to rsync from primary ZFS pool to backup ZFS pool, and snapshot both pools (wit

Re: [zfs-discuss] zpool reporting consistent read errors

2010-03-17 Thread no...@euphoriq.com
In the end, it was the drive. I replaced the drive and all the errors went away. Another testimony to ZFS - all my data was intact after the resilvering process, even with some other errors in the pool. ZFS resilvered the entire new disk and fixed the other errors. You have to love ZFS. --
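
For readers following along, a minimal sketch of the replace-and-verify cycle described here (pool and device names are placeholders):

    zpool replace tank c1t3d0 c1t4d0   # swap the failing disk for the new one
    zpool status -v tank               # watch the resilver and any remaining errors
    zpool clear tank                   # reset error counters once everything is clean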

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 7:09 PM, Bill Sommerfeld wrote: > On 03/17/10 14:03, Ian Collins wrote: > >> I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% >> done, but not complete: >> >> scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go >> > > Don't panic. If "zpo

[zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-17 Thread Matt
Dear list, I am in the process of speccing an OpenSolaris box for iSCSI Storage of XenServer domUs. I'm trying to get the best performance from a combination of decent SATA II disks and some SSDs and I would really appreciate some feedback on my plans. I don't have much idea what the workload

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Khyron
Ian, When you say you spool to tape for off-site archival, what software do you use? On Wed, Mar 17, 2010 at 18:53, Ian Collins wrote: > > I have been using a two stage backup process with my main client, > send/receive to a backup pool and spool to tape for off site archival. > > I use a pa

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Ian Collins
On 03/18/10 03:53 AM, David Dyer-Bennet wrote: Anybody using the in-kernel CIFS is also concerned with the ACLs, and I think that's the big issue. Especially in a paranoid organisation with 100s of ACEs! Also, snapshots. For my purposes, I find snapshots at some level a very important pa

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
On 03/18/10 11:09 AM, Bill Sommerfeld wrote: On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Don't panic. If "zpool iostat" still shows active

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-17 Thread Khyron
For those following along, this is the e-mail I meant to send to the list but instead sent directly to Tonmaus. My mistake, and I apologize for having to re-send. === Start === My understanding, limited though it may be, is that a scrub touches ALL data that has been written, including the pari

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-17 Thread Khyron
Ugh! I meant that to go to the list, so I'll probably re-send it for the benefit of everyone involved in the discussion. There were parts of that that I wanted others to read. From a re-read of Richard's e-mail, maybe he meant that the number of I/Os queued to a device can be tuned lower and no

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Bill Sommerfeld
On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Don't panic. If "zpool iostat" still shows active reads from all disks in the pool, just step
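
A minimal sketch of the checks suggested here (the pool name is a placeholder):

    zpool status tank        # reports scrub progress and completion
    zpool iostat -v tank 5   # if all disks still show reads, the scrub is still working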

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Freddie Cash
On Wed, Mar 17, 2010 at 2:03 PM, Ian Collins wrote: > I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% > done, but not complete: > > scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go > > Any ideas? I've had that happen on FreeBSD 7-STABLE (post 7.2 release) us

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote: > Clearly there are many more reads per second occurring on the zfs > filesystem than the ufs filesystem. yes > Assuming that the application-level requests are really the same From the OP, the workload is a "find /". So, ZFS mak

[zfs-discuss] Scrub not completing?

2010-03-17 Thread Ian Collins
I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go Any ideas? -- Ian. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope

[zfs-discuss] checksum errors increasing on "spare" vdev?

2010-03-17 Thread Eric Sproul
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure: $ zpool status data pool: data state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function,

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-17 Thread Tonmaus
Hi, I got a message from you off-list that doesn't show up in the thread even after hours. As you mentioned the aspect here as well I'd like to respond to, I'll do it from here: > Third, as for ZFS scrub prioritization, Richard > answered your question about that. He said it is > low priority

Re: [zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Dave Johnson
> Hi Dave, > > I'm unclear about the autoreplace behavior with one > spare that is > connected to two pools. I don't see how it could work > if the autoreplace > property is enabled on both pools, which formats and > replaces a spare Because I already partitioned the disk into slices. Then I ind

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Svein Skogen
On 17.03.2010 18:18, David Dyer-Bennet wrote: > > On Wed, March 17, 2010 10:19, Edward Ned Harvey wrote: > >> However, removable disks are not very >> reliable compared to tapes, and the disks are higher cost per GB, and >> require more volume in the

Re: [zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Cindy Swearingen
Hi Dave, I'm unclear about the autoreplace behavior with one spare that is connected to two pools. I don't see how it could work if the autoreplace property is enabled on both pools, which formats and replaces a spare disk that might be in-use in another pool (?) Maybe I misunderstand. 1. I th

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread David Dyer-Bennet
On Wed, March 17, 2010 10:19, Edward Ned Harvey wrote: > However, removable disks are not very > reliable compared to tapes, and the disks are higher cost per GB, and > require more volume in the safe deposit box, so the external disk usage is > limited... Only going back for 2-4 weeks of archiv

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Miles Nordin
> "la" == Lori Alt writes: la> This is no longer the case. The send stream format is now la> versioned in such a way that future versions of Solaris will la> be able to read send streams generated by earlier versions of la> Solaris. Your memory of the thread is selective. T

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Miles Nordin
> "k" == Khyron writes: k> Star is probably perfect once it gets ZFS (e.g. NFS v4) ACL nope, because snapshots are lost and clones are expanded wrt their parents, and the original tree of snapshots/clones can never be restored. we are repeating, though. This is all in the archives.

[zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Dave Johnson
From pages 29,83,86,90 and 284 of the 10/09 Solaris ZFS Administration guide, it sounds like a disk designated as a hot spare will: 1. Automatically take the place of a bad drive when needed 2. The spare will automatically be detached back to the spare pool when a new device is inserted and bro
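
For context, a minimal sketch of the spare and autoreplace settings under discussion (pool and device names are placeholders):

    zpool add tank spare c3t0d0     # designate a hot spare for the pool
    zpool set autoreplace=on tank   # allow a new device in the same slot to be used automatically
    zpool status tank               # the spare shows up as AVAIL or INUSE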

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Lori Alt
I think what you're saying is: Why bother trying to backup with "zfs send" when the recommended practice, fully supportable, is to use other tools for backup, such as tar, star, Amanda, bacula, etc. Right? The answer to this is very simple. #1 ... #2 ... Oh, one more thing. "zfs se

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Svein Skogen
On 17.03.2010 16:19, Edward Ned Harvey wrote: *snip > Still ... If you're in situation (b) then you want as many options available > to you as possible. I've helped many people and/or companies before, who > ... Had backup media, but didn't have the

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
> Why do we want to adapt "zfs send" to do something it was never > intended > to do, and probably won't be adapted to do (well, if at all) anytime > soon instead of > optimizing existing technologies for this use case? The only time I see or hear of anyone using "zfs send" in a way it wasn't inte

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-17 Thread Bob Friesenhahn
On Wed, 17 Mar 2010, Kashif Mumtaz wrote: but on UFS file system average busy is 50%, any idea why ZFS makes disk more busy? Clearly there are many more reads per second occurring on the zfs filesystem than the ufs filesystem. Assuming that the application-level requests are really the sam

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread David Dyer-Bennet
On Wed, March 17, 2010 06:28, Khyron wrote: > The Best Practices Guide is also very clear about send and receive > NOT being designed explicitly for backup purposes. I find it odd > that so many people seem to want to force this point. ZFS appears > to have been designed to allow the use of wel

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 11:23 AM, wrote: > > > >IMHO, what matters is that pretty much everything from the disk controller > >to the CPU and network interface is advertised in power-of-2 terms and > disks > >sit alone using power-of-10. And students are taught that computers work > >with bits and

Re: [zfs-discuss] How to reserve space for a file on a zfs filesystem

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 6:43 AM, wensheng liu wrote: > Hi all, > > How to reserve space on a zfs filesystem? mkfile or dd will write > data to the > blocks, which is time consuming, while "mkfile -n" will not really hold the > space. > And zfs's set reservation only works on a filesystem, not on a fil
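
A minimal sketch of the options mentioned in this thread, plus one hedged workaround of reserving space for a small dedicated dataset rather than a file (sizes and dataset names are placeholders):

    mkfile 10g /tank/fs/bigfile      # really writes the blocks, so it is slow
    mkfile -n 10g /tank/fs/bigfile   # sparse: the space is not actually held

    # assumption: reserve at the dataset level, since reservations apply to datasets, not files
    zfs create tank/reserved
    zfs set reservation=10G tank/reserved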

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-17 Thread Bob Friesenhahn
On Tue, 16 Mar 2010, Tonmaus wrote: None of them is active on that pool or in any existing file system. Maybe the issue is particular to RAIDZ2, which is comparably recent. On that occasion: does anybody know if ZFS reads all parities during a scrub? Wouldn't it be sufficient for stale corrup

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Edho P Arief
On Wed, Mar 17, 2010 at 9:09 PM, Giovanni Tirloni wrote: > IMHO, what matters is that pretty much everything from the disk controller > to the CPU and network interface is advertised in power-of-2 terms and disks > sit alone using power-of-10. And students are taught that computers work > with bit

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-17 Thread Andrew
Hi all, Great news - by attaching an identical size RDM to the server and then grabbing the first 128K using the command you specified Ross dd if=/dev/rdsk/c8t4d0p0 of=~/disk.out bs=512 count=256 we then proceeded to inject this into the faulted RDM and lo and behold the volume recovered! dd
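
For readers reconstructing the procedure, a minimal sketch (device names are placeholders; the second dd, writing the saved 128 KB back to the faulted device, is an assumption based on Andrew's description rather than a command quoted from the thread):

    # copy the first 128 KB (256 x 512-byte sectors) from the identically sized device
    dd if=/dev/rdsk/c8t4d0p0 of=~/disk.out bs=512 count=256

    # assumed restore of that 128 KB onto the faulted device
    dd if=~/disk.out of=/dev/rdsk/c8t5d0p0 bs=512 count=256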

[zfs-discuss] ZFS Performance on SATA Drive

2010-03-17 Thread Kashif Mumtaz
Hi, I'm using Sun T1000 machines: one machine is installed with Solaris 10 on UFS and the other with ZFS, and the ZFS machine is performing slowly. Running the following commands on both systems shows the disk getting busy immediately to 100%. ZFS MACHINE find / > /d
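
A minimal sketch of how the comparison appears to have been run and how to watch the difference on both machines (the pool name is a placeholder):

    find / > /dev/null &    # the metadata-heavy workload used in the comparison
    iostat -xcn 5           # per-device busy%, reads/s and service times
    zpool iostat tank 5     # pool-level view on the ZFS machine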

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Casper . Dik
>IMHO, what matters is that pretty much everything from the disk controller >to the CPU and network interface is advertised in power-of-2 terms and disks >sit alone using power-of-10. And students are taught that computers work >with bits and so everything is a power of 2. That is simply not tru
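
As a worked example of the unit mismatch under discussion, assuming a vendor-advertised "11 TB" of raw capacity:

    # vendor 11 TB = 11 x 10^12 bytes
    # 11 x 10^12 / 2^40 bytes per TiB = 10.004 TiB, which zpool list rounds to "10T"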

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Svein Skogen
On 17.03.2010 15:15, Edward Ned Harvey wrote: >> I think what you're saying is: Why bother trying to backup with "zfs >> send" >> when the recommended practice, fully supportable, is to use other tools >> for >> backup, such as tar, star, Amanda, bacu

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
> I think what you're saying is: Why bother trying to backup with "zfs > send" > when the recommended practice, fully supportable, is to use other tools > for > backup, such as tar, star, Amanda, bacula, etc. Right? > > The answer to this is very simple. > #1 ... > #2 ... Oh, one more thing.

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 9:34 AM, David Dyer-Bennet wrote: > On 3/16/2010 23:21, Erik Trimble wrote: > >> On 3/16/2010 8:29 PM, David Dyer-Bennet wrote: >> >>> On 3/16/2010 17:45, Erik Trimble wrote: >>> David Dyer-Bennet wrote: > On Tue, March 16, 2010 14:59, Erik Trimble wrote: >>>

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Khyron
To be sure, Ed, I'm not asking: Why bother trying to backup with "zfs send" when there are fully supportable and working options available right NOW? Rather, I am asking: Why do we want to adapt "zfs send" to do something it was never intended to do, and probably won't be adapted to do (well, if

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
> The one thing that I keep thinking, and which I have yet to see > discredited, is that > ZFS file systems use POSIX semantics. So, unless you are using > specific features > (notably ACLs, as Paul Henson is), you should be able to back up those > file systems > using well known tools. This is

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Khyron
Exactly! This is what I meant, at least when it comes to backing up ZFS datasets. There are tools available NOW, such as Star, which will backup ZFS datasets due to the POSIX nature of those datasets. As well, Amanda, Bacula, NetBackup, Networker and probably some others I missed. Re-inventing t

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-17 Thread Ross Walker
On Mar 17, 2010, at 2:30 AM, Erik Ableson wrote: On 17 mars 2010, at 00:25, Svein Skogen wrote: On 16.03.2010 22:31, erik.ableson wrote: On 16 mars 2010, at 21:00, Marc Nicholas wrote: On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen
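
Since the thread is asking for COMSTAR documentation, here is a minimal sketch of exporting a zvol over iSCSI with COMSTAR (dataset name and size are placeholders, and the GUID passed to stmfadm is whatever sbdadm prints; this is a sketch under those assumptions, not a substitute for the docs being requested):

    zfs create -V 100g tank/iscsi/vol0
    svcadm enable stmf
    sbdadm create-lu /dev/zvol/rdsk/tank/iscsi/vol0    # prints the LU GUID
    stmfadm add-view <GUID-from-sbdadm>                # expose the LU (here, to all hosts)
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target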

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Joerg Schilling
Stephen Bunn wrote: > between our machine's pools and our backup server pool. It would be > nice, however, if some sort of enterprise level backup solution in the > style of ufsdump was introduced to ZFS. Star can do the same as ufsdump does but independent of OS and filesystem. Star is curr

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Svein Skogen
On 17.03.2010 13:31, Svein Skogen wrote: > On 17.03.2010 12:28, Khyron wrote: >> Note to readers: There are multiple topics discussed herein. Please >> identify which *SNIP* > > How does backing up the NFSv4 acls help you back up a zvol (shared f

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread David Dyer-Bennet
On 3/16/2010 23:21, Erik Trimble wrote: On 3/16/2010 8:29 PM, David Dyer-Bennet wrote: On 3/16/2010 17:45, Erik Trimble wrote: David Dyer-Bennet wrote: On Tue, March 16, 2010 14:59, Erik Trimble wrote: Has there been a consideration by anyone to do a class-action lawsuit for false advertisin

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Svein Skogen
On 17.03.2010 12:28, Khyron wrote: > Note to readers: There are multiple topics discussed herein. Please > identify which > idea(s) you are responding to, should you respond. Also make sure to > take in all of > this before responding. Something you

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Stephen Bunn
On 03/17/2010 08:28 PM, Khyron wrote: The Best Practices Guide is also very clear about send and receive NOT being designed explicitly for backup purposes. I find it odd that so many people seem to want to force this point. ZFS appears to have been designed to allow the use of well known tools

[zfs-discuss] How to reserve space for a file on a zfs filesystem

2010-03-17 Thread wensheng liu
Hi all, How to reserve space on a zfs filesystem? mkfile or dd will write data to the blocks, which is time consuming, while "mkfile -n" will not really hold the space. And zfs's set reservation only works on a filesystem, not on a file? Could anyone provide a solution for this? Thanks very much Vin

[zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Khyron
Note to readers: There are multiple topics discussed herein. Please identify which idea(s) you are responding to, should you respond. Also make sure to take in all of this before responding. Something you want to discuss may already be covered at a later point in this e-mail, including NDMP and

Re: [zfs-discuss] dedupratio riddle

2010-03-17 Thread Paul van der Zwan
On 17 mrt 2010, at 10:56, zfs ml wrote: > On 3/17/10 1:21 AM, Paul van der Zwan wrote: >> >> On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: >> >>> Someone correct me if I'm wrong, but it could just be a coincidence. That >>> is, perhaps the data that you copied happens to lead to a dedup

Re: [zfs-discuss] dedupratio riddle

2010-03-17 Thread zfs ml
On 3/17/10 1:21 AM, Paul van der Zwan wrote: On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: Someone correct me if I'm wrong, but it could just be a coincidence. That is, perhaps the data that you copied happens to lead to a dedup ratio relative to the data that's already on there. You c
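
A minimal sketch of checks that make the ratio less of a riddle (the pool name is a placeholder; these assume a build with dedup support):

    zpool get dedupratio tank   # the ratio being discussed
    zpool list tank             # the DEDUP column shows the same figure
    zdb -DD tank                # dedup-table histogram: how many blocks are referenced how often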

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Casper . Dik
>Carson Gaspar wrote: >>> Not quite. >>> 11 x 10^12 =~ 10.004 x (1024^4). >>> >>> So, the 'zpool list' is right on, at "10T" available. >> >> Duh, I was doing GiB math (y = x * 10^9 / 2^20), not TiB math (y = x * >> 10^12 / 2^40). >> >> Thanks for the correction. >> >You're welcome. :-) > > >On

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Roland Rambau
Eric, in my understanding ( which I learned from more qualified people but I may be mistaken anyway ), whenever we discuss a transfer rate like x Mb/s, y GB/s or z PB/d, the M, G, T or P refers to the frequency and not to the data. 1 MB/s means "transfer bytes at 1 MHz", NOT "transfer megabyte

Re: [zfs-discuss] dedupratio riddle

2010-03-17 Thread Paul van der Zwan
On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: > Someone correct me if I'm wrong, but it could just be a coincidence. That is, > perhaps the data that you copied happens to lead to a dedup ratio relative to > the data that's already on there. You could test this out by copying a few > gig
