Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-18 Thread Fajar A. Nugraha
On Fri, Mar 19, 2010 at 12:38 PM, Rob wrote: > Can a ZFS send stream become corrupt when piped between two hosts across a > WAN link using 'ssh'? Unless the end computers are bad (memory problems, etc.), the answer should be no. ssh has its own error detection, and the zfs send stream …
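
A cheap way to rule out everything except the end hosts is to checksum the stream on both sides before receiving it. A minimal sketch, assuming Solaris digest(1) is available on both machines and using hypothetical pool, snapshot, and path names:

    # Stage the stream, hash it, transfer it, then hash it again remotely.
    zfs send tank/foo@now > /var/tmp/foo.zsend
    digest -a sha256 /var/tmp/foo.zsend            # note this value
    scp /var/tmp/foo.zsend host.uk:/var/tmp/foo.zsend
    ssh host.uk 'digest -a sha256 /var/tmp/foo.zsend && \
        zfs receive tank/bar < /var/tmp/foo.zsend'

If the two hashes match, any later corruption happened on the receiving host rather than in transit.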

[zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-18 Thread Rob
Can a ZFS send stream become corrupt when piped between two hosts across a WAN link using 'ssh'? For example, a host in Australia sends a stream to a host in the UK as follows: # zfs send tank/f...@now | ssh host.uk zfs receive tank/bar

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 09:54:28PM -0700, Tonmaus wrote: > > (and the details of how much and how low have changed a few times along the version trail). > Is there any documentation about this, besides source code? There are change logs and release notes, and random blog postings along the …

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Tonmaus
Hello Dan, Thank you very much for this interesting reply. > Roughly speaking, reading through the filesystem does the least work possible to return the data. A scrub does the most work possible to check the disks (and returns none of the data). Thanks for the clarification. That's what …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
> From what I've read so far, zfs send is a block level API and thus cannot be used for real backups. As a result of being block level oriented, the … Weirdo. The above "cannot be used for real backups" is obviously subjective, is incorrect and widely discussed here, so I just say …

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Will Murnane
On Thu, Mar 18, 2010 at 14:44, Chris Murray wrote: > Good evening, > I understand that NTFS & VMDK do not relate to Solaris or ZFS, but I was wondering if anyone has any experience of checking the alignment of data blocks through that stack? It seems to me there's a simple way to check. Pick …
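
One way to carry out that kind of check, sketched under assumptions the thread doesn't confirm (a flat VMDK readable from the ZFS host, GNU grep installed as ggrep; the marker string and file names are hypothetical):

    # Inside the guest: write a file containing a distinctive marker string.
    # On the ZFS host: find the marker's byte offset within the VMDK.
    ggrep -a -b -o ALIGNMARKER /tank/vms/guest-flat.vmdk | head -1
    # An offset that is a multiple of 4096 means the guest write landed on
    # a 4 KB boundary relative to the VMDK file.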

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Erik Trimble
Erik Trimble wrote: James C. McPherson wrote: On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply. BOTH are Sun Sparc T1000 machines. Hard disk 1 TB SATA on both. ZFS system: Memory 32 GB, Processor 1 GHz, 6 cores, OS Solaris 10 10/09 s10s_u8wos_08a SPARC, PatchCluster level 142900-02 (Dec 09) …

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Erik Trimble
James C. McPherson wrote: On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply. BOTH are Sun Sparc T1000 machines. Hard disk 1 TB SATA on both. ZFS system: Memory 32 GB, Processor 1 GHz, 6 cores, OS Solaris 10 10/09 s10s_u8wos_08a SPARC, PatchCluster level 142900-02 (Dec 09). UFS …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread David Magda
On Mar 18, 2010, at 15:00, Miles Nordin wrote: Admittedly the second bullet is hard to manage while still backing up zvols, pNFS / Lustre data-node datasets, Windows ACLs, properties. Some commercial backup products are able to parse VMware's VMDK files to get file system information of the …

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread James C. McPherson
On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply. BOTH are Sun Sparc T1000 machines. Hard disk 1 TB SATA on both. ZFS system: Memory 32 GB, Processor 1 GHz, 6 cores, OS Solaris 10 10/09 s10s_u8wos_08a SPARC, PatchCluster level 142900-02 (Dec 09). UFS machine: Hard disk 1 TB SATA …

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread David Magda
On Mar 18, 2010, at 14:23, Bob Friesenhahn wrote: On Thu, 18 Mar 2010, erik.ableson wrote: Ditto on the Linux front. I was hoping that Solaris would be the exception, but no luck. I wonder if Apple wouldn't mind lending one of the driver engineers to OpenSolaris for a few months... Perhaps …

Re: [zfs-discuss] lazy zfs destroy

2010-03-18 Thread Brandon High
On Wed, Mar 17, 2010 at 9:19 PM, Chris Paul wrote: > OK, I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I do …

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Brian H. Nelson
I have only heard of alignment being discussed in reference to block-based storage (like DASD/iSCSI/FC). I'm not really sure how it would work out over NFS. I do see why you are asking, though. My understanding is that VMDK files are basically 'aligned', but the partitions inside of them may not …

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Daniel Carosone
As noted, the ratio calculation applies over the data attempted to dedup, not the whole pool. However, I saw a commit go by just in the last couple of days about the dedupratio calculation being misleading, though I didn't check the details. Presumably this will be reported differently from the …

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 05:21:17AM -0700, Tonmaus wrote: > > No, because the parity itself is not verified. > Aha. Well, my understanding was that a scrub basically means reading all data, and comparing with the parities, which means that these have to be re-computed. Is that correct? A scrub …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Ian Collins
On 03/18/10 12:07 PM, Khyron wrote: Ian, when you say you spool to tape for off-site archival, what software do you use? NetVault. -- Ian.

Re: [zfs-discuss] Heap corruption, possibly hotswap related (snv_134 with imr_sas, nvdisk drivers)

2010-03-18 Thread Kaya Bekiroğlu
2010/3/18 Kaya Bekiroğlu: > I first noticed this panic when conducting hot-swap tests. However, now I see it every hour or so, even when all drives are attached and no ZFS resilvering is in progress. It appears that these panics recur on my system when the zfs-auto-snapshot service runs. …

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Svein Skogen
On 18.03.2010 21:31, Daniel Carosone wrote: > You have a gremlin to hunt... Wouldn't Sun help here? ;) (sorry, couldn't help myself; I've spent a week hunting gremlins until I hit the brick wall of the MPT problem) //Svein

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Marc Nicholas
On Thu, Mar 18, 2010 at 2:44 PM, Chris Murray wrote: > Good evening, > I understand that NTFS & VMDK do not relate to Solaris or ZFS, but I was wondering if anyone has any experience of checking the alignment of data blocks through that stack? > NetApp has a great little tool called mbrscan/mbralign …
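
Without NetApp's tool, the same information can be read straight out of the guest's MBR, since the first partition entry starts at byte 446 of the disk and its starting LBA sits at bytes 8-11 of that entry. A sketch assuming a flat (monolithic) VMDK with a hypothetical name:

    # Dump the first partition's starting LBA (4 bytes, little-endian on x86).
    dd if=/tank/vms/guest-flat.vmdk bs=1 skip=454 count=4 2>/dev/null | od -t u4
    # 63   -> the classic misaligned Windows XP/2003 default
    # 2048 -> 1 MB aligned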

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote: > I did another test on both machines, and write performance on ZFS is extraordinarily slow. > In ZFS, data was being written at around 1037 kw/s while the disk remained busy …

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Joseph Mocker
Not having specific knowledge of the VMDK format, I think what you are seeing is that there is extra data associated with maintaining the VMDK. If you are seeing lower dedup ratios than you would expect, it sounds like some of this extra data could be added to each block. The VMDK spec appears …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Miles Nordin
> "c" == Miles Nordin writes: > "mg" == Mike Gerdts writes: c> are compatible with the goals of an archival tool: sorry, obviously I meant ``not compatible''. mg> Richard Elling made an interesting observation that suggests mg> that storing a zfs send data stream on tape i

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Chris Murray
Please excuse my pitiful example. :-) I meant to say "*less* overlap between virtual machines", as clearly block "AABB" occurs in both.

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Scott Meilicke
>Apple users have different expectations regarding data loss than Solaris and >Linux users do. Come on, no Apple user bashing. Not true, not fair. Scott

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Miles Nordin
> "djm" == Darren J Moffat writes: djm> I've logged CR# "6936195 ZFS send stream while checksumed djm> isn't fault tollerant" to keep track of that. Other tar/cpio-like tools are also able to: * verify the checksums without extracting (like scrub) * verify or even extract the strea

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Carson Gaspar
Bob Friesenhahn wrote: On Thu, 18 Mar 2010, erik.ableson wrote: Ditto on the Linux front. I was hoping that Solaris would be the exception, but no luck. I wonder if Apple wouldn't mind lending one of the driver engineers to OpenSolaris for a few months... Perhaps the issue is the filesystem …

[zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Chris Murray
Good evening, I understand that NTFS & VMDK do not relate to Solaris or ZFS, but I was wondering if anyone has any experience of checking the alignment of data blocks through that stack? I have a VMware ESX 4.0 host using storage presented over NFS from ZFS filesystems (recordsize 4KB). Within …
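
Two commands that bear on the setup Chris describes, with hypothetical pool and filesystem names: one confirms the 4 KB recordsize on the NFS-exported filesystem, the other tracks the pool-wide effect that block alignment has on deduplication:

    zfs get recordsize tank/vms     # should report 4K for this setup
    zpool get dedupratio tank       # pool-wide ratio over deduped writes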

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 17:49, erik.ableson wrote: > Conceptually, think of a ZFS system as a SAN box with built-in asynchronous replication (free!) with block-level granularity. Then look at your other backup requirements and attach whatever is required …

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Bob Friesenhahn
On Thu, 18 Mar 2010, erik.ableson wrote: Ditto on the Linux front. I was hoping that Solaris would be the exception, but no luck. I wonder if Apple wouldn't mind lending one of the driver engineers to OpenSolaris for a few months... Perhaps the issue is the filesystem rather than the drive …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Darren J Moffat
On 18/03/2010 17:26, Svein Skogen wrote: The utility: Can't handle streams being split (in case of streams being larger than a single backup medium). I think it should be possible to store the 'zfs send' stream via NDMP and let NDMP deal with the tape splitting, though that may need additional …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Darren J Moffat
As to your two questions above, I'll try to answer them from my limited understanding of the issue. The format: Isn't fault tolerant. In the least. One single bit wrong and the entire stream is invalid. A FEC wrapper would fix this. I've logged CR# "6936195 ZFS send stream while checksumed isn't fault tollerant" …
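
Until the stream format itself grows redundancy, a FEC wrapper can be bolted on externally. A sketch using the third-party par2 utility (not part of OpenSolaris; filenames hypothetical):

    # Capture the stream, then generate ~10% parity volumes alongside it.
    zfs send tank/fs@snap > /backup/fs.zsend
    par2 create -r10 /backup/fs.zsend
    # After suspected damage, verify and repair from the parity volumes:
    par2 repair /backup/fs.zsend.par2
    zfs receive tank/fs_restored < /backup/fs.zsend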

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread erik.ableson
On 18 mars 2010, at 15:51, Damon Atkins wrote: > A system with 100 TB of data is 80% full, and a user asks their local system admin to restore a directory with large files, as it was 30 days ago, with all Windows/CIFS ACLs and NFSv4 ACLs etc. > If we used zfs send, we need to go back to …

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread erik.ableson
On 18 mars 2010, at 16:58, David Dyer-Bennet wrote: > On Thu, March 18, 2010 04:50, erik.ableson wrote: >> It would appear that the bus bandwidth is limited to about 10MB/sec (~80Mbps) which is well below the theoretical 400Mbps that 1394 is supposed to be able to handle. I know that these …

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
>I was planning to mirror them - mainly in the hope that I could hot swap a new one in the event that an existing one started to degrade. I suppose I could start with one of each and convert to a mirror later although the prospect of losing either disk fills me with dread. You do not need to …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 18:37, Darren J Moffat wrote: > On 18/03/2010 17:34, Svein Skogen wrote: >> How would NDMP help with this any more than running a local pipe splitting the stream (and handling the robotics for feeding in the next tape)? > Probably …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Darren J Moffat
On 18/03/2010 17:34, Svein Skogen wrote: How would NDMP help with this any more than running a local pipe splitting the stream (and handling the robotics for feeding in the next tape)? Probably doesn't in that case. I can see the point of NDMP when the tape library isn't physically connected …
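
For reference, the local-pipe splitting being discussed can be sketched with GNU split; sizes and paths are hypothetical, and the stock Solaris split lacks size suffixes, so GNU coreutils is assumed:

    # Cut the stream into tape-sized pieces with a common prefix.
    zfs send tank/fs@snap | split -b 180G - /backup/fs.zsend.
    # Restore by concatenating the pieces in shell-sorted order.
    cat /backup/fs.zsend.* | zfs receive tank/fs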

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 18:28, Darren J Moffat wrote: > On 18/03/2010 17:26, Svein Skogen wrote: >> The utility: Can't handle streams being split (in case of streams being larger than a single backup medium). > I think it should be possible to store …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 18:21, Darren J Moffat wrote: >> As to your two questions above, I'll try to answer them from my limited understanding of the issue. >> The format: Isn't fault tolerant. In the least. One single bit wrong and the entire stream is invalid …

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Matt
> It is hard, as you note, to recommend a box without knowing the load. How many Linux boxes are you talking about? This box will act as a backing store for a cluster of 3 or 4 XenServers with upwards of 50 VMs running at any one time. > Will you mirror your SLOG, or load balance them? I ask …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 14:28, Darren J Moffat wrote: > On 18/03/2010 13:12, joerg.schill...@fokus.fraunhofer.de wrote: >> Darren J Moffat wrote: >>> So exactly what makes it unsuitable for backup? >>> Is it the file format or the way the utility works? >>> If it is the format, what is wrong with it? >>> If it is the utility, what is needed to fix that? …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Darren J Moffat
On 18/03/2010 13:12, joerg.schill...@fokus.fraunhofer.de wrote: Darren J Moffat wrote: So exactly what makes it unsuitable for backup? Is it the file format or the way the utility works? If it is the format, what is wrong with it? If it is the utility, what is needed to fix that? …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Darren J Moffat
On 18/03/2010 12:54, joerg.schill...@fokus.fraunhofer.de wrote: It has been widely discussed here already that the output of zfs send cannot be used as a backup. First define exactly what you mean by "backup". Please don't confuse "backup" and "archival"; they aren't the same thing. It would …

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread David Dyer-Bennet
On Thu, March 18, 2010 04:50, erik.ableson wrote: > It would appear that the bus bandwidth is limited to about 10MB/sec (~80Mbps) which is well below the theoretical 400Mbps that 1394 is supposed to be able to handle. I know that these two disks can go significantly higher since I was …

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Scott Meilicke
It is hard, as you note, to recommend a box without knowing the load. How many Linux boxes are you talking about? I think having a lot of space for your L2ARC is a great idea. Will you mirror your SLOG, or load balance them? I ask because perhaps one will be enough, IO-wise. My box has one SLOG …
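
The two SLOG layouts under discussion, as zpool commands (device names hypothetical):

    zpool add tank log mirror c1t0d0 c1t1d0   # one mirrored slog: redundancy
    zpool add tank log c1t0d0 c1t1d0          # two independent slogs: ZFS
                                              # load-balances across them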

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Damon Atkins
A system with 100 TB of data is 80% full, and a user asks their local system admin to restore a directory with large files, as it was 30 days ago, with all Windows/CIFS ACLs and NFSv4 ACLs etc. If we used zfs send, we need to go back to a zfs send some 30 days ago, and find 80 TB of disk space to …
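
Spelled out, the send-stream-only restore path looks something like this (pool, tape, and directory names hypothetical), which is exactly the scratch-space problem Damon describes:

    # Receive the whole 30-day-old stream before touching a single file.
    zfs receive -d scratch < /tape/tank-30days.zsend
    # Only then can the one directory be copied back, ACLs included.
    cp -rp /scratch/tank/data/somedir /tank/data/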

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Mike Gerdts
On Wed, Mar 17, 2010 at 9:15 AM, Edward Ned Harvey wrote: >> I think what you're saying is: Why bother trying to back up with "zfs send" when the recommended practice, fully supportable, is to use other tools for backup, such as tar, star, Amanda, bacula, etc. Right? >> The answer …

Re: [zfs-discuss] lazy zfs destroy

2010-03-18 Thread Giovanni Tirloni
On Thu, Mar 18, 2010 at 1:19 AM, Chris Paul wrote: > OK, I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I do …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote: > This has been discussed many times in the past already. > If you archive the incremental "star send" data streams, you cannot extract single files, and it seems that this cannot be fixed without introducing a different archive format …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 14:12, Joerg Schilling wrote: > Darren J Moffat wrote: >> So exactly what makes it unsuitable for backup? >> Is it the file format or the way the utility works? >> If it is the format, what is wrong with it? >> If it is the utility, what is needed to fix that? …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
Darren J Moffat wrote: > So exactly what makes it unsuitable for backup? > Is it the file format or the way the utility works? > If it is the format, what is wrong with it? > If it is the utility, what is needed to fix that? This has been discussed many times in the past already …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
Carsten Aulbert wrote: > In case of 'star' the blob coming out of it might also be useless if you don't have star (or other tools) around for deciphering it - very unlikely, but still possible ;) I invite you to inform yourself about star and to test it yourself. Star's backups are …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Carsten Aulbert
Hi all, On Thursday 18 March 2010 13:54:52 Joerg Schilling wrote: > If you have no technical issues to discuss, please stop insulting people/products. > We are on OpenSolaris and we don't like this kind of discussion on the mailing lists. Please act collaboratively. May I suggest this to …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
Edward Ned Harvey wrote: > > I invite everybody to join star development at: > We know you have an axe to grind. Don't insult some other product just because it's not the one you personally work on. Yours is better in some ways, and "zfs send" is better in some ways. If you have no technical …

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Henrik Johansson
On 18 mar 2010, at 18.38, Craig Alder wrote: I remembered reading a post about this a couple of months back. This post by Jeff Bonwick confirms that the dedupratio is calculated only on the data that you've attempted to deduplicate, i.e. only the data written whilst dedup is turned on: http://mail.opensolaris.org/pipermail/zfs-discuss/…

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
> From what I've read so far, zfs send is a block level API and thus cannot be used for real backups. As a result of being block level oriented, the … Weirdo. The above "cannot be used for real backups" is obviously subjective, is incorrect and widely discussed here, so I just say "weirdo." I'm …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
> My own stuff is intended to be backed up by a short-cut combination -- zfs send/receive to an external drive, which I then rotate off-site (I have three of a suitable size). However, the only way that actually works so far is to destroy the pool (not just the filesystem) and recreate it …

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Tonmaus
> On that occasion: does anybody know if ZFS reads all parities during a scrub? > Yes. > Wouldn't it be sufficient for stale-corruption detection to read only one parity set unless an error occurs there? > No, because the parity itself is not verified. Aha. Well, my understanding was that a scrub basically means reading all data …

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Kashif Mumtaz
Hi, Thanks for your reply. BOTH are Sun Sparc T1000 machines. Hard disk 1 TB SATA on both. ZFS system: Memory 32 GB, Processor 1 GHz, 6 cores, OS Solaris 10 10/09 s10s_u8wos_08a SPARC, PatchCluster level 142900-02 (Dec 09). UFS machine: Hard disk 1 TB SATA, Memory 16 GB, Processor 1 GHz, 6 cores …

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Richard Elling
On Mar 16, 2010, at 4:41 PM, Tonmaus wrote: >> Are you sure that you didn't also enable something which does consume lots of CPU, such as enabling some sort of compression, sha256 checksums, or deduplication? > None of them is active on that pool or in any existing file system. Maybe …

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread James C. McPherson
On 18/03/10 08:36 PM, Kashif Mumtaz wrote: Hi, I did another test on both machines, and write performance on ZFS is extraordinarily slow. Which build are you running? On snv_134, 2x dual-core CPUs @ 3GHz and 8GB RAM (my desktop), I see these results: $ time dd if=/dev/zero of=test.dbf bs=8k count=1048576 …

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Craig Alder
I remembered reading a post about this a couple of months back. This post by Jeff Bonwick confirms that the dedupratio is calculated only on the data that you've attempted to deduplicate, i.e. only the data written whilst dedup is turned on: http://mail.opensolaris.org/pipermail/zfs-discuss/2…

Re: [zfs-discuss] ZFS Performance on SATA Device

2010-03-18 Thread Kashif Mumtaz
Hi, I did another test on both machines, and write performance on ZFS is extraordinarily slow. I did the following test on both machines. For write: time dd if=/dev/zero of=test.dbf bs=8k count=1048576. For read: time dd if=/testpool/test.dbf of=/dev/null bs=8k. The ZFS machine has 32 GB memory; the UFS machine has 16 GB …
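
One caveat before reading too much into those numbers: 8k × 1048576 is an 8 GB file, which fits entirely inside the ZFS machine's 32 GB of RAM, so the read pass largely measures the ARC rather than the disk. A fairer read test sizes the file past RAM (the count below is a hypothetical ~40 GB):

    time dd if=/dev/zero of=/testpool/test.dbf bs=8k count=5242880
    time dd if=/testpool/test.dbf of=/dev/null bs=8k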

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Paul van der Zwan
On 18 mrt 2010, at 10:07, Henrik Johansson wrote: > Hello, > On 17 mar 2010, at 16.22, Paul van der Zwan wrote: >> On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: >>> Someone correct me if I'm wrong, but it could just be a coincidence. That is, perhaps the data that you copied …

Re: [zfs-discuss] Is this a sensible spec for an iSCSI storage box?

2010-03-18 Thread Matt
Ultimately this could have 3TB of data on it and it is difficult to estimate the volume of changed data. It would be nice to have changes mirrored immediately but asynchronously, so as not to impede the master. The second box is likely to have a lower spec with fewer spindles for cost reasons …

[zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread erik.ableson
An interesting thing I just noticed here while testing out some FireWire drives with OpenSolaris. Setup: OpenSolaris 2009.06 and a dev version (snv_129); 2 500GB FireWire 400 drives with integrated hubs for daisy-chaining (net: 4 devices on the chain) - one SATA bridge, one PATA bridge. Created a zpool …
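
For anyone wanting to reproduce the measurement, the setup reduces to something like this (device names hypothetical):

    # Pool across the two FireWire disks, then a large sequential write.
    zpool create fwpool c9t0d0 c10t0d0
    time dd if=/dev/zero of=/fwpool/bigfile bs=1024k count=2048   # ~2 GB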

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Svein Skogen
On 18.03.2010 10:31, Joerg Schilling wrote: > Svein Skogen wrote: >> Please, don't compare proper backup drives to that rotating-head non-standard catastrophe... DDS was (in)famous for being a delayed-fuse tape-shredder. > DDS was a WOM (write only memory) …

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
Svein Skogen wrote: > Please, don't compare proper backup drives to that rotating-head non-standard catastrophe... DDS was (in)famous for being a delayed-fuse tape-shredder. DDS was a WOM (write only memory) type device. It did not report write errors and it had many read errors. Jörg

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Joerg Schilling
Damon Atkins wrote: > I vote for zfs needing a backup and restore command against a snapshot. > The backup command should output on stderr at least Full_Filename SizeBytes Modification_Date_1970secSigned, so backup software can build indexes, and stdout contains the data. This is something that …
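
To make the proposal concrete, the interface Damon is asking for might look like the following; the zfs backup subcommand and its output format are entirely hypothetical (no such command exists today):

    # Data stream on stdout; one index line per file on stderr, in the form
    # Full_Filename SizeBytes Modification_Date_1970secSigned
    zfs backup tank/fs@snap 2> /backup/fs.index > /backup/fs.stream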

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Henrik Johansson
Hello, On 17 mar 2010, at 16.22, Paul van der Zwan wrote: On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: Someone correct me if I'm wrong, but it could just be a coincidence. That is, perhaps the data that you copied happens to lead to a dedup ratio relative to the data that's already …