Re: [zfs-discuss] possible zfs recv bug?

2010-11-23 Thread Tom Erickson
Thanks, James, for reporting this, and thanks, Matt, for the analysis. I filed 7002362 to track this. Tom On 11/23/10 10:43 AM, Matthew Ahrens wrote: I verified that this bug exists in OpenSolaris as well. The problem is that we can't destroy the old filesystem "a" (which has been renamed to

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-23 Thread Darren J Moffat
On 23/11/2010 21:01, StorageConcepts wrote: root@solaris11:~# zfs list mypool/secret_received cannot open 'mypool/secret_received': dataset does not exist root@solaris11:~# zfs send mypool/plaint...@test | zfs receive -o encryption=on mypool/secret_received cannot receive: cannot override receiv
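If encryption really can't be overridden on the receive side, one workaround sketch is to create the encrypted dataset first and copy the data at the file level, so every block is written (and therefore encrypted) fresh. The source mountpoint below is a guess, since the dataset name above is truncated:

  zfs create -o encryption=on mypool/secret_received    # prompts for a wrapping passphrase by default
  (cd /mypool/plaintext && tar cf - .) | (cd /mypool/secret_received && tar xf -)

This loses snapshots and properties compared to send/receive; it only illustrates that encryption, like compression, applies to newly written blocks.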

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-23 Thread StorageConcepts
I just tested crypto a little and I have some send/receive-specific questions about it. It would be great if someone could clarify. Currently ZFS has no background rewriter. However, the fact that ZFS applies most of the properties and tunables (like dedup or compression) at write time for all n
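As an illustration of that write-time behaviour (hypothetical dataset name), flipping a property on does nothing to blocks already on disk:

  zfs set compression=on mypool/data    # only blocks written from now on are compressed
  zfs get compressratio mypool/data     # the ratio improves only as new data is written
  # dedup behaves the same way: existing blocks are not deduplicated retroactively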

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-23 Thread StorageConcepts
Hi, I did a quick test (because I'm curious too). The hardware was a 3-disk SATA raidz1. What I did: 1) Created a pool with NexentaStor 3.0.4 (pool version 26, raidz1 with 3 disks) 2) Disabled all caching (primarycache=none, secondarycache=none) to force media access 3) Copied and extracted
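Steps 1 and 2 look roughly like this (device names here are placeholders, not the ones actually used):

  zpool create testpool raidz1 c0t1d0 c0t2d0 c0t3d0   # 3-disk raidz1
  zfs set primarycache=none testpool                  # no ARC caching of data
  zfs set secondarycache=none testpool                # no L2ARC caching either
  # with both caches disabled, every read has to hit the raidz1 media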

Re: [zfs-discuss] possible zfs recv bug?

2010-11-23 Thread Matthew Ahrens
I verified that this bug exists in OpenSolaris as well. The problem is that we can't destroy the old filesystem "a" (which has been renamed to "rec2/recv-2176-1" in this case). We can't destroy it because it has a child, "b". We need to rename "b" to be under the new "a". However, we are not re
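In other words, the receive code would need to do something like the following before destroying the old parent (recv-2176-1 is the name from the message; the path of the new "a" is an assumption here):

  zfs rename rec2/recv-2176-1/b rec2/a/b   # move the child under the new "a" first
  zfs destroy rec2/recv-2176-1             # only then can the renamed old "a" be destroyed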

Re: [zfs-discuss] drive replaced from spare

2010-11-23 Thread Bryan Horstmann-Allen
On 2010-11-23 13:28:38, Tony Schreiner wrote: > am I supposed to do something with c1t3d0 now? Presumably you want to replace the dead drive with one that works? zpool offline the dead drive, if it isn't already,
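A typical sequence, assuming the pool is called pool1 (the real pool name isn't shown above):

  zpool offline pool1 c1t3d0    # take the dead drive offline, if it isn't already
  # ... physically swap the failed drive in that slot ...
  zpool replace pool1 c1t3d0    # resilver onto the new disk in the same location
  zpool status pool1            # once the resilver finishes, the spare (c4t7d0) should
                                # detach on its own and return to the AVAIL spares list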

[zfs-discuss] drive replaced from spare

2010-11-23 Thread Tony Schreiner
I have an x4540 with a single pool made from a bunch of raidz1's with 2 spares (Solaris 10 u7). It's been running great for over a year, but I've had my first event. A day ago the system activated one of the spares, c4t7d0, but given the status below, I'm not sure what to do next. # zpool st

Re: [zfs-discuss] ashift and vdevs

2010-11-23 Thread Krunal Desai
On Tue, Nov 23, 2010 at 9:59 AM, taemun wrote: > I'm currently populating a pool with a 9-wide raidz vdev of Samsung HD204UI 2TB (5400rpm, 4KB sector) and a 9-wide raidz vdev of Seagate LP ST32000542AS 2TB (5900 rpm, 4KB sector) which was created with that binary, and haven't seen any of the

Re: [zfs-discuss] ashift and vdevs

2010-11-23 Thread taemun
Cheers for the links, David, but you'll note that I've commented on the blog you linked (i.e., I was aware of it). The zpool-12 binary linked from http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/ worked perfectly on my SX11 installation. (It threw some error on b134, so

Re: [zfs-discuss] ashift and vdevs

2010-11-23 Thread Krunal Desai
Interesting, I didn't realize that Soracle was working on/had a solution somewhat in place for 4K drives. I wonder what will happen first for me: Hitachi 7K2000s hitting a reasonable price, or 4K/variable-size sector support hitting so I can use Samsung F4s or Barracuda LPs. On Tue, Nov 23, 2010 at

Re: [zfs-discuss] ashift and vdevs

2010-11-23 Thread David Magda
On Tue, November 23, 2010 08:53, taemun wrote: > zdb -C shows an ashift value on each vdev in my pool, I was just wondering if it is vdev specific, or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sec

[zfs-discuss] possible zfs recv bug?

2010-11-23 Thread James Van Artsdalen
I am seeing a zfs recv bug on FreeBSD and am wondering if someone could test this in the Solaris code. If it fails there then I guess a bug report into Solaris is needed. This is a perverse case of filesystem renaming between snapshots. kraken:/root# cat zt zpool create rec1 da3 zpool create
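The script itself is cut off above; a plausible reconstruction of a rename-between-snapshots case like this (not the actual zt script) would be:

  zpool create rec1 da3
  zpool create rec2 da4
  zfs create rec1/a
  zfs create rec1/a/b
  zfs snapshot -r rec1@s1
  zfs send -R rec1@s1 | zfs recv -F rec2
  zfs rename rec1/a rec1/old        # move the original "a" aside ...
  zfs create rec1/a                 # ... create a new "a" with the same name ...
  zfs rename rec1/old/b rec1/a/b    # ... and put "b" under the new "a"
  zfs destroy rec1/old
  zfs snapshot -r rec1@s2
  zfs send -R -i @s1 rec1@s2 | zfs recv -F rec2   # this incremental receive is where it trips up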

[zfs-discuss] ashift and vdevs

2010-11-23 Thread taemun
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev specific or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set p
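For what it's worth, ashift is stored per top-level vdev in the pool configuration rather than as a single pool-wide value, which is why zdb prints one for each vdev (pool name below is a placeholder):

  zdb -C tank | grep ashift    # expect one ashift line per top-level vdev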

Re: [zfs-discuss] ZFS doesn't notice errors in mirrored log device?

2010-11-23 Thread Victor Latushkin
On Nov 13, 2010, at 7:33 AM, Edward Ned Harvey wrote: > Log devices are generally write-only. They are only read during boot, after an ungraceful crash. It is extremely difficult to get a significant number of GB used on the log device, because they are flushed out to primary storage s
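Because a slog is only read back after a crash, a latent failure on a single log device can go unnoticed until the one moment its contents are needed, which is the usual argument for mirroring it (hypothetical pool and device names):

  zpool add tank log mirror c2t0d0 c2t1d0   # mirrored slog instead of a single device
  zpool status tank                         # the "logs" section shows the log vdev and its health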