A bug is being filed on this by Sun. A senior Sun engineer was able to replicate the problem, and the only workaround they suggested was to temporarily mount the parent filesystem on the pool. This applies to Solaris 10 Update 8; I'm not sure about anything else.
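Something like the following is how I understood that workaround; the /dpool mountpoint is just a guess on my part, and the dataset names come from the example below:
$ zfs set mountpoint=/dpool dpool    # temporarily give the parent dataset a mountpoint
# (run the zoneadm install here)
$ zfs set mountpoint=none dpool      # restore the original setting afterwards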
$ zpool create dpool mirror c1t2d0 c1t3d0               # mirrored pool on two disks
$ zfs set mountpoint=none dpool                         # leave the parent dataset unmounted
$ zfs create -o mountpoint=/export/zones dpool/zones    # child dataset to hold zone roots
On Solaris 10 Update 8, when creating a zone with zonecfg, setting the zonepath to "/export/zones/test1", and then installing it with zoneadm install, the zfs zonepath file s
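For reference, a minimal sketch of the zone steps in question; the zone name test1 is my assumption, taken from the zonepath:
$ zonecfg -z test1 "create; set zonepath=/export/zones/test1; commit"
$ zoneadm -z test1 install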
Are there any performance penalties incurred by mixing vdev types? Say you start with a raidz1 of three 500 GB disks, and then over time you add a mirror of two 1 TB disks.
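For concreteness, that layout would look something like this (pool and device names are made up):
$ zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0    # initial pool: three 500 GB disks in raidz1
$ zpool add -f tank mirror c2t0d0 c2t1d0           # later: a mirror of two 1 TB disks; -f is needed because the replication levels differ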
Posted for my friend Marko:
I've been reading up on ZFS with the idea of building a home NAS.
My ideal home NAS would have:
- high performance via striping
- fault tolerance through selective use of the copies attribute (see the sketch after this list)
- low cost, by getting the most efficient space utilization possible (not raidz,
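A sketch of the copies idea from the second bullet, assuming a pool named tank (the dataset names are invented):
$ zfs create -o copies=2 tank/photos     # important data: ZFS stores two copies of every block
$ zfs create -o copies=1 tank/scratch    # scratch data: single copy, maximum usable space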
I was asked a few interesting questions by a co-worker regarding ZFS, and after much Googling I still can't find answers. I've seen several people ask these questions, only to have them answered indirectly.
If I have a pool that consists of a raidz1 with three 500 GB disks an
zpool status shows a few checksum errors against one device in a three-disk raidz1 vdev, and no read or write errors against that device. The pool is marked as degraded. Is there a difference if you clear the errors for the pool before you scrub, versus scrubbing and then clearing the errors? I'm not sure if t
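To make the two orderings concrete (assuming the pool is named tank):
# option A: clear the error counters first, then scrub
$ zpool clear tank
$ zpool scrub tank
# option B: scrub first, then clear
$ zpool scrub tank
$ zpool clear tank
$ zpool status -v tank    # inspect the results either way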
Does anyone know if there are any issues with mixing one 5+2 raidz2 vdev in the same pool with six 5+1 raidz1 vdevs? Would there be any performance hit?
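In other words, something like this (device names are invented; zpool add wants -f because the replication levels don't match):
$ zpool add -f tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0    # one 5+2 raidz2 alongside the existing 5+1 raidz1 vdevs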
I'm using the thumper as a secondary storage device and am therefore technically only worried about capacity and performance. With regard to availability, if it fails I should be okay as long as I don't also lose the primary storage during the time it takes to recover the secondary [knock on wood].
Thanks, everyone. Basically I'll be generating a list of files to grab, doing a wget to pull the individual files from an Apache web server, and then placing them in their respective nested directory locations. When it comes time for a restore, I generate another list of files scattered throughout t
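Roughly like this; the list file and target directory are placeholders:
$ wget -x -nH -P /backup -i /tmp/files-to-fetch.txt    # fetch each URL in the list and recreate its directory structure under /backup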
Point of clarification: I meant recordsize. I'm guessing (from what I've read) that the blocksize is auto-tuned.
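For the archive, recordsize is a per-dataset property you can inspect and set; the dataset name here is just an example:
$ zfs get recordsize tank/backups       # defaults to 128K
$ zfs set recordsize=8K tank/backups    # only affects files written after the change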
I'm getting ready to test a thumper (500 GB drives / 16 GB) as a backup store for small (avg. 2 KB) encrypted text files. I'm considering a zpool of 7 x 5+1 raidz1 vdevs, to maximize space and provide some level of redundancy, carved into about 10 ZFS filesystems. Since the files are encrypted, compre
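The layout I have in mind looks roughly like this; device names are placeholders, and only two of the seven 5+1 raidz1 vdevs are shown (the rest follow the same pattern):
$ zpool create backup \
    raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
$ zfs create backup/fs01    # then carve out the ~10 filesystems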