On 9/16/12 10:40 AM, "Richard Elling" wrote:
> With a zvol of 8K blocksize, 4K sector disks, and raidz you will get 12K
> (data plus parity) written for every block, regardless of how many disks
> are in the set. There will also be some metadata overhead, but I don't
> know of a metadata sizing for
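The arithmetic behind the 12K figure: an 8K block occupies two 4K data
sectors plus one 4K parity sector, i.e. 3 x 4K = 12K allocated per block, a
fixed 50% overhead on this layout. Both inputs are easy to confirm; a
sketch using the pool and zvol names from this thread (zdb's output format
varies by release):

    # the zvol's block size (8K in this case)
    zfs get volblocksize miniraid/RichRAID
    # ashift=12 in the pool config means 2^12 = 4K sectors
    zdb -C miniraid | grep ashift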
On Sep 15, 2012, at 6:03 PM, Bob Friesenhahn wrote:
> On Sat, 15 Sep 2012, Dave Pooser wrote:
>
>> The problem: so far the send/recv appears to have copied 6.25TB of
>> 5.34TB. That... doesn't look right. (Comparing zfs list -t snapshot and
>> looking at the 5.34 ref for the snapshot v
On Sun, Sep 16, 2012 at 7:43 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> There's another lesson to be learned here.
>
> As mentioned by Matthew, you can tweak your reservation (or
> refreservation) on the zvol, but you do so at your own risk, possibly
> putting yourself in
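For anyone who does take that risk, it is at least reversible; a minimal
sketch, assuming the thread's pool/zvol names and an illustrative size:

    # record the current guarantee so it can be restored afterwards
    zfs get reservation,refreservation miniraid/RichRAID
    # drop it for the duration of the snapshot/send; the zvol can now
    # hit ENOSPC if the pool fills, which is exactly the risk above
    zfs set refreservation=none miniraid/RichRAID
    # put it back once the send completes (size here is illustrative)
    zfs set refreservation=5.34T miniraid/RichRAID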
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bill Sommerfeld
>
> > But simply creating the snapshot on the sending side should be no
> > problem.
>
> By default, zvols have reservations equal to their size (so that writes
> don't fail due
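That default is easy to see on a live zvol; a quick check using the
thread's names (usedbyrefreservation separates the guarantee from data):

    # on a non-sparse zvol, volsize and refreservation match, and
    # usedbyrefreservation shows how much of 'used' is just the guarantee
    zfs get volsize,refreservation,usedbyrefreservation miniraid/RichRAID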
On Sat, Sep 15, 2012 at 2:07 PM, Dave Pooser wrote:
> The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
> the 5.34 ref for the snapshot vs zfs list on the new system and looking at
> space used.)
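One way to make that comparison less ambiguous is to ask for the specific
properties on each side rather than eyeballing zfs list; a sketch, with a
hypothetical snapshot name and destination pool:

    # sending side: what the snapshot actually references
    zfs get referenced miniraid/RichRAID@xfer
    # receiving side: allocation including raidz parity/padding and any
    # refreservation created by the receive
    zfs get used,referenced,refreservation newpool/RichRAID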
On Fri, Sep 14, 2012 at 11:07 PM, Bill Sommerfeld wrote:
> On 09/14/12 22:39, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Dave Pooser
>>
>> Unfortu
On Sat, 15 Sep 2012, Dave Pooser wrote:
> The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
> the 5.34 ref for the snapshot vs zfs list on the new system and looking at
> space used.)
> Is this a p
> The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
> the 5.34 ref for the snapshot vs zfs list on the new system and looking at
> space used.)
>
> Is this a problem? Should I be panicking yet?
W
On 09/14/12 22:39, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Pooser
>
> Unfortunately I did not realize that zvols require disk space sufficient
> to duplicate the zvol, and
On 09/15/12 04:46 PM, Dave Pooser wrote:
> I need a bit of a sanity check here.
> 1) I have a RAIDZ2 of 8 1TB drives, so 6TB usable, running on an ancient
> version of OpenSolaris (snv_134 I think). On that zpool (miniraid) I have
> a zvol (RichRAID) that's using almost the whole FS. It's shared out v
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Pooser
>
> Unfortunately I did not realize that zvols require disk space sufficient
> to duplicate the zvol, and my zpool wasn't big enough. After a false start
> (zpool add is dangerous w
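One way around needing room for a full second copy is a sparse zvol, which
skips the automatic refreservation entirely, trading the space guarantee
for the ENOSPC risk discussed elsewhere in this thread; a sketch with an
illustrative name and size, not taken from the thread:

    # -s = sparse ("thin provisioned"): no refreservation is created
    zfs create -s -V 5.4T miniraid/newvol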
I need a bit of a sanity check here.
1) I have a RAIDZ2 of 8 1TB drives, so 6TB usable, running on an ancient
version of OpenSolaris (snv_134 I think). On that zpool (miniraid) I have
a zvol (RichRAID) that's using almost the whole FS. It's shared out via
COMSTAR Fibre Channel target mode. I'd l
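For reference, the usable-space arithmetic holds: 8 drives in RAIDZ2
leaves 6 data drives, hence roughly 6TB before overhead. The migration
itself is the usual snapshot-and-pipeline; a sketch with hypothetical
snapshot and destination names:

    # freeze a consistent image of the zvol, then replicate it
    zfs snapshot miniraid/RichRAID@xfer
    zfs send miniraid/RichRAID@xfer | zfs recv newpool/RichRAID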