On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a server with, let's say,
12 disk slots, and we'll be using 2T disks or something like that.
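For illustration only, one commonly cited motive for slicing (not necessarily the one being made here) is to give ZFS a partition slightly smaller than the raw disk, so a replacement drive that comes up a few sectors short still fits. A minimal sketch using FreeBSD-style gpart; the disk name da0, the label and the 1.9T size are made up:

  # GPT scheme plus a deliberately undersized 1.9T partition on a 2T drive
  gpart create -s gpt da0
  gpart add -t freebsd-zfs -s 1900g -l data0 da0
  # build the pool on the labelled partition rather than the whole disk
  zpool create tank /dev/gpt/data0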
On Oct 12, 2012, at 5:50 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Pedantically, a pool can be made in a file, so it works the same...
>
> Pool can only be made in a file by a system that is able to create a pool.
On Fri, Oct 12, 2012 at 3:28 AM, Jim Klimov wrote:
> In fact, you can (although not recommended due to balancing reasons)
> have tlvdevs of mixed size (like in Freddie's example) and even of
> different structure (i.e. mixing raidz and mirrors or even single
> LUNs) by forcing the disk attachment.
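A minimal sketch of the forced attachment described above, assuming an existing pool named tank whose only top-level vdev is a raidz, plus two spare disks; all names are hypothetical:

  # adding a mirror to a raidz pool normally fails with a
  # mismatched-replication-level error, so -f forces it
  zpool add -f tank mirror da4 da5
  # the pool now stripes across a raidz tlvdev and a mirror tlvdev
  zpool status tank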
Jim, I'm trying to contact you off-list, but it doesn't seem to be working.
Can you please contact me off-list?
2012-10-12 16:50, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
So he's looking for a way to do a "zfs receive" on a linux system,
transported over ssh. Suggested answers so far include building a VM on
the receiving side to run openindiana (or whatever), or using
zfs-fuse.
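For illustration, assuming the zfs-fuse route leaves the Linux box with a working 'zfs' command and a pool named backup; hostnames and dataset names below are made up:

  # on the illumos/Solaris sender: snapshot, then stream over ssh into
  # a 'zfs receive' running under zfs-fuse on the Linux receiver
  zfs snapshot tank/data@monday
  zfs send tank/data@monday | ssh backuphost zfs receive -F backup/data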
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of andy thomas
>
> According to a Sun document called something like 'ZFS best practice' I
> read some time ago, best practice was to use the entire disk for ZFS and
> not to partition or slice it in any way.
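For comparison, a minimal sketch of the two approaches with hypothetical Solaris-style device names:

  # whole disk: ZFS labels the device itself and can safely enable
  # the drive's write cache
  zpool create tank c0t1d0
  # slice: ZFS is confined to slice 0 of a disk partitioned beforehand
  zpool create tank c0t1d0s0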
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Pedantically, a pool can be made in a file, so it works the same...
Pool can only be made in a file by a system that is able to create a pool.
Point is, his receiving system runs linux and doesn't have any zfs; his
receiving system can't create a pool, in a file or anywhere else.
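A minimal sketch of a file-backed pool, on a system that does have zfs; the path and pool name are made up:

  # create a 1 GB backing file and build a pool on top of it
  mkfile 1g /var/tmp/poolfile
  zpool create filepool /var/tmp/poolfile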
On 2012-Oct-12 08:11:13 +0100, andy thomas wrote:
>This is apparently what had been done in this case:
>
> gpart add -b 34 -s 600 -t freebsd-swap da0
> gpart add -b 634 -s 1947525101 -t freebsd-zfs da1
> gpart show
Assuming that you can be sure that you'll keep 512B sectors
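As an aside on that assumption: a common way to stay safe if the drives ever move to 4K sectors is to align partitions explicitly with gpart's -a option; the sizes and device name here are hypothetical:

  # start and size get rounded to 1 MiB boundaries, which stays aligned
  # on both 512B-sector and 4K-sector drives
  gpart add -a 1m -s 2g -t freebsd-swap da0
  gpart add -a 1m -t freebsd-zfs da0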
2012-10-12 11:11, andy thomas wrote:
Great, thanks for the explanation! I didn't realise you could have a
sort of 'stacked pyramid' vdev/pool structure.
Well, you can - the layers are "pool" - "top-level VDEVs" - "leaf
VDEVs", though on trivial pools like single-disk ones, the layers
kinda merge.
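A small sketch of that layering, with hypothetical disk names: the pool sits on top of two mirror top-level vdevs, each built from two leaf vdevs.

  zpool create tank mirror da0 da1 mirror da2 da3
  # zpool status then shows the hierarchy, roughly:
  #   tank            <- pool
  #     mirror-0      <- top-level vdev
  #       da0         <- leaf vdev
  #       da1         <- leaf vdev
  #     mirror-1      <- top-level vdev
  #       da2
  #       da3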
On Thu, 11 Oct 2012, Richard Elling wrote:
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom
wrote:
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way.
On Thu, 11 Oct 2012, Freddie Cash wrote:
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way. Does this advice