The only other ZFS pool in my system is a mirrored rpool (two 500 GB disks). This
is for my own personal use, so it's not like the data is mission critical in
some sort of production environment.
The advantage I can see in going with raidz2 + a spare over raidz3 with no
spare is that I would spend much
I think that Device Manager in Windows 7 doesn't do any harm. Instead I used
this utility to try to format an external USB hard drive.
http://www.ridgecrop.demon.co.uk/fat32format.htm
I used the GUI format
http://www.ridgecrop.demon.co.uk/guiformat.htm
I clicked and started this GUI format wit
Thanks for your replies.
Regards
Victor
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks Darren.
On Jul 27, 2010, at 7:13 AM, Darren J Moffat wrote:
> On 27/07/2010 13:28, Edward Ned Harvey wrote:
>> The opposite is also true. If you have any special properties set on your
>> main pool, they won't automatically be set on your receiving pool. So I
>> personally recommend saving "zpool get all" and "zfs get all" into a txt
>> file, and store it along with
On 2010-Jul-27 19:43:50 +0800, "Andrey V. Elsukov" wrote:
>On 27.07.2010 1:57, Peter Jeremy wrote:
>> Note that ZFS v15 has been integrated into the development branches
>> (-current and 8-stable) and will be in FreeBSD 8.2 (or you can run it
>
>ZFS v15 is not yet in 8-stable. Only in HEAD. Perhaps it will be merged
>into stable after 2 months.
On 27.07.2010 1:57, Peter Jeremy wrote:
> Note that ZFS v15 has been integrated into the development branches
> (-current and 8-stable) and will be in FreeBSD 8.2 (or you can run it
ZFS v15 is not yet in 8-stable. Only in HEAD. Perhaps it will be merged
into stable after 2 months.
--
WBR, Andrey
Hi Ketan,
The supported LU + zone configuration migration scenarios
are described here:
http://docs.sun.com/app/docs/doc/819-5461/gihfj?l=en&a=view
I think the problem is that the /zones is a mountpoint.
You might have better results if /zones was just a directory.
See the examples in this section.
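A quick way to confirm the guess above (that /zones is a separate mountpoint rather than a plain directory) is to check how it is mounted; a sketch, with the pool name assumed:

```shell
# If /zones appears as its own filesystem here, it is a mountpoint,
# which is the configuration Live Upgrade has trouble with:
df -h /zones
# List any ZFS dataset mounted at /zones (pool name 'rpool' is assumed):
zfs list -r -o name,mountpoint rpool | grep '/zones'
```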
Hi all,
It seems this issue has to do with CR 6860996 (
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6860996 ),
but the following tips from Cindy Swearingen did the trick:
"temporary clones are not automatically destroyed on error":
a temporary clone is created for an incremental receive
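For reference, a sketch of cleaning up such a leftover temporary clone by hand; the '%recv'-style name is an assumption, so use whatever name actually shows up:

```shell
# A failed incremental receive can leave a temporary clone behind;
# its name contains a '%'. List everything to find it:
zfs list -t all
# Then destroy the leftover clone manually (name below is assumed):
zfs destroy tank/fs/%recv
```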
Hi all,
I'm running snv_134 and I'm testing the COMSTAR framework. During
those tests I created an iSCSI zvol and exported it to a server.
Now that the tests are done I have renamed the zvol, and so far so
good. Things get really weird (at least to me) when I try to destroy
this zvol.
*r...@san
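The message is cut off, but a common cause of `zfs destroy` failing on a COMSTAR-exported zvol is that the logical unit backed by it still exists, so ZFS reports the dataset as busy. A hedged sketch (the GUID and names are placeholders):

```shell
# Find the logical unit whose backing store is the renamed zvol:
stmfadm list-lu -v
# Delete that LU so nothing holds the zvol open (GUID is a placeholder):
stmfadm delete-lu 600144F0AABBCCDD
# Now the zvol should be destroyable:
zfs destroy sanpool/renamed-vol
```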
Thanks, Michael. That's exactly right.
I think my requirement is: writable snapshots.
And I was wondering if someone knowledgeable here could tell me if I could do
this magically by using clones without creating a tangled mess of branches,
because clones in a way are writable snapshots.
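A minimal sketch of using a clone as a writable snapshot; dataset names are assumptions:

```shell
# Take a read-only snapshot, then clone it to get a writable branch
# that initially shares all its blocks with the snapshot:
zfs snapshot tank/data@base
zfs clone tank/data@base tank/data-branch
# If the branch becomes the main line, promote it so the dependency
# on the origin snapshot is reversed and the original can be removed:
zfs promote tank/data-branch
```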
On 27/07/2010 13:28, Edward Ned Harvey wrote:
The opposite is also true. If you have any special properties set on your
main pool, they won't automatically be set on your receiving pool. So I
personally recommend saving "zpool get all" and "zfs get all" into a txt
file, and store it along with
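A sketch of the suggestion, with pool and path names assumed:

```shell
# Save pool- and dataset-level properties next to the backup so the
# receiving side can be configured to match after a restore:
zpool get all tank    > /backup/tank.zpool-props.txt
zfs get all -r tank   > /backup/tank.zfs-props.txt
```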
I have two file systems on my primary disk: / and /zones. I want to convert to
a ZFS root with Live Upgrade, but when I run it, it creates the ZFS BE and,
instead of creating a separate /zones dataset, it uses the same dataset from the
primary BE (c3t1d0s3)... Is there any way I can do it so th
On 27.07.10 14:21, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of devsk
I have many core files stuck in snapshots eating up gigs of my disk
space. Most of these are BE's which I don't really want to delete right
now.
v writes:
> Hi,
> A basic question regarding how the ZIL works:
> For an asynchronous write, will the ZIL be used?
> For a synchronous write, if the IO is small, will the whole IO be placed on
> the ZIL, or just a pointer saved into the ZIL? What about large IOs?
>
Let me try.
ZIL: the code and data structures
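The answer is truncated, but it presumably continues along these lines: asynchronous writes are not logged in the ZIL at all, while for synchronous writes small records are copied into the log and large ones are written in place with only a block pointer logged. The crossover behavior can be nudged per dataset; a sketch with an assumed dataset name:

```shell
# logbias=latency (default): small sync writes land in the ZIL (or a
# dedicated slog device) for the lowest latency.
# logbias=throughput: data goes straight to the main pool and the ZIL
# only records pointers, favoring bandwidth over latency.
zfs get sync,logbias tank/db
zfs set logbias=throughput tank/db
```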
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dav Banks
This message:
> How's that working for you? Seems like it would be as straightforward
> as I was thinking - only possible.
And this message:
> Yeah, that's starting to sound like a fairly simple but equally robust
> solution.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of devsk
>
> I have many core files stuck in snapshots eating up gigs of my disk
> space. Most of these are BE's which I don't really want to delete right
> now.
Ok, you don't want to delete them
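The reply is cut off, but the key point is likely this: snapshots are immutable, so the blocks of a core file are only freed once every snapshot referencing them is gone; deleting the file in a clone does not help by itself. A sketch of the clone-and-promote route, usable only if the original BE and its snapshots can ultimately be destroyed (names are assumptions):

```shell
# Clone the snapshot, remove the unwanted core files in the clone,
# then promote it so the original filesystem can be destroyed:
zfs clone rpool/ROOT/be@snap rpool/ROOT/be-clean
rm /rpool/ROOT/be-clean/var/cores/core.*
zfs promote rpool/ROOT/be-clean
# Space is only reclaimed once the old BE and its snapshots go away:
zfs destroy -r rpool/ROOT/be
```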
True! I don't need the same level of redundancy on the backup as the primary.
Yeah, that's starting to sound like a fairly simple but equally robust
solution. That may be the final solution. Thanks!
Thanks Cindy - I've been looking for an admin guide!
I'll play with the split command - sounds interesting.
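For anyone following along, a minimal sketch of what 'zpool split' does; pool names are placeholders:

```shell
# Detach one half of every mirror in 'tank' and turn those disks into
# a new, importable pool, useful for rotating offsite backup disks:
zpool split tank backuppool
# The new pool is exported by default; import it here or on another host:
zpool import backuppool
```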
How's that working for you? Seems like it would be as straightforward as I was
thinking - only possible.
The reason for wanting raidz was to have some redundancy in the backup without
the big hit on space that duplicating the data would have.
The other issue is the switching process. More likely to have screwups if every
week I, or someone else when I'm out, have to break and reset 24 mirrors
inste
On 27/07/2010 09:20, v wrote:
Hi all,
In Jeff's blog: http://blogs.sun.com/bonwick/entry/raid_z
it mentions that the original RAID-Z code was 599 lines. Where can I find it to
study? The current code is rather large.
From the source code repository, use 'hg log' and 'hg cat' to find and
show the version
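A sketch of Darren's suggestion against a local clone of the onnv-gate Mercurial repository (the repository path and file name are assumptions about where the RAID-Z code lives):

```shell
cd onnv-gate                                  # assumed local clone
f=usr/src/uts/common/fs/zfs/vdev_raidz.c
hg log "$f"                                   # full change history
# 'hg log' prints newest first, so the last revision listed is the
# original integration; show the file as it was then:
first=$(hg log --template '{rev}\n' "$f" | tail -1)
hg cat -r "$first" "$f"
```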
Hi all,
In Jeff's blog: http://blogs.sun.com/bonwick/entry/raid_z
it mentions that the original RAID-Z code was 599 lines. Where can I find it to
study? The current code is rather large.
regards
Victor
I have many core files stuck in snapshots eating up gigs of my disk space. Most
of these are in BEs which I don't really want to delete right now.
Is there a way to get rid of them? I know snapshots are RO but can I do some
magic with clones and reclaim my space?