In my testing, VMware doesn't see the vm1 and vm2 filesystems. VMware
doesn't have an automounter and doesn't traverse NFSv4 sub-mounts
(whatever the formal name for them is). Actually, it doesn't support
NFSv4 at all!
Regards,
Tristan.
> The real benefit of using a separate zvol for each VM
> is the instantaneous cloning of a machine, and the clone
> will take almost no additional space initially. In our case we build a
You don't have to use ZVOL devices to do that.
As mentioned by others...
> zfs create my_pool/group1
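For example, a minimal sketch assuming a per-VM dataset under group1 (the
vm-template and vm3 names are made up):
zfs snapshot my_pool/group1/vm-template@gold
zfs clone my_pool/group1/vm-template@gold my_pool/group1/vm3
The clone shares all blocks with the snapshot, so it takes almost no
additional space until it diverges.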
Hello,
I recently asked myself this question: Is it possible to unset ZFS
properties? Or reset one to its default state without looking up what
that default state is?
I believe the answer is yes, via the zfs inherit command (I haven't
verified yet, but I think a case could be made to add func
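For example, a minimal illustration (the compression property and the
tank/data dataset are just placeholders):
zfs inherit compression tank/data   # drop the local setting, reverting to the inherited or default value
zfs get compression tank/data       # the SOURCE column should now read 'default' or 'inherited'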
I tried to rename and import rpool and to use those /etc/system settings,
but without success. :-(
I've also tried doing this from an installed OpenSolaris 5.11 snv_111b, and I
get the same result as with Solaris 10.
vladi...@opensolaris:~# zpool import
pool: rpool
id: 8451126758019843293
stat
On Wed, Aug 12, 2009 at 06:17:44PM -0500, Haudy Kazemi wrote:
> I'm wondering what are some use cases for ZFS's utf8only and
> normalization properties. They are off/none by default, and can only be
> set when the filesystem is created. When should they specifically be
> enabled and/or disable
On Wed, Aug 12, 2009 at 6:49 PM, Mattias Pantzare wrote:
It would be nice if ZFS had something similar to VxFS File Change Log.
This feature is very useful for incremental backups and other
directory walkers, providing they support FCL.
>>>
>>> I think this tangent deserves its own t
Hello,
I'm wondering what are some use cases for ZFS's utf8only and
normalization properties. They are off/none by default, and can only be
set when the filesystem is created. When should they specifically be
enabled and/or disabled? (i.e. Where is using them a really good idea?
Where is
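For reference, both are creation-time properties, so illustrating them means
something like this hypothetical dataset (name made up):
zfs create -o utf8only=on -o normalization=formD tank/shared
utf8only rejects file names that are not valid UTF-8, and normalization makes
file name comparisons use the chosen Unicode normalization form.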
>>> It would be nice if ZFS had something similar to VxFS File Change Log.
>>> This feature is very useful for incremental backups and other
>>> directory walkers, providing they support FCL.
>>
>> I think this tangent deserves its own thread. :)
>>
>> To save a trip to google...
>>
>> http://sfdo
> >Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a
> >RAIDZ, you will get only 1TB of usable space.
On Wed, Aug 12, 2009 at 05:30:14PM -0400, Adam Sherman wrote:
> I believe you will get .5 TB in this example, no?
The slices used on each of the three disks will be .5TB. Mult
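To spell out the arithmetic: raidz usable space is roughly (N - 1) x the
smallest member, so with a 1.5 TB, 1 TB and .5 TB drive you get
(3 - 1) x .5 TB = 1 TB usable, with another .5 TB consumed by parity and
the remaining 1.5 TB of the larger drives simply unused.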
Erik Trimble wrote:
Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a
RAIDZ, you will get only 1TB of usable space. Of course, there is
always the ability to use partitions instead of the whole disk, but I'm
not going to go into that. Suffice to say, RAIDZ (and practically
On Wed, Aug 12, 2009 at 2:15 PM, Mike Gerdts wrote:
> On Wed, Aug 12, 2009 at 11:53 AM, Damjan
> Perenic wrote:
>> On Tue, Aug 11, 2009 at 11:04 PM, Richard
>> Elling wrote:
>>> On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
>>>
I suspect that if we 'rsync' one of these filesystems to a second
I believe you will get .5 TB in this example, no?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-12, at 16:44, Erik Trimble wrote:
Eric D. Mudama wrote:
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I pu
Eric D. Mudama wrote:
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I put them all into one big array and have data
redundancy, etc? (RAID-Z?)
Yes. RAID-Z requires a minimum of 3 drives, and it can use
diffe
Your example is too simple :-)
On Aug 12, 2009, at 1:03 PM, Charles Menser wrote:
With four drives A,A,B,B where A is fast access and/or
high-throughput, and B is either slow to seek and/or has slower
transfer speed, what are the implications for mirrored ZFS pools?
In particular I am wonderin
Ed Spencer wrote:
I don't know of any reason why we can't turn 1 backup job per filesystem
into, say, up to 26 based on the Cyrus file and directory
structure.
No reason whatsoever. Sometimes the more the better, as per the rest of
this thread. The key here is to test and tweak till you
With four drives A,A,B,B where A is fast access and/or
high-throughput, and B is either slow to seek and/or has slower
transfer speed, what are the implications for mirrored ZFS pools?
In particular I am wondering how the IO performance will compare between:
zpool create mypool mirror A A mirror
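Guessing at the two layouts the truncated command above is comparing (device
names are placeholders; A1/A2 are the fast drives, B1/B2 the slow ones):
zpool create mypool mirror A1 A2 mirror B1 B2   # homogeneous pairs: one fast vdev, one slow vdev
zpool create mypool mirror A1 B1 mirror A2 B2   # mixed pairs: each write is gated by the slower member
Writes to a mirror must complete on both halves; reads can be serviced by
either side.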
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I put them all into one big array and have data
redundancy, etc? (RAID-Z?)
Yes. RAID-Z requires a minimum of 3 drives, and it can use
different drives. Depending
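A minimal sketch of such a pool (device names are placeholders; raidz sizes
every member down to the smallest drive in the set):
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0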
Take a look back through the mail archives for more discussion about
this topic (expanding zpools).
The short answers are:
John Klimek wrote:
I'm a software developer with a little bit of experience in Linux but I've been
wanting to build a fileserver and I've recently heard about ZFS.
Righ
My question is about SSDs, and the difference between using SLC for
Readzillas instead of MLC.
Sun uses MLCs for Readzillas for their 7000 series. I would think
that if SLCs (which are generally more expensive) were really
needed, they would be used.
That's not entirely accurate. In the 741
I'm a software developer with a little bit of experience in Linux but I've been
wanting to build a fileserver and I've recently heard about ZFS.
Right now I'm considering Windows Home Server because I really don't need every
file mirrored/backed-up but I do like what I heard about ZFS.
Anyways,
I figured out what I did wrong. The filesystem as received on the external HDD
had multiple snapshots, but I failed to check for them. So I had created a
snapshot in order to send/recv on System2. That doesn't work, obviously.
A new local send/recv of the filesystem's correct snapshot did the tr
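For anyone hitting the same thing, a quick way to spot stray snapshots on the
received filesystem before sending again (dataset name is a placeholder):
zfs list -t snapshot -r extpool/myfs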
On Wed, Aug 12, 2009 at 11:53 AM, Damjan
Perenic wrote:
> On Tue, Aug 11, 2009 at 11:04 PM, Richard
> Elling wrote:
>> On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
>>
>>> I suspect that if we 'rsync' one of these filesystems to a second
>>> server/pool that we would also see a performance increa
On Wed, Aug 12, 2009 at 04:53:20AM -0700, Sascha wrote:
> confirmed, it's really an EFI Label. (see below)
>
> format> label
> [0] SMI Label
> [1] EFI Label
> Specify Label type[1]: 0
> Warning: This disk has an EFI label. Changing to SMI label will erase all
> current partitions
What is the best way to use an external HDD for initial replication of a large
ZFS filesystem?
System1 had filesystem; System2 needs to have a copy of filesystem.
Used send/recv on System1 to put filesys...@snap1 on connected external HDD.
Exported external HDD pool and connected/imported on Syst
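In command form, roughly what that amounts to (pool and dataset names here
are placeholders):
# on System1, with the external pool imported as extpool
zfs snapshot datapool/fs@snap1
zfs send datapool/fs@snap1 | zfs recv extpool/fs
zpool export extpool
# move the drive to System2, then:
zpool import extpool
zfs send extpool/fs@snap1 | zfs recv datapool/fs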
On Tue, Aug 11, 2009 at 11:04 PM, Richard
Elling wrote:
> On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
>
>> I suspect that if we 'rsync' one of these filesystems to a second
>> server/pool that we would also see a performance increase equal to what
>> we see on the development server. (I don't k
Make sure you reboot after adding that to /etc/system
For making a safe clone, I know there must be other ways to do it, but you
could make a new zpool and do a zfs send/receive from the old zvols to new ones.
zfs snapshot -r yourz...@snapshot
zfs send -R yourz...@snapshot | zfs recv -vFd yournewzvol
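# -R sends the dataset plus all descendant datasets and their snapshots
# -F rolls the receiving dataset back before the receive if needed
# -d names the received datasets after the sent ones, minus the source pool name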
On the Amanda backup mailing list, one poster said he was having a problem
using zfs snapshots as the source file-system for backups.
Gnutar is the actual archiving program in this case.
They said that files which were open on the active file system were still
listed as open in the snapshot. Thi
My EqualLogic arrays do not disconnect when resizing volumes.
When I need to resize, on the Windows side I open the iSCSI control panel, and
get ready to click the 'logon' button. I then resize the volume on the
OpenSolaris box, and immediately after that is complete, on the Windows side,
re-lo
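On the OpenSolaris side, the resize itself is presumably just a zvol property
change (name and size are placeholders):
zfs set volsize=200G tank/iscsivol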
Hi David,
thanks for the tip, but I couldn't use just "zpool import rpool" because it
always said that the pool was in use by another system, so I could only try
with the force switch "-f".
Under Solaris 10, which I'm using now to recover this rpool, I have rpool named
as mypool01, so it should not collide w
Yes! That would be icing on the cake.
I've been using a ZFS volume exported via iSCSI as a Time Machine drive
for my MacBook. After a reboot last night (after installing an SSD as a
ZIL for a pool), the Mac can't see the volume. I think some combination
of the iSCSI target shutdown and the Mac's backup behavior has left the
volume
I don't know of any reason why we can't turn 1 backup job per filesystem
into, say, up to 26 based on the Cyrus file and directory
structure.
The Cyrus file and directory structure is designed with users located
under the directories A, B, C, D, etc., to deal with the millions of little
files issue.
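A hypothetical sketch of that split (the spool path depends entirely on the
local Cyrus partition layout, so treat it as a placeholder):
for d in /var/spool/imap/user/[a-z]; do
    tar cf /backup/mail-$(basename "$d").tar "$d" &   # one job per letter directory
done
wait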
Stephen Green wrote:
I'll let you know
how it works out. Suggestions as to pre/post installation IO tests
welcome.
The installation went off without a hitch (modulo a bad few seconds
after reboot.) Story here:
http://blogs.sun.com/searchguy/entry/homebrew_hybrid_storage_pool
I've got one
I wonder if one problem is that you already have an rpool when you are booted off
the CD.
could you do
zpool import rpool rpool2
to rename?
Also, if the system keeps rebooting on crash, you could add these to your /etc/system
(but not if you are booting from disk):
set zfs:zfs_recover=1
set aok=1
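* zfs_recover makes ZFS try to carry on past otherwise-fatal errors at import
* aok turns failed kernel assertions into warnings instead of panics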
th
Yes, I think OpenSolaris makes a good NFS server, in particular because of ZFS
raidz.
I had the rpool on two SATA disks in a mirror, on Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc.
Unfortunately the first disk, the one with the GRUB loader, has failed with
unrecoverable block write/read errors.
Now I have the problem of importing rpool after the first disk has failed.
So I decided to do: "z
Darren, I want to give you a short overview of what I tried:
1.
created a zpool on a LUN
resized the LUN on the EVA
exported the zpool
used format -e and label
tried to enlarge slice 0 -> impossible (see posting above)
2.
Same as 1., but I exported the zpool before resizing on the EVA
Same resul
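For what it's worth, if your build is new enough to have the autoexpand pool
property and "zpool online -e" (an assumption; they only appeared in recent
snv builds), the grow can be done without relabeling, using the pool and
device names from your earlier post:
zpool set autoexpand=on huhctmppool
zpool online -e huhctmppool c6t6001438002A5435A0001005Ad0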
Hi Darren,
thanks for your quick answer.
> On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha
> wrote:
> > Then creating a zpool:
> > zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
> >
> > zpool list
> > NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
C. Bergström wrote:
glidic anthony wrote:
thanks, but if it's experimental I'd rather not use it. My server is used
as an NFS share for an ESXi host, so I prefer that it stay stable.
But I think the best way is to add another HDD for the install
and build my raidz with these 3 disks
Do you really cons
roland writes:
> >SSDs with capacitor-backed write caches
> >seem to be fastest.
>
> how do you distinguish them from SSDs without one?
> I never saw this explicitly mentioned in the specs.
They probably don't have one then (or they should fire their
entire marketing dept).
Capacitors allow