Hi,
My understanding is that ZFS itself is a great file system, combining the file
system and volume manager along with the numerous features added to it.
Along similar lines, existing fs/vm and array snapshots are still in use, and
customers are requesting similar support for ZFS.
So it would be of great help to get s
Hi,
I have done the following (which is what my case requires):
Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1.
Created an array-level snapshot of that device to another device using "dscli",
which was successful.
Now I make the snapshot device visible to another host (host2).
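For illustration, this is the access step I am attempting on host2 once the
snapshot device is visible (a rough sketch; it assumes the copy shows up under
the default /dev/dsk paths):
#zpool import
(lists pools that are visible but not yet imported; the snapshot copy should
show up here as smpool)
#zpool import smpool
(imports the copy on host2; "zpool import -f" may be needed if the pool is
reported as potentially active on another system)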
Hi,
How would it help with instant recovery or point-in-time recovery, i.e.
restoring data at the device/LUN level?
Currently this is easy, as I can unwind the primary device stack, restore the
data at the device/LUN level, and recreate the stack.
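For comparison, the flow I would expect on the ZFS side (a rough sketch; smpool
is the pool from my earlier mail, and the restore itself is an array-specific
step done with the array tools such as dscli):
#zpool export smpool
(unwind the ZFS side of the stack; all file systems in the pool are unmounted)
(restore the LUN from the array snapshot using the array tools)
#zpool import smpool
(recreate the stack by importing the pool from the restored devices)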
Thanks & Regards,
sridhar.
Hi, I am looking along similar lines; my requirement is:
1. Create a zpool on one or many devices (LUNs) from an array (the array can be
IBM, HP EVA, EMC, etc., not SS7000).
2. Create file systems on the zpool.
3. Once the file systems are in use (I/O is happening), I need to take a
snapshot at the array level
a
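A minimal sketch of steps 1 and 2 above (pool, file system, and device names
are placeholders for the real array LUNs):
#zpool create mypool c1t1d1 c2t2d2
(step 1: a pool built directly on one or more array LUNs)
#zfs create mypool/fs1
#zfs create mypool/fs2
(step 2: file systems on the pool)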
Hi Andrew,
Regarding your point:
You will not be able to access the hardware
snapshot from the system which has the original zpool mounted, because
the two zpools will have the same pool GUID (there's an RFE outstanding
on fixing this).
Could you please pr
Hi Darren,
In short, I am looking for a way to freeze and thaw a ZFS file system so that,
for a hardware snapshot, I can do the following (the closest workaround I know
of, using export/import, is sketched after the list):
1. Run zfs freeze.
2. Run the hardware snapshot on the devices belonging to the zpool where the
given file system resides.
3. Run zfs thaw.
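For comparison, the only quiesce approach I am aware of today is export/import,
which takes the whole pool offline for writes (a rough sketch, smpool as
before):
#zpool export smpool
(in place of step 1: the file systems are unmounted and no writes can occur)
(step 2: run the hardware snapshot on the devices belonging to smpool)
#zpool import smpool
(in place of step 3: the pool is brought back and writes resume)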
Thanks & Regards,
sridhar.
Hi Darren,
Thank you for the details. I am aware of export/import of the zpool, but with
zpool export the pool is not available for writes.
Is there a way I can freeze a ZFS file system at the file system level?
As an example, a JFS file system can be frozen using the "chfs -a freeze ..."
option.
So if I am taking a hardwa
Hi,
How can I quiesce/freeze all writes to ZFS and the zpool if I want to take
hardware-level snapshots or array snapshots of all the devices under a pool?
Are there any commands, ioctls, or APIs available for this?
Thanks & Regards,
sridhar.
Thanks for your help.
I will check this out.
Regards,
sridhar.
Hi Darren,
Thanks for the info.
Sorry, the below might be lengthy:
Yes, I am looking for the actual implementation rather than how to use zpool
split.
My requirement is not at the ZFS file system level, and it is also not about
ZFS snapshots.
As I understand it, if my zpool, say mypool, is created using "zpool create mypool
Hi,
I was wondering how zpool split works, or how it is implemented.
If a pool pool1 is on a mirror having two devices, dev1 and dev2, then using
zpool split I can split off a new pool, say pool-mirror, on dev2.
How can split change the metadata on dev2 and rename/replace it so that it is
associated with the new name, i.e.
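For reference, the usage side of what I am asking about (dev1 and dev2 stand
for real device names):
#zpool create pool1 mirror dev1 dev2
(the original mirrored pool)
#zpool split pool1 pool-mirror dev2
(dev2 is detached and becomes the single-device pool pool-mirror)
#zpool import pool-mirror
(the new pool is left exported by the split and has to be imported before use)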
Hi,
I have downloaded and am using the OpenSolaris VirtualBox image, which shows
the versions below:
zfs version 3
zpool version 14
cat /etc/release shows:
2009.06 snv_111b X86
Is this the final build available?
Can I upgrade it to a higher version of zfs/zpool?
Can I get any updated vdi image to get a newer zfs/zpoo
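On the upgrade question, the commands I know of only raise the on-disk versions
to what the installed software already supports; they do not pull in newer
software (a rough sketch):
#zpool upgrade -v
(lists the pool versions this build supports)
#zfs upgrade -v
(lists the file system versions this build supports)
#zpool upgrade -a
#zfs upgrade -a
(upgrade all pools and all file systems to the latest supported versions)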
zfs clone is at the ZFS file system level. What I am looking for here is to
rebuild the file system stack from bottom to top. Once I take the (hardware)
snapshot, the snapshot devices carry the same copy of data and metadata.
If my snapshot device is dev2 then the metadata will have smpoolsnap. If I
need to us
Hi,
If I compare, a zpool is like a volume group or disk group; as an example, on
AIX we have the AIX LVM.
AIX LVM provides a command like recreatevg, to which you provide the snapshot
devices.
In the case of HP LVM or Linux LVM, we can create a new vg/lv structure, add
the snapshotted devices to it, and then we impor
Hi,
zfs upgrade shows the version as 4, zpool upgrade shows the version as 15,
and /etc/release shows Solaris 10 10/09 s10s_u8wos_08a SPARC.
My zpool doesn't have support for split.
Could you please suggest how to upgrade my Solaris box to the latest zfs and
zpool versions to get
updated supp
Hi Cindys,
Thank you for the reply.
zfs/zpool should have the ability to access snapshot devices under a
configurable name.
As an example, if the file system stack is created as
vxfs (/mnt1)
|
vxvm (lv1)
|
(a device from an array / LUN, say dev1),
If I take an array-level or hardware-level sn
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I take a hardware snapshot of the devices under a zpool (where the
snapshot devices will be an exact copy, including metadata, i.e. the zpool and
its associated file systems), is there any way to rename the zpool name of the
snap
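For question 1, the only renaming mechanism I know of today goes through
export/import (the names below are placeholders):
#zpool export mypool
#zpool import mypool mypool_newname
(the pool comes back imported under the new name)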
Hi,
What is the right way to check the versions of zfs and zpool?
I am writing a piece of code which calls the zfs command line. Before actually
initiating anything and going ahead, I want to check which versions of zfs and
zpool are present on the system.
As an example, "zpool split" is not present on prior
Hi Cindys,
Thank you for the step-by-step explanation; it is quite clear now.
Regards,
sridhar.
Hi,
I am able to call zfs snapshot on an individual file system/volume using zfs
snapshot,
or
I can call zfs snapshot -r filesys...@snapshot-name to take all the snapshots
recursively.
Is there a way I can specify more than one file system/volume, from the same
pool or from different pools, in a single zfs snapshot call?
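For reference, the two forms I am using today (names are placeholders):
#zfs snapshot tank/home@snap1
(one file system per invocation)
#zfs snapshot -r tank@snap1
(recursive: tank and every descendant file system/volume, all in the same pool)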
Hi Cindys,
Thanks for your mail. I have some further queries here based on your answer.
Once zpool split creates the new pool (per the example below it is
mypool_snap), can I access mypool_snap just by importing it on the same host,
host1?
What is the current access method for the newly created mypool_snap
Hi,
I have the below kind of configuration (as an example):
c1t1d1 and c2t2d2 are two LUNs visible (unmasked) to both host1 and host2.
I created a pool mypool as below:
zpool create mypool mirror c1t1d1 c2t2d2
Then I did a zpool split:
zpool split mypool mypool_snap
Once I run zpool split, is t
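For illustration, the access path I have in mind after the split (assuming the
split succeeded and the split-off device is visible on the host doing the
import):
#zpool import
(lists importable pools; mypool_snap should appear here)
#zpool import mypool_snap
(the split-off pool is left exported by default, so it has to be imported
before use)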
Hi,
With recent additions, using zpool split I can split a mirrored zpool and
create a new pool with the given name.
Is there any direct or indirect mechanism by which I can create a snapshot of
the devices under an existing zpool, such that new devices are created, so that
I can recreate the stack (new zpool and
I am using the Solaris 10 9/10 SPARC x64 version.
The following are the output of the release file and uname -a, respectively.
bash-3.00# cat /etc/release
Solaris 10 10/09 s10s_u8wos_08a SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use
Thank you for your quick reply.
When I run the below command, it shows:
bash-3.00# zpool upgrade
This system is currently running ZFS pool version 15.
All pools are formatted using this version.
How can I upgrade to newer zpool and zfs versions so that I can have zpool
split capabilities?
I am
Hi,
I have a mirror pool tank with two devices underneath, created in this way:
#zpool create tank mirror c3t500507630E020CEAd1 c3t500507630E020CEAd0
Created file system tank/home
#zfs create tank/home
Created another file system tank/home/sridhar
#zfs create tank/home/sridhar
After that I have cr
Hi,
The following is what I am looking for.
I need to take a hardware snapshot of the devices which are provided under a
ZFS file system.
I can identify the type of configuration using zpool status and get the list
of disks.
For example: if a single LUN is assigned to the zpool and a file system is
created for
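A minimal sketch of that identification step (mypool is a placeholder name):
#zpool status mypool
(shows the pool layout and the underlying disks/LUNs)
#zpool list -H -o name
(script-friendly list of the pool names on the system)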