Correction:
On 1/6/2012 3:34 PM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:

Maybe one can do the following (assuming two disks, c0t0d0 and c0t1d0):
1) Split the rpool mirror: zpool split rpool newpool c0t1d0s0
1b) zpool destroy newpool
2) Partition the 2nd HDD (c0t1d0) into two slices (s0 and s1)
3) zpool create rpool2 c0t1d0s1   <=== should be c0t1d0s0
4) Use lucreate -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0   <== the -p argument should be rpool2 (the new pool)
5) lustatus
   c0t0d0s0
   new-zfsbe
6) luactivate new-zfsbe
7) init 6
Now you have two BEs, old and new.
You can create a dpool on slice 1, add the L2ARC and ZIL there, and repartition c0t0d0.
If you want, you can then create an rpool on c0t0d0s0 and a new BE on it, so the root pool ends up being named rpool again.
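
Put together as commands, a hedged sketch of those steps (same device
names as above; the pool and BE names are just examples):

  # 1-1b) split off the second half of the mirrored root pool, then discard it
  zpool split rpool newpool c0t1d0s0
  zpool destroy newpool

  # 2-3) relabel c0t1d0 into two slices with format(1M), then create the
  #      new root pool on s0 (per the correction above)
  zpool create rpool2 c0t1d0s0

  # 4) create a new boot environment in the new pool with Live Upgrade;
  #    -c names the current BE, -n the new BE, -p is the target root pool
  lucreate -c c0t0d0s0 -n new-zfsbe -p rpool2

  # 5-7) verify, activate the new BE, and reboot into it
  lustatus
  luactivate new-zfsbe
  init 6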

SWAP and DUMP can live in a pool other than the root pool.
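
As a sketch only (pool name and sizes are made up, and whether a DUMP
device outside the root pool is accepted on a given release is exactly
the open question), the volumes would be created and activated like this:

  # create swap and dump ZVOLs in the data pool
  zfs create -V 32G -b 4k dpool/swap
  zfs create -V 4G dpool/dump

  # activate them; dumpadm should refuse the device if it is not supported
  swap -a /dev/zvol/dsk/dpool/swap
  dumpadm -d /dev/zvol/dsk/dpool/dump

  # make the swap device permanent with an /etc/vfstab entry like:
  # /dev/zvol/dsk/dpool/swap  -  -  swap  -  no  -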

good luck


On 1/6/2012 12:32 AM, Jesus Cea wrote:

Sorry if this list is inappropriate. Pointers welcomed.

Using Solaris 10 Update 10, x86-64.

I have been a heavy ZFS user since it became available, and I love the
system. My servers are usually "small" (two disks) and usually hosted
in a datacenter, so I usually create a single zpool used both for the
system and for data. That is, the entire system lives in a single
two-disk zpool.

This has worked nicely so far.

But my new servers have SSDs too. Using them for L2ARC is easy enough,
but I cannot use them as a ZIL, because no separate ZIL (log) device
can be added to a root zpool. Ugh, that hurts!
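
A minimal sketch of the restriction being described, assuming the pool
is literally named "zpool" (as below) and a hypothetical SSD at c2t0d0
split into two slices:

  # adding an L2ARC (cache) device is the easy part:
  zpool add zpool cache c2t0d0s0

  # but adding a separate log (ZIL) device is refused for a root pool,
  # which is the limitation complained about above:
  zpool add zpool log c2t0d0s1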

So I am thinking about splitting my full two-disk zpool into two
zpools, one for the system and another for the data, each mirrored
across both disks. So I would have two slices per disk.

The system is in production in a datacenter I cannot physically
access, but I have remote KVM access. The servers are in production;
I can't reinstall, but I can arrange small downtimes (a few minutes)
for a while.

My plan is this (rough command sketches follow each group of steps):

1. Do a scrub to be sure the data is OK in both disks.

2. Break the mirror. Disk A will keep working; disk B becomes idle.

3. Partition disk B with two slices instead of the current full-disk slice.

4. Create a "system" zpool in B.

5. Snapshot "zpool/ROOT" in A and "zfs send it" to "system" in B.
Repeat several times until we have a recent enough copy. This stream
will contain the OS and the zones root datasets. I have zones.

6. Change GRUB to boot from "system" instead of "zpool". Cross fingers
and reboot. Do I have to touch the "bootfs" property?

Now, ideally, I would be able to have "system" as the root zpool. The
zones would still mount their datasets from the old pool.
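
A hedged sketch of steps 1-6, assuming the current pool really is
named "zpool", disk A is c0t0d0, disk B is c0t1d0, the new slices are
s0 (system) and s1 (data), and the snapshot/BE names are made up:

  # 1) scrub the existing pool and check the result
  zpool scrub zpool
  zpool status -v zpool

  # 2) break the mirror; disk A keeps serving
  zpool detach zpool c0t1d0s0

  # 3) repartition disk B into s0 + s1 with format(1M), then
  # 4) create the new root pool on the small slice
  zpool create system c0t1d0s0

  # 5) replicate the root hierarchy, then send incrementals until the
  #    copy is fresh enough
  zfs snapshot -r zpool/ROOT@mig1
  zfs send -R zpool/ROOT@mig1 | zfs receive -Fdu system
  zfs snapshot -r zpool/ROOT@mig2
  zfs send -R -i @mig1 zpool/ROOT@mig2 | zfs receive -Fdu system

  # 6) the new pool's bootfs property needs to point at the copied BE;
  #    install GRUB on disk B and adjust menu.lst to the new pool too
  zpool set bootfs=system/ROOT/s10x_u10 system     # BE name is an example
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0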

7. If everything is OK, I would "zfs send" the data from the old zpool
to the new one. After doing this a few times to get a recent copy, I
would stop the zones and do a final copy, to be sure I have all the
data and no changes in progress.

8. I would change the zone configuration to mount the data from the new zpool.

9. I would restart the zones and be sure everything seems ok.

10. I would restart the computer to be sure everything works.

So far, if this doesn't work, I can go back to the old situation
simply by changing the GRUB boot entry back to the old zpool.
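
A sketch of steps 7-9, assuming (made-up names) a data dataset
zpool/data delegated to a zone called myzone, and a data pool on the
second slice of disk B (its creation is implied by the plan):

  # create the data pool on disk B's second slice
  zpool create dpool c0t1d0s1

  # 7) replicate the data, then halt the zone and do a final
  #    incremental pass so nothing is in flight
  zfs snapshot -r zpool/data@mig1
  zfs send -R zpool/data@mig1 | zfs receive -Fdu dpool
  zoneadm -z myzone halt
  zfs snapshot -r zpool/data@mig2
  zfs send -R -i @mig1 zpool/data@mig2 | zfs receive -Fdu dpool

  # 8) repoint the zone at the new pool's dataset (interactive zonecfg)
  zonecfg -z myzone
    remove dataset name=zpool/data
    add dataset
    set name=dpool/data
    end
    commit
    exit

  # 9) boot the zone again and look around
  zoneadm -z myzone boot
  zlogin myzone zfs list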

11. If everything works, I would destroy the original "zpool" on A,
partition the disk, and recreate the mirroring, with B as the source.

12. Reboot to be sure everything is OK.
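
And a sketch of steps 11-12, re-mirroring from B back onto A (same
assumed names; anything still in use on the old pool, such as swap or
dump, would have to be moved off it first):

  # 11) retire the old pool and copy disk B's label onto disk A
  zpool destroy zpool
  prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2

  # attach A's slices so B is the source side of each mirror
  zpool attach system c0t1d0s0 c0t0d0s0
  zpool attach dpool  c0t1d0s1 c0t0d0s1

  # make disk A bootable again and wait for the resilver to finish
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
  zpool status

  # 12) init 6 for the final confidence reboot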

So, my questions:

a) Is this workflow reasonable, and would it work? Is the procedure
documented anywhere? Suggestions? Pitfalls?

b) *MUST* the SWAP and DUMP ZVOLs reside in the root zpool, or can
they live in a non-system zpool (always plugged in and available)? I
would like to have a fairly small "system" zpool (let's say 30 GB; I
use Live Upgrade and quite a few zones), but my swap is huge (32 GB,
and yes, I use it), and I would rather have SWAP and DUMP in the data
zpool, if that is possible & supported.

c) Currently Solaris decides to enable write caching on the SATA
disks, which is nice. What would happen if I still use the complete
disks BUT with two slices instead of one? Would the write cache still
be enabled? And yes, I have checked that cache flushes work as
expected, because I can "only" do around one hundred write+sync
operations per second.
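
One way to check after re-slicing (interactive, and only a sketch; the
menu is there for disks handled by the sd driver) is format's expert
mode:

  format -e          # select the disk, then:
    > cache
    > write_cache
    > display        # shows whether the write cache is enabled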

Advice?

-- Jesus Cea Avion _/_/ _/_/_/ _/_/_/
j...@jcea.es - http://www.jcea.es/     _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:j...@jabber.org         _/_/    _/_/          _/_/_/_/_/
.                              _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


--
Hung-Sheng Tsao Ph D.
Founder&  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
