Re: [zfs-discuss] zfs log on another zfs pool

2010-05-01 Thread mark.musa...@oracle.com

What problem are you trying to solve?




On 1 May 2010, at 02:18, Tuomas Leikola wrote:



Hi.

I have a simple question: is it safe to place a log device on another
ZFS pool?


I'm planning to place the log on my mirrored root partition. I'm
running the latest OpenSolaris.
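
For reference, a separate log device is normally attached with `zpool add`; a minimal sketch, where the pool name "tank" and the device c8t0d0s3 are hypothetical:

```shell
# Attach a dedicated ZIL (log) device to an existing pool
# (pool and device names here are hypothetical)
zpool add tank log c8t0d0s3

# Confirm the log vdev shows up in the pool layout
zpool status tank
```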

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Default 'zpool' I want to move it to my new raidz pool 'gpool' how?

2010-05-02 Thread mark.musa...@oracle.com
You can't get rid of rpool. That's the pool you're booting from. Root  
pools can only be single disks or n-way mirrors.
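
As a sketch of the supported alternative, a root pool can be made redundant by attaching a second disk to form a two-way mirror (device names hypothetical):

```shell
# Attach a second disk to the root pool, turning the single disk into a mirror
zpool attach rpool c8t0d0s0 c8t1d0s0

# On OpenSolaris, also install the boot loader on the new disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
```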


As to your other question, you can view the snapshots by using the  
command "zfs list -t all", or turn on the listsnaps property for the  
pool. See  http://docs.sun.com/app/docs/doc/817-2271/ghbxt?a=view for  
more info.
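
The two options above would look like this; note the zpool property is spelled `listsnapshots` in current man pages ("listsnaps" is shorthand):

```shell
# Show all datasets, including snapshots
zfs list -t all

# Or make plain 'zfs list' include snapshots for this pool by default
zpool set listsnapshots=on rpool
```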




Regards,
Mark

On 2 May 2010, at 15:58, Giovanni  wrote:


Hi guys

I am new to the OpenSolaris and ZFS world. I have 6x2TB SATA HDDs on my
system; I picked a single 2TB disk and installed OpenSolaris on it
(so rpool was created by the installer).


I went ahead and created a new pool "gpool" with raidz (the kind of
redundancy I want). Here's the output:


@server:/# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
gpool                    119K  7.13T  30.4K  /gpool
rpool                   7.78G  1.78T    78K  /rpool
rpool/ROOT              3.30G  1.78T    19K  legacy
rpool/ROOT/opensolaris  3.30G  1.78T  3.15G  /
rpool/dump              2.00G  1.78T  2.00G  -
rpool/export             491M  1.78T    21K  /export
rpool/export/home        491M  1.78T    21K  /export/home
rpool/export/home/G      491M  1.78T   491M  /export/home/G
rpool/swap              2.00G  1.78T   101M  -
@server:/#

@server:/# zpool status
  pool: gpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gpool       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c8t0d0s0  ONLINE       0     0     0

errors: No known data errors
@server:/#


Now, I want to get rid of "rpool" in its entirety: I want to migrate
all settings, boot records, and files from rpool to "gpool", and then
add rpool's member c8t0d0s0 to my existing "gpool" so that I have a
raidz of 6 drives.


Any guidance on how to do it? I tried to do zfs snapshot

# zfs snapshot rpool@move


But I don't see the snapshot anywhere under rpool/.zfs (there is
no .zfs folder).
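
As far as I can tell, the .zfs directory is hidden by default and would need the snapdir property set to show up; a sketch, assuming the default /rpool mountpoint:

```shell
# Make the .zfs control directory visible in the dataset's mountpoint
zfs set snapdir=visible rpool

# Snapshots should now appear here
ls /rpool/.zfs/snapshot
```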


Thanks
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
