Hmm, I am having some problems. I did follow what you suggested, and here is what
I did:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0

errors: No known data errors
So now I have only one disk in my pool... Now, the c1t2d0 disk is a 72GB SAS
drive, and I am trying to replace it with a 100GB SAN LUN (emcpower0a):
bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
       2. c1t2d0 <SEAGATE-ST973401LSUN72G-0556-68.37GB>
       3. c1t3d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
       4. c2t5006016041E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
       5. c2t5006016941E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
       6. c3t5006016841E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
       7. c3t5006016141E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
       8. emcpower0a <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
Specify disk (enter its number): ^D
So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small
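For what it's worth, this is the kind of check I was thinking of running to compare
the sizes the two devices report (just a sketch; the slice names are my guesses
based on the format output above, not something I have verified):

prtvtoc /dev/rdsk/c1t2d0s0      # label and per-slice sector counts of the current pool disk
prtvtoc /dev/rdsk/emcpower0a    # the same for slice "a" of the SAN LUN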
Any idea what I am doing wrong? Why does it think that emcpower0a is too small?
Regards,
Chris
On Thu, 31 May 2007, Richard Elling wrote:
Krzys wrote:
Sorry to bother you, but something is not clear to me regarding this
process. OK, let's say I have two internal disks (73GB each) and I am
mirroring them... Now I want to replace those two mirrored disks with one LUN
that is on the SAN and is around 100GB. I do meet one requirement by
having more than 73GB of storage, but do I need only a single LUN of at least
73GB, or do I actually need two LUNs of 73GB or more, since I have it
mirrored?
You can attach any number of devices to a mirror.
You can detach all but one of the devices from a mirror. Obviously, when
the number is one, you don't currently have a mirror.
The resulting logical size will be equivalent to the smallest device.
My goal is simply to move the data from two mirrored disks onto one single SAN
device... Any idea if what I am planning to do is doable? Or do I need to
use zfs send and receive, just update everything, and switch when I am
done?
Or do I just add this SAN disk to the existing pool and then remove the mirror
somehow? I would just have to make sure that all data is off that disk...
Is there any option to evacuate data off that mirror?
The ZFS terminology is "attach" and "detach." A "replace" is an attach
followed by a detach.
It is a good idea to verify that the sync has completed before detaching.
zpool status will show the current status.
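In other words, the sequence would be roughly this (just a sketch, using your
pool name and the SAN device as the example):

zpool attach mypool c1t2d0 emcpower0a   # start a resilver onto the new device
zpool status mypool                     # check until the resilver is reported complete
zpool detach mypool c1t2d0              # drop the old disk, leaving only the LUN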
-- richard