Yeah, it does something funky that I did not expect: zpool seems to be taking
slice 0 of that EMC LUN rather than the whole device...
When I created that LUN and ran format on the disk, the label looked like this:
format> verify
Primary label contents:
Volume name = < >
ascii name = <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
pcyl = 51200
ncyl = 51198
acyl = 2
nhead = 256
nsect = 16
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                 0        (0/0/0)             0
  4 unassigned    wm       0                 0        (0/0/0)             0
  5 unassigned    wm       0                 0        (0/0/0)             0
  6        usr    wm     128 - 51197       99.75GB    (51070/0/0) 209182720
  7 unassigned    wm       0                 0        (0/0/0)             0
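So slice 0 is only 128 MB, while slice 2 (backup) covers the full 100 GB, which matches
the tiny 123M pool in the output below. A quick way to double-check what the pool
actually grabbed (cXtYdZ is just a placeholder for however the LUN shows up on your box):

bash-3.00# zpool status mypool2            # shows which device/slice the pool holds
bash-3.00# prtvtoc /dev/rdsk/cXtYdZs2      # prints the same slice table that format showed above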
That is why, when I was trying to replace the other disk, zpool took slice 0 of
this disk, which is only 128 MB, and treated that as the pool, rather than taking
the whole disk or slice 2 or whatever it does with normal devices... The system is
connected to an EMC CLARiiON and I am using PowerPath software from EMC for
multipathing and so on... Ehh, I will try to replace the old internal disk with
this device and let's see how that works.
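Something like this is what I plan to run (device names are placeholders until I decide
whether to point zpool at the native path or at the PowerPath pseudo-device):

bash-3.00# zpool replace mypool <old-internal-disk> cXtYdZ   # whole device, no sN suffix, so ZFS writes its own EFI label
bash-3.00# zpool status mypool                               # watch the resilver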
Thanks so much for the help.
Chris
On Fri, 1 Jun 2007, Will Murnane wrote:
On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -
Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?
Will
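PS: for the UFS check, I assume something like this would do it (it wipes whatever is
on that slice, and the device name is again a placeholder for how the LUN shows up here):

bash-3.00# newfs /dev/rdsk/cXtYdZs2
bash-3.00# mount -F ufs /dev/dsk/cXtYdZs2 /mnt
bash-3.00# df -h /mnt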