Hello,

A question on putting ZFS on EMC pseudo-devices:
I have a T1000 where we were given 100 GB of SAN space from EMC:

# format < /dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       2. c1t5006016030602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       3. c1t5006016830602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       4. emcpower0a <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /pseudo/[EMAIL PROTECTED]
Specify disk (enter its number):

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00052300875 [.HOSTNAME.]
Logical device ID=60060160B1221300084781BEAFAADD11 [LUN 87]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.  Mode    State  Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c1t5006016030602568d0s0 SP A0 active alive 0 0
3073 [EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c1t5006016830602568d0s0 SP B0 active alive 0 0

When I tried to create a pool on the straight device I got an error:

# zpool create ldom-sparc-111 emcpower0a
cannot open '/dev/dsk/emcpower0a': I/O error
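(For anyone who trips over the same thing: the check I should have run first is below. A sketch only; "emcpower0c" is my assumption that the pseudo-device's "c" partition addresses the whole disk, the way it does for native cNtNdN names.)

# prtvtoc /dev/rdsk/emcpower0c
(expect this to fail with an I/O error if the LUN has never been labelled)
# format -e
(select emcpower0a; in expert mode the "label" command offers both SMI and EFI)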
Tracing the failing create shows where the I/O error comes from:

# zpool create ldom-sparc-111 emcpower0a
[...]
open("/dev/zfs", O_RDWR)                        = 3
open("/etc/mnttab", O_RDONLY)                   = 4
open("/etc/dfs/sharetab", O_RDONLY)             Err#2 ENOENT
stat64("/dev/dsk/emcpower0as2", 0xFFBFB2D8)     Err#2 ENOENT
stat64("/dev/dsk/emcpower0a", 0xFFBFB2D8)       = 0
brk(0x000B2000)                                 = 0
open("/dev/dsk/emcpower0a", O_RDONLY)           Err#5 EIO
fstat64(2, 0xFFBF9F90)                          = 0
cannot open 'write(2, " c a n n o t   o p e n  ".., 13)        = 13
/dev/dsk/emcpower0awrite(2, " / d e v / d s k / e m c".., 19)  = 19
': write(2, " ' :  ", 3)                                       = 3
I/O errorwrite(2, "   I / O   e r r o r", 9)                   = 9
write(2, "\n", 1)                               = 1
close(3)                                        = 0
llseek(4, 0, SEEK_CUR)                          = 0
close(4)                                        = 0
brk(0x000C2000)                                 = 0
_exit(1)

I then put a label on it, and things worked fine:

Current partition table (original):
Total disk cylinders available: 51198 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0        usr    wm       0 - 51174      99.95GB    (51175/0/0) 209612800
  1 unassigned    wu       0               0         (0/0/0)             0
  2     backup    wu       0 - 51197     100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0               0         (0/0/0)             0
  4 unassigned    wm       0               0         (0/0/0)             0
  5 unassigned    wm       0               0         (0/0/0)             0
  6 unassigned    wm       0               0         (0/0/0)             0
  7 unassigned    wm       0               0         (0/0/0)             0

# zpool status
  pool: ldom-sparc-111
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        ldom-sparc-111  ONLINE       0     0     0
          emcpower0a    ONLINE       0     0     0

errors: No known data errors

We have another T1000 with SAN space as well, and I don't remember having to label that disk (though I could be misremembering):

Total disk sectors available: 524271582 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34     249.99GB         524271582
  1 unassigned    wm                 0          0                   0
  2 unassigned    wm                 0          0                   0
  3 unassigned    wm                 0          0                   0
  4 unassigned    wm                 0          0                   0
  5 unassigned    wm                 0          0                   0
  6 unassigned    wm                 0          0                   0
  8   reserved    wm         524271583       8.00MB         524287966

# zpool status
  pool: ldom-sparc-110
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        ldom-sparc-110  ONLINE       0     0     0
          emcpower0a    ONLINE       0     0     0

errors: No known data errors

Any way to find out why the labels ended up different on the two systems?

Thanks for any info.

-- 
David Magda <dmagda at ee.ryerson.ca>

Vimes pulled out his watch and stared at it. It was turning out
to be one of those days... the sort that you got every day.
                -- Terry Pratchett, _The_Fifth_Elephant_
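P.S. Comparing the two tables again: the first box has a cylinder-based SMI/VTOC label, while the second has a sector-based EFI label with the usual slice-8 "reserved" area. My understanding is that zpool create writes an EFI label on its own when handed a whole disk, which would explain never labelling the second LUN by hand (though on the first box the open(2) failing with EIO apparently got in the way before ZFS could label anything). A quick way to tell the two apart (same emcpower0c whole-disk assumption as above):

# prtvtoc /dev/rdsk/emcpower0c
(a VTOC label reports geometry in cylinders, as on the first box; an EFI
label reports sector counts plus a slice 8 tagged "reserved", as on the
second)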