I did something like the following:

format -e /dev/rdsk/c5t0d0p0
fdisk
1 (create)
F (EFI)
6 (exit)
partition
label
1 (EFI label)
y
0 (slice number)
usr (tag)
wm (flag)
64 (first sector)
4194367e (end sector)
1 (slice number)
usr
wm
4194368
117214990e (end sector)
label
1
y



             Total disk size is 9345 cylinders
             Cylinder size is 12544 (512 byte) blocks

                                               Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
          1                 EFI               0  9345    9346    100

partition> print
Current partition table (original):
Total disk sectors available: 117214957 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                64        2.00GB          4194367
  1        usr    wm           4194368       53.89GB          117214990
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  8   reserved    wm         117214991        8.00MB          117231374

This isn't the output from my actual run, but these are exactly the steps I 
followed.
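As a sanity check, the sector figures in the table above do line up with the sizes shown; the arithmetic is:

```shell
# Verify the slice sizes implied by the sector ranges in the table above.
# Slice 0: sectors 64..4194367, slice 1: sectors 4194368..117214990.
s0=$(( 4194367 - 64 + 1 ))          # 4194304 sectors
s1=$(( 117214990 - 4194368 + 1 ))   # 113020623 sectors
echo "slice0: $(( s0 * 512 / 1024 / 1024 )) MiB"   # 2048 MiB, i.e. the 2.00GB shown
echo "slice1: $(( s1 * 512 / 1024 / 1024 )) MiB"   # 55185 MiB, i.e. the 53.89GB shown
```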

Thanks for the info about slices; I may give that a go later on. I'm reluctant, 
though, because I have clear evidence (as in zpools set up this way, right now, 
working, without issue) that GPT partitions of the style shown above work, and I 
want to understand why it doesn't work in my setup rather than simply ignoring 
the problem and moving on.
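On the interoperability point, the same GPT layout can be expressed from the Linux side. This is only a rough sketch (parted syntax, a placeholder device /dev/sdX, hypothetical partition names, sector numbers taken from my table above), not something I've run:

```shell
# Rough Linux-side equivalent of the GPT layout above, using parted.
# /dev/sdX is a placeholder -- double-check the device before running anything.
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart cache0 64s 4194367s         # the 2GB slice
parted -s /dev/sdX mkpart cache1 4194368s 117214990s  # the ~54GB slice
parted -s /dev/sdX unit s print                       # confirm the layout in sectors
```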

From: Fajar A. Nugraha [mailto:w...@fajar.net]
Sent: Sunday, 17 March 2013 3:04 PM
To: Andrew Werchowiecki
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] partioned cache devices

On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki 
<andrew.werchowie...@xpanse.com.au> 
wrote:
I understand that p0 refers to the whole disk... in the logs I pasted, I'm not 
attempting to mount p0. I'm trying to work out why I'm getting an error 
attempting to mount p2 after p1 has mounted successfully. Further, this has 
been done before on other systems in the same hardware configuration, in the 
exact same fashion, and I've gone over the steps trying to make sure I haven't 
missed something, but I can't see a fault.

How did you create the partition? Are those marked as Solaris partitions, or 
something else (e.g. fdisk on Linux uses type "83" by default)?

I'm not keen on using Solaris slices because I don't understand what they do 
to the pool's OS interoperability.


Linux can read Solaris slices and import Solaris-made pools just fine, as long 
as you're using a compatible zpool version (e.g. zpool version 28).

--
Fajar
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
