All,

my apologies in advance for the wide distribution - it
was recommended that I contact these aliases, but if
there is a more appropriate one, please let me know...

I have received the following EFI disk-related
questions from the EMC PowerPath team who
would like to provide more complete support
for EFI disks on Sun platforms...

I would appreciate help in answering these questions...

thanks...
-sharlene
~~~~~~~~~~~~~~~~~~~

1. Reserving, for the "wd" nodes, the minor numbers that would otherwise have been used for
    the "h" slice creates a "hole" in the minor-number space. Is this the long-term design, and
    therefore unlikely to change?

2. The efi_alloc_and_read(3EXT) function provides a pointer to a dk_gpt_t structure that
    contains an array of efi_nparts dk_part structures (see efi_partition.h). Will efi_nparts be 15 or 16?

3. Will dk_part[7] correspond to [EMAIL PROTECTED],0:h or to [EMAIL PROTECTED],0:wd or is it somehow reserved?

4. Does the fact that efi_nparts is a member of the dk_gpt_t structure suggest that it may
    not always be 15 or 16? (If so, we should probably query it and create the number
    of nodes it indicates.)

5. If efi_nparts can be larger than 16, what happens to the naming of the partition
    nodes ("q" through "t")? And what if it is smaller?

6. When changing label types (SMI -> EFI, EFI -> SMI), I assume the DKIOCSVTOC and
    DKIOCSETEFI ioctls are trapped by the driver so that the proper device nodes are
    created. What is the preferred method for cleaning up the old device nodes? For
    example, is a reboot always required? Changing from an EFI to an SMI disk label
    leaves the wd node hanging around; a reboot deletes it. Is this a bug? How are the
    /dev/dsk and /dev/rdsk links manipulated at label-change time? Is format involved in
    some way beyond issuing the two aforementioned ioctls, or is this accomplished
    entirely within the driver? If the latter, using what mechanism?

7. For disks that are not immediately accessible (e.g. iSCSI, not-ready, or fabric-attached
    disks), what is done to create the proper device nodes and links? Which of the following
    does Solaris do on SPARC and x86, and how?

   a. The attach fails and no device nodes are created. The attach is reattempted at
       some point in the future, triggered by some mechanism (different for SPARC and x86?).
       If so, what event is used to force the attach?

   b. The attach fails, and a default set of nodes is created and then adjusted once the
       device is accessible?

   c. The attach succeeds, and everything is cleaned up later when the device becomes
       available.

8. Is it possible to query the path_to_inst data via a system call to determine the minors created
    for a disk during early boot? This question is related to question 7: if the Solaris disk driver
    makes a default assumption about the label type before the device is available and creates
    nodes accordingly, how can this decision be determined? If device configuration is delayed
    until the device is available, the question is not germane.

9. On Opteron, does a plug-and-play event trigger the attach, ensuring that the proper nodes
    are created because the label type is readable by then?

10. On Opteron, why do the cXtXdXp[1-4] links/nodes not address the corresponding fdisk
     partitions? Multiple fdisk partitions can be created but are not addressable unless each
     in turn is set as the active partition and then addressed via p0.

11. An EFI-labeled disk must constitute the whole disk, so there can be only one fdisk
      partition; any existing fdisk partitions are overwritten. Is this by design? It seems to
      violate the semantics of fdisk partitioning.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss