Quick reset: Greg Shaw asked for more descriptive output from zpool status. I've already demonstrated how to do that. We also discussed the difficulty of building a reliable name-to-physical-location map without involving humans.
continuing on...

A Darren Dunham wrote:
> On Wed, Sep 26, 2007 at 09:53:00AM -0700, Richard Elling wrote:
>> A Darren Dunham wrote:
>>> It seems to me that would limit the knowledge to the currently imported
>>> machine, not keep it with the pool.
>>
>> The point is that the name of the vdev doesn't really matter to ZFS.
>
> I would assume the name of the disk doesn't matter to VxVM either. It's
> just a visible name for administrators.

IIRC, the name also appears in the /dev/vx directory structure. Though Mark Ashley claimed this "makes for horrendous support issues and breaks a lot of health check tools," I disagree. zpool status will be perfectly happy, and if someone wrote a health check tool which expects /dev/dsk/c* to mean anything, then they weren't thinking of modern (Solaris 2+) OSes. Recall that /devices contains the physical device entries; /dev is just a collection of symlinks to aid system administrators and applications which look for default devices. IMHO, it is perfectly reasonable to use /dev for this purpose, though in practice it could be any other directory of your choosing.

>>> Naming of VxVM disks is valuable and powerful because the names are
>>> stored in the diskgroup and are visible to any host that imports it.
>>
>> Ah, but VxVM has the concept of owned disks. You cannot share a disk
>> across VxVM instances (CVM excepted), and each OS can have only one VxVM
>> instance. OTOH, ZFS storage pools are at the vdev level, not the disk
>> level, so ZFS is not constrained by the disk boundary.
>
> I see no difference between ZFS and VxVM in these particulars.

I see a radical difference.

> Neither one by default allows simultaneous imports, both tend to use
> entire disks and have advantages when used that way, both allow a
> managed device (ZFS vdev, VxVM disk) to be made out of a portion of a
> disk. (I will admit it is less common to do that on VxVM than it is on
> ZFS.)

AFAIK, VxVM still only expects one private region per disk.
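To make the /dev-as-symlinks point concrete, here is a small sketch. The device path and the "rack1-slot3" vanity name are hypothetical examples of mine, not something from the thread; the point is only that any directory of symlinks can serve, since the kernel resolves the link to the /devices entry underneath.

```shell
# On Solaris, /dev/dsk entries are merely symlinks into /devices, e.g.:
#   $ ls -l /dev/dsk/c0t0d0s0
#   ... /dev/dsk/c0t0d0s0 -> ../../devices/pci@0,0/.../sd@0,0:a
# (the /devices target varies per machine; shown only as illustration)

# So a directory of your own can hold location-meaningful vanity names.
dir=$(mktemp -d)                               # stand-in for e.g. /dev/mynames
ln -s /dev/dsk/c0t0d0s0 "$dir/rack1-slot3"     # hypothetical name and device
readlink "$dir/rack1-slot3"                    # prints /dev/dsk/c0t0d0s0

# zpool resolves the symlink, so (as root, on real hardware) one could do:
#   zpool create tank "$dir/rack1-slot3"
```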
The private region stores info on the configuration of the logical devices on the disk, and its participation therein. ZFS places this data in the on-disk format on the vdev, which is radically different. With ZFS you could conceivably have a different storage pool per slice or partition.

> So amend my notion of naming a "disk" to naming a "vdev".
>
>> To take this discussion further, if a disk:vdev mapping is not 1:1, then
>> how would vanity naming of disks make any difference to ZFS?
>
> I'm only suggesting that a common use of both is in a 1:1 situation, and
> that being able to give names to storage is valuable in that case. I
> don't see that the value is diminished because we can create a
> configuration where it's less obvious how it would be used.

I think you are still thinking of the old way of doing things, where you *had to worry* about disks. To some degree, ZFS frees you from that restriction: you can worry about storage pools, at a higher level of abstraction. VxVM and SVM got us only part way down the road to abstraction. However, that doesn't relieve us of the serviceability issues surrounding physical disks or vdevs. Even if we had a vdev name service in zpool(1M) to provide human-readable lookups, we would still have the issue that the rest of the OS, especially FMA, knows the device by a different name.

> I see ZFS as having slightly less need for it at the moment only because
> deallocation of storage can only happen on mirrors or by destroying a
> pool. As the flexibility for moving/removing storage in a pool comes to
> be, I think better ways to view information about the disks/vdevs is
> going to be more important.

I think there is a use case lurking here, but it is not actually related to ZFS. fmtopo has some knowledge of topology, but it is far from perfect for random hardware, and seems particularly devoid of disk information. cfgadm also has some info, also limited.
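The pool-per-slice point above can be sketched as follows. This is my illustration, not from the thread: file-backed vdevs stand in for disk slices (ZFS requires at least 64 MB per vdev), and the actual zpool commands are shown commented since they need root and a ZFS-capable host.

```shell
# Because the ZFS label lives on each vdev rather than in a single
# per-disk private region, each slice (or even a plain file) can back
# its own independent pool. Create two 64 MB files as stand-in "slices":
dd if=/dev/zero of=/var/tmp/slice0 bs=1048576 count=64 2>/dev/null
dd if=/dev/zero of=/var/tmp/slice1 bs=1048576 count=64 2>/dev/null

# Two independent pools over what could be slices of one disk (as root):
#   zpool create poolA /var/tmp/slice0
#   zpool create poolB /var/tmp/slice1
# On real hardware the analogue would be c1t0d0s0 and c1t0d0s1.
```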
Virtualization and multipathing further abstract away physical knowledge. Is there any way to get all of the parties involved to support a name service to perform these mappings?
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss