> Hi experts,
>
> I have a few questions about ZFS and virtualization:
>
> [b]Virtualization and performance[/b]
> When filesystem traffic occurs on a zpool containing only spindles
> dedicated to that zpool, I/O can be distributed evenly. When the
> zpool is located on a LUN sliced from a RAID group shared by
> multiple systems, the I/O capacity of the zpool is limited.
> Avoiding or limiting I/O to such a LUN until the load from the
> other systems decreases would improve overall performance for the
> local zpool.
> I heard some rumors recently about using SMI-S to "de-virtualize"
> the traffic and allow Solaris to peek through the virtualization
> layers, thus optimizing I/O target selection. Maybe someone has
> some rumors to add ;-)
> Virtualization with the 6920 has been briefly discussed at
> http://www.opensolaris.org/jive/thread.jspa?messageID=14984#14984
> but without conclusion or recommendations.
I don't know the answer, but: wouldn't the overhead of using SMI-S, or some other method, to determine the load on the RAID group from the storage array negate any potential I/O benefit you could gain? Avoiding or limiting I/O to a heavily used LUN in your zpool would reduce the number of spindles backing your zpool, thus reducing aggregate throughput anyway(?). Storage array layout best practices suggest limiting, wherever possible, the number of LUNs you create from a single RAID group, exactly because of the I/O contention you mention.

I can understand building the smarts into ZFS to handle multipath LUNs (LUNs presented out of more than one controller on the array in active-active configurations, not simply dual-fabric multipathing) and load balancing that way. Does ZFS simply take advantage of MPxIO in Solaris for multipathing/load balancing, or are there plans to build support for it into the file system itself? (See the first sketch at the end of this message.)

> [b]Volume mobility[/b]
> One of the major advantages of ZFS is sharing of the zpool
> capacity between filesystems. I often run applications in small
> "application containers" located on separate LUNs which are zoned
> to several hosts so they can be run on different hosts. The idea
> behind this is failover, testing and load adjustment. Because only
> complete zpools can be migrated, capacity sharing between movable
> "containers" is currently impossible.
> Are there any plans to allow zpools to be concurrently shareable
> between hosts?

A clarification: you're not asking for shared-file-system behaviour, are you? Multiple systems zoned to see the same LUNs and simultaneously reading from and writing to them? I'm not sure I fully understand what you are asking.

I haven't tried it, but I assume that if you coordinated which server had "ownership" of a zpool, there would be nothing stopping you from creating a zpool on servera with a set of LUNs, creating your ZFS file systems within the pool, zoning the same set of LUNs to one or more other servers, and then coordinating who has ownership of the zpool (see the second sketch at the end of this message). For example: you're testing an application and its data on a ZFS file system on a 32-bit x86 server, and then you want to test it on an Opteron. So you zone the LUNs to the Opteron, stop using the zpool on the 32-bit server, and use it on the Opteron. I may be completely incorrect about the above.

Other than that scenario, I think your questions fit more closely with the shared-file-system topic that I brought up originally. Still, if you had production data in one ZFS file system in your pool as well as test data in a separate ZFS file system in the same pool (your "application container"), the disks making up the common pool would still have to be visible to multiple servers, and you would probably want to limit the other servers' exposure to the other ZFS file systems within that pool. I would think you wouldn't even want the other systems to see the LUNs containing the "production data" file system, but that isn't possible if it is all in one common pool.

-Nate
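P.S. To partly answer my own MPxIO question: as far as I know, ZFS has no multipathing logic of its own today; you enable MPxIO underneath it and build the pool on the resulting scsi_vhci device nodes, which ZFS treats like any other disk. A rough, untested sketch (the pool and device names are made up):

    # Enable MPxIO on the supported HBA ports; stmsboot prompts for
    # a reboot so the device tree can be rebuilt.
    stmsboot -e

    # After the reboot, each multipathed LUN appears as a single
    # scsi_vhci device, which you hand to ZFS like any other disk.
    zpool create testpool c4t60003BA27D2E50004E2F9A3C0DDA1234d0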
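P.P.S. Regarding coordinating "ownership" of a zpool between hosts: I believe the mechanism would be zpool export/import, roughly like this (untested; the pool and host names are made up):

    servera# zpool export apppool    # unmount, flush, release the pool

    # (rezone the LUNs so serverb can see them)

    serverb# zpool import apppool    # take ownership on the new host

As I understand it, zpool import refuses a pool that still looks active on another system unless you force it with -f, so the "who owns it now" coordination is still up to the administrators; nothing in ZFS arbitrates truly concurrent access.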