Hello all, I'm working on a Sun Ultra 80 M2 workstation. It has eight 750 GB SATA disks installed. I've tried the following on ON build 72, Solaris 10 Update 4, and Indiana, all with the same results.
If I create a ZFS pool using one to seven hard drives (I've tried both one and seven) and then make an iSCSI target on that pool, then when a client machine tries to access the iSCSI volume, the memory usage on the Ultra 80 grows to the size of the ZFS volume.

For example, I create a RAID-Z pool:

    zpool create -f telephone raidz c9d0 c10d0 c11d0 c12d0 c13d0 c14d0 c15d0

I then create a two-terabyte volume (zvol) in that pool:

    zfs create -V 2000g telephone/jelley

And make it into an iSCSI target:

    iscsitadm create target -b /dev/zvol/dsk/telephone/jelley jelley

Now if I run 'iscsitadm list target', the iSCSI target appears like it should:

    Target: jelley
        iSCSI Name:  iqn.1986-03.com.sun:02:fcaa1650-f202-4fef-b44b-b9452a237511.jelley
        Connections: 0

But when I try to connect to it with my Windows 2003 server running the MS iSCSI initiator, I see the memory usage climb to the point that it totally exhausts all available physical memory (prstat):

       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
       511 root     2000G  106M sleep   59    0   0:02:58 1.1% iscsitgtd/15
      2139 root     8140K 4204K sleep   59    0   0:00:00 0.0% sshd/1
      2164 root     3276K 2740K cpu1    49    0   0:00:00 0.0% prstat/1
      2144 root     2672K 1752K sleep   49    0   0:00:00 0.0% bash/1
       574 noaccess  173M   92M sleep   59    0   0:03:18 0.0% java/25

Do you see the iscsitgtd process trying to use 2000 gigabytes of RAM? I can sit there holding down the spacebar in prstat while the Windows workstation is trying to access it, and the memory usage climbs at an astronomical rate (several hundred megabytes per minute) until it exhausts all the available memory on the box. The total RAM it tries to allocate depends entirely on the size of the iSCSI volume: if it's a 1000-megabyte volume, it only allocates a gig; if it's 600 gigs, it tries to allocate 600 gigs.

Now here is the real kicker. I took this down to as simple a configuration as possible: one single drive with a ZFS pool on it. The memory utilization was the same.
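For what it's worth, the proportionality is easy to check numerically from the prstat columns above. The helper below is just my own sanity-check arithmetic (the 2000G and 106M figures are the ones prstat printed; the helper function is mine, not anything iscsitgtd does):

```shell
# Convert a prstat-style size string (2000G, 106M, 8140K) to bytes.
to_bytes() {
  num=${1%[KMG]}
  case $1 in
    *K) echo $((num * 1024)) ;;
    *M) echo $((num * 1024 * 1024)) ;;
    *G) echo $((num * 1024 * 1024 * 1024)) ;;
    *)  echo "$1" ;;
  esac
}

zvol_size=$(to_bytes 2000G)   # the size handed to 'zfs create -V'
proc_vsz=$(to_bytes 2000G)    # SIZE column for iscsitgtd
proc_rss=$(to_bytes 106M)     # RSS column: what is actually resident

# SIZE matches the zvol byte-for-byte; RSS is what physical memory holds.
echo "virtual: $proc_vsz bytes  resident: $proc_rss bytes"
```

Note that prstat's SIZE column is virtual address space while RSS is resident memory, so SIZE tracking the zvol exactly suggests (though I'm only guessing here) that iscsitgtd is mapping the entire backing store, and it's the resident portion growing during access that exhausts the box.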
I then tried creating the iSCSI target on a UFS filesystem. Everything worked beautifully, and memory utilization was no longer directly proportional to the size of the iSCSI volume. If I create something small, like a 100-gig iSCSI target, the system does eventually get around to finishing and releases the RAM. What's really strange is that when I then try to access the iSCSI volume, the memory usage climbs megabyte by megabyte until it is exhausted, and then access to the iSCSI volume is terribly slow. I can copy a 300-meg file in just six seconds when the memory utilization of the iscsitgtd process is low. But if I try a 2.5-gig file, once it gets about 1500 megs into it, performance drops about 99.9% and it's incredibly slow... again, until it's done and iscsitgtd releases the RAM; then it's plenty zippy for small I/O operations.

Has anybody else been making iSCSI targets on ZFS pools? I've had a case open with Sun since Oct 3, if any Sun folks want to look at the details (case #65684887). I'm getting very desperate to get this fixed, as this massive amount of storage was the only reason I got this M80... Any pointers would be greatly appreciated.

Thanks,
John Tracy

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
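P.S. In case anyone wants to reproduce the UFS comparison I mentioned, here is roughly what I mean by a file-backed target. The path and size are made up for illustration (in my setup the file lives on a UFS mount), and the dd invocation is just a portable way to get a sparse file; on Solaris, 'mkfile -n 100g' does the same job:

```shell
# Hypothetical backing-store path; /tmp just keeps the sketch
# self-contained -- in practice this sits on a UFS filesystem.
backing=/tmp/jelley-backing
size=$((100 * 1024 * 1024 * 1024))   # 100 GB logical size

# Create a sparse backing file: no blocks are allocated up front, so
# this returns immediately. On Solaris, 'mkfile -n 100g "$backing"'
# is the native equivalent.
dd if=/dev/zero of="$backing" bs=1 count=0 seek="$size" 2>/dev/null

# Then hand the file, rather than a zvol, to the target daemon:
#   iscsitadm create target -b "$backing" jelley
```

The iscsitadm step is commented out since it only works on a box running the iSCSI target daemon; the point is simply that the backing store is an ordinary file instead of /dev/zvol/dsk/... .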