Archie Cowan wrote:
> I just stumbled upon this thread somehow and thought I'd share my zfs over
> iscsi experience.
>
> We recently abandoned a similar configuration with several pairs of x4500s
> exporting zvols as iscsi targets and mirroring them for "high availability"
> with T5220s.
>
In general, such tasks would be better served by a T5220 (or the new T5440 :-)
and J4500s. This would change the data path from:

    client --<net>-- T5220 --<net>-- X4500 --<SATA>-- disks

to:

    client --<net>-- T5440 --<SAS>-- disks

With the J4500 you get the same storage density as the X4500, but with SAS
access (some would call this direct access). You will have much better
bandwidth and lower latency between the T5440 (server) and the disks while
still retaining the ability to multi-head the disks. The J4500 is a relatively
new system, so this option may not have been available at the time Archie was
building his.
 -- richard

> Initially, our performance was also good using iozone tests, but in testing
> the resilvering process with 10 TB of data it was abysmal. It took over a
> month for a 10 TB x4500 mirror that was mostly mirrored to resilver back
> into health with its pair. So, not exactly a highly available
> configuration... if one x4500 had gone unhealthy while the other was still
> resilvering, we'd have been in a real bad place.
>
> Also, "zfs send" operations on filesystems hosted by the iscsi zpool
> couldn't push out more than a few kilobytes per second. Yes, we had all the
> multipathing, VLANs, memory buffering, and all kinds of nonsense to keep
> the network from being the bottleneck, but to little benefit. This was our
> plan for keeping our remote sites' filesystems in sync, so it was vital.
>
> Maybe we did something completely wrong with our setup, but I'd suggest you
> verify how long it takes to resilver new x4500s into your iscsi pools and
> also see how well they do when your zpools are almost full. Our initial
> performance test results were too good to be true; they weren't the whole
> story.
>
> Good luck.
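As a footnote to the advice above: the resilver check Archie suggests is easy to script around `zpool status`, which reports percent done and an estimated time to go on the scrub/resilver line. A minimal sketch follows; the pool name `tank` and the sample status line are illustrative only (the exact wording of the line varies across ZFS releases), so treat the parsing as a starting point, not a guarantee.

```shell
#!/bin/sh
# Extract resilver progress from `zpool status` output.  For a live pool
# you would run `zpool status tank` directly; here we parse a captured
# sample line so the script is self-contained.
zpool_status_sample() {
cat <<'EOF'
  scrub: resilver in progress for 2h10m, 4.25% done, 49h12m to go
EOF
}

# Print the percent-done field from the resilver progress line.
zpool_status_sample | awk '/resilver in progress/ {
    for (i = 1; i <= NF; i++)
        if ($i ~ /%/) print $i
}'
```

Run periodically (e.g. from cron) and logged with a timestamp, this gives a crude resilver-rate history, which is enough to extrapolate whether a full 10 TB resilver will take hours or, as in Archie's case, weeks.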
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss