On 04/08/2011 07:45 PM, Sašo Kiselkov wrote:
> On 04/08/2011 07:22 PM, J.P. King wrote:
>>
>>> No, I haven't tried a S7000, but I've tried other kinds of network
>>> storage and from a design perspective, for my applications, it
>>> doesn't even make a single bit of sense. I'm talking about
>>> high-volume real-time video streaming, where you stream 500-1000
>>> (x 8 Mbit/s) live streams from a machine over UDP. Having to go over
>>> the network to fetch the data from a different machine is kind of
>>> like building a proxy which doesn't really do anything - if the data
>>> is available from a different machine over the network, then why the
>>> heck should I put another machine in the processing path? For my
>>> applications, I need a machine with as few processing components
>>> between the disks and network as possible, to maximize throughput,
>>> maximize IOPS and minimize latency and jitter.
>>
>> I can't speak for this particular situation or solution, but I think
>> in principle you are wrong. Networks are fast. Hard drives are slow.
>> Put a 10G connection between your storage and your front ends and
>> you'll have the bandwidth [1]. Actually, if you really were hitting
>> 1000 x 8 Mbit/s I'd put in two, but that is just a question of scale.
>> In a different situation I have boxes which peak at around 7 Gbit/s
>> down a 10G link (in reality I don't need that much, because it is all
>> about the IOPS for me). That is with just twelve 15k disks. Your
>> situation appears to be pretty ideal for storage hardware, so it
>> should be perfectly achievable from an appliance.
>
> I envision this kind of scenario (using my fancy ASCII art skills :-)):
>
>   || ========= streaming server ======== ||
> +-----+  SAS  +-----+ PCI-e +-----+ Ethernet +--------+
> |DISKS| ===>  | RAM | ====> | NIC | =======> | client |
> +-----+       +-----+       +-----+          +--------+
>
> And you are advocating for this kind of scenario:
>
>  || ==== network storage ===== ||
> +-----+  SAS  +-----+ PCI-e +-----+ Ethernet
> |DISKS| ===>  | RAM | ====> | NIC | ======== ...
> +-----+       +-----+       +-----+
>
>         || ===== streaming server ====== ||
>         +-----+ PCI-e +-----+ PCI-e +-----+ Ethernet +--------+
> ... ==> | NIC | ====> | RAM | ====> | NIC | =======> | client |
>         +-----+       +-----+       +-----+          +--------+
>
> I'm not constrained on CPU (so hooking up multiple streaming servers
> to one backend storage doesn't really make sense). So what exactly
> does this scenario add for my needs, besides needing extra hardware in
> both the storage and the server (10G NICs, cabling, modules, etc.)?
> I'm not saying no - I'd love to improve the throughput, IOPS and
> latency characteristics of my systems.
>
>> I can't speak for the S7000 range. I ignored that entire product line
>> because, when I asked about it, the markup was insane compared to
>> just buying X4500/X4540s. The price for Oracle kit isn't remotely
>> tenable, so the death of the X45xx range is a moot point for me
>> anyway, since I couldn't afford it.
>>
>> [1] Just in case: you also shouldn't be adding any particularly
>> significant latency either. Jitter, maybe, depending on the specifics
>> of the streams involved.
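As a rough illustration of the bandwidth arithmetic in the exchange above - only the stream count and per-stream bitrate are taken from the thread; the usable-link-fraction figure is purely an assumed number - a quick back-of-the-envelope sketch in Python:

    # Rough sanity check of the aggregate streaming bandwidth discussed above.
    # Only the stream count and per-stream bitrate come from the thread;
    # the usable-link-fraction figure is an assumption, not a measurement.
    streams = 1000               # concurrent live streams
    mbit_per_stream = 8          # Mbit/s per stream
    link_gbit = 10.0             # one 10GbE link
    usable_fraction = 0.85       # assumed usable fraction of line rate

    aggregate_gbit = streams * mbit_per_stream / 1000.0
    links_needed = aggregate_gbit / (link_gbit * usable_fraction)
    print("aggregate %.1f Gbit/s -> %.2f 10G links"
          % (aggregate_gbit, links_needed))
    # -> aggregate 8.0 Gbit/s -> 0.94 10G links: a single 10G link is
    #    already close to full, which is presumably why two links were
    #    suggested for headroom between storage and front ends.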
P.S. I forgot to add that I also need plenty of storage space, so while 15k
disks are great for throughput and IOPS, they are way too expensive. Also, I
hit the IOPS wall before I hit any throughput limits (a 3x 4-disk raid-z pool
maxes out at around 200 concurrent read streams + 30 live-ingest write
streams).

--
Saso
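As a rough illustration of the IOPS-before-bandwidth point in the P.S. above, here is a back-of-the-envelope sketch - every constant in it (per-vdev IOPS, effective read size) is an assumption chosen for illustration, not a measurement from the pool in question:

    # Rough sketch of why a 3x 4-disk raid-z pool runs out of seeks before
    # it runs out of bandwidth. All constants below are assumptions.
    vdevs = 3                    # three raidz vdevs
    iops_per_vdev = 100          # assumed random-read IOPS of one raidz vdev
                                 # (roughly one 7200rpm spindle's worth)
    stream_mbit = 8              # per-stream bitrate
    effective_read_mb = 1.0      # assumed effective read size per seek
                                 # (recordsize plus prefetch aggregation)

    stream_mb_per_s = stream_mbit / 8.0               # 1 MB/s per stream
    seeks_per_stream = stream_mb_per_s / effective_read_mb
    seek_budget = vdevs * iops_per_vdev               # ~300 seeks/s for the pool
    max_streams = seek_budget / seeks_per_stream
    aggregate_mb = max_streams * stream_mb_per_s
    print("~%.0f streams (~%.0f MB/s) before the seek budget runs out"
          % (max_streams, aggregate_mb))
    # -> ~300 streams at ~300 MB/s with these guesses: the same order of
    #    magnitude as the ~230 streams reported, and well below what 12
    #    spindles could deliver if the reads were purely sequential.

Under those assumptions the achievable stream count scales roughly with the number of vdevs and the effective read size per seek, while the aggregate bandwidth stays far below the drives' sequential limit - consistent with hitting the IOPS wall first.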