On Fri, Dec 25, 2009 at 11:43 PM, Brent Jones <br...@servuhome.net> wrote:

> > Hang on... if you've got 77 concurrent threads going, I don't see how
> > that's a "sequential" I/O load.  To the backend storage it's going to
> > look like the equivalent of random I/O.  I'd also be surprised to see
> > 12 1TB disks supporting 600MB/sec throughput and would be interested in
> > hearing where you got those numbers from.
> >
> > Is your video capture doing 430MB or 430Mbit?
> >
> > --
> > --Tim
> >
>
> Think he said 430Mbit/sec, which, if these are security cameras, would
> be a good-sized installation (30+ cameras).
> We have a similar system, albeit running on Windows. Writing about
> 400Mbit/sec using just six 1TB SATA drives is entirely possible, and it
> is working quite well on our system without any frame loss or much
> latency.
>

Once again, Mb or MB?  They're two completely different numbers.  As for
getting 400Mbit out of six SATA drives, that's not really impressive at all.
If you're saying you got 400MB/sec, that's a different story entirely, and
while that's possible with sequential I/O and a proper RAID setup, it isn't
happening with random I/O.
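
For what it's worth, the back-of-the-envelope numbers (using the figures
quoted earlier in the thread, and assuming the writes are spread roughly
evenly across the drives) look like this:

    430 Mbit/s  /  8 bits per byte  ~=  54 MB/s aggregate write rate
     54 MB/s    /  6 drives         ~=   9 MB/s per drive
    600 MB/s    / 12 drives          =  50 MB/s per drive sustained

~9 MB/s per drive is modest for streaming writes, while 50 MB/s per drive
sustained is plausible for purely sequential streams but optimistic for a
random pattern coming from dozens of concurrent writers.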


>
> The write lag is noticeable with ZFS, however, particularly the behavior
> of the transaction group writes. If you have a big write that needs to
> land on disk, it seems all other I/O, CPU, and "niceness" is thrown out
> the window in favor of getting all that data onto disk.
> I was on a watch list for a ZFS I/O scheduler bug through my paid Solaris
> support; I'll try to find that bug number, but I believe some
> improvements were made in builds 129 and 130.
>
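
As a rough illustration of the transaction group behavior described above:
on builds of that era you could watch the periodic write bursts with zpool
iostat and, if needed, experiment with the txg tunables in /etc/system. The
tunable names below are the ones that existed in contemporaneous OpenSolaris
builds, so treat this as a sketch and check your specific release first.

    # 'tank' is a placeholder pool name; the txg sync shows up as a
    # periodic burst of writes in the per-vdev output.
    zpool iostat -v tank 1

Example /etc/system entries (reboot required; values are illustrative):

    * Sync transaction groups more often, so each txg has less data to flush.
    set zfs:zfs_txg_timeout = 5
    * Optionally cap how much data a single txg may accumulate (in bytes).
    set zfs:zfs_write_limit_override = 0x20000000

Smaller, more frequent txg syncs generally trade a little aggregate
throughput for less severe stalls on other I/O during the flush.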

-- 
--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss