On Sat, Aug 8, 2009 at 3:02 PM, Ed Spencer <ed_spen...@umanitoba.ca> wrote:
>
> On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
>
>> Enterprise storage should work fine without needing to run a tool to
>> optimize data layout or repair the filesystem.  Well designed software
>> uses an approach which does not unravel through use.
>
> Hmmmm, this is counter to my understanding. I always thought that to
> optimize sequential read performance you must store the data according
> to how the device will read the data.
>
> Spinning rust reads data in a sequential fashion. In order to optimize
> read performance it has to be laid down that way.
>
> When reading files in a directory, the files need to be laid out on the
> physical device sequentially for optimal read performance.
>
> I'm probably not the person to argue this point, though... Is there a
> DBA around?

The DBAs that I know use files that are at least hundreds of
megabytes in size.  Your problem is very different.

> Maybe my problems will go away once we move into the next generation of
> storage devices, SSDs! I'm starting to think that ZFS will really shine
> on SSDs.

Your problem seems to be related to cold reads in a pretty large data
set.  With SSDs (L2ARC) you are likely to see a performance boost for
a larger set of recently read files, but my guess is that backups will
still be pretty slow.  There is likely more benefit in restore speed
with SSDs than there is in read speeds.  However, the NVRAM on the
NetApp that is backing your iSCSI LUNs is probably already giving you
most of this benefit (assuming low latency on network connections).
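For what it's worth, the sequential-vs-scattered effect Ed describes is
easy to demonstrate with a rough sketch (Python here; the file size,
block size, and paths are arbitrary choices of mine, not anything from
this thread, and a real test needs a data set larger than RAM so the
reads are actually cold rather than served from the page cache):

```python
import os
import random
import tempfile
import time

BLOCK = 128 * 1024   # read size per I/O
NBLOCKS = 512        # 64 MiB total; far too small for a real cold-read test

# Scratch file filled with random bytes.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK) * NBLOCKS)

def timed_read(offsets):
    """Read BLOCK bytes at each offset, in the order given; return seconds."""
    with open(path, "rb") as f:
        start = time.monotonic()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.monotonic() - start

sequential = [i * BLOCK for i in range(NBLOCKS)]
scattered = sequential[:]
random.shuffle(scattered)  # same blocks, randomized order

print("sequential: %.3fs" % timed_read(sequential))
print("scattered:  %.3fs" % timed_read(scattered))
os.remove(path)
```

On spinning disks with a cold cache the scattered pass is much slower;
on an SSD (or once everything is cached) the two numbers converge,
which is exactly why SSDs help the small-file case.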

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
