[no Sun folks replying to this?  ok, let me do more spam then...]

Scott, thank you so much for the testing spirit and for sharing the results with 
the list! We architects can talk all day long and still have no idea how these 
open pieces behave on "any box", not just the poster-boy kind of expensive 
boxes with tons of hardware.

However, I would just like to suggest that the SSD performance gain shows up 
mostly in rates (IOPS), not in throughput (MB/s). If you measure the gain in 
terms of rates, you might be (actually should be, by our architecture theory) 
much more impressed.
[Well, only if you care about database applications, beyond just our 
personal digital media files on the company network...   :-)   ]
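
To make the rates-vs-throughput distinction concrete, here is a rough sketch 
of how one could measure the two numbers separately with dd. This is only an 
illustration (GNU dd assumed; the scratch file and sizes are made up, and on a 
real device you would want to defeat the cache, e.g. with a raw device path, 
or the second pass just measures memory):

```shell
#!/bin/sh
# Sketch: throughput (MB/s) vs. rate (IOPS) on a scratch file.
# Assumes GNU coreutils dd; file name and sizes are illustrative only.

f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # 64 MB scratch file

# Throughput: a few large sequential reads (64 read calls of 1 MB each)
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1

# Rate: the same data in many small reads (16384 read calls of 4 KB each);
# this is where SSDs pull far ahead of spinning disks
dd if="$f" of=/dev/null bs=4k 2>&1 | tail -n 1

rm -f "$f"
```

Same bytes moved either way; the interesting number for database-style 
workloads is how many of those small operations per second the device can do.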

Please see the testing below, done before Sun's official 10/2008 announcement 
of SSD availability in the 7000 series, as well as the tech talk by Brendan. 
It is a bit long (and less fun than my spam), but I am sure it is worth the 
time to study.
http://blogs.sun.com/brendan/entry/test

Best,
z
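
P.S. On Scott's dd question quoted below: one way to be sure a warming pass 
really touches every byte (rather than being short-circuited somewhere on the 
way to /dev/null) is to checksum the file instead, since cksum must consume 
all of it. A sketch, with a made-up scratch file standing in for the real 
data set:

```shell
#!/bin/sh
# Sketch: force a genuine full read of a file, so the ARC (and, once
# the ARC evicts those blocks, the L2ARC) actually sees every byte.
# cksum has to read the whole file to compute its CRC, so the read
# cannot be optimized away.

f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=8 2>/dev/null   # 8 MB scratch file

cksum "$f"    # full sequential read; prints CRC, byte count, file name

rm -f "$f"
```

Keep Richard's point below in mind, though: a block migrates to the L2ARC 
only after the ARC has evicted it, so a single warming pass on a box with 
plenty of free ARC will not populate the cache device by itself.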


----- Original Message ----- 
From: "Scott Laird" <sc...@sigkill.org>
To: "Richard Elling" <richard.ell...@sun.com>
Cc: <zfs-discuss@opensolaris.org>; "Akhilesh Mritunjai" 
<mritun+opensola...@gmail.com>
Sent: Saturday, January 03, 2009 12:02 AM
Subject: Re: [zfs-discuss] Unable to add cache device


> On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling <richard.ell...@sun.com> 
> wrote:
>> Scott Laird wrote:
>>>
>>> On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
>>> <mritun+opensola...@gmail.com> wrote:
>>>
>>>>
>>>> As for source, here you go :)
>>>>
>>>>
>>>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650
>>>>
>>>
>>> Thanks.  It's in the middle of get_replication, so I suspect it's a
>>> bug--zpool tries to check on the replication status of existing vdevs
>>> and croaks in the process.  As it turns out, I was able to add the
>>> cache devices just fine once the resilver completed.
>>>
>>
>> It is a bug because the assertion failed.  Please file one.
>> http://en.wikipedia.org/wiki/Assertion_(computing)
>> http://bugs.opensolaris.org
>>
>>> Out of curiosity, what's the easiest way to shove a file into the
>>> L2ARC?  Repeated reads with dd if=file of=/dev/null doesn't appear to
>>> do the trick.
>>>
>>
>> To put something in the L2ARC, it has to be purged from the ARC.
>> So until you run out of space in the ARC, nothing will be placed into
>> the L2ARC.
>
> I have a ~50G working set and 8 GB of RAM, so I'm out of space in my
> ARC.  My read rate is low enough for the disks to keep up, but I'd
> like to see lower latency.  Also, 30G SSDs were cheap last week :-).
>
> My big problem is that dd if=file of=/dev/null doesn't appear to
> actually read the whole file--I can loop over 50G of data in about 20
> seconds while doing under 100 MB/sec of disk I/O.  Does Solaris's dd
> have some sort of of=/dev/null optimization?  Adding conv=swab seems
> to be making it work better, but I'm still only seeing write rates of
> ~1 MB/sec per SSD, even though they're mostly empty.
>
>
> Scott
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
