t. johnson wrote:
>>> One would expect so, yes. But the usefulness of this is limited to the
>>> cases where the entire working set will fit into an SSD cache.
>>>
>> Not entirely out of the question. SSDs can be purchased today
>> with more than 500 GBytes in a 2.5" form factor. One or more of
>> these would make a ...
Luke Lonergan wrote:
>> Actually, it does seem to work quite
>> well when you use a read optimized
>> SSD for the L2ARC. In that case,
>> "random" read workloads have very
>> fast access, once the cache is warm.
>>
>
> One would expect so, yes. But the usefulness of this is limited to the cases
> where the entire working set will fit into an SSD cache.
On Sun, 23 Nov 2008, Bob Netherton wrote:
>> This argument can be proven by basic statistics without need to resort
>> to actual testing.
>
> Mathematical proof <> reality of how things end up getting used.
Right. That is a good thing since otherwise the technologies that Sun
has recently deployed ...
> This argument can be proven by basic statistics without need to resort
> to actual testing.
Mathematical proof <> reality of how things end up getting used.
> Luckily, most data access is not completely random in nature.
Which was my point exactly. I've never seen a purely mathematical
model ...
On Sat, 22 Nov 2008, Bob Netherton wrote:
>
>> In other words, for random access across a working set larger (by
>> say X%) than the SSD-backed L2 ARC, the cache is useless. This
>> should asymptotically approach truth as X grows and experience
>> shows that X=200% is where it's about 99% true.
>
Ummm, before we throw around phrases like ...
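For anyone who wants to see the arithmetic behind the working-set-versus-L2ARC
argument, here is a minimal back-of-the-envelope sketch. It assumes uniformly
random reads, a warm cache, and illustrative latency numbers (0.2 ms for an SSD
hit, 8 ms for a random disk read); the specific figures are assumptions, only
the shape of the result matters.

  # Toy model: uniformly random reads over a working set that is
  # (1 + X) times the size of the SSD-backed L2ARC.  Latencies are
  # illustrative assumptions, not measurements.
  SSD_HIT_MS = 0.2    # assumed latency of an L2ARC (SSD) hit
  DISK_MISS_MS = 8.0  # assumed latency of a random disk read on a miss

  def effective_latency(working_set_gb, l2arc_gb):
      """Expected per-read latency for uniform random access, warm cache."""
      hit_rate = min(1.0, l2arc_gb / working_set_gb)
      return hit_rate * SSD_HIT_MS + (1.0 - hit_rate) * DISK_MISS_MS

  l2arc = 500.0  # GB of SSD cache, e.g. one of the 2.5" drives mentioned above
  for oversize_pct in (0, 25, 50, 100, 200, 400):
      ws = l2arc * (1 + oversize_pct / 100.0)
      print(f"working set {oversize_pct:3d}% larger than cache: "
            f"hit rate {l2arc / ws:4.0%}, "
            f"avg read {effective_latency(ws, l2arc):4.1f} ms")

Under these assumptions a working set 200% larger than the cache still gets
roughly a one-third hit rate, but about two thirds of the reads pay the full
disk penalty, so the average read is dominated by the misses, which is what
the statistical argument is getting at.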
----- Original Message -----
> From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
> To: zfs-discuss@opensolaris.org
> Sent: Sat Nov 22 16:43:53 2008
> Subject: Re: [zfs-discuss] ZFS fragmentation with MySQL databases
>
> Kees Nuyt wrote:
>
>> My explanation would be: Whenever a block within a file
>> changes, zfs has to write it at another location ("copy on
>> write"), so the previous version isn't immediately lost. ...
On Sun, 23 Nov 2008, Tamer Embaby wrote:
>> That is the trade-off between "always consistent" and
>> "fast".
>>
> Well, does that mean ZFS is not best suited for database engines as
> underlying filesystem? With databases it will always be fragmented,
> hence slow performance?
Assuming that the ...
Kees Nuyt wrote:
> My explanation would be: Whenever a block within a file
> changes, zfs has to write it at another location ("copy on
> write"), so the previous version isn't immediately lost.
>
> Zfs will try to keep the new version of the block close to
> the original one, but after several changes ...
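To make the copy-on-write picture a bit more concrete, here is a toy
simulation in Python; it is only a sketch of the general effect described
above, not of the real ZFS allocator (freed slots are never reused here, and
record and slot sizes are arbitrary). A file starts out laid out contiguously,
every overwrite relocates the record to the next free slot, and a formerly
sequential layout gradually turns into a scattered one.

  import random

  random.seed(1)
  FILE_RECORDS = 1000

  # record i initially lives at disk slot i (perfectly sequential layout)
  location = list(range(FILE_RECORDS))
  next_free = FILE_RECORDS  # next unused slot; freed slots never reused here

  def contiguity():
      """Fraction of logically adjacent records still physically adjacent."""
      adjacent = sum(1 for i in range(FILE_RECORDS - 1)
                     if location[i + 1] == location[i] + 1)
      return adjacent / (FILE_RECORDS - 1)

  total_updates = 0
  for target in (0, 500, 2000, 5000):
      while total_updates < target:
          rec = random.randrange(FILE_RECORDS)  # random record is overwritten
          location[rec] = next_free             # copy-on-write: new location
          next_free += 1
          total_updates += 1
      print(f"after {total_updates:4d} random updates: "
            f"{contiguity():5.1%} of the file is still laid out sequentially")

The fraction of still-sequential neighbours drops steadily as random updates
accumulate, which matches the fragmentation effect described above.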
On Fri, 21 Nov 2008 17:20:48 PST, Vincent Kéravec
<[EMAIL PROTECTED]> wrote:
> I just tried ZFS on one of our slaves and got some really
> bad performance.
>
> When I started the server yesterday, it was able to keep
> up with the main server without problems, but after two
> days of consecutive running the server is crushed by IO.
I just tried ZFS on one of our slaves and got some really bad performance.
When I started the server yesterday, it was able to keep up with the main server
without problems, but after two days of consecutive running the server is crushed
by IO.
After running the dtrace script iopattern, I noticed that the ...
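For context, iopattern is one of the DTrace Toolkit scripts and it summarizes
how random or sequential the observed disk I/O is. The following Python
fragment is only a rough illustration of the same idea (made-up trace data and
a hypothetical io_pattern helper), not the DTrace script itself: an I/O is
counted as sequential when it starts where the previous I/O on the same device
ended.

  def io_pattern(trace):
      """trace: iterable of (device, offset_bytes, size_bytes) tuples."""
      last_end = {}          # device -> end offset of its previous I/O
      sequential = random_io = 0
      for dev, offset, size in trace:
          if last_end.get(dev) == offset:
              sequential += 1
          else:
              random_io += 1
          last_end[dev] = offset + size
      total = sequential + random_io
      return (100.0 * random_io / total, 100.0 * sequential / total)

  # Example with made-up numbers: a mostly random pattern, as a fragmented
  # database file under copy-on-write might produce.
  sample = [("sd0", 0, 16384), ("sd0", 16384, 16384),   # sequential pair
            ("sd0", 9_000_000, 16384), ("sd0", 52_000, 16384),
            ("sd0", 700_000, 16384)]
  ran, seq = io_pattern(sample)
  print(f"%RAN {ran:.0f}  %SEQ {seq:.0f}")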