On Oct 11, 2012, at 1:09 AM, Rick Macklem wrote:
> Nikolay Denev wrote:
>> On Oct 10, 2012, at 3:18 AM, Rick Macklem wrote:
>>> Nikolay Denev wrote:
>>>> On Oct 4, 2012, at 12:36 AM, Rick Macklem wrote:
>>>>> Garrett Wollman wrote:
>>>>>> <...> said:
>>>>>> Simple ...
Nikolay Denev wrote:
> On Oct 10, 2012, at 3:18 AM, Rick Macklem wrote:
>> Nikolay Denev wrote:
>>> On Oct 4, 2012, at 12:36 AM, Rick Macklem wrote:
>>>> Garrett Wollman wrote:
>>>>> <...> said:
>>>>> Simple: just use a separate mutex for each list that a cache ...
Sorry for the slow response. I was dealing with a bit of a family
emergency. Responses inline below.

On 10/09/12 08:54, John Baldwin wrote:
> On Monday, October 08, 2012 4:59:24 pm Warner Losh wrote:
>> On Oct 5, 2012, at 10:08 AM, John Baldwin wrote:
>>> I think cxgb* already have an implementation ...
Tim is correct in that the gzip data stream allows concatenation of
compressed blocks of data: you can break the input stream into a number
of blocks [A, B, C, etc.], compress each one into [A.gz, B.gz, C.gz,
etc.], append those compressed members together, and when the result is
uncompressed you will get the original input stream back.
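
To make that concrete, here is a minimal standalone sketch (mine, not code
from this thread) using zlib's gz* convenience API; the file name and block
contents are made up and error handling is trimmed. Each gzopen()/gzclose()
pair appends one complete gzip member, and gzread() then walks all of the
concatenated members as if they were a single stream:

/*
 * Sketch only: write blocks A, B and C as three separate gzip members
 * appended to one file, then read the whole file back as one stream.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int
main(void)
{
	const char *blocks[] = { "AAAAAAAA", "BBBBBBBB", "CCCCCCCC" };
	char buf[256];
	gzFile gz;
	int i, n;

	/* Each gzopen()/gzclose() pair appends one complete gzip member. */
	for (i = 0; i < 3; i++) {
		gz = gzopen("/tmp/concat.gz", i == 0 ? "wb" : "ab");
		gzwrite(gz, blocks[i], (unsigned)strlen(blocks[i]));
		gzclose(gz);
	}

	/* gzread() walks all of the concatenated members transparently. */
	gz = gzopen("/tmp/concat.gz", "rb");
	while ((n = gzread(gz, buf, sizeof(buf) - 1)) > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);	/* prints AAAAAAAABBBBBBBBCCCCCCCC */
	}
	gzclose(gz);
	putchar('\n');
	return (0);
}

The same file could just as well be produced by gzip'ing each block
separately and cat'ing the results together; gunzip treats it as one
valid .gz file.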
On Mon, Oct 8, 2012 at 9:26 PM, Sushanth Rai wrote:
> I was trying to correlate the output from "top" with what I get from
> vmstat -z. I don't have any user programs that wire memory. Given that,
> I'm assuming the wired memory count shown by "top" is memory used by the
> kernel. Now I would like to find ...
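
As a point of reference (my own sketch, not something posted in the
thread): on FreeBSD the "Wired" figure that top shows is, as far as I know,
derived from the kernel's wired-page counter, exported as the
vm.stats.vm.v_wire_count sysctl, while vmstat -z reports per-UMA-zone
allocations, so the two views are not expected to add up item for item.
A minimal program to read that counter directly:

/*
 * Illustrative only: print the wired page count, in megabytes, from
 * the vm.stats.vm.v_wire_count sysctl on FreeBSD.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	unsigned int wired_pages;
	size_t len = sizeof(wired_pages);

	if (sysctlbyname("vm.stats.vm.v_wire_count", &wired_pages, &len,
	    NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("wired: %lu MB\n",
	    (unsigned long)wired_pages * (unsigned long)getpagesize() /
	    (1024UL * 1024UL));
	return (0);
}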
On Oct 10, 2012, at 3:18 AM, Rick Macklem wrote:
> Nikolay Denev wrote:
>> On Oct 4, 2012, at 12:36 AM, Rick Macklem wrote:
>>> Garrett Wollman wrote:
>>>> <...> said:
>>>> Simple: just use a separate mutex for each list that a cache entry
>>>> is on, rather than a global ...
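
For readers following along, here is a rough userspace sketch (pthreads;
this is not the actual NFS duplicate request cache code, and all of the
names are made up) of the locking scheme being suggested: each hash list
carries its own mutex, so requests that hash to different lists no longer
contend on one global lock.

#include <pthread.h>
#include <stdint.h>
#include <sys/queue.h>

#define	HASH_SIZE	256

struct cache_entry {
	LIST_ENTRY(cache_entry)	ce_link;
	uint32_t		ce_xid;		/* RPC transaction id */
};

LIST_HEAD(cache_list, cache_entry);

static struct cache_list	cache_hash[HASH_SIZE];
static pthread_mutex_t		cache_lock[HASH_SIZE];	/* one per list */

void
cache_init(void)
{
	int i;

	for (i = 0; i < HASH_SIZE; i++) {
		LIST_INIT(&cache_hash[i]);
		pthread_mutex_init(&cache_lock[i], NULL);
	}
}

/*
 * Only the list this xid hashes to is locked; lookups for xids that
 * hash elsewhere proceed in parallel.  (A real cache would also need
 * to keep the entry referenced or locked after the lookup.)
 */
struct cache_entry *
cache_lookup(uint32_t xid)
{
	struct cache_entry *ce;
	unsigned int slot = xid % HASH_SIZE;

	pthread_mutex_lock(&cache_lock[slot]);
	LIST_FOREACH(ce, &cache_hash[slot], ce_link)
		if (ce->ce_xid == xid)
			break;
	pthread_mutex_unlock(&cache_lock[slot]);
	return (ce);
}

Since, as the quoted text says, an entry can sit on more than one list (an
LRU list as well as a hash list, say), each of those lists would carry its
own mutex, and moving an entry means taking the locks for the lists
involved rather than one lock for the whole cache.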
On Tue, Oct 09, 2012 at 09:54:03PM -0700, Tim Kientzle wrote:
> On Oct 8, 2012, at 3:21 AM, Wojciech Puchar wrote:
>>> Not necessarily. If I understand correctly what Tim means, he's talking
>>> about in-memory compression of several blocks by several separate
>>> threads, and then ...
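
A rough sketch of that scheme (my own illustration, not Tim's code; the
thread count, block size, input data and file name are arbitrary, and
error handling is trimmed): each thread deflates its block into a complete
gzip member in memory, and the members are then written out back to back
in the original order, giving a file any gzip reader can decompress into
the original input.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define	NTHREADS	4
#define	BLOCKSIZE	(1024 * 1024)

struct job {
	unsigned char	*in;		/* this thread's input block */
	unsigned char	*out;		/* one complete gzip member */
	size_t		 inlen;
	size_t		 outlen;
};

static void *
compress_block(void *arg)
{
	struct job *j = arg;
	z_stream zs;
	size_t bound;

	memset(&zs, 0, sizeof(zs));
	/* windowBits = 15 + 16 asks zlib for a gzip wrapper. */
	deflateInit2(&zs, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 15 + 16,
	    8, Z_DEFAULT_STRATEGY);
	bound = deflateBound(&zs, j->inlen);
	j->out = malloc(bound);

	zs.next_in = j->in;
	zs.avail_in = j->inlen;
	zs.next_out = j->out;
	zs.avail_out = bound;
	deflate(&zs, Z_FINISH);		/* whole block in one call */
	j->outlen = zs.total_out;
	deflateEnd(&zs);
	return (NULL);
}

int
main(void)
{
	struct job jobs[NTHREADS];
	pthread_t tid[NTHREADS];
	FILE *fp;
	int i;

	for (i = 0; i < NTHREADS; i++) {
		jobs[i].in = malloc(BLOCKSIZE);
		jobs[i].inlen = BLOCKSIZE;
		memset(jobs[i].in, 'A' + i, BLOCKSIZE);	/* fake input data */
		pthread_create(&tid[i], NULL, compress_block, &jobs[i]);
	}

	/* Concatenate the gzip members in the original block order. */
	fp = fopen("/tmp/parallel.gz", "wb");
	for (i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		fwrite(jobs[i].out, 1, jobs[i].outlen, fp);
		free(jobs[i].in);
		free(jobs[i].out);
	}
	fclose(fp);
	return (0);
}

The per-block approach gives up a little compression ratio, since each
member starts with an empty dictionary, but it parallelizes trivially.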
<...> said:
> And, although this experiment seems useful for testing patches that try
> and reduce DRC CPU overheads, most "real" NFS servers will be doing disk
> I/O.

We don't always have control over what the user does. I think the
worst-case for my users involves a third-party program (that they ...
Garrett Wollman wrote:
> <...> said:
>> And, although this experiment seems useful for testing patches that try
>> and reduce DRC CPU overheads, most "real" NFS servers will be doing disk
>> I/O.
>
> We don't always have control over what the user does. I think the
> worst-case for my users ...