Marcelo Leal <[EMAIL PROTECTED]> wrote:
> Hello all,
> I think he has a point here... maybe that would be an interesting
> feature for that kind of workload. Caching all the metadata would make
> the rsync task faster (for many files). Trying to cache the data is
> really a waste of time, because the data will not be read again
Hello all,
I think he has a point here... maybe that would be an interesting feature
for that kind of workload. Caching all the metadata would make the rsync
task faster (for many files). Trying to cache the data is really a waste of
time, because the data will not be read again, and will just
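The behaviour Marcelo describes (keep metadata hot, skip file data) already
exists as a per-dataset property. A minimal sketch — the dataset name
tank/mirror is made up for illustration:

```shell
# Cache only metadata in the ARC for this dataset; file data is read
# straight from disk on every access.  tank/mirror is a hypothetical name.
zfs set primarycache=metadata tank/mirror
zfs get primarycache tank/mirror
```

Note this is all-or-nothing per dataset; it does not give the separate
data-size cap (arc_data_limit) asked for later in the thread.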
On Fri, Oct 17, 2008 at 10:51 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 17 Oct 2008, Al Hopper wrote:
>>
>> a) inexpensive, large capacity SATA drives running at 7,200 RPM and
>> providing, approximately, 300 IOPS.
>> b) expensive, small capacity, SAS drives running at 15k RPM and
>>
On Fri, 17 Oct 2008, Al Hopper wrote:
>
> a) inexpensive, large capacity SATA drives running at 7,200 RPM and
> providing, approximately, 300 IOPS.
> b) expensive, small capacity, SAS drives running at 15k RPM and
> providing, approx, 700 IOPS.
Al,
Where are you getting the above IOPS estimates from?
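For reference, the usual back-of-envelope model for a single drive is
IOPS ≈ 1000 / (average seek + half a rotation), everything in milliseconds.
A sketch using typical datasheet seek times (assumed values, not Al's):

```shell
# Rough random-IOPS estimate for one spindle: each random I/O costs an
# average seek plus half a rotation (60000/rpm ms per full rotation).
est_iops() {  # usage: est_iops <rpm> <avg_seek_ms>
  awk -v rpm="$1" -v seek="$2" \
    'BEGIN { printf "%.0f\n", 1000 / (seek + 0.5 * 60000 / rpm) }'
}

est_iops 7200 8.5    # prints 79
est_iops 15000 3.5   # prints 182
```

By this model a single 7,200 RPM drive lands near 80 IOPS and a 15k drive
near 180, which may be why Bob questions the 300/700 figures — those look
more like striped or cached numbers than single-spindle ones.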
On Thu, Oct 16, 2008 at 6:52 AM, Tomas Ögren <[EMAIL PROTECTED]> wrote:
> On 16 October, 2008 - Ross sent me these 1,1K bytes:
>
>> I might be misunderstanding here, but I don't see how you're going to
>> improve on "zfs set primarycache=metadata".
>>
>> You complain that ZFS throws away 96kb of data
Tomas Ögren wrote:
> On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes:
>
>
>> Tomas Ögren wrote:
>>
>>> On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
>>>
>>>
Tomas Ögren wrote:
> Hello.
>
> Executive summary: I want arc_data_limit (like arc_meta_limit, but for
> data) and set it to 0.5G or so. Is there any way to "simulate" it?
On 16 October, 2008 - Ross sent me these 1,1K bytes:
> I might be misunderstanding here, but I don't see how you're going to
> improve on "zfs set primarycache=metadata".
>
> You complain that ZFS throws away 96kb of data if you're only reading
> 32kb at a time, but then also complain that you are IO/s bound
I might be misunderstanding here, but I don't see how you're going to improve
on "zfs set primarycache=metadata".
You complain that ZFS throws away 96kb of data if you're only reading 32kb at a
time, but then also complain that you are IO/s bound and that this is
restricting your maximum transfer
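Ross's 96kb figure follows from ZFS's default 128kb recordsize: a 32kb
application read still pulls in (and checksums) a full 128kb record, and
with primarycache=metadata the other 96kb is then discarded. If the read
size is stable, matching the recordsize avoids that waste — a sketch, with
a made-up dataset name, and remembering that recordsize only affects files
written after the change:

```shell
# Match the record size to the 32kb application reads.
# tank/mirror is a hypothetical dataset name.
zfs set recordsize=32k tank/mirror
```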
On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes:
> Tomas Ögren wrote:
> > On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
> >
> >> Tomas Ögren wrote:
> >>> Hello.
> >>>
> >>> Executive summary: I want arc_data_limit (like arc_meta_limit, but for
> >> data) and set it to 0.5G or so. Is there any way to "simulate" it?
Tomas Ögren wrote:
> On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
>
>> Tomas Ögren wrote:
>>> Hello.
>>>
>>> Executive summary: I want arc_data_limit (like arc_meta_limit, but for
>>> data) and set it to 0.5G or so. Is there any way to "simulate" it?
>>>
>> We describe how to limit the size of the ARC cache in the Evil Tuning Guide.
On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
> Tomas Ögren wrote:
> > Hello.
> >
> > Executive summary: I want arc_data_limit (like arc_meta_limit, but for
> > data) and set it to 0.5G or so. Is there any way to "simulate" it?
> >
>
> We describe how to limit the size of the ARC cache in the Evil Tuning Guide.
Tomas Ögren wrote:
> Hello.
>
> Executive summary: I want arc_data_limit (like arc_meta_limit, but for
> data) and set it to 0.5G or so. Is there any way to "simulate" it?
>
We describe how to limit the size of the ARC cache in the Evil Tuning Guide.
http://www.solarisinternals.com/wiki/index.p
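The Evil Tuning Guide approach caps the whole ARC (data and metadata
together) rather than just its data portion — which is exactly why Tomas
asks for a separate arc_data_limit. The cap itself is one line in
/etc/system; 0x20000000 (512 MB) is an example value, and a reboot is
required:

```
* /etc/system fragment: cap the total ZFS ARC at 512 MB.
set zfs:zfs_arc_max = 0x20000000
```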
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so. Is there any way to "simulate" it?
We have a cluster of linux frontends (http/ftp/rsync) for
Debian/Mozilla/etc archives and as a NFS disk backend we currently have
a DL145 running OpenSolaris