Hello,
I applied the suggested modifications on one of our ATS clusters last Friday.
Unfortunately, I had issues with IO/Wait during the weekend.
The modified cluster is not the one which gets the most load: I have 2 clusters
and the one I modified is the parent of the other.
But this time, I noticed somethin
Hi Luca,
if you observe, there are 4 IRQs for Baptiste's interface; they're scheduled on
cpu0 by default and I suggest rescheduling just the last 3 IRQs :-P
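(A quick way to check the current distribution, assuming the interface is named eth0; adjust to the real name:

grep eth0 /proc/interrupts

The first column is the IRQ number, and the per-CPU columns show how many interrupts each CPU has handled for that IRQ.)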
JT
Hello,
I tried to recompile trafficserver, linking libhwloc, on Debian Wheezy
and faced the following issue:
https://issues.apache.org/jira/browse/TS-1842
which, unfortunately, has been closed as a duplicate.
My solution is to add @hwloc_LIBS@ to a few Makefile.am files, as
described in the attached patch.
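For illustration only (the actual change is in the attached patch; the file name and LDADD variable below are assumptions), the idea is roughly:

# e.g. in proxy/Makefile.am (hypothetical excerpt)
traffic_server_LDADD = \
  ...existing libraries... \
  @hwloc_LIBS@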
Of
Hello Jay,
Strange, I did not see your reply :-/
For now, I've added more VMs and have no more IO/Wait issues.
Anyway, I'll try your suggestions.
I ran some checks. Here are the results:
On 15/06/2014 22:32, jtomo...@yahoo.com.INVALID wrote:
> Hi Baptiste, sorry for my late reply on your issue.
>
/proc/irq/$IRQ_NET_1/smp_affinity
echo 02 > /proc/irq/$IRQ_NET_2/smp_affinity
echo 04 > /proc/irq/$IRQ_NET_3/smp_affinity
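(The values written to smp_affinity are CPU bitmasks: 01 = cpu0, 02 = cpu1, 04 = cpu2, 08 = cpu3. Assuming a fourth queue IRQ exists, it would be pinned the same way, for example:

echo 08 > /proc/irq/$IRQ_NET_4/smp_affinity

where $IRQ_NET_4, like the others, is a placeholder for an IRQ number read from /proc/interrupts.)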
-----Original Message-----
From: jtomo...@yahoo.com.INVALID [mailto:jtomo...@yahoo.com.INVALID]
Sent: Sunday, 15 June 2014 22:32
To: dev@trafficserver.apach
Hi Baptiste, sorry for my late reply on your issue.
I suggest some environment and software settings, considering 4GB of RAM and 4
CPU threads:
1- check if ATS is linked with the libhwloc library (ldd bin/traffic_server | grep
libhwloc); if not, recompile with it
2- remove irqbalance (for Ubuntu dis
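(The list is cut off here. For what it's worth, one common way to stop irqbalance on Debian/Ubuntu systems of that era, so that it does not override manual smp_affinity settings, is something like:

service irqbalance stop
# and keep it from starting at boot, e.g. ENABLED="0" in /etc/default/irqbalance

Both the service name and the defaults file are assumptions about the target distribution.)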
's all about your VM/ATS settings.
> Can you post these clues:
>
> cat /proc/interrupts
> cat etc/trafficserver/records.config | grep thread
> cat etc/trafficserver/storage.config | grep dev
> lshw -c disk -c volume
>
> Cheers
> JT
>
From: Jan-Frode Myklebust
Subject: Re: TrafficServer and IO/Wait
Sent: Wed, Jun 4, 2014 10:45:27 AM
On Wed, Jun 04, 2014 at 11:28:07AM +0200, Jean Baptiste Favre wrote:
> Hello,
> Each VM has 4GB memory.
>
> traffic_line -r proxy.config.cache.ram_cache.size gives 2147483648
I would consider increasing proxy.config.cache.ram_cache.size to at
least 3GB on your 4GB VMs. Assuming ATS will have bett
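For reference, a 3GB RAM cache would be set in records.config along these lines (the value is in bytes), followed by a traffic_line -x or a restart to apply it:

CONFIG proxy.config.cache.ram_cache.size INT 3221225472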
Hello,
Each VM has 4GB memory.
traffic_line -r proxy.config.cache.ram_cache.size gives 2147483648
Cache is split across 3 raw disks (sdb, sdc & sdd). Each disk is 10GB and
the cache is full:
traffic_line -r proxy.process.cache.percent_full gives 99
I'm not sure how I can find the number of cached entries.
In
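As a side note, with three raw devices the storage.config would simply list them one per line, and a rough object count can be read from the cache statistics (stat name as in ATS 4.x, and may differ by version):

# storage.config
/dev/sdb
/dev/sdc
/dev/sdd

traffic_line -r proxy.process.cache.direntries.used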
How much memory does the VM have? How much
proxy.config.cache.ram_cache.size have you given it? How much data are
you caching? How much is delivered to clients?
-jf