Roger that.
I've deployed a new version with the patch
from a290da25a16b7c79d4a7a87f522b4068bca04979 - I'll leave it for a few
days and report back.
Please let me know if you think there are any further issues, or whether
any subsequent patches on top of it are relevant.
Thanks!
Cheers,
Just
On Mon, Sep 19, 2016 at 11:04:29AM +0100, Justin Cattle wrote:
> I *think* I have answered my own question. The patch in the email doesn't
> include the switch to xmalloc that was originally in
> the krt-export-filtr-fix branch as well.
>
> I can see from `git blame` that it's in the previous commit,
I *think* I have answered my own question. The patch in the email doesn't
include the switch to xmalloc that was originally in
the krt-export-filtr-fix branch as well.
I can see from `git blame` that it's in the previous commit,
bc00f058154bb4a630d24d64a55b5f181d235c63 [ Filter: Prefer xmalloc/xfree ]
Ok - great.
Should this patch apply cleanly to the released 1.6 version?
I was tracking the krt-export-filtr-fix branch before, but that is now gone
:)
Cheers,
Just
On 19 September 2016 at 10:13, Ondrej Zajicek wrote:
> On Mon, Sep 19, 2016 at 09:46:03AM +0100, Justin Cattle wrote:
> > Hi Pavel,
On Mon, Sep 19, 2016 at 09:46:03AM +0100, Justin Cattle wrote:
> Hi Pavel,
>
>
> After running with this latest fixup commit for a week, I see mixed results.
>
> With the first fix you created, all the processes remained using a very
> small amount of memory, consistently. As per my previous email.
>>>> -static int rte_update_nest_cnt;	/* Nesting counter to allow recursive updates */
>>>> -
>>>> -static inline void
>>>> -rte_update_lock(void)
>>>> -{
>>>> -  rte_update_nest_cnt++;
>>>> -}
>>>> -
>>>> -static inline void
>>>> -rte_update_unlock(void)
>>>> -{
>>>> -  if (!--rte_update_nest_cnt)
>>>> -    lp_flush(rte_update_pool);
>>>> -}
>>>> -
When the process is started, we see "normal" memory usage, which then
seems to grow indefinitely in distinct steps, separated by a period of
a few hours.
In production, this consumes most of the 32G of memory until the
kernel oom-killer intervenes.
Production:
BIRD 1.5.0 ready.
bird> show memory
BIRD memory usage
Routing tables: 1405 MB
Route attributes: 84 kB
ROA table
BIRD 1.6.0 ready.
2391 of 2391 routes for 1201 networks
# birdc show mem
BIRD 1.6.0 ready.
BIRD memory usage
Routing tables: 246 kB
Route attributes: 88 kB
ROA tables: 192 B
Protocols: 45 kB
Total: 416 kB
# pmap $(pgrep "bird$") | grep total
 total
USER PID %CPU %MEM    VSZ      RSS TTY STAT START  TIME COMMAND
bird 3441 0.1 55.4 18275124 18241540 ? Ssl Aug10 73:39 /usr/sbin/bird -f -u bird -g bird
...so that's ~1.4G reported by bird, and ~18G actually consumed by the
process.
Lab:
BIRD 1.6.0 ready.
bird> show mem
BIRD memory usage
>>> When the process is started, we see "normal" memory usage, which then
>>> seems to grow indefinitely in distinct steps, separated by a period of
>>> a few hours.
>>>
>>> In production, this consumes most of the 32G of memory until the
>>> kernel oom-killer intervenes.
Hi Ondrej,
Yes - it's a version from git with BGP multipath support:
v1.5.0-19-g8d9eef1.
Cheers,
Just
On 6 September 2016 at 17:05, Ondrej Zajicek wrote:
> On Mon, Sep 05, 2016 at 03:21:40PM +0100, Justin Cattle wrote:
> > Hi,
> >
> >
> > A colleague of mine reported a memory usage issue with the bird daemon
>> also massively more memory actually used by the
>> daemon process.
>>
>> When the process is started, we see "normal" memory usage, which then
>> seems to grow indefinitely in distinct steps, separated by a period of
>> a few hours.
>>
>> In production, this consumes most of the 32G of memory until the
>> kernel oom-killer intervenes.
On Mon, Sep 05, 2016 at 03:21:40PM +0100, Justin Cattle wrote:
> Hi,
>
>
> A colleague of mine reported a memory usage issue with the bird daemon last
> year, which resulted in a request for a core dump, but we never followed it
> up.
> I'd like to re-open this discussion and see if anything can
When the process is started, we see "normal" memory usage, which then
seems to grow indefinitely in distinct steps, separated by a period of
a few hours.
In production, this consumes most of the 32G of memory until the
kernel oom-killer intervenes.
Production:
BIRD 1.5.0 ready.
bird> show memory
BIRD memory usage
When the process is started, we see "normal" memory usage, which then
seems to grow indefinitely in distinct steps, separated by a period of a few
hours.
In production, this consumes most of the 32G of memory until the kernel
oom-killer intervenes.
Production:
BIRD 1.5.0 ready.
bird> show memory
BIRD memory usage
Routing tables: 1405 MB
Route attributes: 84 kB
ROA table
Hi Ondrej,
> I cannot reproduce the problem. Could you get me a core dump when the memory
> consumption is noticeably higher than after the start?
I was going to do this, but then saw that all our instances are now
using at most 28MB for routing tables!
The config has not changed since I reported the issue.
On Mon, Sep 21, 2015 at 10:10:08AM +0200, Alexander Frolkin wrote:
> Hi Ondrej,
>
> > > Is there something we can do to reduce the memory usage? Or could this
> > > be a memory leak bug?
> > This is definitely a memory leak, probably related to path merging. You
> > are using current code from git or patched 1.5.0? I will try to reproduce
Hi Ondrej,
> > Is there something we can do to reduce the memory usage? Or could this
> > be a memory leak bug?
> This is definitely a memory leak, probably related to path merging. You
> are using current code from git or patched 1.5.0? I will try to reproduce
> it.
Thanks. We are using a version from git.
On Fri, Sep 18, 2015 at 01:55:25PM +0200, Alexander Frolkin wrote:
> Hello,
>
> We are running BIRD on a number of servers. It is configured with two BGP
> peers.
>
> We are seeing BIRD using over a gig of memory, and this seems excessive,
> especially given the number of routes.
...
> Is there
Hello,
We are running BIRD on a number of servers. It is configured with two BGP
peers.
We are seeing BIRD using over a gig of memory, and this seems excessive,
especially given the number of routes.
BIRD 1.5.0 ready.
bird> show memory
BIRD memory usage
Routing tables: 1142 MB
Route attributes