Julian Elischer <[EMAIL PROTECTED]> writes:
> Robert Watson <[EMAIL PROTECTED]> writes:
> > be a good time to try to revalidate that. Basically, the goal would
> > be to make the pcpu cache FIFO as much as possible as that maximizes
> > the chances that the newly allocated object already has lines
I've been trying to figure out why some periodic scripts consume so much
memory. I've narrowed it down to sort(1).
At first, I thought the scripts were using it inefficiently, feeding it
more data than was really needed. Then I discovered this:
[EMAIL PROTECTED] ~% (sleep 10 | sort) & (sleep 5
* Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> I've been trying to figure out why some periodic scripts consume so much
> memory. I've narrowed it down to sort(1).
>
> At first, I thought the scripts were using it inefficiently, feeding it
> more data than was really needed. Then I discovere
Hey all!
Before I go post this as a PR (or go about fixing the libc code), I just
wanted to ask whether this is a known issue (and I simply haven't been able
to find it), or if it's simply my stupidity that makes this fail.
Basically, I have the following code:
struct addrinfo hints;
struct addrinfo *res;
Erik Trulsson <[EMAIL PROTECTED]> writes:
> Yep, it seems that GNU sort allocates a quite large buffer by default when
> the size of the input is unknown (such as when it reads input from stdin.)
> A quick check in the source code indicates that it tries to size this buffer
> according to how much
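The large default allocation Erik describes can be bounded explicitly: GNU sort has a documented `-S`/`--buffer-size` option that caps the main sort buffer, which avoids the oversized guess when input arrives on a pipe. A quick illustration (assuming GNU coreutils sort):

```shell
# Cap the sort buffer at 1 MiB instead of letting GNU sort size it
# from available memory when the input length is unknown (stdin).
printf 'banana\napple\ncherry\n' | sort -S 1M
```

The output is the same sorted result; only the buffer allocation changes, so this is a cheap fix for periodic scripts that pipe small inputs through sort.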
On Sun, Feb 03, 2008 at 02:13:22PM +0100, Ed Schouten wrote:
> * Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> > I've been trying to figure out why some periodic scripts consume so much
> > memory. I've narrowed it down to sort(1).
> >
> > At first, I thought the scripts were using it ineffici
Dag-Erling Smørgrav <[EMAIL PROTECTED]> writes:
> Erik Trulsson <[EMAIL PROTECTED]> writes:
> > Yep, it seems that GNU sort allocates a quite large buffer by default when
> > the size of the input is unknown (such as when it reads input from stdin.)
> > A quick check in the source code indicates th
On Sun, 2008-02-03 at 14:13 +0100, Ed Schouten wrote:
> * Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> > I've been trying to figure out why some periodic scripts consume so much
> > memory. I've narrowed it down to sort(1).
> >
> > At first, I thought the scripts were using it inefficiently,
On Sun, Feb 03, 2008 at 04:31:34PM +0100, Dag-Erling Smørgrav wrote:
> Dag-Erling Smørgrav <[EMAIL PROTECTED]> writes:
> > Erik Trulsson <[EMAIL PROTECTED]> writes:
> > > Yep, it seems that GNU sort allocates a quite large buffer by default when
> > > the size of the input is unknown (such as when
* Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> Count this as a vote for ditching GNU sort in favor of a BSD-licensed
> implementation (from {Net,Open}BSD for instance).
I just looked at the OpenBSD implementation and I can see it already
misses one option that some people will miss, namely num
Hi,
> On Sun, 3 Feb 2008 14:50:18 +0100
> "Heiko Wundram (Beenic)" <[EMAIL PROTECTED]> said:
wundram> hints.ai_flags is logically ANDed with AI_MASK at the beginning of the
wundram> function, and AI_MASK (at least in my local netdb.h header) does not
wundram> contain the flag AI_V4MAPP
Stefan Lambrev wrote:
I run from host A : hping --flood -p 22 -S 10.3.3.2
and systat -ifstat on host B to see the traffic that is generated
(I do not want to run this monitoring on the flooder host as it will
affect its performance)
OK, I finally got time to look at this. Firstly, this is qu
Kris Kennaway wrote:
Stefan Lambrev wrote:
I run from host A : hping --flood -p 22 -S 10.3.3.2
and systat -ifstat on host B to see the traffic that is generated
(I do not want to run this monitoring on the flooder host as it will
affect its performance)
OK, I finally got time to look at this
Kris Kennaway wrote:
Fixing all of the above I can send at about 13MB/sec (timecounter is
not relevant any more). The CPU is spending about 75% of the time in
the kernel, so
that is the next place to look. [hit send too soon]
Actually 15MB/sec once I disable all kernel debuggin
I've had some good success with implementing a custom MAC protocol using
Netgraph. The current implementation runs in userland, connects to the
Kernel iface and kernel ethernet nodes. It uses a polling loop with
usleep, All very cool. This is just background as the question really has
to do wi
Dag-Erling Smørgrav wrote:
Julian Elischer <[EMAIL PROTECTED]> writes:
Robert Watson <[EMAIL PROTECTED]> writes:
be a good time to try to revalidate that. Basically, the goal would
be to make the pcpu cache FIFO as much as possible as that maximizes
the chances that the newly allocated object
Kris Kennaway wrote:
You can look at the raw output from pmcstat, which is a collection of
instruction pointers that you can feed to e.g. addr2line to find out
exactly where in those functions the events are occurring. This will
often help to track down the precise causes.
Thanks to the hint
Kris Kennaway wrote:
Stefan Lambrev wrote:
I run from host A : hping --flood -p 22 -S 10.3.3.2
and systat -ifstat on host B to see the traffic that is generated
(I do not want to run this monitoring on the flooder host as it will
affect its performance)
OK, I finally got time to look at this
On Mon, 4 Feb 2008, Alexander Motin wrote:
Kris Kennaway wrote:
You can look at the raw output from pmcstat, which is a collection of
instruction pointers that you can feed to e.g. addr2line to find out
exactly where in those functions the events are occurring. This will often
help to track