On 9/9/13 7:31 AM, Jiri Olsa wrote:
On Mon, Sep 09, 2013 at 07:11:11AM -0700, David Ahern wrote:
On 9/9/13 7:03 AM, Jiri Olsa wrote:
my usage currently is to have this running during the day:
(well, whenever I remember to run it.. ;-) )
[jolsa@krava perf]$ sudo ./perf record -g -M 1m -a
and checking the report when the system or an app gets stuck
with multiple files I can just easily (or automatically)
remove old ones without restarting the session
Aren't you losing potentially important events by doing that --
FORK, COMM, MMAP?
those are synthesized for each file via the synthesize_record
function, see:
[PATCH 19/25] perf tools: Move synthesizing into single function
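
(For illustration only: a rough sketch of what that kind of per-file
synthesis has to do, i.e. walk /proc and emit synthetic COMM/MMAP-style
records for every task already running when a new data file starts.
The struct and function names below are made up and are not the actual
perf tools code.)

/*
 * Hypothetical sketch (not the actual perf tools code): for each new
 * data file, walk /proc and emit synthetic COMM-style records for the
 * tasks that are already running, so a file that starts mid-session
 * can still resolve pids.  A real implementation would also walk
 * /proc/<pid>/maps to synthesize the MMAP records.
 */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct synth_comm {
	int pid;
	char comm[16];		/* stand-in for PERF_RECORD_COMM */
};

static void synthesize_existing_tasks(FILE *out)
{
	DIR *proc = opendir("/proc");
	struct dirent *d;

	if (!proc)
		return;

	while ((d = readdir(proc)) != NULL) {
		struct synth_comm ev = { .pid = atoi(d->d_name) };
		char path[64];
		FILE *f;

		if (ev.pid <= 0)
			continue;	/* skip non-pid entries */

		snprintf(path, sizeof(path), "/proc/%d/comm", ev.pid);
		f = fopen(path, "r");
		if (!f)
			continue;	/* task exited meanwhile */
		if (fgets(ev.comm, sizeof(ev.comm), f))
			ev.comm[strcspn(ev.comm, "\n")] = '\0';
		fclose(f);

		fwrite(&ev, sizeof(ev), 1, out);	/* stand-in for writing the record */
	}
	closedir(proc);
}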
Ok. I haven't had time to look through your 2 large patch sets.
Seems like a lot of repetitive work on a loaded system. The overhead of
the task events will dominate compared to the samples.
I have a flight-recorder-style command that addresses this problem
(long-running processes/daemons) by processing task events and then
stashing the sample events on a time-ordered list, chopping it to
maintain the time window.
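
(A minimal sketch of that time-ordered-list-with-chopping idea, just to
make the approach concrete; the names are invented for illustration and
this is not the actual command's code.)

/*
 * Samples are kept sorted by timestamp, and anything older than the
 * configured window is freed from the head as newer samples arrive.
 */
#include <stdint.h>
#include <stdlib.h>

struct sample_entry {
	uint64_t time;			/* sample timestamp in ns */
	struct sample_entry *next;
	/* decoded sample payload would live here */
};

struct flight_recorder {
	struct sample_entry *head;	/* oldest first */
	uint64_t newest;		/* highest timestamp seen so far */
	uint64_t window_ns;		/* e.g. 60ULL * 1000000000 for a 1m window */
};

static void fr_add(struct flight_recorder *fr, struct sample_entry *e)
{
	struct sample_entry **pos = &fr->head;

	/* keep the list sorted by time; events arrive nearly ordered,
	 * so this scan is short in practice */
	while (*pos && (*pos)->time <= e->time)
		pos = &(*pos)->next;
	e->next = *pos;
	*pos = e;

	if (e->time > fr->newest)
		fr->newest = e->time;

	/* chop: free samples that have fallen out of the time window */
	while (fr->head && fr->head->time + fr->window_ns < fr->newest) {
		struct sample_entry *old = fr->head;

		fr->head = old->next;
		free(old);
	}
}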
so far I noticed there could be a race between an EXIT event and
the remaining SAMPLE events sitting in a different CPU mmap than
the EXIT.. ending up with the EXIT being stored in the old file,
while the SAMPLEs end up in the new one
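
(One common way to avoid that kind of cross-mmap reordering, shown only
as an illustration and not as a fix in the posted series, is to flush
to the current file only events older than the minimum "last seen"
timestamp across all mmaps, and rotate files at that watermark; a
hypothetical sketch:)

/*
 * Per-CPU mmaps are drained independently, so before rotating to a new
 * file one can compute a watermark -- the minimum of the newest
 * timestamp read from each mmap -- and write out only events older
 * than it.  SAMPLEs that logically precede an EXIT then cannot be left
 * behind in a slower buffer and end up in the next file.
 */
#include <stdint.h>

struct mmap_state {
	uint64_t last_seen;	/* timestamp of the newest event read from this mmap */
};

/* events with a timestamp below the returned value are safe to write
 * to the current file: every mmap has been read past that point */
static uint64_t flush_watermark(const struct mmap_state *maps, int nr)
{
	uint64_t min = UINT64_MAX;

	for (int i = 0; i < nr; i++)
		if (maps[i].last_seen < min)
			min = maps[i].last_seen;
	return min;
}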
I was thinking about some 'perf daemon' so I don't need to run that
manually.. seems similar to what you did
Right now I'm focused on scheduling events. This latest version of it can
easily be recycled for other use cases. Some work would be needed to dump
events to a file versus dumping processed information.
I am in San Jose this week. Not sure if I will have time to finish it to
the point of pushing out patches, but maybe I can push it to github in the
next couple of days.
David