* David Ahern <[email protected]> wrote:

> The recovery algorithm in __perf_session__process_events() attempts to
> remap the perf.data file at a different file_offset and try again at a
> new head position. Both of these adjustments rely on page_offset. If
> page_offset is 0 then file_offset and head never change, which means
> the remap attempt is the same, fetch_mmaped_event returns the same
> result, and the processing loops forever.
> 
> Detect this condition and warn the user.
> 
> Signed-off-by: David Ahern <[email protected]>
> Cc: Arnaldo Carvalho de Melo <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Frederic Weisbecker <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Jiri Olsa <[email protected]>
> Cc: Namhyung Kim <[email protected]>
> Cc: Stephane Eranian <[email protected]>
> ---
>  tools/perf/util/session.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
> index cf1fe01..1c4dc45 100644
> --- a/tools/perf/util/session.c
> +++ b/tools/perf/util/session.c
> @@ -1235,6 +1235,12 @@ more:
>               }
>  
>               page_offset = page_size * (head / page_size);
> +             /* catch looping where we never make forward progress. */
> +             if (page_offset == 0) {
> +                     pr_err("Loop detected while processing events. Is the file corrupted?\n");
> +                     return -1;
> +             }
> +
>               file_offset += page_offset;
>               head -= page_offset;
>               goto remap;

Ah, nice!
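
The stall is easy to see with concrete numbers: with a 4K page size, any
head below 4096 makes page_size * (head / page_size) collapse to 0, so
file_offset and head never move and the next remap covers exactly the
same window. A minimal standalone sketch of that arithmetic (the values
are hypothetical, not taken from a real perf.data):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t page_size   = 4096;	/* typical page size */
	uint64_t file_offset = 0;
	uint64_t head        = 100;	/* hypothetical: stuck below page_size */

	for (int pass = 0; pass < 3; pass++) {
		/* same rounding as in __perf_session__process_events() */
		uint64_t page_offset = page_size * (head / page_size);

		file_offset += page_offset;	/* += 0: no forward progress */
		head        -= page_offset;	/* -= 0: no forward progress */
		printf("pass %d: page_offset=%" PRIu64 " file_offset=%" PRIu64
		       " head=%" PRIu64 "\n", pass, page_offset, file_offset,
		       head);
	}
	return 0;
}

Every pass prints the same file_offset and head, which is exactly the
loop the patch detects.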

Btw., would it make sense to emit a (once-only) warning and 
optimistically fix up page_offset to 1 (or 4096), letting things 
continue with the next set of data - could we recover most of the data 
in that case?
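
A minimal sketch of that idea against the same hunk (completely
untested; pr_warning() is the existing helper from
tools/perf/util/debug.h, and head is reset to 0 rather than decremented
so the u64 arithmetic below the check cannot wrap around):

		page_offset = page_size * (head / page_size);
		if (page_offset == 0) {
			static bool warned;

			if (!warned) {
				pr_warning("No forward progress - skipping a "
					   "page. Is the file corrupted?\n");
				warned = true;
			}
			/* optimistically skip one page and retry */
			file_offset += page_size;
			head = 0;
			goto remap;
		}
		file_offset += page_offset;
		head -= page_offset;
		goto remap;

Whether processing recovers after that depends on a valid event header
happening to start at the skipped-to page boundary, so it is best-effort
at most - but it would salvage whatever follows an isolated bad page.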

Thanks,

        Ingo