I observed egltriangle on the N10 (2 cores):
server 30%
client 20%
I misinterpreted that as 100% of one core, which would be the right reading
for some tools and for "top" on some Unixes. But I just realized top on
Linux scales to 100% per core (so up to 200% on this 2-core device), so my
assessment should have been 50% of one core.
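
To spell out the arithmetic, here is a minimal sketch (plain Python; the
percentages and core count are the figures quoted above) of the two readings
of top's %CPU column:

    # Two readings of top's %CPU column for the egltriangle demo on the N10.
    # In top's default (Irix) mode on Linux, 100% means one fully busy core,
    # so on a 2-core device the column can total 200%.
    server_pct = 30.0   # Mir server, as reported by top
    client_pct = 20.0   # mir_demo_client_egltriangle, as reported by top
    n_cores = 2         # Nexus 10

    total_pct = server_pct + client_pct          # 50% in per-core units

    # Misreading: treating the figures as a share of the whole machine,
    # which on two cores looks like a full core being used.
    misread = total_pct * n_cores                # 100%

    # Correct reading: top already reports per-core units, so the pair of
    # processes uses about half of one core.
    actual = total_pct                           # 50%

    print(f"misread as {misread:.0f}% of one core")
    print(f"actually   {actual:.0f}% of one core")
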
Hey Daniel - thanks for the numbers. However, the last statement about having
a core 100% loaded with mir_demo_client_egltriangle doesn't seem reasonable:
running on the N4 shows ~20% CPU load at most.
br, kg
On Mon, Dec 16, 2013 at 2:40 AM, Daniel van Vugt <
daniel.van.v...@canonical.com> wrote:
Nexus10 (3.4.0-4-manta)
Direct 1.0ms with sporadic spikes to 74ms, sometimes 400ms
Nested 1.2ms with sporadic spikes to 100ms, sometimes 1000ms
I'm not convinced the problem is specifically Android. It could just be
a common issue visible on the slowest hardware.
On the Nexus10, just running a single mir_demo_client_egltriangle appears to
keep one core 100% loaded.

> I wasn't going to spend any more time on input latency measurements this
> week but perhaps there is sufficient interest to get more details...

Out of curiosity: When you say leveraging existing reports, does that
mean this evaluation is using the lttng traces?
Thanks,
Thom
Yes indeed. I did think about that, but if you look at the merge
proposal it's all about using the existing reports unmodified right now.
And my primary task was to assess the feasibility of nesting vs non-nested.
I wasn't going to spend any more time on input latency measurements this
week but perhaps there is sufficient interest to get more details...
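
On the lttng question, here is a rough sketch of how latency figures like
these could be pulled out of a trace after the fact. The event names
(input_sent, input_received), the line format, and the timestamps are all
placeholders rather than Mir's actual report events; it just assumes the
trace has already been exported as one timestamped line per event:

    import re
    from statistics import median

    # Hypothetical trace lines: "<timestamp_ns> <event> id=<n>". These event
    # names and timestamps are made up for illustration; they are not Mir's
    # actual lttng report events. The idea is simply to pair a "sent" event
    # on the server side with the matching "received" event on the client.
    sample = [
        "1000000 input_sent id=1",
        "1800000 input_received id=1",
        "2000000 input_sent id=2",
        "2900000 input_received id=2",
        "3000000 input_sent id=3",
        "77000000 input_received id=3",
    ]

    sent, latencies_ms = {}, []
    for line in sample:
        m = re.match(r"(\d+) (input_sent|input_received) id=(\d+)", line)
        if not m:
            continue
        t_ns, event, eid = int(m[1]), m[2], m[3]
        if event == "input_sent":
            sent[eid] = t_ns
        elif eid in sent:
            latencies_ms.append((t_ns - sent.pop(eid)) / 1e6)

    print(f"median {median(latencies_ms):.1f}ms, "
          f"worst {max(latencies_ms):.1f}ms over {len(latencies_ms)} events")
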
Hi,
Can we split the times up? E.g. decoding from evdev until an EV_SYN,
internal processing in the shell, transfer to the client?
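
To make that breakdown concrete, a minimal sketch (plain Python; the stage
names follow the split suggested above and the timestamps are invented) of
how per-stage latencies fall out once each checkpoint is timestamped:

    # Hypothetical per-event checkpoints (ms); the stage names follow the
    # split suggested above and the timestamps are invented.
    checkpoints = [
        ("evdev event read",      0.00),
        ("EV_SYN decoded",        0.15),
        ("shell processing done", 0.60),
        ("delivered to client",   1.20),
    ]

    for (prev_name, prev_t), (name, t) in zip(checkpoints, checkpoints[1:]):
        print(f"{prev_name} -> {name}: {t - prev_t:.2f}ms")
    print(f"total: {checkpoints[-1][1] - checkpoints[0][1]:.2f}ms")
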
On Mon, Dec 16, 2013 at 6:52 AM, Daniel van Vugt <
daniel.van.v...@canonical.com> wrote:
> If I had a theory, I could test if it correlates with the spikes. At the
> moment I don't even have a theory.
If I had a theory, I could test if it correlates with the spikes. At the
moment I don't even have a theory.
The other weird thing I didn't mention was that the "lowlatency" kernel
has higher latency :). But it was worth a try, as are different kernel
schedulers; I haven't tried playing with those yet.
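
If a theory does turn up, checking it against the spikes could be as simple
as comparing spike rates with and without the suspected condition. A rough
sketch in Python, with entirely made-up data:

    # Per-event latencies plus a flag saying whether some suspected condition
    # held for that event (all values made up). Compare the spike rate with
    # and without the condition to see whether the theory holds water.
    latencies_ms = [1.0, 1.2, 74.0, 0.9, 1.1, 400.0, 1.3, 1.0, 90.0, 1.1]
    condition = [False, False, True, False, False, True, False, False, True, False]

    SPIKE_MS = 10.0
    spikes = [lat > SPIKE_MS for lat in latencies_ms]

    def rate(flags):
        return sum(flags) / len(flags) if flags else 0.0

    with_cond = [s for s, c in zip(spikes, condition) if c]
    without_cond = [s for s, c in zip(spikes, condition) if not c]

    print(f"spike rate when the condition holds: {rate(with_cond):.0%}")
    print(f"spike rate when it doesn't:          {rate(without_cond):.0%}")
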
On Fri, 2013-12-13 at 17:31 +0800, Daniel van Vugt wrote:
Here are some fun numbers I've collected about the latency between input
events sent from the top-level Mir server to a client. All in
milliseconds...

Desktop (3.12.0-7-generic)
Direct 0.8ms
Nested 1.3ms

Desktop (3.11.0-11-lowlatency)
Direct 1.0ms
Nested 1.7ms

Nexus4 (3.4.0-3-mako)
Direct 0.9ms
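
For what it's worth, the nesting overhead implied by these figures is just
the pairwise difference; a trivial sketch (plain Python, desktop numbers
copied from above):

    # Direct vs nested latency (ms) for the two desktop kernels, copied from
    # the figures above; the difference is the cost of nesting.
    results = {
        "Desktop 3.12.0-7-generic": {"direct": 0.8, "nested": 1.3},
        "Desktop 3.11.0-11-lowlatency": {"direct": 1.0, "nested": 1.7},
    }

    for machine, r in results.items():
        overhead = r["nested"] - r["direct"]
        print(f"{machine}: nesting adds {overhead:.1f}ms "
              f"({overhead / r['direct']:.0%} over direct)")
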