On Thu, Jan 27, 2011 at 09:12:34PM +0000, Devin Teske wrote:
> On Thu, 2011-01-27 at 22:59 +0200, Kostik Belousov wrote:
> 
> > On Thu, Jan 27, 2011 at 08:50:48PM +0000, Devin Teske wrote:
> > > Probably did something like this:
> > > 
> > >     time sh -c '( firefox & ); sleep 10000000'
> > > 
> > > and then pressed Ctrl-C when he felt that firefox was finished loading.
> > > When Ctrl-C is pressed, time(1) reports how long the command ran
> > > up to that point.
> > > NOTE: Pressing Ctrl-C will not terminate the firefox instance.
> > 
> > You cannot get 1/100-of-a-second precision with this method.
> > This is why I am asking, since I am seeing a < 0.1 second difference.
> > Not to mention some methodological questions, like whether the caches
> > were warmed by several runs before the actual test.
> 
> 
> Really?
> 
> $ time sh -c '( firefox & ); sleep 10000000'
> ^C
> 
> real    0m5.270s
> user    0m0.000s
> sys     0m0.005s
> 
> 
> I'd call that 1/100th of a second precision, wouldn't you?
> 
> HINT: Try using bash instead of csh.
The (I supposed obvious) point of my mail is that you cannot reliably
measure 1/100-second intervals when human interaction is involved.
To make it completely obvious: a human has to press Ctrl-C; I did not
mean reading the numbers from the display.
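
For what it's worth, if sub-0.1 second differences are the question, the
human can be taken out of the loop entirely. A rough sketch (assuming an
X session with xdotool(1) installed; the window class "firefox" is a
guess) that stops the clock when the browser window first becomes
visible, rather than when a finger hits Ctrl-C:

    time sh -c '
        firefox &
        # Block until a visible window with the given class appears;
        # --sync makes xdotool wait for a match before exiting.
        xdotool search --sync --onlyvisible --class firefox >/dev/null
    '

Repeating this several times, after a few warm-up runs to address the
cache question above, would yield numbers where 1/100-second resolution
actually means something.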
