>be open source. It's a simulated web client and web server, running
>inside the kernel. It's good for load-testing and performance-testing
>many kinds of network devices. With two 1-GHz PIII boxes (one acting
>as the client and the other acting as the server) it can generate
>around 5 (act
In message: <[EMAIL PROTECTED]>
John Polstra <[EMAIL PROTECTED]> writes:
: I'm testing that now. But for how long would microuptime have to
: be interrupted to make this happen? Surely not 7.81 seconds! On
: this same machine I have a curses application running which is
: updating t
John Polstra wrote:
> After 25 minutes testing that with NTIMECOUNTER=5, I haven't
> gotten any microuptime messages. So it appears that my problem was
> just that the current timecounter wrapped all the way around the ring
> while microuptime was interrupted, due to the high HZ value and the
> Is C a great language, or what? ;-)
Nah, just mediocre even when it comes to obfuscation!
Have you played with unlambda?!
> The way I always remember it is that you read the declaration
> inside-out: starting with the variable name and then heading toward
> the outside while obeying the preced
> PS: Chances are most people don't have cdecl any more. You
> can get it like this:
cd /usr/ports/devel/cdecl && make install
:)
-Anthony.
On Tue, Feb 05, 2002 at 02:42:38PM -0800, Bakul Shah wrote:
>
> PS: Chances are most people don't have cdecl any more. You
> can get it like this:
>
You can also get it like this:
cd /usr/ports/devel/cdecl ; make install
which I just went and did. Pretty helpful utility :)
--K
In article <[EMAIL PROTECTED]>,
Bakul Shah <[EMAIL PROTECTED]> wrote:
> [I see that jdp has answered your question but] cdecl is your friend!
>
> $ cdecl
> Type `help' or `?' for help
> cdecl> explain volatile struct timecounter *timecounter
> declare timecounter as pointer to volatile struct t
> Btw, regarding the volatile thing:
>
> If I do
> extern volatile struct timecounter *timecounter;
>
> microtime()
> {
> struct timecounter *tc;
>
> tc = timecounter;
>
> The compiler complains about losing the volatile thing.
>
> How do I tell
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>
> Well, either way I will commit the volatile and this NTIMECOUNTER to
> -current now, it's certainly better than what is there now.
Great, thanks.
> Thanks for the help, I owe you one at BSDcon!
I'll look forward
In message <[EMAIL PROTECTED]>, John Polstra writes:
>After 25 minutes testing that with NTIMECOUNTER=5, I haven't
>gotten any microuptime messages. So it appears that my problem was
>just that the current timecounter wrapped all the way around the ring
>while microuptime was interrupted, du
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> Could you try this combination:
>
> NTIMECOUNTER = HZ (or even 5 * HZ)
> tco_method = 0
> no splhigh protection for microuptime() ?
After 25 m
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> >I don't follow that. As I read the code, the "current" timecounter
> >is only advanced every second -- not every 1/HZ seconds. Why should
> >more of them be nee
In message <[EMAIL PROTECTED]>, John Polstra writes:
>OK, adding the splhigh() around the body of microuptime seems to have
>solved the problem. After 45 minutes of running the same test as
>before, I haven't gotten a single message. If I get one later, I'll
>let you know.
Ok, so we know where
OK, adding the splhigh() around the body of microuptime seems to have
solved the problem. After 45 minutes of running the same test as
before, I haven't gotten a single message. If I get one later, I'll
let you know.
> >I'm testing that now. But for how long would microuptime have to
> >be int
> >How are issues (1) and (3) above different?
> >
> >ps. I'm just trying to understand, and am *NOT* trying to start a
> >flame-war. :) :) :)
>
> If the starvation happens to hardclock() or rather tc_windup() the effect
> will be cumulative and show up in permanent jumps in the output of date
>
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>> In message <[EMAIL PROTECTED]>, John Polstra writes:
>> >Yes, I think you're onto something now. It's a 550 MHz. machine, so
>> >the TSC increments every 1.82 ns
In message <[EMAIL PROTECTED]>, Nate Williams writes:
>How are issues (1) and (3) above different?
>
>ps. I'm just trying to understand, and am *NOT* trying to start a
>flame-war. :) :) :)
If the starvation happens to hardclock() or rather tc_windup() the effect
will be cumulative and show up i
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> >Yes, I think you're onto something now. It's a 550 MHz. machine, so
> >the TSC increments every 1.82 nsec. And 1.82 nsec * 2^32 is 7.81
> >seconds. :-)
>
> In
> >> >> Can you try to MFC rev 1.111 and see if that changes anything ?
> >> >
> >> >That produced some interesting results. I am still testing under
> >> >very heavy network interrupt load. With the change from 1.111, I
> >> >still get the microuptime messages about as often. But look how
> >>
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>> In message <[EMAIL PROTECTED]>, John Polstra writes:
>> >In article <[EMAIL PROTECTED]>,
>> >John Polstra <[EMAIL PROTECTED]> wrote:
>> >
>> >Another interesting
In message <[EMAIL PROTECTED]>, Nate Williams writes:
>> >> Can you try to MFC rev 1.111 and see if that changes anything ?
>> >
>> >That produced some interesting results. I am still testing under
>> >very heavy network interrupt load. With the change from 1.111, I
>> >still get the microuptime
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>> This may be a problem, I have yet to see GCC make different code for
>> that but I should probably have committed the "volatile" anyway.
>
>It should be committed, but it is not causing the problem in this
>c
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>
> Sanity-check: this is NOT a multi-CPU system, right ?
Right. These are all single-CPU systems with non-SMP -stable
kernels.
John
--
John Polstra
John D. Polstra & Co., Inc.   Seattle, Wa
> >> Can you try to MFC rev 1.111 and see if that changes anything ?
> >
> >That produced some interesting results. I am still testing under
> >very heavy network interrupt load. With the change from 1.111, I
> >still get the microuptime messages about as often. But look how
> >much larger the
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> >In article <[EMAIL PROTECTED]>,
> >John Polstra <[EMAIL PROTECTED]> wrote:
> >
> >Another interesting thing is that the jumps are always 7.7x seconds
> >back --
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>John Polstra <[EMAIL PROTECTED]> wrote:
>> In article <[EMAIL PROTECTED]>,
>> Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>> > In message <[EMAIL PROTECTED]>, John Polstra writes:
>> >
>> > Can you try to MF
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>> In message <[EMAIL PROTECTED]>, John Polstra writes:
>>
>> Can you try to MFC rev 1.111 and see if that changes anything ?
>
>That produced some interesting resu
In article <[EMAIL PROTECTED]>,
John Polstra <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>,
> Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> > In message <[EMAIL PROTECTED]>, John Polstra writes:
> >
> > Can you try to MFC rev 1.111 and see if that changes anything ?
>
> That pro
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
>
> Can you try to MFC rev 1.111 and see if that changes anything ?
That produced some interesting results. I am still testing under
very heavy network interrupt
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
>
> >Agreed. But in the cases I'm worrying about right now, the
> >timecounter is the TSC.
>
> Now, *that* is very interesting, how reproducible is it ?
I can re
In message <[EMAIL PROTECTED]>, John Polstra writes:
>Agreed. But in the cases I'm worrying about right now, the
>timecounter is the TSC.
Now, *that* is very interesting, how reproducible is it ?
Can you try to MFC rev 1.111 and see if that changes anything ?
--
Poul-Henning Kamp | UN
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> >In article <[EMAIL PROTECTED]>,
> >John Baldwin <[EMAIL PROTECTED]> wrote:
> >>
> >> > like, "If X is never locked out for longer than Y, this problem
> >> > ca
In article <[EMAIL PROTECTED]>,
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, John Polstra writes:
> >That's the global variable named "timecounter", right? I did notice
> >one potential problem: that variable is not declared volatile. So
> >in this part ...
>
In message <[EMAIL PROTECTED]>, John Polstra writes:
>In article <[EMAIL PROTECTED]>,
>John Baldwin <[EMAIL PROTECTED]> wrote:
>>
>> > like, "If X is never locked out for longer than Y, this problem
>> > cannot happen." I'm looking for definitions of X and Y. X might be
>> > hardclock() or sof
In article <[EMAIL PROTECTED]>,
John Baldwin <[EMAIL PROTECTED]> wrote:
>
> > like, "If X is never locked out for longer than Y, this problem
> > cannot happen." I'm looking for definitions of X and Y. X might be
> > hardclock() or softclock() or non-interrupt kernel processing. Y
> > would b
In message <[EMAIL PROTECTED]>, "M. Warner Losh" writes:
>In message: <[EMAIL PROTECTED]>
>Poul-Henning Kamp <[EMAIL PROTECTED]> writes:
>: But the i8254 is a piece of shit in this context, and due to
>: circumstances (apm being enabled) most machines end up using the
>: i8254 by defau
In message: <[EMAIL PROTECTED]>
Poul-Henning Kamp <[EMAIL PROTECTED]> writes:
: But the i8254 is a piece of shit in this context, and due to
: circumstances (apm being enabled) most machines end up using the
: i8254 by default.
:
: My (and I believe Bruce's) diagnosis so far is that mo
In message <[EMAIL PROTECTED]>, John Polstra writes:
>Mike Smith <[EMAIL PROTECTED]> wrote:
>>
>> It's not necessarily caused by interrupt latency. Here's the assumption
>> that's being made.
>[...]
>
>Thanks for the superb explanation! I appreciate it.
My apologies for never getting the tim
On 04-Feb-02 John Polstra wrote:
> In article <[EMAIL PROTECTED]>,
> Dominic Marks <[EMAIL PROTECTED]> wrote:
>> On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
>> > I'm trying to understand the timecounter code, and in particular the
>> > reason for the "microuptime went backwards
In article <[EMAIL PROTECTED]>,
Mike Smith <[EMAIL PROTECTED]> wrote:
>
> It's not necessarily caused by interrupt latency. Here's the assumption
> that's being made.
[...]
Thanks for the superb explanation! I appreciate it.
> There is a ring of timecounter structures, of some size. In tes
> In article <[EMAIL PROTECTED]>,
> Dominic Marks <[EMAIL PROTECTED]> wrote:
> > On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
> > > I'm trying to understand the timecounter code, and in particular the
> > > reason for the "microuptime went backwards" messages which I see on
> >
In article <[EMAIL PROTECTED]>,
Dominic Marks <[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
> > I'm trying to understand the timecounter code, and in particular the
> > reason for the "microuptime went backwards" messages which I see on
> > just about
On Mon, Feb 04, 2002 at 01:21:25PM -0800, John Polstra wrote:
> I'm trying to understand the timecounter code, and in particular the
> reason for the "microuptime went backwards" messages which I see on
> just about every machine I have, whether running -stable or -current.
I see them everywhere