<[EMAIL PROTECTED]> said:
> Probably the problem is largest for latency, especially in benchmarks.
> Latency benchmarks probably have to start cold, so they have no chance
> of queue lengths > 1, so there must be a context switch per packet and
> may be 2.
It has frequently been proposed that one of the deficienc
On Sat, 15 Oct 2005, Robert Watson wrote:
On Sat, 15 Oct 2005, Bruce Evans wrote:
... However, for netisrs I think it is
common to process only 1 packet per context switch, at least in the
loopback case.
The Mach scheduler allows deferred wakeups to be issued -- "wake up a thread
in the sl
On Sat, 15 Oct 2005, Bruce Evans wrote:
I'm not sure about that. More the reverse. Normal interrupts just
don't occur often enough for their context switch time to matter. This
is most clear for disk devices. Disk devices are relatively slow and
have even slower seeks, so have to talk to
In message <[EMAIL PROTECTED]>, Bruce Evans writes:
>On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
>> Even to this day new CPU chips come out where TSC has flaws that
>> prevent it from being used as timecounter, and we do not have (NDA)
>> access to the data that would allow us to build a list of
Bruce Evans writes:
> On Fri, 14 Oct 2005, Andrew Gallatin wrote:
>
> > Bear in mind that I have no clue about timekeeping. I got into this
> > just because I noticed using a TSC timecounter reduces context switch
> > latency by 40% or more on all the SMP platforms I have access to:
> >
>
On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
Poul-Henning Kamp writes:
The solution is not faster but less reliable timekeeping, the
solution is to move the scheduler(s) away from using time as an
approximation of cpu cycles.
So you mean rather than use binuptime() in mi_switch(), use some
per-cpu cycle counter (like rdtsc)?
On Fri, 14 Oct 2005, Andrew Gallatin wrote:
Bear in mind that I have no clue about timekeeping. I got into this
just because I noticed using a TSC timecounter reduces context switch
latency by 40% or more on all the SMP platforms I have access to:
1.0GHz dual PIII : 50% reduction vs i8254
3.06
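
Context-switch latency in this range is commonly measured with a pipe
ping-pong between two processes (lmbench's lat_ctx works this way); the
sketch below is illustrative only and is not the benchmark behind the
numbers above. Each round trip costs at least two switches, so the
per-switch figure is roughly the elapsed time divided by twice the
round count.

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define ROUNDS  100000

int
main(void)
{
    int ptoc[2], ctop[2], i;
    char byte = 'x';
    struct timeval start, end;
    double usec;

    if (pipe(ptoc) == -1 || pipe(ctop) == -1) {
        perror("pipe");
        return (1);
    }
    switch (fork()) {
    case -1:
        perror("fork");
        return (1);
    case 0:                 /* child: echo each byte straight back */
        for (i = 0; i < ROUNDS; i++) {
            read(ptoc[0], &byte, 1);
            write(ctop[1], &byte, 1);
        }
        _exit(0);
    default:                /* parent: send a byte, wait for the echo */
        gettimeofday(&start, NULL);
        for (i = 0; i < ROUNDS; i++) {
            write(ptoc[1], &byte, 1);
            read(ctop[0], &byte, 1);
        }
        gettimeofday(&end, NULL);
        usec = (end.tv_sec - start.tv_sec) * 1e6 +
            (end.tv_usec - start.tv_usec);
        printf("~%.3f us per context switch\n", usec / (2.0 * ROUNDS));
    }
    return (0);
}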
On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
What if somebody were to port the linux TSC syncing code, and use it
to decide whether or not set kern.timecounter.smp_tsc=1? Would you
object to that?
Yes, I would object to that.
Even to
On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Bruce Evans writes:
The timestamps in mi_switch() are taken on the same CPU and only their
differences are used, so they don't even need to be synced. If they
use the TSC, then the TSCs just need to have the same alm
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
>
>Poul-Henning Kamp writes:
> > The solution is not faster but less reliable timekeeping, the
> > solution is to move the scheduler(s) away from using time as an
> > approximation of cpu cycles.
>
>So you mean rather than use binuptime() in
Poul-Henning Kamp writes:
> The solution is not faster but less reliable timekeeping, the
> solution is to move the scheduler(s) away from using time as an
> approximation of cpu cycles.
So you mean rather than use binuptime() in mi_switch(), use some
per-cpu cycle counter (like rdtsc)?
Heck,
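
A minimal sketch of that idea (x86 only; the helper below is
hypothetical, not the kernel's code): read the TSC with rdtsc and use
only same-CPU differences, which is all the mi_switch() accounting
needs, so cross-CPU synchronization does not enter into it.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical illustration only: read the time-stamp counter on x86.
 * Serializing instructions (cpuid/lfence) are deliberately omitted to
 * keep the sketch short.
 */
static inline uint64_t
read_tsc(void)
{
    uint32_t lo, hi;

    __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int
main(void)
{
    /*
     * Two reads on the same CPU; only the difference would be charged,
     * as in the mi_switch() accounting discussed above.
     */
    uint64_t before = read_tsc();
    uint64_t after = read_tsc();

    printf("back-to-back rdtsc delta: %llu cycles\n",
        (unsigned long long)(after - before));
    return (0);
}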
Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Andrew
> Gallatin
> writes:
>
>> > >What if somebody were to port the linux TSC syncing code, and use it
>> > >to decide whether or not set kern.timecounter.smp_tsc=1? Would you
>> > >object to that?
>> >
>> > Yes, I would object to that.
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
> > >What if somebody were to port the linux TSC syncing code, and use it
> > >to decide whether or not set kern.timecounter.smp_tsc=1? Would you
> > >object to that?
> >
> > Yes, I would object to that.
> >
> > Even to this day new CPU c
Poul-Henning Kamp writes:
> In message <[EMAIL PROTECTED]>, Andrew Gallatin
> writes:
> >
> >Poul-Henning Kamp writes:
> > > The best compromise solution therefore is to change the scheduler
> > > to make decisions based on the TSC ticks (or equivalent on other
> > > archs) and at regular
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
>
>Poul-Henning Kamp writes:
> > The best compromise solution therefore is to change the scheduler
> > to make decisions based on the TSC ticks (or equivalent on other
> > archs) and at regular intervals figure out how fast the CPU ran in
> >
Poul-Henning Kamp writes:
> The best compromise solution therefore is to change the scheduler
> to make decisions based on the TSC ticks (or equivalent on other
> archs) and at regular intervals figure out how fast the CPU ran in
> the last period and convert the TSC ticks accumulated to a tim
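
A rough sketch of that compromise, with hypothetical names and the
per-CPU, locking and overflow details left out: charge raw TSC deltas
on every switch, and only convert the accumulated ticks to time at a
slow, regular calibration interval driven by a trusted (if expensive)
timecounter.

#include <stdint.h>

/*
 * Hypothetical sketch, not FreeBSD code: the scheduler charges raw
 * cycle counts, and a periodic calibration converts them to time using
 * a frequency estimated from a reliable timecounter.
 */
struct thread_acct {
    uint64_t cycles;        /* raw TSC ticks charged so far */
    uint64_t nanoseconds;   /* converted at calibration time */
};

static uint64_t estimated_hz = 1000000000;  /* refreshed each interval */

/* Context-switch path: one subtraction and one addition, no division. */
void
charge_cycles(struct thread_acct *ta, uint64_t tsc_then, uint64_t tsc_now)
{
    ta->cycles += tsc_now - tsc_then;
}

/*
 * Regular, slow interval (say once a second): figure out how fast the
 * CPU ran over the last period from a trusted clock, then convert the
 * accumulated ticks to time.  Overflow handling is omitted.
 */
void
calibrate(struct thread_acct *ta, uint64_t tsc_delta, uint64_t ns_delta)
{
    if (ns_delta != 0)
        estimated_hz = tsc_delta * 1000000000ULL / ns_delta;
    ta->nanoseconds += ta->cycles * 1000000000ULL / estimated_hz;
    ta->cycles = 0;
}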
In message <[EMAIL PROTECTED]>, Bruce Evans writes:
>On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
>
>> In message <[EMAIL PROTECTED]>, Andrew Gallatin
>> writes:
>>
>>> Linux already takes care of syncing the TSC between SMP cpus, so we
>>> know it is possible. This seems like a much more doable optimization.
On Fri, 14 Oct 2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
Linux already takes care of syncing the TSC between SMP cpus, so we
know it is possible. This seems like a much more doable optimization.
And it is likely to have other benefits..
The times
In message <[EMAIL PROTECTED]>, Andrew Gallatin
writes:
>Linux already takes care of syncing the TSC between SMP cpus, so we
>know it is possible. This seems like a much more doable optimization.
>And it is likely to have other benefits..
Validating that the TSC is reliable is a nontrivial task
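
Purely as an illustration of the kind of check under discussion (this
is not the Linux code, and passing it would not by itself prove the TSC
trustworthy): one CPU publishes its TSC while another checks that its
own counter never reads behind a value it has already seen. Pinning
the two sides to different CPUs, and the serializing instructions a
real test would need, are assumed and omitted here.

#include <stdatomic.h>
#include <stdint.h>

/*
 * Hypothetical helpers; the caller is assumed to run publisher() and
 * checker() pinned on two different CPUs (pinning code omitted).
 */
static _Atomic uint64_t shared_stamp;

static inline uint64_t
read_tsc(void)
{
    uint32_t lo, hi;

    __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

/* CPU A: repeatedly publish its current TSC value. */
void
publisher(long rounds)
{
    while (rounds-- > 0)
        atomic_store(&shared_stamp, read_tsc());
}

/*
 * CPU B: if its own TSC ever reads behind a value already observed
 * from CPU A, the counters are skewed and should not be offered as an
 * SMP timecounter (kern.timecounter.smp_tsc stays 0).
 */
int
checker(long rounds)
{
    while (rounds-- > 0) {
        uint64_t seen = atomic_load(&shared_stamp);

        if (read_tsc() < seen)
            return (0);     /* skew observed: do not trust the TSC */
    }
    return (1);             /* no skew seen, which proves little */
}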
Garrett Wollman writes:
> <[EMAIL PROTECTED]> said:
>
> > Right now, at least, it seems to work OK. I haven't tried witness,
> > but a non-debug kernel shows a big speedup from enabling it. Do
> > you think there is a chance that it could be made to work in FreeBSD?
>
> I did this ten years a
<[EMAIL PROTECTED]> said:
> Right now, at least, it seems to work OK. I haven't tried witness,
> but a non-debug kernel shows a big speedup from enabling it. Do
> you think there is a chance that it could be made to work in FreeBSD?
I did this ten years ago for a previous job and was able to blow out
the stack
Robert Watson writes:
>
> On Wed, 12 Oct 2005, Andrew Gallatin wrote:
>
> > Speaking of net.isr, is there any reason why if_simloop() calls
> > netisr_queue() rather than netisr_dispatch()?
>
> Yes -- it's basically to prevent recursion for loopback traffic, which can
> result in both
On Wed, 12 Oct 2005, Andrew Gallatin wrote:
Speaking of net.isr, is there any reason why if_simloop() calls
netisr_queue() rather than netisr_dispatch()?
Yes -- it's basically to prevent recursion for loopback traffic, which can
result in both lock orders and general concerns regarding reent
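
Schematically, the difference in question (a simplified sketch, not the
real netisr code; locking and the real mbuf/handler types are omitted):
direct dispatch runs the protocol handler in the calling thread, so
looped-back traffic can re-enter the stack while locks are held,
whereas queueing hands the packet to the netisr software-interrupt
thread and breaks the recursion.

#include <stddef.h>

/* Placeholder types for the sketch. */
struct mbuf {
    struct mbuf *m_next;
};
typedef void netisr_handler_t(struct mbuf *);

static struct mbuf *isr_q_head, *isr_q_tail;    /* drained by swi thread */

/*
 * Direct-dispatch style: the handler runs right here, in the calling
 * thread.  If the handler ends up calling back into the output path
 * (as loopback traffic does), it re-enters the stack on the same
 * stack, with whatever locks the caller already holds.
 */
void
dispatch_sketch(netisr_handler_t *handler, struct mbuf *m)
{
    handler(m);
}

/*
 * Queued style: append the packet and return; the netisr thread
 * processes it later in a clean context, so if_simloop() never nests.
 * (Real code locks the queue and bounds its length.)
 */
void
queue_sketch(struct mbuf *m)
{
    m->m_next = NULL;
    if (isr_q_tail != NULL)
        isr_q_tail->m_next = m;
    else
        isr_q_head = m;
    isr_q_tail = m;
}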
Speaking of net.isr, is there any reason why if_simloop() calls
netisr_queue() rather than netisr_dispatch()?
Drew
On Wed, 12 Oct 2005 [EMAIL PROTECTED] wrote:
At Tue, 11 Oct 2005 15:01:11 +0100 (BST),
rwatson wrote:
If I don't hear anything back in the near future, I will commit a
change to 7.x to make direct dispatch the default, in order to let a
broader community do the testing. :-) If you are setup t
At Tue, 11 Oct 2005 15:01:11 +0100 (BST),
rwatson wrote:
> If I don't hear anything back in the near future, I will commit a
> change to 7.x to make direct dispatch the default, in order to let a
> broader community do the testing. :-) If you are setup to easily
> test stability and performance re
On Wed, 5 Oct 2005, Robert Watson wrote:
In 2003, Jonathan Lemon added initial support for direct dispatch of
netisr handlers from the calling thread, as part of his DARPA/NAI Labs
contract in the DARPA CHATS research program. Over the last two years
since then, Sam Leffler and I have worked
In 2003, Jonathan Lemon added initial support for direct dispatch of
netisr handlers from the calling thread, as part of his DARPA/NAI Labs
contract in the DARPA CHATS research program. Over the last two years
since then, Sam Leffler and I have worked to refine this implementation,
removing