Herbert Xu writes:
> On Thu, Aug 16, 2007 at 02:11:43PM +1000, Paul Mackerras wrote:
> >
> > The uses of atomic_read where one might want it to allow caching of
> > the result seem to me to fall into 3 categories:
> >
> > 1. Places that are buggy because of a race arising from the way it's
> >
What exactly is the difference between kzalloc and kcalloc? From the
definitions, I can see that kcalloc should be used for array
allocation, but kzalloc is also used for allocating arrays, as in
the patch below.
Is there a coding standard for this, or can developers use kzalloc and
kcalloc as per their coding preference?
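For what it's worth, both helpers return zeroed memory; the practical
difference is that kcalloc() takes the element count and element size as
separate arguments and fails the allocation if n * size would overflow,
which is why it is the conventional choice for arrays. A minimal sketch
(struct foo and the helper names are made up for illustration, not taken
from the patch below):

#include <linux/slab.h>

struct foo {
        int id;
        void *data;
};

/* Array allocation with overflow checking: kcalloc(n, size, flags)
 * returns zeroed memory and refuses requests where n * size overflows. */
static struct foo *alloc_foo_array(size_t n)
{
        return kcalloc(n, sizeof(struct foo), GFP_KERNEL);
}

/* Same result via kzalloc(), but the caller must guarantee that
 * n * sizeof(struct foo) cannot overflow. */
static struct foo *alloc_foo_array_kz(size_t n)
{
        return kzalloc(n * sizeof(struct foo), GFP_KERNEL);
}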
Herbert Xu writes:
> It doesn't matter. The memory pressure flag is an *advisory*
> flag. If we get it wrong the worst that'll happen is that we'd
> waste some time doing work that'll be thrown away.
Ah, so it's the "racy but I don't care because it's only an
optimization" case. That's fine.
On Thu, 16 Aug 2007, Satyam Sharma wrote:
> Hi Bill,
>
>
> On Wed, 15 Aug 2007, Bill Fink wrote:
>
> > On Wed, 15 Aug 2007, Satyam Sharma wrote:
> >
> > > (C)
> > > $ cat tp3.c
> > > int a;
> > >
> > > void func(void)
> > > {
> > > *(volatile int *)&a = 10;
> > > *(volatile int *)&a = 20;
On Thu, Aug 16, 2007 at 02:11:43PM +1000, Paul Mackerras wrote:
>
> The uses of atomic_read where one might want it to allow caching of
> the result seem to me to fall into 3 categories:
>
> 1. Places that are buggy because of a race arising from the way it's
> used.
>
> 2. Places where there
On Thu, Aug 16, 2007 at 02:34:25PM +1000, Paul Mackerras wrote:
>
> I'm talking about this situation:
>
> CPU 0 comes into __sk_stream_mem_reclaim, reads memory_allocated, but
> then before it can do the store to *memory_pressure, CPUs 1-1023 all
> go through sk_stream_mem_schedule, collectively i
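A userspace sketch of the interleaving being described, assuming the shape
of the code quoted elsewhere in this thread (the names here stand in for
->memory_allocated, *->memory_pressure and ->sysctl_mem[0]; this is not
the kernel code itself). The check of the counter and the later write to
the flag are two separate steps, so other CPUs can change the counter in
between:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int allocated;   /* stands in for ->memory_allocated */
static bool pressure;          /* stands in for *->memory_pressure */
static int low_limit = 128;    /* stands in for ->sysctl_mem[0]    */

/* CPU 0: reads the counter, then acts on the flag some time later. */
void reclaim_path(void)
{
        if (atomic_load(&allocated) < low_limit)  /* step 1: check */
                pressure = false;                 /* step 2: act   */
}

/* CPUs 1..N: may raise the counter (and the flag) between steps 1 and 2,
 * after which CPU 0's stale write clears the flag again. */
void alloc_path(int pages)
{
        if (atomic_fetch_add(&allocated, pages) + pages > low_limit)
                pressure = true;
}

As noted earlier in the thread, this is tolerated in practice because the
flag is only advisory; the sketch just shows where the window is.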
Hi Bill,
On Wed, 15 Aug 2007, Bill Fink wrote:
> On Wed, 15 Aug 2007, Satyam Sharma wrote:
>
> > (C)
> > $ cat tp3.c
> > int a;
> >
> > void func(void)
> > {
> > *(volatile int *)&a = 10;
> > *(volatile int *)&a = 20;
> > }
> > $ gcc -Os -S tp3.c
> > $ cat tp3.s
> > ...
> > movl $10, a
Herbert Xu writes:
> > You mean it's intended that *sk->sk_prot->memory_pressure can end up
> > as 1 when sk->sk_prot->memory_allocated is small (less than
> > ->sysctl_mem[0]), or as 0 when ->memory_allocated is large (greater
> > than ->sysctl_mem[2])? Because that's the effect of the current c
Satyam Sharma writes:
> Anyway, the problem, of course, is that this conversion to a stronger /
> safer-by-default behaviour doesn't happen with zero cost to performance.
> Converting atomic ops to "volatile" behaviour did add ~2K to kernel text
> for archs such as i386 (possibly to important code
On Thu, Aug 16, 2007 at 01:48:32PM +1000, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > If you're referring to the code in sk_stream_mem_schedule
> > then it's working as intended. The atomicity guarantees
>
> You mean it's intended that *sk->sk_prot->memory_pressure can end up
> as 1 when sk
Herbert Xu writes:
> If you're referring to the code in sk_stream_mem_schedule
> then it's working as intended. The atomicity guarantees
You mean it's intended that *sk->sk_prot->memory_pressure can end up
as 1 when sk->sk_prot->memory_allocated is small (less than
->sysctl_mem[0]), or as 0 when
On Thu, Aug 16, 2007 at 01:15:05PM +1000, Paul Mackerras wrote:
>
> But others can also reduce the reservation. Also, the code sets and
> clears *sk->sk_prot->memory_pressure nonatomically with respect to the
> reads of sk->sk_prot->memory_allocated, so in fact the code doesn't
> guarantee any pa
On Wed, 15 Aug 2007, Satyam Sharma wrote:
> (C)
> $ cat tp3.c
> int a;
>
> void func(void)
> {
> *(volatile int *)&a = 10;
> *(volatile int *)&a = 20;
> }
> $ gcc -Os -S tp3.c
> $ cat tp3.s
> ...
> movl $10, a
> movl $20, a
> ...
I'm curious about one minor tangential point. W
On Thu, Aug 16, 2007 at 01:23:06PM +1000, Paul Mackerras wrote:
>
> In particular, atomic_read seems to lend itself to buggy uses. People
> seem to do things like:
>
> atomic_add(&v, something);
> if (atomic_read(&v) > something_else) ...
If you're referring to the code in sk_stream_
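The snippet above is schematic, but the race it points at is real: another
CPU can modify v between the add and the read. Where the post-update value
is what matters, the usual fix is the return-value form of the primitive.
A minimal sketch (v, limit and the function names are placeholders, not
from any particular caller):

#include <asm/atomic.h>

static atomic_t v = ATOMIC_INIT(0);

/* Racy check-after-update: another CPU may modify v between the
 * atomic_add() and the atomic_read(), so the value tested is not
 * necessarily the result of this CPU's own addition. */
static int update_racy(int limit)
{
        atomic_add(1, &v);
        return atomic_read(&v) > limit;
}

/* Race-free with respect to our own update: atomic_add_return()
 * performs the addition and returns the new value as one atomic
 * operation. */
static int update_safe(int limit)
{
        return atomic_add_return(1, &v) > limit;
}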
>It's not about being a niche. It's about creating a maintainable
>software net stack that has predictable behavior.
>
>Needing to reach out of the RDMA sandbox and reserve net stack resources
>away from itself travels a path we've consistently avoided.
We need to ensure that we're also creating
Herbert Xu writes:
> > Are you sure? How do you know some other CPU hasn't changed the value
> > in between?
>
> Yes I'm sure, because we don't care if others have increased
> the reservation.
But others can also reduce the reservation. Also, the code sets and
clears *sk->sk_prot->memory_press
Christoph Lameter writes:
> > But I have to say that I still don't know of a single place
> > where one would actually use the volatile variant.
>
> I suspect that what you say is true after we have looked at all callers.
It seems that there could be a lot of places where atomic_t is used in
a n
Satyam Sharma writes:
> I can't speak for this particular case, but there could be similar code
> examples elsewhere, where we do the atomic ops on an atomic_t object
> inside a higher-level locking scheme that would take care of the kind of
> problem you're referring to here. It would be useful f
On 8/14/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
> According to git, the only one who touched this file during the last
> 5 years was me when removing drivers...
>
> modinfo offers less ancient information.
>
> Signed-off-by: Adrian Bunk <[EMAIL PROTECTED]>
>
Fine by me for any of the old stuff
> Needing to reach out of the RDMA sandbox and reserve net stack
> resources away from itself travels a path we've consistently avoided.
Where did the idea of an "RDMA sandbox" come from? Obviously no one
disagrees with keeping things clean and maintainable, but the idea
that RDMA is a second-c
On Thu, 16 Aug 2007, Satyam Sharma wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
> > Herbert Xu writes:
> >
> > > See sk_stream_mem_schedule in net/core/stream.c:
> > >
> > > /* Under limit. */
> > > if (atomic_read(sk->sk_prot->memory_allocated) <
> > > sk->sk_prot->sys
On Thu, Aug 16, 2007 at 10:11:05AM +0800, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 12:05:56PM +1000, Paul Mackerras wrote:
> > Herbert Xu writes:
> >
> > > See sk_stream_mem_schedule in net/core/stream.c:
> > >
> > > /* Under limit. */
> > > if (atomic_read(sk->sk_prot->memory_
On Wed, Aug 15, 2007 at 06:41:40PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Understood. My point is not that the impact is precisely zero, but
> > rather that the impact on optimization is much less hurtful than the
> > problems that could arise otherwi
Alan J. Wylie <[EMAIL PROTECTED]> wrote:
>
> The photos, along with the following information are available at
> http://wylie.me.uk/skb_pull_rcsum/
The really important bit has scrolled off. Try booting with
vga= to increase the resolution. Or use
a serial console if you can.
Thanks,
On Thu, Aug 16, 2007 at 03:30:44AM +0200, Segher Boessenkool wrote:
> >>>Part of the motivation here is to fix heisenbugs. If I knew where
> >>>they
> >>
> >>By the same token we should probably disable optimisations
> >>altogether since that too can create heisenbugs.
> >
> >Precisely the point
On Thu, Aug 16, 2007 at 11:09:25AM +1000, Nick Piggin wrote:
> Paul E. McKenney wrote:
> >On Wed, Aug 15, 2007 at 11:30:05PM +1000, Nick Piggin wrote:
>
> >>Especially since several big architectures don't have volatile in their
> >>atomic_get and _set, I think it would be a step backwards to add
Steve Wise wrote:
David Miller wrote:
From: Sean Hefty <[EMAIL PROTECTED]>
Date: Thu, 09 Aug 2007 14:40:16 -0700
Steve Wise wrote:
Any more comments?
Does anyone have ideas on how to reserve the port space without using
a struct socket?
How about we just remove the RDMA stack altogether?
Segher Boessenkool wrote:
Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would f
On Thu, Aug 16, 2007 at 03:23:28AM +0200, Segher Boessenkool wrote:
> No; compilation units have nothing to do with it, GCC can optimise
> across compilation unit boundaries just fine, if you tell it to
> compile more than one compilation unit at once.
> >>>
> >>>Last I checked, the Lin
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > See sk_stream_mem_schedule in net/core/stream.c:
> >
> > /* Under limit. */
> > if (atomic_read(sk->sk_prot->memory_allocated) <
> > sk->sk_prot->sysctl_mem[0]) {
> > if (*sk->sk_prot->memory
Herbert Xu wrote:
But I have to say that I still don't know of a single place
where one would actually use the volatile variant.
Given that many of the existing users do currently have "volatile", are
you comfortable simply removing that behaviour from them? Are you sure
that you will not i
Joe Perches wrote:
On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
Signed-off-by: Dave Jones <[EMAIL PROTECTED]>
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 3330917..0ed02b7 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw
On Thu, 16 Aug 2007, Herbert Xu wrote:
> > Do we have a consensus here? (hoping against hope, probably :-)
>
> I can certainly agree with this.
I agree too.
> But I have to say that I still don't know of a single place
> where one would actually use the volatile variant.
I suspect that what yo
On Wed, 15 Aug 2007, Christoph Lameter wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > > We don't need to reload sk->sk_prot->memory_allocated here.
> >
> > Are you sure? How do you know some other CPU hasn't changed the value
> > in between?
>
> The cpu knows because the cacheline w
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> > We don't need to reload sk->sk_prot->memory_allocated here.
>
> Are you sure? How do you know some other CPU hasn't changed the value
> in between?
The cpu knows because the cacheline was not invalidated.
On Thu, Aug 16, 2007 at 12:05:56PM +1000, Paul Mackerras wrote:
> Herbert Xu writes:
>
> > See sk_stream_mem_schedule in net/core/stream.c:
> >
> > /* Under limit. */
> > if (atomic_read(sk->sk_prot->memory_allocated) <
> > sk->sk_prot->sysctl_mem[0]) {
> > if (*s
On Thu, Aug 16, 2007 at 07:45:44AM +0530, Satyam Sharma wrote:
>
> Completely agreed, again. To summarize again (had done so about ~100 mails
> earlier in this thread too :-) ...
>
> atomic_{read,set}_volatile() -- guarantees volatility also along with
> atomicity (the two _are_ different concepts
A volatile default would disable optimizations for atomic_read.
atomic_read without volatile would allow for full optimization by the
compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If the programmer puts
an atomic_rea
Herbert Xu writes:
> See sk_stream_mem_schedule in net/core/stream.c:
>
> /* Under limit. */
> if (atomic_read(sk->sk_prot->memory_allocated) <
> sk->sk_prot->sysctl_mem[0]) {
> if (*sk->sk_prot->memory_pressure)
> *sk->sk_prot->memory_pressure = 0;
On Wed, 15 Aug 2007, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Understood. My point is not that the impact is precisely zero, but
> > rather that the impact on optimization is much less hurtful than the
> > problems that could arise otherwise, particularly a
On Thu, Aug 16, 2007 at 11:51:42AM +1000, Paul Mackerras wrote:
>
> Name one such case.
See sk_stream_mem_schedule in net/core/stream.c:
/* Under limit. */
if (atomic_read(sk->sk_prot->memory_allocated) <
sk->sk_prot->sysctl_mem[0]) {
if (*sk->sk_prot->memory_pre
Christoph Lameter writes:
> A volatile default would disable optimizations for atomic_read.
> atomic_read without volatile would allow for full optimization by the
> compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If
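To make the trade-off concrete: with a plain (non-volatile) load the
compiler is allowed to hoist the read out of a busy-wait loop, which is
exactly the class of heisenbug discussed in this thread; the volatile cast
(or a barrier()/cpu_relax() in the loop body) forces a reload on each
iteration. A simplified userspace sketch, not the real per-architecture
definitions:

typedef struct { int counter; } atomic_t;

#define atomic_read_plain(v)    ((v)->counter)
#define atomic_read_volatile(v) (*(volatile int *)&(v)->counter)

static atomic_t flag;

/* The compiler may load flag.counter once, keep it in a register,
 * and turn this into an infinite loop. */
void wait_plain(void)
{
        while (!atomic_read_plain(&flag))
                ;
}

/* The volatile cast forces a fresh load on every iteration. */
void wait_volatile(void)
{
        while (!atomic_read_volatile(&flag))
                ;
}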
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> Understood. My point is not that the impact is precisely zero, but
> rather that the impact on optimization is much less hurtful than the
> problems that could arise otherwise, particularly as compilers become
> more aggressive in their optimizations
"compilation unit" is a C standard term. It typically boils down
to "single .c file".
As you mentioned later, "single .c file with all the other files
(headers or other .c files) that it pulls in via #include" is actually
"translation unit", both in the C standard as well as gcc docs.
Yeah
Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Precisely the point -- use of volatile (whether in casts or on asms)
in these cases is intended to disable those
Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would find disabling all compiler
op
No; compilation units have nothing to do with it, GCC can optimise
across compilation unit boundaries just fine, if you tell it to
compile more than one compilation unit at once.
Last I checked, the Linux kernel build system did compile each .c
file as a separate compilation unit.
I have som
Segher Boessenkool wrote:
Please check the definition of "cache coherence".
Which of the twelve thousand such definitions? :-)
Every definition I have seen says that writes to a single memory
location have a serial order as seen by all CPUs, and that a read
will return the most recent write
On Thu, Aug 16, 2007 at 08:53:16AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 05:49:50PM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
> >
> > > Thanks. But I don't need a summary of the thread, I'm asking
> > > for an extant code snippet in
On Wed, Aug 15, 2007 at 05:59:41PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > The volatile cast should not disable all that many optimizations,
> > for example, it is much less hurtful than barrier(). Furthermore,
> > the main optimizations disabled (pull
On Wed, 15 Aug 2007, Segher Boessenkool wrote:
> [...]
> > BTW:
> >
> > #define atomic_read(a) (*(volatile int *)&(a))
> > #define atomic_set(a,i) (*(volatile int *)&(a) = (i))
> >
> > int a;
> >
> > void func(void)
> > {
> > int b;
> >
> > b = atomic_read(a);
> > atomic
Paul E. McKenney wrote:
On Wed, Aug 15, 2007 at 11:30:05PM +1000, Nick Piggin wrote:
Especially since several big architectures don't have volatile in their
atomic_get and _set, I think it would be a step backwards to add them in
as a "just in case" thing now (unless there is a better reason).
Hi Herbert,
On Thu, 16 Aug 2007, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 06:28:42AM +0530, Satyam Sharma wrote:
> >
> > > The udelay itself certainly should have some form of cpu_relax in it.
> >
> > Yes, a form of barrier() must be present in mdelay() or udelay() itself
> > as you say, hav
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> The volatile cast should not disable all that many optimizations,
> for example, it is much less hurtful than barrier(). Furthermore,
> the main optimizations disabled (pulling atomic_read() and atomic_set()
> out of loops) really do need to be disab
On Wed, Aug 15, 2007 at 05:42:07PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Seems to me that we face greater chance of confusion without the
> > volatile than with, particularly as compiler optimizations become
> > more aggressive. Yes, we could simply
On Wed, Aug 15, 2007 at 05:49:50PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
>
> > Thanks. But I don't need a summary of the thread, I'm asking
> > for an extant code snippet in our kernel that benefits from
> > the volatile change and is not part
On Thu, Aug 16, 2007 at 06:28:42AM +0530, Satyam Sharma wrote:
>
> > The udelay itself certainly should have some form of cpu_relax in it.
>
> Yes, a form of barrier() must be present in mdelay() or udelay() itself
> as you say, having it in __const_udelay() is *not* enough (superflous
> actually,
From: Dave Jones <[EMAIL PROTECTED]>
Date: Wed, 15 Aug 2007 19:52:20 -0400
> On Wed, Aug 15, 2007 at 03:08:14PM -0700, David Miller wrote:
> > From: "Ilpo_Järvinen" <[EMAIL PROTECTED]>
> > Date: Thu, 16 Aug 2007 00:57:00 +0300 (EEST)
> >
> > > A similar fix to netfilter from Eric Dumazet insp
On Thu, Aug 16, 2007 at 08:30:23AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 05:23:10PM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 16, 2007 at 08:12:48AM +0800, Herbert Xu wrote:
> > > On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
> > > >
> > > > > > Communicating b
[ Sorry for the empty subject line in the previous mail. I intended to
make a patch, so I cleared the subject to change it, but ultimately I
neither made a patch nor restored the subject line. Done that now. ]
On Thu, 16 Aug 2007, Herbert Xu wrote:
> On Thu, Aug 16, 2007 at 06:06:00AM +0530, Satyam Sharma wrote:
> >
On Wed, 15 Aug 2007, Paul E. McKenney wrote:
> Seems to me that we face greater chance of confusion without the
> volatile than with, particularly as compiler optimizations become
> more aggressive. Yes, we could simply disable optimization, but
> optimization can be quite helpful.
A volatile de
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> Those barriers are for when we need ordering between atomic variables
> and other memory locations. An atomic variable by itself doesn't and
> shouldn't need any barriers for other CPUs to be able to see what's
> happening to it.
It does not need any
On Wed, 2007-08-15 at 19:58 -0400, Dave Jones wrote:
> Signed-off-by: Dave Jones <[EMAIL PROTECTED]>
>
> diff --git a/drivers/infiniband/hw/mlx4/mad.c
> b/drivers/infiniband/hw/mlx4/mad.c
> index 3330917..0ed02b7 100644
> --- a/drivers/infiniband/hw/mlx4/mad.c
> +++ b/drivers/infiniband/hw/mlx4/m
On Wed, Aug 15, 2007 at 05:26:34PM -0700, Christoph Lameter wrote:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > In the kernel we use atomic variables in precisely those situations
> > where a variable is potentially accessed concurrently by multiple
> > CPUs, and where each CPU needs to see
Christoph Lameter writes:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > In the kernel we use atomic variables in precisely those situations
> > where a variable is potentially accessed concurrently by multiple
> > CPUs, and where each CPU needs to see updates done by other CPUs in a
> > time
On Thu, Aug 16, 2007 at 06:06:00AM +0530, Satyam Sharma wrote:
>
> that are:
>
> while ((atomic_read(&waiting_for_crash_ipi) > 0) && msecs) {
> mdelay(1);
> msecs--;
> }
>
> where mdelay() becomes __const_udelay() which happens to be in another
> translati
On Wed, Aug 15, 2007 at 05:23:10PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 16, 2007 at 08:12:48AM +0800, Herbert Xu wrote:
> > On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
> > >
> > > > > Communicating between process context and interrupt/NMI handlers using
> > > > > per-
On Wed, 15 Aug 2007, Heiko Carstens wrote:
> [...]
> Btw.: we still have
>
> include/asm-i386/mach-es7000/mach_wakecpu.h: while (!atomic_read(deassert));
> include/asm-i386/mach-default/mach_wakecpu.h: while (!atomic_read(deassert));
>
> Looks like they need to be fixed as well.
[PATCH] i38
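The usual fix for such loops is to give the body a cpu_relax(), which
both hints the CPU that this is a spin-wait and acts as a compiler
barrier, so the value is re-read on every pass. An illustrative rewrite
of the quoted line (not the actual patch in this message):

        while (!atomic_read(deassert))
                cpu_relax();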
On Thu, 16 Aug 2007, Paul Mackerras wrote:
> In the kernel we use atomic variables in precisely those situations
> where a variable is potentially accessed concurrently by multiple
> CPUs, and where each CPU needs to see updates done by other CPUs in a
> timely fashion. That is what they are for.
On Wed, 15 Aug 2007, Segher Boessenkool wrote:
> > > > What you probably mean is that the compiler has to assume any code
> > > > it cannot currently see can do anything (insofar as allowed by the
> > > > relevant standards etc.)
> >
> > I think this was just terminology confusion here again. I
On Thu, Aug 16, 2007 at 08:12:48AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
> >
> > > > Communicating between process context and interrupt/NMI handlers using
> > > > per-CPU variables.
> > >
> > > Remember we're talking about atomic_read/atomic_s
On Wed, Aug 15, 2007 at 04:53:35PM -0700, Paul E. McKenney wrote:
>
> > > Communicating between process context and interrupt/NMI handlers using
> > > per-CPU variables.
> >
> > Remember we're talking about atomic_read/atomic_set. Please
> > cite the actual file/function name you have in mind.
>
On Thu, Aug 16, 2007 at 07:41:46AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 11:45:20AM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 15, 2007 at 07:19:57PM +0100, David Howells wrote:
> > > Herbert Xu <[EMAIL PROTECTED]> wrote:
> > >
> > > > Let's turn this around. Can you give a singl
On Wed, Aug 15, 2007 at 03:08:14PM -0700, David Miller wrote:
> From: "Ilpo_Järvinen" <[EMAIL PROTECTED]>
> Date: Thu, 16 Aug 2007 00:57:00 +0300 (EEST)
>
> > A similar fix to netfilter from Eric Dumazet inspired me to
> > look around a bit by using some grep/sed stuff as looking for
> > thi
On Thu, Aug 16, 2007 at 07:40:21AM +0800, Herbert Xu wrote:
> On Wed, Aug 15, 2007 at 12:13:12PM -0400, Chris Snook wrote:
> >
> > Part of the motivation here is to fix heisenbugs. If I knew where they
>
> By the same token we should probably disable optimisations
> altogether since that too can
On Wed, Aug 15, 2007 at 11:45:20AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 15, 2007 at 07:19:57PM +0100, David Howells wrote:
> > Herbert Xu <[EMAIL PROTECTED]> wrote:
> >
> > > Let's turn this around. Can you give a single example where
> > > the volatile semantics is needed in a legitimate
On Wed, Aug 15, 2007 at 12:13:12PM -0400, Chris Snook wrote:
>
> Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Cheers,
Satyam Sharma writes:
> > Doesn't "atomic WRT all processors" require volatility?
>
> No, it definitely doesn't. Why should it?
>
> "Atomic w.r.t. all processors" is just your normal, simple "atomicity"
> for SMP systems (ensure that that object is modified / set / replaced
> in main memory atom
From: "John W. Linville" <[EMAIL PROTECTED]>
Date: Tue, 14 Aug 2007 20:34:10 -0400
> Individual patches available here:
>
>
> http://www.kernel.org/pub/linux/kernel/people/linville/wireless-2.6/upstream-davem
John, what I'm going to do is wait for Linus to pull in the
2.6.23 mac80211 fixe
From: Vlad Yasevich <[EMAIL PROTECTED]>
Date: Mon, 13 Aug 2007 09:24:00 -0400
> Sorry about that. Not sure what happened to that patch. Here is
> the good patch with whitespace cleanups.
Applied to net-2.6.24, thanks for fixing this patch up.
From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Fri, 10 Aug 2007 16:56:17 -0400
> All this is currently checked into the 'eflags' branch of
> git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/netdev-2.6.git
>
> But when everybody is happy with it, IMO we should get it into
> net-2.6.24.git, as i
On Wed, Aug 15, 2007 at 11:05:35PM +0200, Segher Boessenkool wrote:
> >>No; compilation units have nothing to do with it, GCC can optimise
> >>across compilation unit boundaries just fine, if you tell it to
> >>compile more than one compilation unit at once.
> >
> >Last I checked, the Linux kernel
On Wed, Aug 15, 2007 at 10:52:53PM +0200, Segher Boessenkool wrote:
> >>I think this was just terminology confusion here again. Isn't "any
> >>code
> >>that it cannot currently see" the same as "another compilation unit",
> >>and wouldn't the "compilation unit" itself expand if we ask gcc to
> >>c
- Removed bimodal interrupt support - unused feature
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -urpN patch1/drivers/net/s2io.c patch2/drivers/net/s2io.c
--- patch1/drivers/net/s2io.c 2007-08-10 11:53:46.0 +0530
+++
- Changed kmalloc+memset to k[zc]alloc as per Mariusz's patch
<[EMAIL PROTECTED]>
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -urpN org/drivers/net/s2io.c patch1/drivers/net/s2io.c
--- org/drivers/net/s2io.c 2007-08-09 1
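For readers unfamiliar with the conversion mentioned above, a generic
before/after sketch (the function names are made up; this is not the
actual s2io.c hunk):

#include <linux/slab.h>
#include <linux/string.h>

/* before: allocate, then zero by hand */
static void *alloc_buf_old(size_t size)
{
        void *p = kmalloc(size, GFP_KERNEL);

        if (p)
                memset(p, 0, size);
        return p;
}

/* after: allocation and zeroing in one call */
static void *alloc_buf_new(size_t size)
{
        return kzalloc(size, GFP_KERNEL);
}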
- Added check to return from the traffic handling function, if the card status
is DOWN.
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Santosh Rastapur <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -Nurp patch3/drivers/net/s2io.c patch4/d
- Optimized interrupt routine fast path.
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Santosh Rastapur <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -Nurp patch4/drivers/net/s2io.c patch5/drivers/net/s2io.c
--- patch4/drivers/net/s2io.c
- Added support to poll entire set of device errors and alarms.
- Replaced alarm_intr_handler() with s2io_handle_errors().
- Added statistic counters to monitor the alarms.
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Santosh Rastapur <[EMAIL PROTECTED]>
Signed-off-by: Ra
- Removed the unused variable, intr_type, in device private structure.
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Santosh Rastapur <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -Nurp patch2/drivers/net/s2io.c patch3/drivers/net/s2io.c
--
- Added support to unmask entire set of device errors and alarms.
Signed-off-by: Sivakumar Subramani <[EMAIL PROTECTED]>
Signed-off-by: Santosh Rastapur <[EMAIL PROTECTED]>
Signed-off-by: Ramkrishna Vepa <[EMAIL PROTECTED]>
---
diff -Nurp orig/drivers/net/s2io.c patch1/drivers/net/s2io.c
--- orig/
From: "Ilpo_Järvinen" <[EMAIL PROTECTED]>
Date: Thu, 16 Aug 2007 00:57:00 +0300 (EEST)
> A similar fix to netfilter from Eric Dumazet inspired me to
> look around a bit by using some grep/sed stuff as looking for
> this kind of bugs seemed easy to automate. This is one of them
> I found where it l
A similar fix to netfilter from Eric Dumazet inspired me to
look around a bit by using some grep/sed stuff as looking for
this kind of bugs seemed easy to automate. This is one of them
I found where it looks like this semicolon is not valid.
Signed-off-by: Ilpo Järvinen <[EMAIL PROTECTED]>
---
n
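A hypothetical illustration of the bug class such a grep turns up (this
is not the actual hunk from this patch): a stray semicolon terminates the
conditional, so the statement that was meant to be guarded runs
unconditionally.

        if (err);          /* bug: the ";" ends the if statement  */
                goto out;  /* always taken, regardless of err     */

        if (err)           /* intended form                       */
                goto out;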
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Wed, 15 Aug 2007 16:33:35 +0800
> [NET]: Fix unbalanced rcu_read_unlock in __sock_create
>
> The recent RCU work created an unbalanced rcu_read_unlock
> in __sock_create. This patch fixes that. Reported by
> oleg 123.
>
> Signed-off-by: Herbert Xu <[E
Hello,
I am seeing a very strange problem with sky2 and skge interfaces on 2.6.22,
and also 2.6.23-rc2/3.
After 8-9 minutes, the interfaces stop working. ethtool reports that the link is
down and the only way to make the interfaces usable again is
removing/reinserting the module or running ethto
Please check the definition of "cache coherence".
Which of the twelve thousand such definitions? :-)
Summary: the CPU is indeed within its rights to execute loads and
stores to a single variable out of order, -but- only if it gets the
same result that it would have obtained by executing them
Possibly these were too trivial to expose any potential problems that
you may have been referring to, so it would be helpful if you could
write a more concrete example / sample code.
The trick is to have a sufficiently complicated expression to force
the compiler to run out of registers.
You ca
No; compilation units have nothing to do with it, GCC can optimise
across compilation unit boundaries just fine, if you tell it to
compile more than one compilation unit at once.
Last I checked, the Linux kernel build system did compile each .c file
as a separate compilation unit.
I have some
Of course, if we find there are more callers in the kernel who want
the volatility behaviour than those who don't care, we can re-define
the existing ops to such variants, and re-name the existing
definitions to something else, say "atomic_read_nonvolatile" for all
I care.
Do we really need a
On Wed, Aug 15, 2007 at 06:30:00PM +0200, Steffen Klassert wrote:
> On Tue, Aug 14, 2007 at 10:54:32AM +0100, Mark Hindley wrote:
> > On Tue, Aug 14, 2007 at 01:33:26AM -0400, Jeff Garzik wrote:
> > > I would strongly prefer that vortex_up return a value, since all the
> > > important callers of t
I think this was just terminology confusion here again. Isn't "any
code that it cannot currently see" the same as "another compilation
unit", and wouldn't the "compilation unit" itself expand if we ask gcc
to compile more than one unit at once? Or is there some more specific
"definition" for "com