PPIOCDETACH ioctl and see if anyone actually notices. Leave
> a stub in place that prints a one-time warning and returns EINVAL.
>
> Reported-by: syzbot+16363c99d4134717c...@syzkaller.appspotmail.com
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Eric Biggers
Acked-by: Paul Mackerras
Olof Johansson writes:
> Here's a set of updates for pasemi_mac for 2.6.26. Some of them touch
> the dma_lib in the platform code as well, but it's easier if it's all
> merged through netdev to avoid dependencies.
>
> Major highlights are jumbo frame support and ethtool basics, the rest
> is most
David Miller writes:
> Here is the patch I'm putting through some paces, let me know if
> it solves the powerpc problem.
Looks fine to me.
Acked-by: Paul Mackerras <[EMAIL PROTECTED]>
Thanks!
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the
Andrew Morton writes:
> arch/powerpc/boot/inflate.c:920:19: errno.h: No such file or directory
> arch/powerpc/boot/inflate.c:921:18: slab.h: No such file or directory
> arch/powerpc/boot/inflate.c:922:21: vmalloc.h: No such file or directory
We used to have our own copies of inflate.c etc. for th
David Miller writes:
> The only thing we touched in zlib is in the patch below.
>
> I suspect the lib/zlib_inflate/inflate.c changes, I had no idea that
> some pieces of code try to use this into userspace.
Not userspace; the zImage wrapper uses inflate.c to gunzip the
compressed kernel image.
Russell King writes:
> Let me say it more clearly: On ARM, it is impossible to perform atomic
> operations on MMIO space.
Actually, no one is suggesting that we try to do that at all.
The discussion about RMW ops on MMIO space started with a comment
attributed to the gcc developers that one reas
Satyam Sharma writes:
> I wonder if this'll generate smaller and better code than _both_ the
> other atomic_read_volatile() variants. Would need to build allyesconfig
> on lots of diff arch's etc to test the theory though.
I'm sure it would be a tiny effect.
This whole thread is arguing about ef
Herbert Xu writes:
> On Fri, Aug 17, 2007 at 03:09:57PM +1000, Paul Mackerras wrote:
> > Herbert Xu writes:
> >
> > > Can you find an actual atomic_read code snippet there that is
> > > broken without the volatile modifier?
> >
> > There are some in
Herbert Xu writes:
> Can you find an actual atomic_read code snippet there that is
> broken without the volatile modifier?
There are some in arch-specific code, for example line 1073 of
arch/mips/kernel/smtc.c. On mips, cpu_relax() is just barrier(), so
the empty loop body is ok provided that at
Herbert Xu writes:
> So the point here is that if you don't mind getting a stale
> value from the CPU cache when doing an atomic_read, then
> surely you won't mind getting a stale value from the compiler
> "cache".
No, that particular argument is bogus, because there is a cache
coherency protocol
Nick Piggin writes:
> Why are people making these undocumented and just plain false
> assumptions about atomic_t?
Well, it has only been false since December 2006. Prior to that
atomics *were* volatile on all platforms.
> If they're using lockless code (ie.
> which they must be if using atomics
Linus Torvalds writes:
> In general, I'd *much* rather we used barriers. Anything that "depends" on
> volatile is pretty much set up to be buggy. But I'm certainly also willing
> to have that volatile inside "atomic_read/atomic_set()" if it avoids code
> that would otherwise break - ie if it hi
Nick Piggin writes:
> So i386 and x86-64 don't have volatiles there, and it saves them a
> few K of kernel text. What you need to justify is why it is a good
I'm really surprised it's as much as a few K. I tried it on powerpc
and it only saved 40 bytes (10 instructions) for a G5 config.
Paul.
-
Christoph Lameter writes:
> No it does not have any volatile semantics. atomic_dec() can be reordered
> at will by the compiler within the current basic unit if you do not add a
> barrier.
Volatile doesn't mean it can't be reordered; volatile means the
accesses can't be eliminated.
Paul.
Herbert Xu writes:
> On Thu, Aug 16, 2007 at 02:11:43PM +1000, Paul Mackerras wrote:
> >
> > The uses of atomic_read where one might want it to allow caching of
> > the result seem to me to fall into 3 categories:
> >
> > 1. Places that are buggy because of
Herbert Xu writes:
> It doesn't matter. The memory pressure flag is an *advisory*
> flag. If we get it wrong the worst that'll happen is that we'd
> waste some time doing work that'll be thrown away.
Ah, so it's the "racy but I don't care because it's only an
optimization" case. That's fine.
Herbert Xu writes:
> > You mean it's intended that *sk->sk_prot->memory_pressure can end up
> > as 1 when sk->sk_prot->memory_allocated is small (less than
> > ->sysctl_mem[0]), or as 0 when ->memory_allocated is large (greater
> > than ->sysctl_mem[2])? Because that's the effect of the current c
Satyam Sharma writes:
> Anyway, the problem, of course, is that this conversion to a stronger /
> safer-by-default behaviour doesn't happen with zero cost to performance.
> Converting atomic ops to "volatile" behaviour did add ~2K to kernel text
> for archs such as i386 (possibly to important code
Herbert Xu writes:
> If you're referring to the code in sk_stream_mem_schedule
> then it's working as intended. The atomicity guarantees
You mean it's intended that *sk->sk_prot->memory_pressure can end up
as 1 when sk->sk_prot->memory_allocated is small (less than
->sysctl_mem[0]), or as 0 when
Herbert Xu writes:
> > Are you sure? How do you know some other CPU hasn't changed the value
> > in between?
>
> Yes I'm sure, because we don't care if others have increased
> the reservation.
But others can also reduce the reservation. Also, the code sets and
clears *sk->sk_prot->memory_press
Christoph Lameter writes:
> > But I have to say that I still don't know of a single place
> > where one would actually use the volatile variant.
>
> I suspect that what you say is true after we have looked at all callers.
It seems that there could be a lot of places where atomic_t is used in
a n
Satyam Sharma writes:
> I can't speak for this particular case, but there could be similar code
> examples elsewhere, where we do the atomic ops on an atomic_t object
> inside a higher-level locking scheme that would take care of the kind of
> problem you're referring to here. It would be useful f
Herbert Xu writes:
> See sk_stream_mem_schedule in net/core/stream.c:
>
> 	/* Under limit. */
> 	if (atomic_read(sk->sk_prot->memory_allocated) <
> 	    sk->sk_prot->sysctl_mem[0]) {
> 		if (*sk->sk_prot->memory_pressure)
> 			*sk->sk_prot->memory_pres
Christoph Lameter writes:
> A volatile default would disable optimizations for atomic_read.
> atomic_read without volatile would allow for full optimization by the
> compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If
Christoph Lameter writes:
> On Thu, 16 Aug 2007, Paul Mackerras wrote:
>
> > In the kernel we use atomic variables in precisely those situations
> > where a variable is potentially accessed concurrently by multiple
> > CPUs, and where each CPU needs to see updates
Satyam Sharma writes:
> > Doesn't "atomic WRT all processors" require volatility?
>
> No, it definitely doesn't. Why should it?
>
> "Atomic w.r.t. all processors" is just your normal, simple "atomicity"
> for SMP systems (ensure that that object is modified / set / replaced
> in main memory atom
Chris Snook writes:
> I'll do this for the whole patchset. Stay tuned for the resubmit.
Could you incorporate Segher's patch to turn atomic_{read,set} into
asm on powerpc? Segher claims that using asm is really the only
reliable way to ensure that gcc does what we want, and he seems to
have a p
;-> #0" (before &vlan_netdev_xmit_lock_key) and
> lockdep should be notified about this.
>
> Reported & tested by: "Yuriy N. Shkandybin" <[EMAIL PROTECTED]>
> Signed-off-by: Jarek Poplawski <[EMAIL PROTECTED]>
> Cc: Paul Mackerras <[EMAIL PRO
I wrote:
> So this doesn't change process_input_packet(), which treats the case
> where the first byte is 0xff (PPP_ALLSTATIONS) but the second byte is
> 0x03 (PPP_UI) as indicating a packet with a PPP protocol number of
I meant "the second byte is NOT 0x03", of course.
Paul.
David Miller writes:
> Here is Patrick McHardy's patch:
So this doesn't change process_input_packet(), which treats the case
where the first byte is 0xff (PPP_ALLSTATIONS) but the second byte is
0x03 (PPP_UI) as indicating a packet with a PPP protocol number of
0xff. Arguably that's wrong since
David Miller writes:
> > It seems we fail to reserve enough headroom for the case
> > buf[0] == PPP_ALLSTATIONS and buf[1] != PPP_UI.
> >
> > Can you try this patch please?
>
> Any confirmation of this fix yet?
Indeed, ppp_async doesn't handle that case correctly.
RFC 1662 says:
The Con
Andrew Morton writes:
> From: Stephan Helas <[EMAIL PROTECTED]>
> To: linux-kernel@vger.kernel.org
> Subject: kernel oops at ppp
>
>
> Hello,
>
> i got oops on unsing UMTS - hsdpa card merlin xu870 using ppp.
What is a "UMTS - hsdpa card merlin xu870"?
At a guess I would say that whatever ppp
David Miller writes:
> The PPP generic layer seems to be very careful about its handling of
> the ->xmit_pending packet.
Mostly, but I think that this is a genuine leak.
> I'm really surprised this leak doesn't trigger already via the
> ppp_synctty.c and ppp_async.c drivers, perhaps they do som
Guennadi Liakhovetski writes:
> Don't leak an sk_buff on interface destruction.
>
> Signed-off-by: G. Liakhovetski <[EMAIL PROTECTED]>
Acked-by: Paul Mackerras <[EMAIL PROTECTED]>
Linus Torvalds writes:
> We should just do this natively. There's been several tests over the years
> saying that it's much more efficient to do sti/cli as a simple store, and
> handling the "oops, we got an interrupt while interrupts were disabled" as
> a special case.
>
> I have this dim mem
Andrew Morton writes:
> Let me restore the words from my earlier email which you removed so that
> you could say that:
>
> For you the driver author to make assumptions about what's happening
> inside pci_set_mwi() is a layering violation. Maybe the bridge got
> hot-unplugged. Maybe the a
Andrew Morton writes:
> If the drivers doesn't care and if it makes no difference to performance
> then just delete the call to pci_set_mwi().
>
> But if MWI _does_ make a difference to performance then we should tell
> someone that it isn't working rather than silently misbehaving?
That sounds
[EMAIL PROTECTED] writes:
> PPPoE must advertise the underlying device's MTU via the ppp channel
> descriptor structure, as multilink functionality depends on it.
>
> Signed-off-by: Michal Ostrowski <[EMAIL PROTECTED]>
Acked-by: Paul Mackerras <[EMAIL PROTECTED]>
-
Linas Vepstas writes:
> The rest of this patch might indeed be correct, but the above comment
> bothers me. The "ns" versions of routines are supposed to be
> non-byte-swapped versions of the insl/outsl routines (which would
> byte-swap on big-endian archs such as powerpc.)
If it were true that
Herbert Xu writes:
> BTW, did you see the "cmpldi r1,..." stuff in the code? That's a typo,
> right?
Yes it is a typo, but fixing it is lower priority since both r1 and
cr1 equal 1.
Paul.
Herbert Xu writes:
> Interesting. We were previously off by 28 bytes, now we're off by 8 :)
You missed a couple of 'beqlr' instructions (branch to LR if equal, i.e. a conditional return).
I'd be interested to know if it still fails with the patch below.
Thanks,
Paul.
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powe
Joy Latten writes:
> The good news is that the pings worked great!
> So perhaps ESP is working ok with ICMP.
>
> But when I tried to do sftp, I still got the oops.
> I don't think TCP and ESP are working.
You're hitting the BUG_ON(len) at line 611 of net/xfrm/xfrm_algo.c.
Is that the same thing
s from kmalloc+memset to kzalloc.
[EMAIL PROTECTED]: fix error-path leak]
[EMAIL PROTECTED]: cleanups]
[EMAIL PROTECTED]: don't add useless printk and cardmap_destroy calls]
Signed-off-by: Panagiotis Issaris <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]&
[EMAIL PROTECTED] writes:
> From: Panagiotis Issaris <[EMAIL PROTECTED]>
>
> The PPP code contains two kmalloc()s followed by memset()s without handling a
> possible memory allocation failure. (Suggested by Joe Perches).
Nack, because...
> - cardmap_set(&all_ppp_units, unit, ppp);
> +
Jan-Bernd Themann writes:
> The outcome of some internal discussions was that it is not acceptable for
> our enterprise users of this type of driver on this target system to need a
> recompile / reload of the driver for error analysis, so we need a mechanism
> that allows us to switch on / off deb
[EMAIL PROTECTED] writes:
> From: <[EMAIL PROTECTED]>
>
> Adapted from http://bugzilla.kernel.org/show_bug.cgi?id=6530
>
> Reschedule the async Tx tasklet if the transmit queue was full.
>
> Cc: Paul Mackerras <[EMAIL PROTECTED]>
>
> [akpm: s
Rajesh Shah writes:
> The current MSI code actually does this deliberately, not by
> accident. It's got a lot of complex code to track devices and
> vectors and make sure an enable_msi -> disable -> enable sequence
> gives a driver the same vector. It also has policies about
> reserving vectors ba
Andrew Morton writes:
> xeb (who forgot to do reply-to-all) tells me that pptpd uses ptys.
I tried to replicate this using pppd running on a pty, with a
"charshunt" process on the master side of the pty transferring
characters between it and a socket. I didn't see any freezeups in
either directi
Andy Gay writes:
> How does the serial driver know it has to call ppp_asynctty_wakeup()?
The serial driver is supposed to call the line discipline's wakeup
function when it has room in the output buffer and the
TTY_DO_WRITE_WAKEUP bit is set in tty->flags. When the serial port is
set to the ppp
Andrew Morton writes:
> xeb has said:
>
> in this construction:
>
> if ((test_bit(XMIT_WAKEUP, &ap->xmit_flags) ||
> test_bit(XMIT_FULL, &ap->xmit_flags)) && ppp_async_push(ap))
> ppp_output_wakeup(&ap->chan);
>
> if ppp_async_push() doesn't send any dat
Andrew Morton writes:
> hm, a PPP fix. We seem to need some of those lately.
>
> Paul, does this look sane?
/me pages in 7 year old code...
> @@ -516,6 +516,8 @@ static void ppp_async_process(unsigned l
> /* try to push more stuff out */
> if (test_bit(XMIT_WAKEUP, &ap->xmit_flags)
Andreas Schwab writes:
> I suppose the NIC in the PowerMac G5 can do GigE, yet when plugged into a
> GB switch it is only willing to talk 100MB with it. Any idea why? Kernel
> is 2.6.16-rc5-git2.
It does 1000Mb/s here...
# ethtool eth0
Settings for eth0:
	Supported ports: [ TP MII ]
Randy.Dunlap writes:
> E.g., could the hypervisor know when one of it's virtual OSes
> dies or reboots and release its resources then?
I think the point is that with kexec, the same virtual machine keeps
running, so the hypervisor doesn't see the OS dying or rebooting.
Paul.
Alexey Dobriyan writes:
> The fact that they can be represented by the same bit patterns is
> irrelevant.
Indeed it is. The fact that the C standard says that "0" is a valid
representation for a null pointer in C source code *is* relevant,
though. That is in fact something that *wasn't* in K&R
David S. Miller writes:
> Because sparse goes beyond the standards and tries to
> catch cases that usually end up being bugs.
When has a pointer comparison with an explicit "0" ever caused a bug?
Paul.
James Carlson writes:
> Alexey Dobriyan writes:
> > - if (ap == 0)
> > + if (!ap)
>
> And the solution is to treat it as a boolean instead?! I'm not sure
> which is more ugly.
>
> Why wouldn't explicit comparison against NULL be the preferred fix?
I just think this whole "you shouldn't com
Arnd Bergmann writes:
> Uploading the device firmware may fail if wrong input data
> was provided by the user. This checks for the condition.
>
> From: [EMAIL PROTECTED]
> Cc: netdev@vger.kernel.org
This one should be sent to Jeff Garzik, along with patches 11, 13 and
14.
Paul.
Jesus Arango writes:
> I would like to propose (see attached patch) the addition of protocol
> values for multiplexing and demultiplexing ROHC header compression
> packets. The constants in this patch are compliant with RFC 3241 (ROHC
> over PPP).
You could take pity on the reader of the code, an
Philippe De Muyter writes:
> Actually, that's probably the case I had, but my fix gets the IP addresses
> 4-byte aligned in my case: I had verified the address of the saddr field,
> and I needed to shift the buffer by 3, not 1, to get it 4-byte aligned.
Please outline the code flow that leads to th
Philippe De Muyter writes:
> > This patch seems a bit strange and/or incomplete. Are we trying to
> > get 2-byte alignment or 4-byte alignment of the payload? It seems
>
> Actually, we try to get a 4n+2 alignment for skb->data, to get the
> ip-addresses field 4 bytes aligned.
> I think the on
I wrote:
> I really think there should be another flag bit set by pppd to say
> "must compress" rather than relying on the compressor telling you
> that.
I talked to Matt Domsch at OLS and agreed that I would add such a flag
(since waiting for someone who actually cared about it to do it doesn't
Jeff Garzik writes:
> From: "Philippe De Muyter" <[EMAIL PROTECTED]>
>
> Avoid ppp-generated kernel crashes on machines where unaligned accesses are
> forbidden (ie: 68000-based CPUs)
This patch seems a bit strange and/or incomplete. Are we trying to
get 2-byte alignment or 4-byte alignment of
To follow up on my comments on the mppe patch, it still misses the
most important thing, which is to make sure you don't send unencrypted
data if CCP should go down. A received CCP TermReq or TermAck will
clear the SC_DECOMP_RUN flag and the code will then ignore the
xcomp->must_compress flag.
I
Some comments on the MPPE kernel patch (sorry it's taken me so long):
> +static inline struct sk_buff *
> +pad_compress_skb(struct ppp *ppp, struct sk_buff *skb)
> +{
> +	struct sk_buff *new_skb;
> +	int len;
> +	int new_skb_size = ppp->dev->mtu + ppp->xcomp->comp_skb_extra_space +
> 