Ben,
Poke. :)
- k
On Aug 10, 2012, at 8:07 AM, Kumar Gala wrote:
> Ben,
>
> Two updates from last week (one dts bug fix, one minor defconfig update)
>
> - k
>
> The following changes since commit 0d7614f09c1ebdbaa1599a5aba7593f147bf96ee:
>
> Linux 3.6-rc1 (2012-08-02 16:38:10 -0700)
>
>
On Thu, Aug 16, 2012 at 05:21:12PM +0200, Oleg Nesterov wrote:
...
> > So, the arch agnostic code itself
> > takes care of this case...
>
> Yes. I forgot about install_breakpoint()->is_swbp_insn() check which
> returns -ENOTSUPP, somehow I thought arch_uprobe_analyze_insn() does
> this.
>
> > o
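The install_breakpoint()->is_swbp_insn() check Oleg mentions can be modelled in a few lines. This is a standalone sketch, not kernel code: the opcode value is a placeholder, and ENOTSUPP is hard-coded with its kernel-internal value.

```c
/* Standalone model of the arch-agnostic check: install_breakpoint() reads
 * the original instruction and refuses with -ENOTSUPP when it is already
 * the software-breakpoint opcode, so arch_uprobe_analyze_insn() never needs
 * to handle that case.  Opcode below is a placeholder, not any real arch's. */
#include <stdint.h>

#define ENOTSUPP 524                 /* kernel-internal errno value */

typedef uint32_t uprobe_opcode_t;
#define UPROBE_SWBP_INSN ((uprobe_opcode_t)0x00b00bad)  /* placeholder */

static int is_swbp_insn(const uprobe_opcode_t *insn)
{
	return *insn == UPROBE_SWBP_INSN;
}

static int install_breakpoint(const uprobe_opcode_t *orig_insn)
{
	/* Generic code bails out rather than probing an existing breakpoint. */
	if (is_swbp_insn(orig_insn))
		return -ENOTSUPP;
	return 0;                    /* would go on to arm the probe */
}
```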
On Thu, 2012-08-16 at 16:15 +0200, Peter Zijlstra wrote:
> On Fri, 2012-08-17 at 00:02 +1000, Michael Ellerman wrote:
> > You do want to guarantee that the task will always be subject to the
> > breakpoint, even if it moves cpus. So is there any way to guarantee that
> > other than reserving a brea
> > > > On this second syscall, fetch_bp_busy_slots() sets slots.pinned to be 1,
> > > > despite there being no breakpoint on this CPU. This is because the call
> > > > to task_bp_pinned() checks all CPUs, rather than just the current CPU.
> > > > POWER7 only has one hardware breakpoint per CPU (i
> If you have more things to print/offer via sysfs, I'm all for it.
>
> The Xserve G5 really has (by looking into the casing): 1 PCI fan,
> 6 center fans, 1 PSU intake and 1 PSU outblow fan (this last one
> seems rather slow-turning, but maybe that's normal).
> It is not quite clear which is which in
On Sun, Jul 29, 2012 at 8:33 PM, Benjamin Herrenschmidt
wrote:
> On Wed, 2012-07-18 at 18:49 +0200, o...@aepfle.de wrote:
>> From: Linda Xie
>>
>> Expected result:
>> It should show something like this:
>> x1521p4:~ # cat /sys/class/scsi_host/host1/config
>> PARTITIONNAME='x1521p4'
>> NWSDNAME='X1
On Thu, Aug 16, 2012 at 09:37:25PM +0300, Kirill A. Shutemov wrote:
> On Thu, Aug 16, 2012 at 08:29:44PM +0200, Andrea Arcangeli wrote:
> > On Thu, Aug 16, 2012 at 07:43:56PM +0300, Kirill A. Shutemov wrote:
> > > Hm.. I think with static_key we can avoid cache overhead here. I'll try.
> >
> > Cou
On Thu, Aug 16, 2012 at 08:29:44PM +0200, Andrea Arcangeli wrote:
> On Thu, Aug 16, 2012 at 07:43:56PM +0300, Kirill A. Shutemov wrote:
> > Hm.. I think with static_key we can avoid cache overhead here. I'll try.
>
> Could you elaborate on the static_key? Is it some sort of self
> modifying code?
On Thu, Aug 16, 2012 at 07:43:56PM +0300, Kirill A. Shutemov wrote:
> Hm.. I think with static_key we can avoid cache overhead here. I'll try.
Could you elaborate on the static_key? Is it some sort of self
modifying code?
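To Andrea's question: roughly, yes. A static_key compiles each branch site to a patchable no-op and rewrites it to a jump when the key is flipped, so the common path pays no load or compare. The sketch below is a standalone flag-based model of the API shape only (following include/linux/jump_label.h names), not the real self-modifying implementation.

```c
/* Portable model of static_key semantics.  The real kernel version patches
 * a NOP/JMP at each branch site when the key is flipped; this model uses a
 * plain counter solely to show how the API behaves. */
#include <stdbool.h>

struct static_key { int enabled; };

#define STATIC_KEY_INIT_FALSE { .enabled = 0 }

/* In the kernel this compiles to a patchable no-op; here it is a load. */
static inline bool static_key_false(struct static_key *key)
{
	return key->enabled > 0;
}

static inline void static_key_slow_inc(struct static_key *key) { key->enabled++; }
static inline void static_key_slow_dec(struct static_key *key) { key->enabled--; }

/* Example: gate a rarely-enabled slow path. */
static struct static_key stats_key = STATIC_KEY_INIT_FALSE;
static int slow_path_hits;

static void account(void)
{
	if (static_key_false(&stats_key))  /* off: straight-line fall-through */
		slow_path_hits++;          /* on: branch site patched to jump here */
}
```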
> Thanks, for review. Could you take a look at huge zero page patchset? ;)
On Thu, Aug 16, 2012 at 06:16:47PM +0200, Andrea Arcangeli wrote:
> Hi Kirill,
>
> On Thu, Aug 16, 2012 at 06:15:53PM +0300, Kirill A. Shutemov wrote:
> > for (i = 0; i < pages_per_huge_page;
> > i++, p = mem_map_next(p, page, i)) {
>
> It may be more optimal to avoid a multiplicatio
Ping :)
Can we get some consensus on the right approach here? I'm loathe to code
this if its going to be rejected.
I'd prefer the driver to be properly split so we don't have the MDIO
driver mapping the ethernet driver's address spaces, but if that's not
going to be merged, I'm not feeling like doin
Hi Kirill,
On Thu, Aug 16, 2012 at 06:15:53PM +0300, Kirill A. Shutemov wrote:
> for (i = 0; i < pages_per_huge_page;
> i++, p = mem_map_next(p, page, i)) {
It may be more optimal to avoid a multiplication/shift-left before the
add, and to do:
for (i = 0, vaddr = haddr; i
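The loop shape Andrea is suggesting can be sketched as follows. This is a standalone model (PAGE_SIZE and the page count are assumed constants, and the index form is kept alongside for comparison); the point is that the incremental form replaces the per-iteration multiply/shift with a single add.

```c
/* Compare index-derived addressing against carrying vaddr along. */
#define PAGE_SIZE           4096UL
#define PAGES_PER_HUGE_PAGE 512     /* 2MB / 4KB */

/* Index form: address recomputed by multiplication each iteration. */
static unsigned long addr_by_index(unsigned long haddr, int i)
{
	return haddr + (unsigned long)i * PAGE_SIZE;
}

/* Incremental form: one add per iteration, no multiply/shift. */
static void walk_incremental(unsigned long haddr, unsigned long *out)
{
	unsigned long vaddr = haddr;
	int i;

	for (i = 0; i < PAGES_PER_HUGE_PAGE; i++, vaddr += PAGE_SIZE)
		out[i] = vaddr;
}
```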
On 08/16, Ananth N Mavinakayanahalli wrote:
>
> On Thu, Aug 16, 2012 at 07:41:53AM +1000, Benjamin Herrenschmidt wrote:
> > On Wed, 2012-08-15 at 18:59 +0200, Oleg Nesterov wrote:
> > > On 07/26, Ananth N Mavinakayanahalli wrote:
> > > >
> > > > From: Ananth N Mavinakayanahalli
> > > >
> > > > Thi
On Wednesday 2012-08-15 23:35, Benjamin Herrenschmidt wrote:
>> XServe G5 of mine started powering off more or less
>> randomly
>
>BTW. There's a new windfarm driver for these in recent kernels...
>
>Appart from that, the trip points are coming from a calibration EEPROM,
>you may want to tweak th
From: Andi Kleen
Clearing a 2MB huge page will typically blow away several levels
of CPU caches. To avoid this, only cache-clear the 4K area
around the fault address and use cache-avoiding clears
for the rest of the 2MB area.
Signed-off-by: Andi Kleen
Signed-off-by: Kirill A. Shutemov
---
mm
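The description above can be modelled in a few lines. This is a standalone sketch, not the patch itself: the subpage constants are assumptions, and memset stands in for both store types, where the real patch substitutes a non-temporal clear_page_nocache() for the cold subpages.

```c
/* Clear only the faulting 4K subpage with normal cached stores -- that is
 * the data the faulting thread touches next -- and clear the remaining
 * subpages with cache-avoiding stores so they don't evict several levels
 * of cache.  Both helpers are memset here; the kernel version would use
 * movnti-style non-temporal stores for the nocache path. */
#include <string.h>

#define SUBPAGE_SIZE        4096UL
#define SUBPAGES_PER_HPAGE  512     /* 2MB huge page */

static void clear_subpage_cached(void *p)  { memset(p, 0, SUBPAGE_SIZE); }
static void clear_subpage_nocache(void *p) { memset(p, 0, SUBPAGE_SIZE); } /* stand-in */

static void clear_huge_page_around(void *hpage, unsigned long fault_off)
{
	unsigned long target = fault_off / SUBPAGE_SIZE;
	unsigned long i;

	for (i = 0; i < SUBPAGES_PER_HPAGE; i++) {
		char *sub = (char *)hpage + i * SUBPAGE_SIZE;

		if (i == target)
			clear_subpage_cached(sub);   /* keep the hot 4K cached */
		else
			clear_subpage_nocache(sub);  /* avoid polluting the cache */
	}
}
```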
From: Andi Kleen
Add a cache-avoiding version of clear_page. Straightforward integer variant
of the existing 64bit clear_page, for both 32bit and 64bit.
Also add the necessary glue for highmem, including a layer that
non-cache-coherent architectures that use the virtual address for flushing can
From: "Kirill A. Shutemov"
Signed-off-by: Kirill A. Shutemov
---
mm/hugetlb.c | 38 +++---
1 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc72712..3c86d3d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2672,
From: Andi Kleen
Signed-off-by: Andi Kleen
Signed-off-by: Kirill A. Shutemov
---
mm/huge_memory.c | 7 ---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 70737ec..6f0825b611 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@
From: "Kirill A. Shutemov"
Clearing a 2MB huge page will typically blow away several levels of CPU
caches. To avoid this, only cache-clear the 4K area around the fault
address and use cache-avoiding clears for the rest of the 2MB area.
This patchset implements cache avoiding version of clear_p
From: "Kirill A. Shutemov"
Signed-off-by: Kirill A. Shutemov
---
include/linux/mm.h | 2 +-
mm/huge_memory.c | 2 +-
mm/hugetlb.c | 3 ++-
mm/memory.c | 7 ---
4 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
i
From: Andi Kleen
With multiple threads, vector stores are more efficient, so use them.
This will cause the page clear to run non-preemptible and add some
overhead. However on 32bit it was already non-preemptible (due to
kmap_atomic) and there is a preemption opportunity every 4K unit.
On a NPB (N
From: Andi Kleen
Use the fault address, not the rounded down hpage address for NUMA
policy purposes. In some circumstances this can give more exact
NUMA policy.
Signed-off-by: Andi Kleen
Signed-off-by: Kirill A. Shutemov
---
mm/huge_memory.c | 8
1 files changed, 4 insertions(+),
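The rounding relationship behind this patch can be sketched as follows. This is a standalone model (the constants and the toy interleave policy are assumptions, not mm code): the huge page is allocated at the address rounded down to a 2MB boundary, but the exact fault address selects the node more precisely under an interleave-style policy, hence passing it rather than haddr.

```c
/* haddr = fault address rounded down to the huge-page boundary; a toy
 * interleave policy then shows the two addresses can pick different nodes. */
#define HPAGE_SHIFT 21
#define HPAGE_SIZE  (1UL << HPAGE_SHIFT)
#define HPAGE_MASK  (~(HPAGE_SIZE - 1))

static unsigned long hpage_round_down(unsigned long address)
{
	return address & HPAGE_MASK;
}

/* Toy interleave policy: pick a node from the 4K-page index. */
static int interleave_node(unsigned long address, int nr_nodes)
{
	return (int)((address >> 12) % (unsigned long)nr_nodes);
}
```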
> MAX_IDL: Maximum idle characters. When a character is received, the
> receiver begins counting idle characters. If MAX_IDL idle characters
> are received before the next data character, an idle timeout occurs
> and the buffer is closed,
> generating a maskable interrupt request to the core to re
On 16/08/2012 16:29, Alan Cox wrote:
The PowerPC CPM works differently. It doesn't use a FIFO but
buffers. Buffers are handed to the microprocessor only when they are
full or after a timeout period which is adjustable. In the driver, the
Which is different how - remembering we empty the
> The PowerPC CPM works differently. It doesn't use a FIFO but
> buffers. Buffers are handed to the microprocessor only when they are
> full or after a timeout period which is adjustable. In the driver, the
Which is different how - remembering we empty the FIFO on an IRQ
> buffers are con
On Fri, 2012-08-17 at 00:02 +1000, Michael Ellerman wrote:
> You do want to guarantee that the task will always be subject to the
> breakpoint, even if it moves cpus. So is there any way to guarantee that
> other than reserving a breakpoint slot on every cpu ahead of time?
That's not how regular
On Thu, 2012-08-16 at 13:44 +0200, Peter Zijlstra wrote:
> On Thu, 2012-08-16 at 21:17 +1000, Michael Neuling wrote:
> > Peter,
> >
> > > > On this second syscall, fetch_bp_busy_slots() sets slots.pinned to be 1,
> > > > despite there being no breakpoint on this CPU. This is because the call
> >
On 14/08/2012 16:52, Alan Cox wrote:
On Tue, 14 Aug 2012 16:26:28 +0200
Christophe Leroy wrote:
Hello,
I'm not sure who to address this patch to either.
It fixes a delay issue with the CPM UART driver on PowerPC MPC8xx.
The problem is that with the current code, the driver waits 32 IDLE patter
On Thu, 2012-08-16 at 21:17 +1000, Michael Neuling wrote:
> Peter,
>
> > > On this second syscall, fetch_bp_busy_slots() sets slots.pinned to be 1,
> > > despite there being no breakpoint on this CPU. This is because the call
> > > to task_bp_pinned() checks all CPUs, rather than just the current
Peter,
> > On this second syscall, fetch_bp_busy_slots() sets slots.pinned to be 1,
> > despite there being no breakpoint on this CPU. This is because the call
> > to task_bp_pinned() checks all CPUs, rather than just the current CPU.
> > POWER7 only has one hardware breakpoint per CPU (ie. HBP_N
On Thu, 2012-08-16 at 14:23 +1000, Michael Neuling wrote:
>
> On this second syscall, fetch_bp_busy_slots() sets slots.pinned to be 1,
> despite there being no breakpoint on this CPU. This is because the call
> to task_bp_pinned() checks all CPUs, rather than just the current CPU.
> POWER7 only h
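The accounting issue this thread keeps returning to can be modelled in a few lines. This is a toy standalone sketch (a per-CPU count array stands in for the real slot bookkeeping): because task_bp_pinned() as described scans every CPU, a task with one breakpoint on CPU0 appears to pin a slot on every CPU, so on hardware with a single breakpoint per CPU a second request is rejected even though the other CPUs' slots are free.

```c
/* Per-task breakpoint slot accounting: all-CPUs scan vs. current CPU only. */
#define NR_CPUS 4

/* Count of this task's breakpoints armed on each cpu. */
static int task_bp_on_cpu[NR_CPUS];

/* As described in the thread: a breakpoint on any cpu counts as pinned. */
static int task_bp_pinned_all_cpus(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (task_bp_on_cpu[cpu])
			return 1;
	return 0;
}

/* What the reporter expected: only the cpu the task runs on matters. */
static int task_bp_pinned_this_cpu(int cpu)
{
	return task_bp_on_cpu[cpu] ? 1 : 0;
}
```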