--- Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Wed, 13 Feb 2008, Christian Bell wrote:
>
> > not always be in the thousands but you're still
> claiming scalability
> > for a mechanism that essentially logs who accesses
> the regions. Then
> > there's the fact that reclaim becomes a colle
--- Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Wed, 13 Feb 2008, Kanoj Sarcar wrote:
>
> > It seems that the need is to solve potential
> memory
> > shortage and overcommit issues by being able to
> > reclaim pages pinned by rdma driver/hardware.
>
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does an exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. This is ok if the runqueue length is limited to a very small
> > multiple of the #cp
>
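The exhaustive search described above can be illustrated with a minimal userspace sketch (this is not the actual 2.4 scheduler code; the struct and the "goodness" field are hypothetical stand-ins for the per-task desirability metric):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical task descriptor; "goodness" stands in for the 2.4-era
 * per-task desirability metric. */
struct task {
    int goodness;
};

/* Exhaustive search: touch every task on the runqueue and keep the
 * best.  Cost is O(n) in runqueue length, which is the scaling
 * concern raised above. */
struct task *pick_best(struct task *rq, size_t n)
{
    struct task *best = NULL;
    int best_weight = -1;

    for (size_t i = 0; i < n; i++) {
        if (rq[i].goodness > best_weight) {
            best_weight = rq[i].goodness;
            best = &rq[i];
        }
    }
    return best;
}
```

Every wakeup or reschedule pays this linear walk, so a long runqueue makes every scheduling decision slower.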
> I didn't see anything from Kanoj, but I did something myself for the wildfire:
>
>
>ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
>
> this is mostly a userspace issue, not really intended as a kernel optimization
> (however it's also pa
>
>
>
> Kanoj, our cpu-pooling + loadbalancing allows you to do that.
> The system administrator can specify at runtime through a
> /proc filesystem interface the cpu-pool-size and whether loadbalancing
> should take place.
Yes, I think this approach can support the various requirements
put on the
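The excerpt does not show the actual /proc paths or file format of that interface; a purely hypothetical illustration of such a runtime knob might look like:

```shell
# Hypothetical interface -- the real paths and format are not shown
# in the excerpt above.
echo 4 > /proc/sys/sched/pool_size      # CPUs per pool
echo 1 > /proc/sys/sched/load_balance   # enable cross-pool balancing
```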
>
> It helps by keeping the task on the same node if it cannot keep it on
> the same cpu anymore.
>
> Assume task A is sleeping and it last ran on cpu 8, node 2. It gets a wakeup
> and starts running, and for some reason cpu 8 is busy while there are other
> cpus idle in the system. Now with the cu
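A minimal userspace sketch of the wakeup placement policy described above (the topology constants and helper names are hypothetical, not the actual patch; it assumes 4 cpus per node, so cpu 8 sits on node 2):

```c
#include <assert.h>

#define NCPUS 16
#define CPUS_PER_NODE 4            /* assumed topology */

static int cpu_to_node(int cpu) { return cpu / CPUS_PER_NODE; }

/* Pick a CPU for a waking task: prefer the CPU it last ran on, then
 * any idle CPU on the same node, then any idle CPU at all. */
int select_cpu(const int idle[NCPUS], int last_cpu)
{
    if (idle[last_cpu])
        return last_cpu;

    int home = cpu_to_node(last_cpu);
    for (int cpu = 0; cpu < NCPUS; cpu++)
        if (idle[cpu] && cpu_to_node(cpu) == home)
            return cpu;            /* same node: memory stays close */

    for (int cpu = 0; cpu < NCPUS; cpu++)
        if (idle[cpu])
            return cpu;            /* fall back to any idle CPU */

    return last_cpu;               /* nothing idle: queue on last CPU */
}
```

With cpu 8 busy, the task lands on an idle cpu of node 2 if one exists, rather than migrating its working set to a remote node.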
>
>
> George, while this is needed as pointed out in a previous message,
> due to non-contiguous physical IDs, I think the current usage is
> pretty bad (at least looking from an x86 perspective). Maybe somebody
> can chime in from a different architecture.
>
> I think that all data accesses par
>
> [Added Linus and linux-kernel as I think it's of general interest]
>
> Kanoj Sarcar wrote:
> > Whether Jamie was trying to illustrate a different problem, I am not
> > sure.
>
> Yes, I was talking about pte_test_and_clear_dirty in the earlier post.
>
>
> Kanoj Sarcar wrote:
> > > Here's the important part: when processor 2 wants to set the pte's dirty
> > > bit, it *rereads* the pte and *rechecks* the permission bits again.
> > > Even though it has a non-dirty TLB entry for that pte.
> > >
>
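The behaviour described above can be modelled with a small userspace sketch (bit names are hypothetical; on real hardware this happens as a locked read-modify-write during the page-table walk):

```c
#include <assert.h>
#include <stdint.h>

#define PTE_PRESENT 0x1u
#define PTE_WRITE   0x2u
#define PTE_DIRTY   0x4u

/* Even with a clean (non-dirty) TLB entry cached, the processor
 * re-reads the pte from memory and re-checks the permission bits
 * before setting the dirty bit.  Returns 1 on success, 0 when the
 * re-check fails (the processor would raise a fault instead). */
int set_dirty(uint32_t *pte)
{
    uint32_t v = *pte;                 /* re-read from memory */
    if (!(v & PTE_PRESENT) || !(v & PTE_WRITE))
        return 0;                      /* re-check failed: fault */
    *pte = v | PTE_DIRTY;              /* locked RMW on real hardware */
    return 1;
}
```

The point of the re-check is that a pte change made by another processor after the TLB entry was loaded is still honoured at dirty-set time.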
> Kanoj Sarcar wrote:
> >
> > Okay, I will quote from Intel Architecture Software Developer's Manual
> > Volume 3: System Programming Guide (1997 print), section 3.7, page 3-27:
> >
> > "Bus cycles to the page directory and page tables in memo
>
> On Thu, 15 Feb 2001, Kanoj Sarcar wrote:
>
> > No. All architectures do not have this problem. For example, if the
> > Linux "dirty" (not the pte dirty) bit is managed by software, a fault
> > will actually be taken when processor 2 tries to do the w
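The software-managed dirty-bit scheme mentioned above can be sketched in userspace as follows (bit names and helpers are hypothetical, not any particular architecture's code):

```c
#include <assert.h>
#include <stdint.h>

#define HW_WRITE 0x1u   /* hardware write-enable bit in the pte */
#define SW_DIRTY 0x2u   /* software-managed "Linux dirty" bit */

/* With a software-managed dirty bit, clean pages are kept
 * write-protected in the hardware pte, so the first write from any
 * CPU takes a fault instead of silently setting a hardware bit. */
void write_fault(uint32_t *pte)
{
    *pte |= SW_DIRTY | HW_WRITE;    /* record dirtiness, allow writes */
}

int try_write(uint32_t *pte)
{
    if (!(*pte & HW_WRITE)) {       /* write-protected: fault taken */
        write_fault(pte);
        return 1;                   /* 1 = a fault occurred */
    }
    return 0;                       /* write proceeds without a fault */
}
```

Because the fault is taken under kernel control, the race that silent hardware dirty-bit updates create simply does not arise on such architectures.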
Hi,
Just a quick note to mention that I was successful in booting up a
64 node, 128p, 64G mips64 machine on a 2.4.1 based kernel. To be able
to handle the number of io devices connected, I had to make some
fixes in the arch/mips64 code. And a few to handle 128 cpus.
A couple of generic patches
>
> Some time ago, the list was very helpful in solving my programs
> failing at the limit of real memory rather than expanding into
> swap under linux 2.2.
>
I can't say what your actual problem is, but in previous experiments,
I have seen these as the main cause:
1. shortage of real memory (r
>
> So while there may be a more elegant solution down the road, I would like
> to see the simple fix put back into 2.4. Here is the patch to essentially
> put the code back to the way it was before the S/390 merge. Patch is
> against 2.4.0-test10pre6.
>
> --- linux/mm/memory.c	Fri Oct 27 15:
>
>
> On Thu, 12 Oct 2000, David S. Miller wrote:
> >
> >page_table_lock is supposed to protect normal page table activity (like
> >what's done in page fault handler) from swapping out.
> >However, grabbing this lock in swap-out code is completely missing!
> >
> > Audrey, vmlist_ac
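A minimal userspace sketch of the locking discipline being asked for, with a pthread mutex standing in for the kernel spinlock and all helpers hypothetical: both the fault path and the swap-out path must serialize on the same page_table_lock before touching the page tables.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
static int pte_present = 1;   /* stand-in for a page-table entry */

/* Fault path: already takes the lock before touching page tables. */
int handle_fault(void)
{
    pthread_mutex_lock(&page_table_lock);
    if (!pte_present)
        pte_present = 1;      /* re-establish the mapping */
    pthread_mutex_unlock(&page_table_lock);
    return 1;
}

/* Swap-out path: the complaint above is that this lock acquisition
 * was missing, letting swap-out race with the fault handler. */
int swap_out(void)
{
    pthread_mutex_lock(&page_table_lock);
    int was_present = pte_present;
    pte_present = 0;          /* unmap the page for swap */
    pthread_mutex_unlock(&page_table_lock);
    return was_present;
}
```

With both paths inside the lock, a fault handler can never observe a half-completed swap-out of the same pte.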