On Tue, 15 Dec 2009 12:46:32 +0200
"Kirill A. Shutemov" <[email protected]> wrote:

> On Tue, Dec 15, 2009 at 3:58 AM, KAMEZAWA Hiroyuki
> <[email protected]> wrote:
> > On Sat, 12 Dec 2009 00:59:19 +0200
> > "Kirill A. Shutemov" <[email protected]> wrote:

> > If you have to use a spinlock here, this is a system-wide spinlock;
> > a threshold of "100" is too small, I think.
> 
> What is a reasonable value of THRESHOLDS_EVENTS_THRESH for you?
> 
> In most cases the spinlock is taken only for two checks. Is that a significant amount of time?
> 
I tend to think about the "bad case" when I see a spinlock.

And...I'm not sure, but recently there are many VM users.
A spinlock can be a big pitfall in some environments if not para-virtualized.
(I'm sorry if I misunderstand something and VMs handle this well...)
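
(For context, a rough sketch of the pattern we are discussing; the field
and helper names below are my guesses for illustration, not the actual
patch:)

static void mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
{
        /* cheap check, done on every charge/uncharge event */
        if (atomic_add_return(1, &memcg->threshold_events) <
            THRESHOLDS_EVENTS_THRESH)
                return;
        /* resetting like this can lose a few events under race;
           fine for a sketch */
        atomic_set(&memcg->threshold_events, 0);

        /* the costly part: one per-memcg lock, taken by every CPU
           that crosses the threshold */
        spin_lock(&memcg->thresholds_lock);
        __mem_cgroup_threshold(memcg, swap);  /* scan thresholds, signal eventfds */
        spin_unlock(&memcg->thresholds_lock);
}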

> Unfortunately, I can't test it on a big box. I have only a dual-core system.
> That's not enough to test scalability.
> 

Please leave it as 100 for now. But there is a chance to do a simple
optimization to reduce the number of checks.

example)
static void mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
{
        /* To handle a rush of memory allocations, rate-limit by jiffies */
        smp_rmb();
        if (memcg->last_checkpoint_jiffies == jiffies)
                return;   /* maybe reset the event counter to half its value here .. */
        memcg->last_checkpoint_jiffies = jiffies;
        smp_wmb();
        .....

I think this kind of check is necessary to handle "rushing" memory allocation
in a scalable way. The above is just an example; 1 tick may be too long.
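
If 1 tick turns out to be too long (or too short), the same trick works with
an explicit interval via time_after(); a minimal sketch, again using a
made-up last_checkpoint_jiffies field:

/* ~10ms; pure guess, needs tuning */
#define THRESHOLD_CHECK_INTERVAL        (HZ / 100)

static void mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
{
        unsigned long last = memcg->last_checkpoint_jiffies;

        /* Racy read, but harmless: a lost update just means
           one extra scan. */
        if (!time_after(jiffies, last + THRESHOLD_CHECK_INTERVAL))
                return;
        memcg->last_checkpoint_jiffies = jiffies;
        .....
}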

Another simple plan is:

        /* Allow only one thread to scan the list at the same time. */
        if (atomic_inc_return(&memcg->threshold_scan_count) > 1) {
                atomic_dec(&memcg->threshold_scan_count);
                return;
        }
        ...
        atomic_dec(&memcg->threshold_scan_count);
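
The counter is effectively a trylock here: contending CPUs just skip the
scan instead of spinning. If the inc/dec pair looks fragile, the same thing
can be written with atomic_cmpxchg() (again just a sketch):

        if (atomic_cmpxchg(&memcg->threshold_scan_count, 0, 1) != 0)
                return;         /* another CPU is scanning already */
        ...
        atomic_set(&memcg->threshold_scan_count, 0);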

Some easy logic (as above) to take care of scalability, plus commentary on
it, is enough at the 1st stage. Then, if there seems to be a trouble/concern,
someone (me?) will do some work later.




Thanks,
-Kame
