> The map type is printed in a similar format to /proc/meminfo or
> /proc/<pid>/status, i.e. "$key: $value\n"
this description doesn't seem to match the code.
YAMAMOTO Takashi
> +static int cgroup_map_add(struct cgroup_map_cb *cb, const char *key, u64 value)
>
> On Feb 19, 2008 9:48 PM, YAMAMOTO Takashi <[EMAIL PROTECTED]> wrote:
> >
> > it changes the format from "%s %lld" to "%s: %llu", right?
> > why?
> >
>
> The colon is for consistency with maps in /proc. I think it also makes it
> slightly more readable.
> - simplifies transition to a future efficient cgroups binary API
>
> Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
it changes the format from "%s %lld" to "%s: %llu", right?
why?
YAMAMOTO Takashi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
> [...] >= nr_to_scan)
> break;
> page = pc->page;
> - VM_BUG_ON(!pc);
> + VM_BUG_ON(!page);
can't page be NULL here if mem_cgroup_uncharge clears pc->page behind us?
i.e. a bug.
YAMAMOTO Takashi
> type page_cgroup and we use list_for_each_entry_safe_reverse. Not
> sure
> why we can't bug on pc.
pc is dereferenced before this VM_BUG_ON.
YAMAMOTO Takashi
>
>
>
> --
> Warm Regards,
> Balbir Singh
> Linux Technology Center
> IBM, IST
the implementation of the memory subsystem associates pages with
cgroups directly, rather than via tasks. so it isn't straightforward to
use that information for other classification mechanisms like yours, which
might not share the memory subsystem's view of the "hierarchy".
YAMAMOTO Takashi
I/O correctly:
>
> Yes, this should be mentioned in the document with the current implementation
> as you pointed out.
>
> By the way, I think once a memory controller of cgroup is introduced, it will
> help to track down which cgroup is the original source.
do you mean to make
> +static inline struct mem_cgroup_per_zone *
> +mem_cgroup_zoneinfo(struct mem_cgroup *mem, int nid, int zid)
> +{
> + if (!mem->info.nodeinfo[nid])
can this be true?
YAMAMOTO Takashi
> + return NULL;
> + return &mem->info.nodeinfo[nid]->zoneinfo[zid];
> __mem_cgroup_remove_list(pc);
> kfree(pc);
> } else /* being uncharged ? ...do relax */
> break;
'active' seems unused.
YAMAMOTO Takashi
> YAMAMOTO Takashi wrote:
> >> Allow tasks to migrate from one container to the other. We migrate
> >> mm_struct's mem_container only when the thread group id migrates.
> >
> >> + /*
> >> + * Only thread group leaders are allowed to migrate,
> >> + */
> >> + if (p->tgid != p->pid)
> >> + goto out;
does it mean that you can't move a process between containers
once its thread group leader exited?
YAMAMOTO Takashi
> +echo 1 > /proc/sys/vm/drop_pages will help get rid of some of the pages
> +cached in the container (page cache pages).
drop_caches
YAMAMOTO Takashi
> YAMAMOTO Takashi wrote:
> >> + lock_meta_page(page);
> >> + /*
> >> + * Check if somebody else beat us to allocating the meta_page
> >> + */
> >> + race_mp = page_get_meta_page(page);
> >> + if (race_mp)
> YAMAMOTO Takashi wrote:
> >> Choose if we want cached pages to be accounted or not. By default both
> >> are accounted for. A new set of tunables are added.
> >>
> >> echo -n 1 > mem_control_type
> >>
> >> switches the accounting to account for mapped pages only
> >>
> >> echo -n 2 > mem_control_type
> >>
> >> switches the behaviour back
MEM_CONTAINER_TYPE_ALL is 3, not 2.
YAMAMOTO Takashi
> +enum {
> + MEM_CONTAINER_TYPE_UNSPEC = 0,
> + MEM_CONTAINER_TYPE_MAPPED,
> + MEM_CONTAINER_TYPE_CACHED,
> + MEM_CONTAINER_TYPE_ALL,
> + MEM_CONTAINER_TYPE_MAX,
>
> + mp = list_entry(src->prev, struct meta_page, lru);
what prevents another thread from freeing mp here?
> + spin_lock(&mem_cont->lru_lock);
> + if (mp)
> + page = mp->page;
> + spin_unlock(&mem_cont->lru_lock);
>
> + atomic_inc(&mp->ref_cnt);
> + res_counter_uncharge(&mem->res, 1);
> + goto done;
> + }
i think you need css_put here.
YAMAMOTO Takashi
> +extern void container_init_smp(void);
> +static inline void container_init_smp(void) {}
stale prototypes?
YAMAMOTO Takashi
> + members of the subsystem's top_container. It should
> + be initialized to -1.
stale info?
> +struct container {
> + struct containerfs_root *root;
> + struct container *top_container;
> +};
can cont->top_container be different from &cont->root.top_container?
> + mp = list_entry(src->prev, struct meta_page, lru);
> + page = mp->page;
> +
- is it safe to pick the lists without mem_cont->lru_lock held?
- what prevents mem_container_uncharge from freeing this meta_page
behind us?
YAMAMOTO Takashi
> On 7/10/07, YAMAMOTO Takashi <[EMAIL PROTECTED]> wrote:
> > hi,
> >
> > > diff -puN mm/memory.c~mem-control-accounting mm/memory.c
> > > --- linux-2.6.22-rc6/mm/memory.c~mem-control-accounting 2007-07-05
> > > 13:45:18.0 -07
> extended to become
> container aware.
>
> Signed-off-by: Balbir Singh <[EMAIL PROTECTED]>
it seems that the number of pages to scan (nr_active/nr_inactive
in shrink_zone) is calculated from NR_ACTIVE and NR_INACTIVE of the zone,
even in the case of per-container reclaim. is it intended?
> goto oom;
> +
> entry = mk_pte(page, vma->vm_page_prot);
> entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>
ditto.
can you check the rest of the patch by yourself? thanks.
YAMAMOTO Takashi
> + if (found) {
> + return p;
> + }
> + addr += PAGE_SIZE;
> + }
> + }
doesn't this loop take a very long time if you have a large hole?
i'd suggest changing valid_phys_addr_range to fill &size even when
it returns false, so that the caller can skip the whole hole at once.