Hello, Chris.
On Wed, Aug 14, 2013 at 01:18:31PM -0400, Chris Metcalf wrote:
> On 8/14/2013 12:57 PM, Tejun Heo wrote:
> > Hello, Chris.
> >
> > On Wed, Aug 14, 2013 at 12:03:39PM -0400, Chris Metcalf wrote:
> >> Tejun, I don't know if you have a better idea for how to mark a
> >> work_struct as being "not used" so we can set and test it here.
On 8/14/2013 12:57 PM, Tejun Heo wrote:
> Hello, Chris.
>
> On Wed, Aug 14, 2013 at 12:03:39PM -0400, Chris Metcalf wrote:
>> Tejun, I don't know if you have a better idea for how to mark a
>> work_struct as being "not used" so we can set and test it here.
>> Is setting entry.next to NULL good? Should we offer it as an API
>> in the workqueue header?
Hello, Chris.
On Wed, Aug 14, 2013 at 12:03:39PM -0400, Chris Metcalf wrote:
> Tejun, I don't know if you have a better idea for how to mark a
> work_struct as being "not used" so we can set and test it here.
> Is setting entry.next to NULL good? Should we offer it as an API
> in the workqueue header?
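For illustration, the marker being discussed could be wrapped in a pair of
trivial helpers like the ones below. The helper names are invented for this
sketch and are not an existing workqueue API; the sketch only assumes that
entry.next is never NULL while a work item is queued or running, which is
what makes it usable as a "not in use" flag.

#include <linux/workqueue.h>

/* Sketch only: mark a statically allocated work_struct as "not in use"
 * by pointing entry.next at NULL, and test that marker before reusing it. */
static inline void mark_work_unused(struct work_struct *work)
{
        work->entry.next = NULL;
}

static inline bool work_is_unused(const struct work_struct *work)
{
        return work->entry.next == NULL;
}

A caller would set the marker after flushing the work item and test it before
the next schedule_work_on(), instead of keeping a separate cpumask of queued
items.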
On 8/14/2013 2:46 AM, Andrew Morton wrote:
> On Tue, 13 Aug 2013 19:32:37 -0400 Chris Metcalf wrote:
>
>> On 8/13/2013 7:29 PM, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Tue, Aug 13, 2013 at 06:53:32PM -0400, Chris Metcalf wrote:
>>>> int lru_add_drain_all(void)
>>>> {
>>>> - return schedule_on_each_cpu(lru_add_drain_per_cpu);
Hello,
On Tue, Aug 13, 2013 at 11:46:29PM -0700, Andrew Morton wrote:
> What does "nest" mean? lru_add_drain_all() calls itself recursively,
> presumably via some ghastly alloc_percpu()->alloc_pages(GFP_KERNEL)
> route? If that ever happens then we'd certainly want to know about it.
> Hopefully
On Tue, 13 Aug 2013 19:32:37 -0400 Chris Metcalf wrote:
> On 8/13/2013 7:29 PM, Tejun Heo wrote:
> > Hello,
> >
> > On Tue, Aug 13, 2013 at 06:53:32PM -0400, Chris Metcalf wrote:
> >> int lru_add_drain_all(void)
> >> {
> >> - return schedule_on_each_cpu(lru_add_drain_per_cpu);
> >> + return schedule_on_each_cpu_cond(lru_add_drain_per_cpu,
> >> + lru_add_drain_cond, NULL);
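For context, a conditional variant along these lines might look roughly as
follows. The signature and body are assumptions for illustration, modelled on
the existing schedule_on_each_cpu(), and are not taken from the patch under
discussion.

/* Sketch only: a conditional schedule_on_each_cpu().  The cond callback
 * prototype is assumed, not taken from the actual patch. */
int schedule_on_each_cpu_cond(work_func_t func,
                              bool (*cond)(int cpu, void *data), void *data)
{
        struct work_struct __percpu *works;
        int cpu;

        works = alloc_percpu(struct work_struct);
        if (!works)
                return -ENOMEM;

        get_online_cpus();

        for_each_online_cpu(cpu) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                if (!cond(cpu, data)) {
                        work->entry.next = NULL;  /* "not queued" marker */
                        continue;
                }
                INIT_WORK(work, func);
                schedule_work_on(cpu, work);
        }

        /* Wait only for the work items that were actually queued. */
        for_each_online_cpu(cpu) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                if (work->entry.next)
                        flush_work(work);
        }

        put_online_cpus();
        free_percpu(works);
        return 0;
}

Because this mirrors schedule_on_each_cpu(), it still allocates per-cpu work
items on every call; the statically allocated alternative discussed elsewhere
in the thread avoids that allocation entirely.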
On Tue, Aug 13, 2013 at 07:44:55PM -0400, Chris Metcalf wrote:
> int lru_add_drain_all(void)
> {
> static struct cpumask mask;
> static DEFINE_MUTEX(lock);

Instead of cpumask, you can DEFINE_PER_CPU(struct work_struct, ...).

> for_each_online_cpu(cpu) {
>
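A rough sketch of how that suggestion might look for lru_add_drain_all();
lru_drain_work, lru_drain_lock and cpu_needs_lru_drain() are invented names
for illustration, not symbols from any actual patch.

/* Sketch only: statically allocated per-cpu work items plus a mutex,
 * instead of a cpumask and a dynamic allocation. */
static DEFINE_PER_CPU(struct work_struct, lru_drain_work);
static DEFINE_MUTEX(lru_drain_lock);

int lru_add_drain_all(void)
{
        int cpu;

        mutex_lock(&lru_drain_lock);
        get_online_cpus();

        for_each_online_cpu(cpu) {
                struct work_struct *work = &per_cpu(lru_drain_work, cpu);

                if (!cpu_needs_lru_drain(cpu)) {  /* invented predicate */
                        work->entry.next = NULL;  /* "not queued" marker */
                        continue;
                }
                INIT_WORK(work, lru_add_drain_per_cpu);
                schedule_work_on(cpu, work);
        }

        /* Flush only the work items that were queued above. */
        for_each_online_cpu(cpu) {
                struct work_struct *work = &per_cpu(lru_drain_work, cpu);

                if (work->entry.next)
                        flush_work(work);
        }

        put_online_cpus();
        mutex_unlock(&lru_drain_lock);
        return 0;
}

Nothing here is allocated at call time, so the GFP_KERNEL recursion concern
raised elsewhere in the thread does not apply, and the mutex keeps the
per-cpu work items from being reused while a previous drain is in flight.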
On 8/13/2013 7:29 PM, Tejun Heo wrote:
> It won't nest and doing it simultaneously won't buy anything, right?
> Wouldn't it be better to protect it with a mutex and define all
> necessary resources statically (yeah, cpumask is a pain in the ass and I
> think we should un-deprecate cpumask_t for static
On 8/13/2013 7:29 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Aug 13, 2013 at 06:53:32PM -0400, Chris Metcalf wrote:
>> int lru_add_drain_all(void)
>> {
>> - return schedule_on_each_cpu(lru_add_drain_per_cpu);
>> + return schedule_on_each_cpu_cond(lru_add_drain_per_cpu,
>> + lru_add_drain_cond, NULL);
Hello,
On Tue, Aug 13, 2013 at 06:53:32PM -0400, Chris Metcalf wrote:
> int lru_add_drain_all(void)
> {
> - return schedule_on_each_cpu(lru_add_drain_per_cpu);
> + return schedule_on_each_cpu_cond(lru_add_drain_per_cpu,
> + lru_add_drain_cond, NULL);
This change makes lru_add_drain_all() only selectively interrupt
the cpus that have per-cpu free pages that can be drained.
This is important in nohz mode, where a call such as mlockall() would
otherwise interrupt every core unnecessarily.
Signed-off-by: Chris Metcalf
---
v7: try a version
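Concretely, the caller side of a change like this might look as follows; the
predicate prototype and the particular pagevecs tested are assumptions for
illustration (roughly the per-cpu pagevecs mm/swap.c had at the time), not
the code of the v7 patch itself.

/* Sketch only: report whether this cpu has anything in its per-cpu LRU
 * pagevecs worth draining.  Prototype and pagevec names are assumed. */
static bool lru_add_drain_cond(int cpu, void *data)
{
        return pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
               pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
               pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu));
}

int lru_add_drain_all(void)
{
        return schedule_on_each_cpu_cond(lru_add_drain_per_cpu,
                                         lru_add_drain_cond, NULL);
}

With a predicate like this, an idle nohz core whose pagevecs are empty is
never sent the drain work item, so a call such as mlockall() on another core
no longer interrupts it.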