Sorry, I did try it. I must have forgotten to notify you of its success.
Will next post an updated patch using 'pr_warn_once'. It prints a message
during system initialization, by the way.
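Roughly, the conversion would look like this (a sketch only, not the final
patch; the exact message text and placement may differ):

    /* Sketch: note the suspect mask once instead of emitting a full
     * WARN_ON() splat on every unbound pwq allocation. */
    if (unlikely(cpumask_empty(pool->attrs->cpumask)))
            pr_warn_once("workqueue: unbound pool created with empty cpumask\n");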
Thanks.
On 07/26/2017 02:16 PM, Tejun Heo wrote:
> On Wed, Jul 26, 2017 at 10:25:08AM -0500, Michael Bringmann wrote:
On Wed, Jul 26, 2017 at 10:25:08AM -0500, Michael Bringmann wrote:
> Hello, Tejun:
> Do you need anything else from me regarding this patch?
> Or are you good to commit it upstream?
> Thanks.
Hmmm... you were planning to try it and we wanted to convert it to
WARN_ONCE?
Thanks.
--
tejun
Hello, Tejun:
Do you need anything else from me regarding this patch?
Or are you good to commit it upstream?
Thanks.
Michael
On 06/28/2017 04:24 PM, Tejun Heo wrote:
> On Wed, Jun 28, 2017 at 04:15:09PM -0500, Michael Bringmann wrote:
>> I will try that patch tomorrow. My only concern about that is the use of
>> WARN_ON().
On Wed, Jun 28, 2017 at 04:15:09PM -0500, Michael Bringmann wrote:
> I will try that patch tomorrow. My only concern about that is the use of
> WARN_ON().
> As I may have mentioned in my note of 6/27, I saw about 600 instances of
> the warning message just during boot of the PowerPC kernel. I doubt that
> we want to see that on an ongoing basis.
I will try that patch tomorrow. My only concern about that is the use of
WARN_ON().
As I may have mentioned in my note of 6/27, I saw about 600 instances of the
warning message just during boot of the PowerPC kernel. I doubt that we want
to see that on an ongoing basis.
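If a warning is kept at all, WARN_ONCE() would at least cap the noise while
preserving the backtrace (sketch; the condition shown is illustrative):

    /* Fires once, with a backtrace, on the first empty mask rather than
     * ~600 times during a PowerPC boot. */
    WARN_ONCE(cpumask_empty(pool->attrs->cpumask),
              "workqueue: unbound pool has empty cpumask\n");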
Michael
On 06/13/2017
Hello,
On Tue, Jun 13, 2017 at 03:04:30PM -0500, Michael Bringmann wrote:
> @@ -3564,19 +3564,28 @@ static struct pool_workqueue *alloc_unbound_pwq(struct workqueue_struct *wq,
> static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
>
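For context, the unpatched helper computed the node mask roughly as follows
(a simplified sketch of the v4.12-era source; the wq_numa_enabled/no_numa
early-out is omitted):

    static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs,
                                     int node, int cpu_going_down,
                                     cpumask_t *cpumask)
    {
            /* does @node have any online CPUs @attrs wants? */
            cpumask_and(cpumask, cpumask_of_node(node), attrs->cpumask);
            if (cpu_going_down >= 0)
                    cpumask_clear_cpu(cpu_going_down, cpumask);
            if (cpumask_empty(cpumask))
                    goto use_dfl;

            /* yes: use the possible CPUs of @node that @attrs wants */
            cpumask_and(cpumask, attrs->cpumask, wq_numa_possible_cpumask[node]);
            return !cpumask_equal(cpumask, attrs->cpumask);

    use_dfl:
            cpumask_copy(cpumask, attrs->cpumask);
            return false;
    }

The failure mode discussed in this thread sits in the second cpumask_and():
if wq_numa_possible_cpumask[node] does not cover CPUs added later by DLPAR,
the result can be empty even though the node passed the online check above.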
Hello:
On 06/12/2017 12:32 PM, Tejun Heo wrote:
> Hello,
>
> On Mon, Jun 12, 2017 at 12:10:49PM -0500, Michael Bringmann wrote:
>>> The reason why we're ending up with empty masks is because
>>> wq_calc_node_cpumask() is assuming that the possible node cpumask is
>>> always a superset of online (as it should).
Hello,
On Mon, Jun 12, 2017 at 12:10:49PM -0500, Michael Bringmann wrote:
> > The reason why we're ending up with empty masks is because
> > wq_calc_node_cpumask() is assuming that the possible node cpumask is
> > always a superset of online (as it should). We can trigger a fat
> > warning there
On 06/12/2017 11:14 AM, Tejun Heo wrote:
> Hello,
>
> On Mon, Jun 12, 2017 at 09:47:31AM -0500, Michael Bringmann wrote:
>>> I'm not sure because it doesn't make any logical sense and it's not
>>> right in terms of correctness. The above would be able to enable CPUs
>>> which are explicitly excluded from a workqueue.
Hello,
On Mon, Jun 12, 2017 at 09:47:31AM -0500, Michael Bringmann wrote:
> > I'm not sure because it doesn't make any logical sense and it's not
> > right in terms of correctness. The above would be able to enable CPUs
> > which are explicitly excluded from a workqueue. The only fallback
> > wh
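Concretely, the kind of fallback being rejected here is any variant of this
shape (illustrative only, not necessarily the exact patch under discussion):

    /* WRONG: attrs->cpumask is user-specified policy.  Widening it with
     * online CPUs would run work items on CPUs the user explicitly
     * excluded from this workqueue. */
    if (cpumask_empty(pool->attrs->cpumask))
            cpumask_copy(pool->attrs->cpumask, cpu_online_mask);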
Hello:
On 06/06/2017 01:09 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Jun 06, 2017 at 11:18:36AM -0500, Michael Bringmann wrote:
>> On 05/25/2017 10:30 AM, Michael Bringmann wrote:
>>> I will try that patch shortly. I also updated my patch to be conditional
>>> on whether the pool's cpumask attribute was empty. You should have received
>>> V2 of that patch by now.
Hello,
On Tue, Jun 06, 2017 at 11:18:36AM -0500, Michael Bringmann wrote:
> On 05/25/2017 10:30 AM, Michael Bringmann wrote:
> > I will try that patch shortly. I also updated my patch to be conditional
> > on whether the pool's cpumask attribute was empty. You should have received
> > V2 of that patch by now.
On 05/25/2017 10:30 AM, Michael Bringmann wrote:
> I will try that patch shortly. I also updated my patch to be conditional
> on whether the pool's cpumask attribute was empty. You should have received
> V2 of that patch by now.
Let's try this again.
The hotplug problem goes away with the cha
I will try that patch shortly. I also updated my patch to be conditional
on whether the pool's cpumask attribute was empty. You should have received
V2 of that patch by now.
As to your remark about 'proper subset of possible cpumask for the node',
would that not be the case when we are removing
On Thu, May 25, 2017 at 11:03:53AM -0400, Tejun Heo wrote:
> wq_update_unbound_numa() should have never called into
> alloc_unbound_pwq() w/ empty node cpu mask. It should have fallen
> back to the dfl_pwq. It looks like I just messed up the logic there
> from the initial commit of the feature.
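Per the v4.12-era source, the fallback in wq_update_unbound_numa() has
roughly this shape (simplified sketch):

    /* Compute the node's target cpumask; when the node needs no
     * node-specific pwq (or, with the fix discussed here, when the
     * computed mask is unusable), fall back to the default pwq. */
    if (wq_calc_node_cpumask(wq->dfl_pwq->pool->attrs, node, cpu_off, cpumask)) {
            if (cpumask_equal(cpumask, pwq->pool->attrs->cpumask))
                    return;
    } else {
            goto use_dfl_pwq;
    }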
Hello, Michael.
On Wed, May 24, 2017 at 06:39:49PM -0500, Michael Bringmann wrote:
> [ 321.310961] ------------[ cut here ]------------
> [ 321.310997] WARNING: CPU: 184 PID: 13201 at kernel/workqueue.c:3375 alloc_unbound_pwq+0x5c0/0x5e0
> [ 321.311005] Modules linked in: rpadlpar_io rpaphp
On 05/23/2017 03:10 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, May 23, 2017 at 03:09:07PM -0500, Michael Bringmann wrote:
>> To confirm, you want the WARN_ON(cpumask_any(pool->attrs->cpumask) >=
>> NR_CPUS)
>> at the point where I place my current patch?
>
> Yeah, cpumask_weight() probably is a bit more intuitive but I'm
> curious why we're creating work
Hello,
On Tue, May 23, 2017 at 03:09:07PM -0500, Michael Bringmann wrote:
> To confirm, you want the WARN_ON(cpumask_any(pool->attrs->cpumask) >= NR_CPUS)
> at the point where I place my current patch?
Yeah, cpumask_weight() probably is a bit more intuitive but I'm
curious why we're creating work
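For reference, the weight-based spelling would be (sketch):

    /* An empty mask has weight 0; cpumask_empty() states the same
     * condition even more directly. */
    WARN_ON(cpumask_weight(pool->attrs->cpumask) == 0);

One subtlety: depending on configuration, cpumask_any() on an empty mask can
return nr_cpu_ids rather than NR_CPUS, so an explicit emptiness test is also
safer than comparing against NR_CPUS.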
To confirm, you want the WARN_ON(cpumask_any(pool->attrs->cpumask) >= NR_CPUS)
at the point where I place my current patch?
On 05/23/2017 02:49 PM, Tejun Heo wrote:
> Hello, Michael.
>
> On Tue, May 23, 2017 at 02:44:23PM -0500, Michael Bringmann wrote:
>> On 05/16/2017 10:55 AM, Tejun Heo wrote:
Hello, Michael.
On Tue, May 23, 2017 at 02:44:23PM -0500, Michael Bringmann wrote:
> On 05/16/2017 10:55 AM, Tejun Heo wrote:
> > Hello, Michael.
> >
> > On Mon, May 15, 2017 at 10:48:04AM -0500, Michael Bringmann wrote:
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -3366,6 +3366,8 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
On 05/16/2017 10:55 AM, Tejun Heo wrote:
> Hello, Michael.
>
> On Mon, May 15, 2017 at 10:48:04AM -0500, Michael Bringmann wrote:
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3366,6 +3366,8 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
Hello, Michael.
On Mon, May 15, 2017 at 10:48:04AM -0500, Michael Bringmann wrote:
> >> --- a/kernel/workqueue.c
> >> +++ b/kernel/workqueue.c
> >> @@ -3366,6 +3366,8 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> >>         copy_workqueue_attrs(pool->attrs, attrs);
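The hunk adds two lines after the attrs copy; a hypothetical guard of that
shape (my guess at the idea, not the actual posted V2) might read:

    copy_workqueue_attrs(pool->attrs, attrs);
    /* Hypothetical: never leave a freshly created pool with an empty
     * cpumask; fall back to all possible CPUs. */
    if (unlikely(cpumask_empty(pool->attrs->cpumask)))
            cpumask_copy(pool->attrs->cpumask, cpu_possible_mask);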
Hello:
On 05/10/2017 12:33 PM, Tejun Heo wrote:
> Hello,
>
> On Wed, May 10, 2017 at 11:48:17AM -0500, Michael Bringmann wrote:
>>
>> On NUMA systems with dynamic processors, the content of the cpumask
>> may change over time. As new processors are added via DLPAR operations,
>> workqueues are created for them. This patch ensures that the pools
Hello,
On Wed, May 10, 2017 at 11:48:17AM -0500, Michael Bringmann wrote:
>
> On NUMA systems with dynamic processors, the content of the cpumask
> may change over time. As new processors are added via DLPAR operations,
> workqueues are created for them. This patch ensures that the pools
> crea