On Tue, 2015-10-27 at 15:00 +0900, Tejun Heo wrote:
> On Tue, Oct 27, 2015 at 06:56:42AM +0100, Mike Galbraith wrote:
> > > Well, if you think certain things are being missed, please speak up.
> > > Not in some media campaign way but with technical reasoning and
> > > justifications.
> >
> Inserting a middle-man is extremely unlikely to improve performance.
On Tue, Oct 27, 2015 at 06:56:42AM +0100, Mike Galbraith wrote:
> > Well, if you think certain things are being missed, please speak up.
> > Not in some media campaign way but with technical reasoning and
> > justifications.
>
> Inserting a middle-man is extremely unlikely to improve performance.
On Tue, 2015-10-27 at 14:46 +0900, Tejun Heo wrote:
> Hello,
>
> On Tue, Oct 27, 2015 at 06:42:11AM +0100, Mike Galbraith wrote:
> > Sure, sounds fine, I just fervently hope that the below is foul swamp
> > gas having nothing what so ever to do with your definition of "saner".
>
> lol, idk, you keep taking things in weird directions. Let's just stay
> technical,
Hello,
On Tue, Oct 27, 2015 at 06:42:11AM +0100, Mike Galbraith wrote:
> Sure, sounds fine, I just fervently hope that the below is foul swamp
> gas having nothing what so ever to do with your definition of "saner".
lol, idk, you keep taking things in weird directions. Let's just stay
technical,
On Tue, 2015-10-27 at 12:16 +0900, Tejun Heo wrote:
> Hello, Mike.
>
> On Sun, Oct 25, 2015 at 04:43:33AM +0100, Mike Galbraith wrote:
> > I don't think it's weird, it's just a thought wrt where pigeon holing
> > could lead: If you filter out current users who do so in a manner you
> > consider to be in some way odd, when all the filtering is done, you may
Hello, Mike.
On Sun, Oct 25, 2015 at 04:43:33AM +0100, Mike Galbraith wrote:
> I don't think it's weird, it's just a thought wrt where pigeon holing
> could lead: If you filter out current users who do so in a manner you
> consider to be in some way odd, when all the filtering is done, you may
>
On Sun, Oct 25, 2015 at 02:17:23PM +0100, Florian Weimer wrote:
> On 10/25/2015 12:58 PM, Theodore Ts'o wrote:
>
> > Well, I was thinking we could just teach them to use
> > "syscall(SYS_gettid)".
>
> Right, and that's easier if TIDs are officially part of the GNU API.
>
> I think the worry is that some future system might have TIDs which do
> not share the PID space, or are
On 10/25/2015 12:58 PM, Theodore Ts'o wrote:
> Well, I was thinking we could just teach them to use
> "syscall(SYS_gettid)".
Right, and that's easier if TIDs are officially part of the GNU API.
I think the worry is that some future system might have TIDs which do
not share the PID space, or are
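[ For reference, a minimal sketch of the syscall(SYS_gettid) usage being
discussed; this is illustrative only and not code from the thread. It
assumes Linux, where glibc provided no gettid() wrapper at the time. ]

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* No glibc wrapper for gettid() existed in 2015; use the raw syscall. */
static pid_t my_gettid(void)
{
	return (pid_t)syscall(SYS_gettid);
}

static void *worker(void *arg)
{
	(void)arg;
	/* On Linux the TID lives in the same number space as PIDs; the
	 * worry above is whether that can be relied on elsewhere. */
	printf("pid=%d tid=%d\n", (int)getpid(), (int)my_gettid());
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	pthread_join(t, NULL);
	printf("main thread tid=%d\n", (int)my_gettid());
	return 0;
}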
On Sun, Oct 25, 2015 at 11:47:04AM +0100, Florian Weimer wrote:
> On 10/25/2015 11:41 AM, Theodore Ts'o wrote:
> > On Sun, Oct 25, 2015 at 10:33:32AM +0100, Ingo Molnar wrote:
> >>
> >> Hm, that's weird - all our sched_*() system call APIs that set task
> >> scheduling priorities are fundamentally per thread, not per process. Same
> >> goes for the old sys_nice() interface.
On 10/25/2015 11:41 AM, Theodore Ts'o wrote:
> On Sun, Oct 25, 2015 at 10:33:32AM +0100, Ingo Molnar wrote:
>>
>> Hm, that's weird - all our sched_*() system call APIs that set task
>> scheduling priorities are fundamentally per thread, not per process. Same
>> goes for the old sys_nice()
On Sun, Oct 25, 2015 at 10:33:32AM +0100, Ingo Molnar wrote:
>
> Hm, that's weird - all our sched_*() system call APIs that set task
> scheduling priorities are fundamentally per thread, not per process. Same
> goes for the old sys_nice() interface. The scheduler has no real notion of
> 'process'
* Linus Torvalds wrote:
> On Sun, Oct 25, 2015 at 11:18 AM, Tejun Heo wrote:
> >
> > We definitely need to weigh the inputs from heavy users but also need to
> > discern the actual problems which need to be solved from the specific
> > mechanisms chosen to solve them. Let's please keep the d
On Sun, Oct 25, 2015 at 11:18 AM, Tejun Heo wrote:
>
> We definitely need to weigh the inputs from heavy users but also need
> to discern the actual problems which need to be solved from the
> specific mechanisms chosen to solve them. Let's please keep the
> discussions technical. That's the bes
On Sun, 2015-10-25 at 11:18 +0900, Tejun Heo wrote:
> Hello, Mike.
>
> On Sat, Oct 24, 2015 at 06:36:07AM +0200, Mike Galbraith wrote:
> > On Sat, 2015-10-24 at 07:21 +0900, Tejun Heo wrote:
> >
> > > It'd be a step back in usability only for users who have been using
> > > cgroups in fringing wa
Hello, Mike.
On Sat, Oct 24, 2015 at 06:36:07AM +0200, Mike Galbraith wrote:
> On Sat, 2015-10-24 at 07:21 +0900, Tejun Heo wrote:
>
> > It'd be a step back in usability only for users who have been using
> > cgroups in fringing ways which can't be justified for ratification and
> > we do want to actively filter those out.
On Sat, 2015-10-24 at 07:21 +0900, Tejun Heo wrote:
> It'd be a step back in usability only for users who have been using
> cgroups in fringing ways which can't be justified for ratification and
> we do want to actively filter those out.
Of all the cgroup signal currently in existence, seems the
Hello, Paul.
On Thu, Oct 15, 2015 at 04:42:37AM -0700, Paul Turner wrote:
> > The thing which bothers me the most is that cpuset behavior is
> > different from global case for no good reason.
>
> I've tried to explain above that I believe there are reasonable
> reasons for it working the way it does.
On Thu, Oct 1, 2015 at 11:46 AM, Tejun Heo wrote:
> Hello, Paul.
>
> Sorry about the delay. Things were kinda hectic in the past couple
> weeks.
Likewise :-(
>
> On Fri, Sep 18, 2015 at 04:27:07AM -0700, Paul Turner wrote:
>> On Sat, Sep 12, 2015 at 7:40 AM, Tejun Heo wrote:
>> > On Wed, Sep 0
Hello, Paul.
Sorry about the delay. Things were kinda hectic in the past couple
weeks.
On Fri, Sep 18, 2015 at 04:27:07AM -0700, Paul Turner wrote:
> On Sat, Sep 12, 2015 at 7:40 AM, Tejun Heo wrote:
> > On Wed, Sep 09, 2015 at 05:49:31AM -0700, Paul Turner wrote:
> >> I do not think this is a layering problem.
On Sat, Sep 12, 2015 at 7:40 AM, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 09, 2015 at 05:49:31AM -0700, Paul Turner wrote:
>> I do not think this is a layering problem. This is more like C++:
>> there is no sane way to concurrently use all the features available,
>> however, reasonably self-consistent subsets may be chosen.
Paul?
Thanks.
--
tejun
On Thu, Sep 17, 2015 at 11:52:45AM -0400, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 17, 2015 at 05:10:49PM +0200, Peter Zijlstra wrote:
> > Subject: sched: Refuse to unplug a CPU if this will violate user task
> > affinity
> >
> > It's bad policy to allow unplugging a CPU for which a user set explicit
> > affinity, either strictly on this CPU or in case this was the last
Hello,
On Thu, Sep 17, 2015 at 05:10:49PM +0200, Peter Zijlstra wrote:
> Subject: sched: Refuse to unplug a CPU if this will violate user task affinity
>
> It's bad policy to allow unplugging a CPU for which a user set explicit
> affinity, either strictly on this CPU or in case this was the last
>
On Thu, Sep 17, 2015 at 10:53:09AM -0400, Tejun Heo wrote:
> > I'd be happy to fail a CPU down for user tasks where this is the last
> > runnable CPU of.
>
> So, yeah, we need to keep these things consistent across global and
> cgroup cases.
>
Ok, I'll go extend the sysctl_sched_strict_affinity
On Thu, Sep 17, 2015 at 04:35:27PM +0200, Peter Zijlstra wrote:
> I'd be happy to fail a CPU down for user tasks where this is the last
> runnable CPU of.
A little like so. Completely untested.
---
Subject: sched: Refuse to unplug a CPU if this will violate user task affinity
It's bad policy to allow unplugging a CPU for which a user set explicit
affinity, either strictly on this CPU or in case this was the last
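[ Peter's actual (untested) patch is truncated in this archive. Purely as an
illustration of the check it describes - refuse the hot-unplug if some task
with a user-set affinity would be left with no runnable CPU - a sketch might
look like the following; the function name, hook point and the
PF_NO_SETAFFINITY filter are assumptions, not his code. ]

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

/*
 * Illustrative only (name and hook point assumed): meant to be called
 * early in the CPU hot-unplug path and to refuse the unplug if some
 * task would be left with no CPU it is allowed to run on.
 */
static int cpu_down_check_user_affinity(unsigned int cpu)
{
	struct task_struct *g, *t;
	int ret = 0;

	rcu_read_lock();
	for_each_process_thread(g, t) {
		/* Per-cpu kthreads etc. are handled by hotplug itself. */
		if (t->flags & PF_NO_SETAFFINITY)
			continue;
		/* Is 'cpu' the last CPU this task may run on? */
		if (cpumask_test_cpu(cpu, &t->cpus_allowed) &&
		    cpumask_weight(&t->cpus_allowed) == 1) {
			ret = -EBUSY;
			goto out;	/* the macro nests two loops; break won't do */
		}
	}
out:
	rcu_read_unlock();
	return ret;
}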
Hello,
On Thu, Sep 17, 2015 at 04:35:27PM +0200, Peter Zijlstra wrote:
> On Sat, Sep 12, 2015 at 10:40:07AM -0400, Tejun Heo wrote:
> > So, one of the problems is that the kernel can't have tasks w/o
> > runnable CPUs, so we have to do some workaround when, for whatever
> > reason, a task ends up wit
On Sat, Sep 12, 2015 at 10:40:07AM -0400, Tejun Heo wrote:
> So, one of the problems is that the kernel can't have tasks w/o
> runnable CPUs, so we have to do some workaround when, for whatever
> reason, a task ends up with no CPU that it can run on.
No, just refuse that configuration.
> You say cpu
Hello,
On Wed, Sep 09, 2015 at 05:49:31AM -0700, Paul Turner wrote:
> I do not think this is a layering problem. This is more like C++:
> there is no sane way to concurrently use all the features available,
> however, reasonably self-consistent subsets may be chosen.
That's just admitting failure.
[ Picking this back up, I was out of the country last week. Note that
we are also wrestling with some DMARC issues as it was just activated
for Google.com so apologies if people do not receive this directly. ]
On Tue, Aug 25, 2015 at 2:02 PM, Tejun Heo wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at
Paul?
Thanks.
--
tejun
Hello, Kame.
On Tue, Aug 25, 2015 at 11:36:25AM +0900, Kamezawa Hiroyuki wrote:
> I think I should explain my customer's use case of qemu + cpuset/cpu (via
> libvirt)
>
> (1) Isolating hypervisor thread.
>As already discussed, hypervisor threads are isolated by cpuset. But their
> purpose
>
Hello,
On Mon, Aug 24, 2015 at 04:06:39PM -0700, Paul Turner wrote:
> > This is an erratic behavior on cpuset's part tho. Nothing else
> > behaves this way and it's borderline buggy.
>
> It's actually the only sane possible interaction here.
>
> If you don't overwrite the masks you can no longe
Hello, Paul.
On Mon, Aug 24, 2015 at 04:15:59PM -0700, Paul Turner wrote:
> > Hmmm... if that's the case, would limiting iops on those IO devices
> > (or classes of them) work? qemu already implements IO limit mechanism
> > after all.
>
> No.
>
> 1) They should proceed at the maximum rate that
On Tue, Aug 25, 2015 at 11:24:42AM +0200, Ingo Molnar wrote:
>
> * Paul Turner wrote:
>
> > > Anyways, a point here is that threads of the same process competing
> > > isn't a new problem. There are many ways to make those threads play
> > > nice as the application itself often has to be involv
* Paul Turner wrote:
> > Anyways, a point here is that threads of the same process competing
> > isn't a new problem. There are many ways to make those threads play
> > nice as the application itself often has to be involved anyway,
> > especially for something like qemu which is heavily involv
On 2015/08/25 8:15, Paul Turner wrote:
On Mon, Aug 24, 2015 at 3:49 PM, Tejun Heo wrote:
Hello,
On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
Hmm... I was hoping for actual configurations and usage scenarios.
Preferably something people can set up and play with.
This is mu
On Mon, Aug 24, 2015 at 3:49 PM, Tejun Heo wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
>> > Hmm... I was hoping for actual configurations and usage scenarios.
>> > Preferably something people can set up and play with.
>>
>> This is much easier to set up and
On Mon, Aug 24, 2015 at 3:19 PM, Tejun Heo wrote:
> Hey,
>
> On Mon, Aug 24, 2015 at 02:58:23PM -0700, Paul Turner wrote:
>> > Why isn't it? Because the programs themselves might try to override
>> > it?
>>
>> The major reasons are:
>>
>> 1) Isolation. Doing everything with sched_setaffinity mea
Hello,
On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
> > Hmm... I was hoping for actual configurations and usage scenarios.
> > Preferably something people can set up and play with.
>
> This is much easier to set up and play with synthetically. Just
> create the 10 threads and
Hey,
On Mon, Aug 24, 2015 at 02:58:23PM -0700, Paul Turner wrote:
> > Why isn't it? Because the programs themselves might try to override
> > it?
>
> The major reasons are:
>
> 1) Isolation. Doing everything with sched_setaffinity means that
> programs can use arbitrary resources if they desire.
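[ For contrast with the isolation point above, a minimal sketch of the
per-thread sched_setaffinity() call; illustrative only, not from the
thread. Nothing stops the program from later widening its own mask, which
is exactly what an externally imposed cpuset can prevent. ]

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(2, &set);		/* allow CPU 2 only (arbitrary choice) */

	/* 0 targets the calling thread; a TID from gettid() would target
	 * another thread.  Nothing prevents the program from calling this
	 * again later with a wider mask. */
	if (sched_setaffinity(0, sizeof(set), &set) < 0)
		perror("sched_setaffinity");

	return 0;
}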
On Mon, Aug 24, 2015 at 2:40 PM, Tejun Heo wrote:
> On Mon, Aug 24, 2015 at 02:19:29PM -0700, Paul Turner wrote:
>> > Would it be possible for you to give realistic and concrete examples?
>> > I'm not trying to play down the use cases but concrete examples are
>> > usually helpful at putting thing
On Mon, Aug 24, 2015 at 2:36 PM, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Aug 24, 2015 at 01:52:01PM -0700, Paul Turner wrote:
>> We typically share our machines between many jobs, these jobs can have
>> cores that are "private" (and not shared with other jobs) and cores
>> that are "shared" (g
On Mon, Aug 24, 2015 at 02:19:29PM -0700, Paul Turner wrote:
> > Would it be possible for you to give realistic and concrete examples?
> > I'm not trying to play down the use cases but concrete examples are
> > usually helpful at putting things in perspective.
>
> I don't think there's anything th
Hello, Paul.
On Mon, Aug 24, 2015 at 01:52:01PM -0700, Paul Turner wrote:
> We typically share our machines between many jobs, these jobs can have
> cores that are "private" (and not shared with other jobs) and cores
> that are "shared" (general purpose cores accessible to all jobs on the
> same machine).
On Mon, Aug 24, 2015 at 2:17 PM, Tejun Heo wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 02:10:17PM -0700, Paul Turner wrote:
>> Suppose that we have 10 vcpu threads and 100 support threads.
>> Suppose that we want the support threads to receive up to 10% of the
>> time available to the VM as a whole on that machine.
Hello,
On Mon, Aug 24, 2015 at 02:10:17PM -0700, Paul Turner wrote:
> Suppose that we have 10 vcpu threads and 100 support threads.
> Suppose that we want the support threads to receive up to 10% of the
> time available to the VM as a whole on that machine.
>
> If I have one particular support th
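[ A rough sketch of how the 10-vcpu / 100-support-thread split above can be
expressed today with the v1 cpu controller; the mount point, group names
and numbers are illustrative assumptions, not from the thread. The VM group
is allowed up to 10 CPUs of time and the support group is capped at 1 CPU,
roughly 10% of that allowance; in v1, writing a TID to a group's "tasks"
file moves just that thread, which is the in-process placement being
discussed. ]

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Write a single value into a cgroup control file. */
static void put(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) { perror(path); return; }
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* Group names and mount point are illustrative. */
	mkdir("/sys/fs/cgroup/cpu/vm", 0755);
	mkdir("/sys/fs/cgroup/cpu/vm/vcpus", 0755);
	mkdir("/sys/fs/cgroup/cpu/vm/support", 0755);

	/* 100ms period; the VM as a whole may use up to 10 CPUs worth. */
	put("/sys/fs/cgroup/cpu/vm/cpu.cfs_period_us", "100000");
	put("/sys/fs/cgroup/cpu/vm/cpu.cfs_quota_us", "1000000");

	/* The support group together is capped at 1 CPU, ~10% of that. */
	put("/sys/fs/cgroup/cpu/vm/support/cpu.cfs_period_us", "100000");
	put("/sys/fs/cgroup/cpu/vm/support/cpu.cfs_quota_us", "100000");

	/*
	 * Individual threads are then placed by TID, e.g.:
	 *   echo <vcpu tid>    > /sys/fs/cgroup/cpu/vm/vcpus/tasks
	 *   echo <support tid> > /sys/fs/cgroup/cpu/vm/support/tasks
	 */
	return 0;
}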
On Mon, Aug 24, 2015 at 2:12 PM, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Aug 24, 2015 at 02:00:54PM -0700, Paul Turner wrote:
>> > Hmmm... I'm trying to understand the usecases where having hierarchy
>> > inside a process are actually required so that we don't end up doing
>> > something compl
Hello, Paul.
On Mon, Aug 24, 2015 at 02:00:54PM -0700, Paul Turner wrote:
> > Hmmm... I'm trying to understand the usecases where having hierarchy
> > inside a process are actually required so that we don't end up doing
> > something complex unnecessarily. So far, it looks like an easy
> > altern
On Mon, Aug 24, 2015 at 2:02 PM, Tejun Heo wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 01:54:08PM -0700, Paul Turner wrote:
>> > That alone doesn't require hierarchical resource distribution tho.
>> > Setting nice levels reasonably is likely to alleviate most of the
>> > problem.
>>
>> Nice is not
Hello,
On Mon, Aug 24, 2015 at 01:54:08PM -0700, Paul Turner wrote:
> > That alone doesn't require hierarchical resource distribution tho.
> > Setting nice levels reasonably is likely to alleviate most of the
> > problem.
>
> Nice is not sufficient here. There could be arbitrarily many threads
>
On Mon, Aug 24, 2015 at 1:25 PM, Tejun Heo wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 04:00:49PM -0400, Austin S Hemmelgarn wrote:
>> >That alone doesn't require hierarchical resource distribution tho.
>> >Setting nice levels reasonably is likely to alleviate most of the
>> >problem.
>>
>
On Mon, Aug 24, 2015 at 10:04 AM, Tejun Heo wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
>> >Just to learn more, what sort of hypervisor support threads are we
>> >talking about? They would have to consume considerable amount of cpu
>> >cycles f
On Sat, Aug 22, 2015 at 11:29 AM, Tejun Heo wrote:
> Hello, Paul.
>
> On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
> ...
>> A very concrete example of the above is a virtual machine in which you
>> want to guarantee scheduling for the vCPU threads which must schedule
>> beside many
Hello, Austin.
On Mon, Aug 24, 2015 at 04:00:49PM -0400, Austin S Hemmelgarn wrote:
> >That alone doesn't require hierarchical resource distribution tho.
> >Setting nice levels reasonably is likely to alleviate most of the
> >problem.
>
> In the cases I've dealt with this myself, nice levels didn'
On 2015-08-24 13:04, Tejun Heo wrote:
Hello, Austin.
On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
Just to learn more, what sort of hypervisor support threads are we
talking about? They would have to consume considerable amount of cpu
cycles for problems like this to be
On Mon, 2015-08-24 at 13:04 -0400, Tejun Heo wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
> > >Just to learn more, what sort of hypervisor support threads are we
> > >talking about? They would have to consume considerable amount of cpu
> > >cycl
Hello, Austin.
On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
> >Just to learn more, what sort of hypervisor support threads are we
> >talking about? They would have to consume considerable amount of cpu
> >cycles for problems like this to be relevant and be dynamic in numbe
On 2015-08-22 14:29, Tejun Heo wrote:
Hello, Paul.
On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
...
A very concrete example of the above is a virtual machine in which you
want to guarantee scheduling for the vCPU threads which must schedule
beside many hypervisor support threads
Hello, Paul.
On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
...
> A very concrete example of the above is a virtual machine in which you
> want to guarantee scheduling for the vCPU threads which must schedule
> beside many hypervisor support threads. A hierarchy is the only way
> t
On Tue, Aug 18, 2015 at 1:31 PM, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
>> > 2) Control within an address-space. For subsystems with fungible
>> > resources,
>> > e.g. CPU, it can be useful for an address space to partition its own
>> > t
On Thu, 2015-08-20 at 00:52 -0700, Tejun Heo wrote:
> Hmmm... I think this discussion got pretty badly derailed at this
> point. If I'm not mistaken, you're talking about tens or a few
> hundred millisecs of latency per migration which no longer exists and
> won't ever come back and the discussio
Hey, Mike.
On Thu, Aug 20, 2015 at 06:00:59AM +0200, Mike Galbraith wrote:
> If create/attach/detach/destroy aren't hot paths, what is? Those are
> fork/exec/exit cgroup analogs. If you have thousands upon thousands of
Things like page faults? cgroup controllers hook into subsystems and
their
On Wed, 2015-08-19 at 09:41 -0700, Tejun Heo wrote:
> Most problems can be solved in different ways and I'm doubtful that
> e.g. bouncing jobs to worker threads would be more expensive than
> migrating the worker back and forth in a lot of cases. If migrating
> threads around floats somebody's bo
Hello, Mike.
On Wed, Aug 19, 2015 at 05:23:40AM +0200, Mike Galbraith wrote:
> Hm. I know of a big data outfit to which attach/detach performance was
> important enough for them to have plucked an old experimental overhead
> reduction hack (mine) off lkml, and shipped it. It must have mattered a
Hello, Kame.
On Wed, Aug 19, 2015 at 08:39:43AM +0900, Kamezawa Hiroyuki wrote:
> An actual per-thread use case in our customers is qemu-kvm + cpuset.
> customers pin each vcpus and qemu-kvm's worker threads to cpus.
> For example, pinning 4 vcpus to cpu 2-6 and pinning qemu main thread and
> othe
On Tue, 2015-08-18 at 13:31 -0700, Tejun Heo wrote:
> So, this is a trade-off we're consciously making. If there are
> common-enough use cases which require jumping across different cgroup
> domains, we'll try to figure out a way to accomodate those but by
> default migration is a very cold and e
On 2015/08/19 5:31, Tejun Heo wrote:
Hello, Paul.
On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
2) Control within an address-space. For subsystems with fungible resources,
e.g. CPU, it can be useful for an address space to partition its own
threads. Losing the capability to do
Hello, Paul.
On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
> > 2) Control within an address-space. For subsystems with fungible resources,
> > e.g. CPU, it can be useful for an address space to partition its own
> > threads. Losing the capability to do this against the CPU control
Apologies for the repeat. Gmail ate its plain text setting for some
reason. Shame bells.
On Mon, Aug 17, 2015 at 9:02 PM, Paul Turner wrote:
>
>
> On Wed, Aug 5, 2015 at 7:31 AM, Tejun Heo wrote:
>>
>> Hello,
>>
>> On Wed, Aug 05, 2015 at 11:10:36AM +0200, Peter Zijlstra wrote:
>> > > I've bee
Hello, Peter.
Do we have an agreement on the sched changes?
Thanks a lot.
--
tejun
Hello,
On Wed, Aug 05, 2015 at 11:10:36AM +0200, Peter Zijlstra wrote:
> > I've been thinking about it and I'm now convinced that cgroups just is
> > the wrong interface to require each application to be programming
> > against.
>
> But people are doing it. So you must give them something. You ca
On Tue, Aug 04, 2015 at 11:10:17AM -0400, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Aug 04, 2015 at 11:07:11AM +0200, Peter Zijlstra wrote:
> > What about the unified hierarchy stuff cannot deal with per-task
> > controllers?
> >
> > _That_ was the biggest problem from what I can remember, and I see no
> > proposed resolution for that here.
Hello, Peter.
On Tue, Aug 04, 2015 at 11:07:11AM +0200, Peter Zijlstra wrote:
> What about the unified hierarchy stuff cannot deal with per-task
> controllers?
>
> _That_ was the biggest problem from what I can remember, and I see no
> proposed resolution for that here.
I've been thinking about it and I'm now convinced that cgroups just is
the wrong interface to require each application to be programming
against.
On Mon, Aug 03, 2015 at 06:41:29PM -0400, Tejun Heo wrote:
> While the cpu controller doesn't have any functional problems, there
> are a couple interface issues which can be addressed in the v2
> interface.
>
> * cpuacct being a separate controller. This separation is artificial
> and rather pointless as demonstrated by most use cases co-mounting the
> two controllers.
While the cpu controller doesn't have any functional problems, there
are a couple interface issues which can be addressed in the v2
interface.
* cpuacct being a separate controller. This separation is artificial
and rather pointless as demonstrated by most use cases co-mounting
the two controllers.