Hello, Marcelo.

On Wed, Jan 06, 2016 at 10:46:15AM -0200, Marcelo Tosatti wrote:
> Well, I suppose cgroups has facilities to handle this? That is, what is
> required is:

No, it doesn't.

> On task creation, move the new task to a particular cgroup, based on
> some visible characteristic of the task [...]
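For concreteness, a minimal sketch of what such a move amounts to from
userspace with cgroup v1 - assuming a hierarchy mounted at /sys/fs/cgroup
and an already-created group named "important" (both are assumptions for
illustration). The sticking point is that nothing does this automatically
on task creation; some userspace agent has to notice the new task and
write its PID:

    /* Sketch only: migrate a task into a cgroup by writing its PID to
     * the group's "tasks" file (cgroup v1). Group name and mount point
     * are assumed for the example. */
    #include <stdio.h>
    #include <unistd.h>

    static int move_to_cgroup(const char *group, pid_t pid)
    {
            char path[256];

            snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/tasks", group);
            FILE *f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%d\n", (int)pid);
            return fclose(f);
    }

    int main(void)
    {
            /* Move ourselves; a supervisor would do this on fork/exec
             * events for tasks matching some visible characteristic. */
            return move_to_cgroup("important", getpid()) ? 1 : 0;
    }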
Marcelo,

On Mon, 4 Jan 2016, Marcelo Tosatti wrote:
> On Thu, Dec 31, 2015 at 11:30:57PM +0100, Thomas Gleixner wrote:
> > I don't have an idea how that would look like. The current structure is a
> > cgroups based hierarchy oriented approach, which does not allow simple
> > things like
> > [...]
> > > [...] same meaning on all sockets and restrict it to per task partitioning."
> > >
> > > Yes, that's the issue we hit, that is the modification that was agreed
> > > with Intel, and that's what we are waiting for them to post.
> >
> > How do you i[...]
> "It would even be sufficient for particular use cases to just associate a
> piece of cache to a given CPU and do not bother with tasks at all."
>
> as a "simple" modification to (*1) ?

As noted above.
> I described a directory structure for that qos/cat stuff in my proposal and
> that's complete AFAICT.

Ok, let's make the job for the submitter easier. You are the maintainer,
so you decide.

Is it enough for you to have (*2) (which was agreed with Intel), or
would you rather prefer to integrate the directory structure at
"[RFD] CAT user space interface revisited"? [...]
Marcelo,

On Wed, 23 Dec 2015, Marcelo Tosatti wrote:
> On Tue, Dec 22, 2015 at 06:12:05PM +, Yu, Fenghua wrote:
> > > From: Thomas Gleixner [mailto:t...@linutronix.de]
> > >
> > > I was not able to identify any existing infrastructure where this really
> > > fits in. I chose a directory [...]
On Wed, Nov 18, 2015 at 10:01:54PM -0200, Marcelo Tosatti wrote:
> > tglx
>
> Again: you don't need to look into the MSR table and relate it
> to tasks if you store the data as:
>
> task group 1 = {
>         reservation-1 = {size = 80Kb, type = data, socketmask = 0xff[...]
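The sketch is cut off above, but a rough C rendering of the layout may
make the argument clearer - names, types and field sizes are invented for
the example, since the thread never fixes them:

    /* Illustrative only: the "task group / reservation" layout above
     * expressed as C structures. All names and types are assumptions
     * made for the example. */
    #include <stddef.h>
    #include <stdint.h>

    enum reservation_type { RES_DATA, RES_CODE, RES_UNIFIED };

    struct cache_reservation {
            size_t size;                /* bytes, e.g. 80 * 1024 */
            enum reservation_type type; /* code/data split requires CDP */
            uint64_t socketmask;        /* sockets the reservation covers */
    };

    struct task_group {
            struct cache_reservation *reservations;
            unsigned int nr_reservations;
    };

    static struct cache_reservation reservation_1 = {
            .size       = 80 * 1024,
            .type       = RES_DATA,
            .socketmask = 0xff,
    };

    static struct task_group task_group_1 = {
            .reservations    = &reservation_1,
            .nr_reservations = 1,
    };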
On Wed, Nov 18, 2015 at 07:25:03PM +0100, Thomas Gleixner wrote:
>
> Let's look at partitioning itself. We have two options:
>
>    1) Per task partitioning
>
>    2) Per CPU partitioning
>
> So far we only talked about #1, but I think that #2 has a value as
> well. Let me give you a simple exa[...]
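Thomas's example is cut off here, but to make #2 concrete: per CPU
partitioning ultimately reduces to two MSR writes - programming a capacity
bitmask for a class of service (COS), and binding the CPU to that COS. A
sketch via the msr(4) device node; the MSR addresses follow the SDM
(IA32_PQR_ASSOC = 0xc8f, IA32_L3_QOS_MASK_n = 0xc90 + n), while the CPU
number, COS id and bitmask are arbitrary examples:

    /* Sketch only: what per-CPU partitioning boils down to at the
     * hardware level, driven from userspace through /dev/cpu/N/msr
     * (requires root and the msr module). */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_PQR_ASSOC      0xc8f
    #define IA32_L3_QOS_MASK(n) (0xc90 + (n))

    static int wrmsr_on_cpu(int cpu, uint32_t msr, uint64_t val)
    {
            char path[64];
            int fd, ok;

            snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
            fd = open(path, O_WRONLY);
            if (fd < 0)
                    return -1;
            ok = pwrite(fd, &val, sizeof(val), msr) == sizeof(val);
            close(fd);
            return ok ? 0 : -1;
    }

    int main(void)
    {
            /* COS 1 gets the low 4 ways of L3 (mask registers are
             * socket-wide; writing via CPU 3 programs its socket)... */
            if (wrmsr_on_cpu(3, IA32_L3_QOS_MASK(1), 0xf))
                    return 1;
            /* ...and CPU 3 is bound to COS 1 (bits 63:32 of PQR_ASSOC).
             * Any task running there now fills only those 4 ways. */
            return wrmsr_on_cpu(3, IA32_PQR_ASSOC, (uint64_t)1 << 32) ? 1 : 0;
    }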
On Thu, 19 Nov 2015 09:35:34 +0100 (CET), Thomas Gleixner wrote:
> > Well, any work on behalf of the important task should have its cache
> > protected as well (example: irq handling threads).
>
> Right, but that's nothing you can do automatically and certainly not
> from a random application.
[...]
On Wed, 18 Nov 2015, Marcelo Tosatti wrote:
> Actually, there is a point that is useful: you might want the important
> application to share the L3 portion with HW (that HW DMAs into), and
> have only the application and the HW use that region.
>
> So it's a good point that controlling the exact pos[...]
On Wed, 18 Nov 2015, Marcelo Tosatti wrote:
> On Wed, Nov 18, 2015 at 07:25:03PM +0100, Thomas Gleixner wrote:
> > So now to the interface part. Unfortunately we need to expose this
> > very close to the hardware implementation as there are really no
> > abstractions which allow us to express the v[...]
On Wed, 18 Nov 2015, Marcelo Tosatti wrote:
> On Wed, Nov 18, 2015 at 08:34:07PM -0200, Marcelo Tosatti wrote:
> > On Wed, Nov 18, 2015 at 07:25:03PM +0100, Thomas Gleixner wrote:
> > > Assume that you have isolated a CPU and run your important task on
> > > it. You give that task a slice of cache. [...]
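The setup Thomas describes, minus the cache-slice assignment itself
(which is exactly the interface under discussion), is plain CPU isolation
plus affinity - a sketch, with CPU 3 and the boot parameter as assumed
examples:

    /* Sketch only: pin the "important task" to an isolated CPU. Assumes
     * the machine was booted with isolcpus=3; the cache slice itself
     * would be assigned through whatever interface this thread settles
     * on, so it is not shown. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(3, &set);
            if (sched_setaffinity(0, sizeof(set), &set)) {
                    perror("sched_setaffinity");
                    return 1;
            }
            /* From here on this task competes for its cache slice only
             * with interrupts and kernel threads still allowed on CPU 3. */
            return 0;
    }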
Marcelo,

On Wed, 18 Nov 2015, Marcelo Tosatti wrote:

Can you please trim your replies? It's really annoying having to
search for a single line of reply.

> The cgroups interface works, but moves the problem of contiguous
> allocation to userspace, and is incompatible with cache allocations
> on d[...]
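The contiguity problem Marcelo mentions is a hardware constraint: the SDM
requires a capacity bitmask to be a single contiguous run of set bits, so
whoever hands out masks has to cope with fragmentation. A small validity
check makes the constraint concrete (the helper name is mine):

    /* CAT capacity bitmasks must be one contiguous run of 1s (an SDM
     * requirement). A mask is valid iff adding its lowest set bit
     * yields a power of two, i.e. the run fills upward cleanly. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool valid_cbm(uint64_t mask)
    {
            uint64_t filled;

            if (!mask)
                    return false;
            filled = mask + (mask & -mask);   /* add lowest set bit */
            return (filled & (filled - 1)) == 0;
    }
    /* valid_cbm(0x0f) == true, valid_cbm(0x3c) == true,
     * valid_cbm(0x05) == false: 0x05 has a hole. */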
On Wed, 18 Nov 2015 19:25:03 +0100 (CET), Thomas Gleixner wrote:
> We really need to make this as configurable as possible from userspace
> without imposing random restrictions to it. I played around with it on
> my new intel toy and the restriction to 16 COS ids (that's 8 with CDP
> enabled) make[...]
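The halving is because CDP (Code and Data Prioritization) splits each
class of service into a code mask and a data mask, consuming two of the
16 mask MSRs per COS - illustrative arithmetic only:

    /* Why CDP halves the COS count: each COS then needs two mask MSRs
     * (one for code fetches, one for data), so the same 16 mask
     * registers yield only 8 classes of service. */
    #include <stdio.h>

    int main(void)
    {
            int mask_msrs = 16;            /* COS ids without CDP */
            int msrs_per_cos_with_cdp = 2; /* code mask + data mask */

            printf("COS ids: %d plain, %d with CDP\n",
                   mask_msrs, mask_msrs / msrs_per_cos_with_cdp);
            return 0;
    }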
Folks!

After rereading the mail flood on CAT and staring into the SDM for a
while, I think we all should sit back and look at it from scratch
again w/o our preconceptions - I certainly had to put my own away.

Let's look at the properties of CAT again:

- It's a per socket facility
- CAT sl[...]