Hello,
On Fri, May 07, 2021 at 06:54:13PM +0200, Daniel Vetter wrote:
> All I meant is that for the container/cgroups world starting out with
> time-sharing feels like the best fit, least because your SRIOV designers
> also seem to think that's the best first cut for cloud-y computing.
> Whether i
Hello,
On Fri, May 07, 2021 at 03:55:39PM -0400, Alex Deucher wrote:
> The problem is temporal partitioning on GPUs is much harder to enforce
> unless you have a special case like SR-IOV. Spatial partitioning, on
> AMD GPUs at least, is widely available and easily enforced. What is
> the point o
Hello,
On Fri, May 07, 2021 at 06:30:56PM -0400, Alex Deucher wrote:
> Maybe we are speaking past each other. I'm not following. We got
> here because a device specific cgroup didn't make sense. With my
> Linux user hat on, that makes sense. I don't want to write code to a
> bunch of device sp
On Tue, Mar 29, 2022 at 10:42:20AM +0200, Daniel Vetter wrote:
> Hm I just realized ... are the names in the groups abi? If yes then I
> think we need to fix this before we merge anything.
Yes.
Thanks.
--
tejun
Hello,
On Mon, Mar 28, 2022 at 03:59:41AM +0000, T.J. Mercier wrote:
> The API/UAPI can be extended to set per-device/total allocation limits
> in the future.
This total thing kinda bothers me. Can you please provide some concrete
examples of how this and per-device limits would be used?
Thanks.
On Wed, Feb 23, 2022 at 10:20:47PM +0100, Marek Szyprowski wrote:
> Hi All,
>
> On 17.02.2022 12:22, Tetsuo Handa wrote:
> > syzbot found a circular locking dependency which is caused by flushing
> > system_long_wq WQ [1]. Tejun Heo commented that it makes no sen
Hello,
On Wed, May 01, 2019 at 10:04:33AM -0400, Brian Welty wrote:
> The patch series enables device drivers to use cgroups to control the
> following resources within a GPU (or other accelerator device):
> * control allocation of device memory (reuse of memcg)
> and with future work, we could e
Hello,
On Tue, May 07, 2019 at 12:50:50PM -0700, Welty, Brian wrote:
> There might still be merit in having a 'device mem' cgroup controller.
> The resource model at least is then no longer mixed up with host memory.
> RDMA community seemed to have some interest in a common controller at
> least f
Hello,
I haven't gone through the patchset yet but some quick comments.
On Wed, May 15, 2019 at 10:29:21PM -0400, Kenny Ho wrote:
> Given this controller is specific to the drm kernel subsystem which
> uses minor to identify drm device, I don't see a need to complicate
> the interfaces more by ha
Hello,
On Fri, Jun 14, 2019 at 04:08:33PM +0100, Chris Wilson wrote:
> #ifdef CONFIG_MEMCG
> if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
> struct kmem_cache *c;
>
> mutex_lock(&slab_mutex);
>
> so it happens to hit the error + FULL case with
Hello,
On Fri, Feb 11, 2022 at 04:18:23PM +0000, T.J. Mercier wrote:
> The GPU/DRM cgroup controller came into being when a consensus[1]
> was reached that the resources it tracked were unsuitable to be integrated
> into memcg. Originally, the proposed controller was specific to the DRM
> subsyste
Hello,
On Wed, Apr 20, 2022 at 11:52:19PM +0000, T.J. Mercier wrote:
> From: Hridya Valsaraju
>
> This patch adds a proposal for a new GPU cgroup controller for
> accounting/limiting GPU and GPU-related memory allocations.
> The proposed controller is based on the DRM cgroup controller[1] and
>
On Fri, Aug 12, 2022 at 04:26:47PM -0400, Felix Kuehling wrote:
> Hi workqueue maintainers,
>
> In the KFD (amdgpu) driver we found a need to schedule bottom half interrupt
> handlers on CPU cores different from the one where the top-half interrupt
> handler runs to avoid the interrupt handler sta
Hello,
On Fri, Aug 12, 2022 at 04:54:04PM -0400, Felix Kuehling wrote:
> In principle, I think IRQ routing to CPUs can change dynamically with
> irqbalance.
I wonder whether this is something which should be exposed to userland
rather than trying to do dynamically in the kernel and let irqbalance
Hello,
Just took a look out of curiosity.
On Thu, May 12, 2022 at 02:25:57PM +0900, Byungchul Park wrote:
> PROCESS A PROCESS B WORKER C
>
> __do_sys_reboot()
> __do_sys_reboot()
> mutex_lock(&system_transition_mutex)
> ... mutex_lock(&system_transition_mutex)
Hello,
On Thu, May 12, 2022 at 08:18:24PM +0900, Byungchul Park wrote:
> > 1. wait_for_completion_killable_timeout() doesn't need someone to wake it up
> >to make forward progress because it will unstick itself after timeout
> >expires.
>
> I have a question about this one. Yes, it would
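For reference, a minimal sketch of the self-unsticking behaviour described
in point 1 above (the completion, timeout and helper name are illustrative):

	static int wait_for_done(struct completion *done)
	{
		long ret = wait_for_completion_killable_timeout(done,
						msecs_to_jiffies(5000));
		if (ret > 0)
			return 0;		/* completed; ret was jiffies left */
		if (ret == 0)
			return -ETIMEDOUT;	/* no waker needed, unstuck itself */
		return ret;			/* fatal signal arrived first */
	}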
Hello,
On Thu, May 12, 2022 at 08:43:52PM -0700, T.J. Mercier wrote:
> > I'm actually happy I've asked this question, wasn't silly after all. I
> > think the
> > problem here is a naming issue. What you really are monitor is "video
> > memory",
> > which consist of a memory segment allocated to
Hello,
On Tue, May 17, 2022 at 04:30:29PM -0700, T.J. Mercier wrote:
> Thanks for your suggestion. This almost works. "dmabuf" as a key could
> work, but I'd actually like to account for each heap. Since heaps can
> be dynamically added, I can't accommodate every potential heap name by
> hardcodin
Hello,
On Wed, Mar 30, 2022 at 01:56:09PM -0700, T.J. Mercier wrote:
> The use case we have for accounting the total (separate from the
> individual devices) is to include the value as part of bugreports, for
> understanding the system-wide amount of dmabuf allocations. I'm not
> aware of an exist
On Mon, Nov 04, 2019 at 05:08:47PM -0500, Brian Welty wrote:
> + gpuset.units
> + gpuset.units.effective
> + gpuset.units.partition
> +
> + gpuset.mems
> + gpuset.mems.effective
> + gpuset.mems.partition
> +
> + sched.max
> + sched.stats
> + sched.weight
> + sched.weight.nice
> +
> + mem
Hello,
On Tue, Nov 05, 2019 at 04:08:22PM -0800, Brian Welty wrote:
> I was more interested in hearing your thoughts on whether you like
> the approach to have a set of controls that are consistent with
> some subset of the existing CPU/MEM ones. Any feedback on this?
> Didn't really mean to sug
Hello, Kenny, Daniel.
(cc'ing Johannes)
On Fri, Feb 14, 2020 at 01:51:32PM -0500, Kenny Ho wrote:
> On Fri, Feb 14, 2020 at 1:34 PM Daniel Vetter wrote:
> >
> > I think guidance from Tejun in previos discussions was pretty clear that
> > he expects cgroups to be both a) standardized and c) suffi
On Fri, Feb 14, 2020 at 03:28:40PM -0500, Kenny Ho wrote:
> Can you elaborate, per your understanding, how the lgpu weight
> attribute differ from the io.weight you suggested? Is it merely a
Oh, it's the non-weight part which is problematic.
> formatting/naming issue or is it the implementation
On Fri, Nov 29, 2019 at 01:00:36AM -0500, Kenny Ho wrote:
> On Tue, Oct 1, 2019 at 10:31 AM Michal Koutný wrote:
> > On Thu, Aug 29, 2019 at 02:05:19AM -0400, Kenny Ho wrote:
> > > +struct cgroup_subsys drm_cgrp_subsys = {
> > > + .css_alloc = drmcg_css_alloc,
> > > + .css_free
Hello,
I just glanced through the interface and don't have enough context to
give any kind of detailed review yet. I'll try to read up and
understand more and would greatly appreciate if you can give me some
pointers to read up on the resources being controlled and how the
actual use cases would
Hello, Daniel.
On Tue, Sep 03, 2019 at 09:55:50AM +0200, Daniel Vetter wrote:
> > * While breaking up and applying control to different types of
> > internal objects may seem attractive to folks who work day in and
> > day out with the subsystem, they aren't all that useful to users and
> >
Hello, Daniel.
On Tue, Sep 03, 2019 at 09:48:22PM +0200, Daniel Vetter wrote:
> I think system memory separate from vram makes sense. For one, vram is
> like 10x+ faster than system memory, so we definitely want to have
> good control on that. But maybe we only want one vram bucket overall
> for t
Hello,
On Wed, Sep 04, 2019 at 10:54:34AM +0200, Daniel Vetter wrote:
> Anyway, I don't think reusing the drm_minor registration makes sense,
> since we want to be on the drm_device, not on the minor. Which is a bit
> awkward for cgroups, which wants to identify devices using major.minor
> pairs.
Hello, Daniel.
On Fri, Sep 06, 2019 at 05:36:02PM +0200, Daniel Vetter wrote:
> Block devices are a great example I think. How do you handle the
> partitions on that? For drm we also have a main minor interface, and
cgroup IO controllers only distribute hardware IO capacity and are
blind to parti
Hello, Daniel.
On Fri, Sep 06, 2019 at 05:34:16PM +0200, Daniel Vetter wrote:
> > Hmm... what'd be the fundamental difference from slab or socket memory
> > which are handled through memcg? Is system memory used by GPUs have
> > further global restrictions in addition to the amount of physical
>
Hello, Michal.
On Tue, Sep 10, 2019 at 01:54:48PM +0200, Michal Hocko wrote:
> > So, while it'd great to have shrinkers in the longer term, it's not a
> > strict requirement to be accounted in memcg. It already accounts a
> > lot of memory which isn't reclaimable (a lot of slabs and socket
> > bu
Hello,
On Thu, Jan 26, 2023 at 02:00:50PM +0100, Michal Koutný wrote:
> On Wed, Jan 25, 2023 at 06:11:35PM +0000, Tvrtko Ursulin
> wrote:
> > I don't immediately see how you envisage the half-userspace implementation
> > would look like in terms of what functionality/new APIs would be provided b
On Thu, Jan 12, 2023 at 04:56:07PM +0000, Tvrtko Ursulin wrote:
...
> + /*
> + * 1st pass - reset working values and update hierarchical weights and
> + * GPU utilisation.
> + */
> + if (!__start_scanning(root, period_us))
> + goto out_retry; /*
> +
Hello,
On Thu, Feb 02, 2023 at 02:26:06PM +0000, Tvrtko Ursulin wrote:
> When you say active/inactive - to what you are referring in the cgroup
> world? Offline/online? For those my understanding was offline was a
> temporary state while css is getting destroyed.
Oh, it's just based on activity.
Hello,
On Wed, May 03, 2023 at 10:34:56AM +0200, Maarten Lankhorst wrote:
> RFC as I'm looking for comments.
>
> For long running compute, it can be beneficial to partition the GPU memory
> between cgroups, so each cgroup can use its maximum amount of memory without
> interfering with other sched
Hello, Tvrtko.
On Tue, Mar 14, 2023 at 02:18:54PM +0000, Tvrtko Ursulin wrote:
> DRM scheduling soft limits
> ~~
>
> Because of the heterogenous hardware and driver DRM capabilities, soft limits
> are implemented as a loose co-operative (bi-directional) interface between t
Hello,
On Wed, May 10, 2023 at 04:59:01PM +0200, Maarten Lankhorst wrote:
> The misc controller is not granular enough. A single computer may have any
> number of
> graphics cards, some of them with multiple regions of vram inside a single
> card.
Extending the misc controller to support dynami
Hello,
On Wed, Jul 12, 2023 at 12:45:56PM +0100, Tvrtko Ursulin wrote:
> +void drmcgroup_client_migrate(struct drm_file *file_priv)
> +{
> + struct drm_cgroup_state *src, *dst;
> + struct cgroup_subsys_state *old;
> +
> + mutex_lock(&drmcg_mutex);
> +
> + old = file_priv->__css;
>
On Wed, Jul 12, 2023 at 12:46:00PM +0100, Tvrtko Ursulin wrote:
> +DRM scheduling soft limits
> +~~
Please don't say soft limits for this. It means something different for
memcg, so it gets really confusing. Call it "weight based CPU time control"
and maybe call the trigger
On Wed, Jul 12, 2023 at 12:46:03PM +0100, Tvrtko Ursulin wrote:
> + drm.active_us
> + GPU time used by the group recursively including all child groups.
Maybe instead add drm.stat and have "usage_usec" inside? That'd be more
consistent with cpu side.
Thanks.
--
tejun
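For illustration, the suggested drm.stat would follow the cpu.stat layout,
e.g. (values hypothetical):

	$ cat drm.stat
	usage_usec 2345678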
On Fri, Jul 21, 2023 at 12:19:32PM -1000, Tejun Heo wrote:
> On Wed, Jul 12, 2023 at 12:46:03PM +0100, Tvrtko Ursulin wrote:
> > + drm.active_us
> > + GPU time used by the group recursively including all child groups.
>
> Maybe instead add drm.stat and have "usage_us
On Wed, Jul 12, 2023 at 12:46:04PM +0100, Tvrtko Ursulin wrote:
> $ cat drm.memory.stat
> card0 region=system total=12898304 shared=0 active=0 resident=12111872
> purgeable=167936
> card0 region=stolen-system total=0 shared=0 active=0 resident=0 purgeable=0
>
> Data is generated on demand f
Hello,
On Tue, Jul 25, 2023 at 03:08:40PM +0100, Tvrtko Ursulin wrote:
> > Also, shouldn't this be keyed by the drm device?
>
> It could have that too, or it could come later. Fun with GPUs that it not
> only could be keyed by the device, but also by the type of the GPU engine.
> (Which are a) ven
From aa6fde93f3a49e42c0fe0490d7f3711bac0d162e Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Mon, 17 Jul 2023 12:50:02 -1000
Subject: [PATCH] workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is
below 4000
wq_cpu_intensive_thresh_us is used to detect CPU-hogging per-cpu work it
Hello,
On Wed, Jul 26, 2023 at 12:14:24PM +0200, Maarten Lankhorst wrote:
> > So, yeah, if you want to add memory controls, we better think through how
> > the fd ownership migration should work.
>
> I've taken a look at the series, since I have been working on cgroup memory
> eviction.
>
> The s
Hello,
On Wed, Jul 26, 2023 at 05:44:28PM +0100, Tvrtko Ursulin wrote:
...
> > So, yeah, if you want to add memory controls, we better think through how
> > the fd ownership migration should work.
>
> It would be quite easy to make the implicit migration fail - just the matter
> of failing the fi
Hello,
On Tue, Jul 11, 2023 at 04:06:22PM +0200, Geert Uytterhoeven wrote:
> On Tue, Jul 11, 2023 at 3:55 PM Geert Uytterhoeven
> wrote:
> >
> > Hi Tejun,
> >
> > On Fri, May 12, 2023 at 9:54 PM Tejun Heo wrote:
> > > Workqueue now automatically marks
On Tue, Jul 11, 2023 at 11:39:17AM -1000, Tejun Heo wrote:
> On Tue, Jul 11, 2023 at 04:06:22PM +0200, Geert Uytterhoeven wrote:
> > On Tue, Jul 11, 2023 at 3:55 PM Geert Uytterhoeven
> > wrote:
...
> > workqueue: neigh_managed_work hogged CPU for >10000us 4 times,
On Wed, Jul 12, 2023 at 02:27:45PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 12, 2023 at 11:04:16AM +0200, Geert Uytterhoeven wrote:
> > Hoi Peter,
> >
> > On Wed, Jul 12, 2023 at 10:05 AM Peter Zijlstra
> > wrote:
> > > On Tue, Jul 11, 2023 at 11:39:17
Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Mon, 17 Jul 2023 12:50:02 -1000
Subject: [PATCH] workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is
below 1000
wq_cpu_intensive_thresh_us is used to detect CPU-hogging per-cpu work items.
Once detected, they're excluded from con
wer boundary to 4000 MIPS. The scaling is
still capped at 1s.
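A minimal sketch of the scaling described above, assuming a 10ms default
threshold and BogoMIPS derived from loops_per_jiffy; illustrative only, the
actual patch may differ in detail:

	static void __init wq_cpu_intensive_thresh_init(void)
	{
		unsigned long thresh = 10 * USEC_PER_MSEC;	/* 10ms default */
		unsigned long bogo;

		/* respect an explicit boot-time override */
		if (wq_cpu_intensive_thresh_us != ULONG_MAX)
			return;

		/* scale the threshold up on slow machines, capped at 1s */
		bogo = max_t(unsigned long, loops_per_jiffy / 500000 * HZ, 1);
		if (bogo < 4000)
			thresh = min_t(unsigned long, thresh * 4000 / bogo,
				       USEC_PER_SEC);

		wq_cpu_intensive_thresh_us = thresh;
	}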
From 8555cbd4b22e5f85eb2bdcb84fd1d1f519a0a0d3 Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Mon, 17 Jul 2023 12:50:02 -1000
Subject: [PATCH] workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is
below 4000
wq_cpu_intensiv
y these changes are safe and I think they're. It just needs
explanations.
> Signed-off-by: Bhaktipriya Shridhar
Other than that, Acked-by: Tejun Heo
Thanks.
--
tejun
rkqueue becomes empty.
>
> Hence flush_workqueue has been removed.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
Hello,
On Tue, Apr 21, 2020 at 02:34:59PM +0200, Daniel Vetter wrote:
> > > Also, of course, let me know if yu're not happy with the
> > > __kthread_queue_work() changes/kthread_worker usage in drm_vblank_work as
> > > well
> >
> > Just glanced over it and I still wonder whether it needs to be t
Hello,
On Fri, May 08, 2020 at 04:46:51PM -0400, Lyude Paul wrote:
> +bool kthread_queue_flush_work(struct kthread_work *work,
> + struct kthread_flush_work *fwork);
> +void __kthread_flush_work_fn(struct kthread_work *work);
As an exposed interface, this doesn't seem gr
On Fri, May 08, 2020 at 04:46:52PM -0400, Lyude Paul wrote:
> Add some simple wrappers around incrementing/decrementing
> kthread_work.cancelling under lock, along with checking whether queuing
> is currently allowed on a given kthread_work, which we'll use want to
> implement work cancelling with
On Tue, Mar 17, 2020 at 12:03:20PM -0400, Kenny Ho wrote:
> What's your thoughts on this latest series?
My overall impression is that the feedback isn't being incorporated thoroughly
/ sufficiently.
Thanks.
--
tejun
Hello, Kenny.
On Tue, Mar 24, 2020 at 02:49:27PM -0400, Kenny Ho wrote:
> Can you elaborate more on what are the missing pieces?
Sorry about the long delay, but I think we've been going in circles for quite
a while now. Let's try to make it really simple as the first step. How about
something lik
Hello,
On Mon, Apr 13, 2020 at 04:18:57PM -0400, Lyude Paul wrote:
> Hi Tejun! Sorry to bother you, but have you had a chance to look at any of
> this yet? Would like to continue moving this forward
Sorry, wasn't following this thread. Have you looked at kthread_worker?
https://git.kernel.org/
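A minimal sketch of the kthread_worker interface being pointed at above
(worker name, callback and helpers are illustrative, not drm_vblank_work
itself):

	static struct kthread_worker *vbl_worker;
	static struct kthread_work vbl_work;

	static void vbl_work_fn(struct kthread_work *work)
	{
		/* runs in the dedicated kernel thread's context */
	}

	static int vbl_worker_init(void)
	{
		vbl_worker = kthread_create_worker(0, "card0-vblank");
		if (IS_ERR(vbl_worker))
			return PTR_ERR(vbl_worker);

		kthread_init_work(&vbl_work, vbl_work_fn);
		kthread_queue_work(vbl_worker, &vbl_work);
		return 0;
	}

	static void vbl_worker_fini(void)
	{
		kthread_flush_work(&vbl_work);		/* wait for it to finish */
		kthread_destroy_worker(vbl_worker);	/* drains and stops the thread */
	}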
Hello,
On Mon, Apr 13, 2020 at 04:17:14PM -0400, Kenny Ho wrote:
> Perhaps we can even narrow things down to just
> gpu.weight/gpu.compute.weight as a start? In this aspect, is the key
That sounds great to me.
> objection to the current implementation of gpu.compute.weight the
> work-conserving
Hello,
On Mon, Apr 13, 2020 at 05:40:32PM -0400, Kenny Ho wrote:
> By lack of consense, do you mean Intel's assertion that a standard is
> not a standard until Intel implements it? (That was in the context of
> OpenCL language standard with the concept of SubDevice.) I thought
> the discussion so
Hello,
On Tue, Apr 14, 2020 at 12:52:51PM -0400, Lyude Paul wrote:
> Hi, thanks for the response! And yes-I think this would actually be perfect
> for what we need, I guess one question I might as well ask since I've got you
> here: would patches to expose an unlocked version of kthread_queue_work
Hello,
On Fri, Apr 17, 2020 at 04:16:28PM -0400, Lyude Paul wrote:
> Hey Tejun! So I ended up rewriting the drm_vblank_work stuff so that it used
> kthread_worker. Things seem to work alright now. But while we're doing just
> fine with vblank workers on nouveau, we're still having trouble meeting
Hello,
On Mon, Sep 21, 2020 at 11:21:54AM +0200, Daniel Vetter wrote:
> The part I don't like about this is that it all feels rather hacked
> together, and if we add more stuff (or there's some different thing in the
> system that also needs rt scheduling) then it doesn't compose.
>
> So question
On Sat, Jun 18, 2016 at 01:52:05PM +0530, Bhaktipriya Shridhar wrote:
> alloc_workqueue replaces deprecated create_workqueue().
>
> A dedicated workqueue has been used since the workqueue isr_workq is
> involved in irq handling path of block driver and requires forward
> progress under memory pres
On Mon, Jun 20, 2016 at 11:01:44AM -0400, Tejun Heo wrote:
> On Sat, Jun 18, 2016 at 01:52:05PM +0530, Bhaktipriya Shridhar wrote:
> > alloc_workqueue replaces deprecated create_workqueue().
> >
> > A dedicated workqueue has been used since the workqueue isr_workq is
> cancel_work_sync() has been used in _host1x_free_syncpt_irq() to ensure
> that no work is pending by the time exit path runs.
Alternatively, this could have used alloc_workqueue() w/o
WQ_MEM_RECLAIM and used it just as a flush domain. Either way is
fine.
Acked-by: Tejun Heo
Thanks.
--
tejun
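A sketch of the alternative mentioned above: a queue without WQ_MEM_RECLAIM
used purely as a flush domain so everything queued to it can be flushed
together on teardown (names are illustrative):

	static struct workqueue_struct *syncpt_wq;

	static int syncpt_wq_init(void)
	{
		/* not in a memory-reclaim path, so no WQ_MEM_RECLAIM needed */
		syncpt_wq = alloc_workqueue("host1x_syncpt", 0, 0);
		return syncpt_wq ? 0 : -ENOMEM;
	}

	static void syncpt_wq_fini(void)
	{
		/* flush the whole domain (destroy_workqueue() also drains it) */
		flush_workqueue(syncpt_wq);
		destroy_workqueue(syncpt_wq);
	}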
em is sync cancelled in dsicm_cancel_ulps_work() which is called
> in dsicm_remove() to ensure that there are no workitems pending when the
> driver is disconnected.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
tly specified and thus the increase of local concurrency
> shouldn't make any difference.
>
> flush_work() has been called in qxl_device_fini() to ensure that there
> are no pending tasks while disconnecting the driver.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
On Sat, Jul 02, 2016 at 04:33:50PM +0530, Bhaktipriya Shridhar wrote:
> alloc_workqueue replaces deprecated create_singlethread_workqueue().
>
> A dedicated workqueue has been used since work items need to be flushed
> as a group rather than individually.
>
> Since the flip_queue workqueue is inv
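A sketch of the kind of conversion being discussed, assuming the
single-threaded queue was only needed for ordering and for flushing its
items as a group (queue name illustrative):

	/* before: deprecated interface */
	wq = create_singlethread_workqueue("radeon-flip");

	/* after: one ordered, dedicated queue; flush_workqueue(wq) still
	 * flushes all of its work items as a group */
	wq = alloc_ordered_workqueue("radeon-flip", 0);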
Hello,
On Mon, Jul 04, 2016 at 12:58:32PM +0900, Michel Dänzer wrote:
> On 02.07.2016 22:46, Tejun Heo wrote:
> > On Sat, Jul 02, 2016 at 04:33:50PM +0530, Bhaktipriya Shridhar wrote:
> >> alloc_workqueue replaces deprecated create_singlethread_workqueue().
> >>
>
Hello, Michel.
On Wed, Jul 06, 2016 at 12:12:52PM +0900, Michel Dänzer wrote:
> There is an ordering requirement between the two queues, but it's
> enforced by the driver (by only queuing the unpin work once a flip has
> completed, which only happens after the corresponding flip work has run).
O
Hello,
On Fri, Jul 08, 2016 at 02:52:30PM +0900, Michel Dänzer wrote:
> On 07.07.2016 16:43, Christian König wrote:
> >>> Also, what kind of delays matter here? Is it millisec range or micro?
> >> It can be the latter in theory, but normally rather the former.
> >
> > Well to be precise with a
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
ems, explicit concurrency
> limit is unnecessary here.
>
> Signed-off-by: Bhaktipriya Shridhar
Acked-by: Tejun Heo
Thanks.
--
tejun
>
> Convert all direct write accesses to using the correct API.
>
> Signed-off-by: Russell King
Acked-by: Tejun Heo
The patch is pretty widely spread. I don't mind how it gets routed
but what's the plan?
Thanks.
--
tejun
On Fri, Sep 20, 2013 at 07:16:52AM -0500, Tejun Heo wrote:
> On Fri, Sep 20, 2013 at 12:11:38AM +0100, Russell King wrote:
> > The correct way for a driver to specify the coherent DMA mask is
> > not to directly access the field in the struct device, but to use
> > dma_set_c
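A minimal sketch of the conversion being described (the 32-bit mask is
illustrative):

	/* before: poking the field directly */
	dev->coherent_dma_mask = DMA_BIT_MASK(32);

	/* after: let the API validate the mask against the platform */
	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;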
Hey,
On Fri, Sep 20, 2013 at 03:00:18PM +0100, Russell King - ARM Linux wrote:
> Another would be if subsystem maintainers are happy that I carry them,
> I can add the acks, and then later on towards the end of the cycle,
> provide a branch subsystem maintainers could pull.
>
> Or... if you can t
Hello, Rasmus.
On Thu, Dec 08, 2016 at 02:22:55AM +0100, Rasmus Villemoes wrote:
> TL;DR: these patches save 250 KB of memory, with more low-hanging
> fruit ready to pick.
>
> While browsing through the lib/idr.c code, I noticed that the code at
> the end of ida_get_new_above() probably doesn't w
Hello,
On Fri, Dec 09, 2016 at 02:01:40PM -0800, Andrew Morton wrote:
> On Thu, 8 Dec 2016 02:22:55 +0100 Rasmus Villemoes rasmusvillemoes.dk> wrote:
>
> > TL;DR: these patches save 250 KB of memory, with more low-hanging
> > fruit ready to pick.
> >
> > While browsing through the lib/idr.c co
Hello, Matthew.
On Mon, Dec 12, 2016 at 05:35:17PM +0000, Matthew Wilcox wrote:
> I know the preload followed by preload_end looks wrong. I don't
> think it's broken though. If we get preempted, then the worst
> situation is that we'll end up with the memory we preallocated being
> allocated to
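For context, the sequence under discussion is the standard preload pattern,
roughly (idr, lock and object names are illustrative):

	int id;

	idr_preload(GFP_KERNEL);	/* preallocate, disables preemption */
	spin_lock(&obj_lock);
	id = idr_alloc(&obj_idr, obj, 0, 0, GFP_NOWAIT);
	spin_unlock(&obj_lock);
	idr_preload_end();		/* re-enables preemption */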
Hello,
On Tue, Nov 20, 2018 at 01:58:11PM -0500, Kenny Ho wrote:
> Since many parts of the DRM subsystem has vendor-specific
> implementations, we introduce mechanisms for vendor to register their
> specific resources and control files to the DRM cgroup subsystem. A
> vendor will register itself
Hello,
On Tue, Nov 20, 2018 at 10:21:14PM +0000, Ho, Kenny wrote:
> By this reply, are you suggesting that vendor specific resources
> will never be acceptable to be managed under cgroup? Let say a user
I wouldn't say never but whatever gets included as a cgroup
controller should have clea
never fails and only uses the return value
to indicate whether the work was already pending or not.
This misconversion triggered spurious error messages. Remove the now
unnecessary return value check and error message.
Signed-off-by: Tejun Heo
Reported-by: Markus Trippelsdorf
Cc: David Airlie
Cc
From 9a919c46dfa48a9c1f465174609b90253eb8ffc1 Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Mon, 9 Aug 2010 12:01:27 +0200
Commit 991ea75c (drm: use workqueue instead of slow-work), which made
drm to use wq instead of slow-work, didn't account for the return
value difference
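A sketch of the return-value difference in question: queue_work() and
friends return a bool meaning "newly queued" rather than an error code, so
an error check carried over from the slow-work conversion fires spuriously
(identifiers illustrative):

	/* misconverted: the old slow-work call returned 0 / -errno, so this
	 * now prints an error precisely when queueing succeeds */
	ret = queue_delayed_work(dev->wq, &output_poll_work, delay);
	if (ret)
		DRM_ERROR("delayed enqueue failed %d\n", ret);

	/* correct: queueing never fails; drop the check entirely */
	queue_delayed_work(dev->wq, &output_poll_work, delay);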
On Mon, Jul 09, 2018 at 10:36:40AM +0200, Daniel Vetter wrote:
> Makes the macros resilient against if {} else {} blocks right
> afterwards.
>
> Signed-off-by: Daniel Vetter
> Cc: Tejun Heo
> Cc: Jens Axboe
> Cc: Shaohua Li
> Cc: Kate Stewart
> Cc: Greg Kroah-Har
On Wed, Jul 11, 2018 at 09:40:58AM -0700, Tejun Heo wrote:
> On Mon, Jul 09, 2018 at 10:36:40AM +0200, Daniel Vetter wrote:
> > Makes the macros resilient against if {} else {} blocks right
> > afterwards.
> >
> > Signed-off-by: Daniel Vetter
> > Cc: Teju
On Mon, Jul 09, 2018 at 10:36:41AM +0200, Daniel Vetter wrote:
> Avoids the need to invert the condition instead of the open-coded
> version.
>
> Signed-off-by: Daniel Vetter
> Cc: Tejun Heo
> Cc: Li Zefan
> Cc: Johannes Weiner
> Cc: cgro...@vger.kernel.org
Acked-by:
On Wed, Jul 11, 2018 at 01:31:51PM -0600, Jens Axboe wrote:
> I don't think there's a git easy way of sending it out outside of
> just ensuring that everybody is CC'ed on everything. I don't mind
> that at all. I don't subscribe to lkml, and the patches weren't
> sent to linux-block. Hence all I se
Hello, Matthew.
On Tue, Jul 30, 2024 at 03:17:40PM -0700, Matthew Brost wrote:
> +/**
> + * wq_init_user_lockdep_map - init user lockdep map for workqueue
> + * @wq: workqueue to init lockdep map for
> + * @lockdep_map: lockdep map to use for workqueue
> + *
> + * Initialize workqueue with a user
On Tue, Jul 30, 2024 at 10:53:38PM +0000, Matthew Brost wrote:
> I didn't want to change the export alloc_workqueue() arguments so I went
> with this approach. Are you suggesting export a new function
> alloc_workqueue_lockdep_map() which will share an internal
> implementation with the existing al
Hello,
On Tue, Jul 30, 2024 at 05:31:17PM -0700, Matthew Brost wrote:
> +#define alloc_ordered_workqueue_lockdep_map(fmt, flags, lockdep_map,
> args...)\
> + alloc_workqueue_lockdep_map(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags),
> 1, lockdep_map, ##args)
> +#endif
alloc_ordered_workq
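Going by the macro quoted above, usage would look roughly like the sketch
below; how the caller constructs the lockdep map (here a statically
initialized one shared by every queue the driver creates) is an assumption
on my part, not something taken from the series:

	static struct lock_class_key drv_wq_key;
	static struct lockdep_map drv_wq_lockdep_map =
		STATIC_LOCKDEP_MAP_INIT("drv_ordered_wq", &drv_wq_key);

	static struct workqueue_struct *drv_create_ordered_wq(void)
	{
		/* many dynamically created queues can share this one map
		 * instead of each allocation getting its own class */
		return alloc_ordered_workqueue_lockdep_map("drv-ordered-wq", 0,
							   &drv_wq_lockdep_map);
	}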
> v2:
> - Add alloc_workqueue_lockdep_map (Tejun)
> v3:
> - Drop __WQ_USER_OWNED_LOCKDEP (Tejun)
> - static inline alloc_ordered_workqueue_lockdep_map (Tejun)
>
> Cc: Tejun Heo
> Cc: Lai Jiangshan
> Signed-off-by: Matthew Brost
1-3 look fine to me. Would applying them t
On Tue, Aug 13, 2024 at 06:55:20PM +0000, Matthew Brost wrote:
> On Tue, Aug 13, 2024 at 08:52:26AM -1000, Tejun Heo wrote:
> > On Fri, Aug 09, 2024 at 03:28:25PM -0700, Matthew Brost wrote:
> > > Add an interface for a user-defined workqueue lockdep map, which is
> >
> v2:
> - Add alloc_workqueue_lockdep_map (Tejun)
> v3:
> - Drop __WQ_USER_OWNED_LOCKDEP (Tejun)
> - static inline alloc_ordered_workqueue_lockdep_map (Tejun)
>
> Cc: Tejun Heo
> Cc: Lai Jiangshan
> Signed-off-by: Matthew Brost
Applied 1-3 to wq/for-6.12.
Thanks.
--
tejun
Hello,
From cgroup POV, it generally looks fine to me. As before, I'm really
curious whether this is something other non-intel drivers can get behind.
Just one nit.
On Tue, Oct 24, 2023 at 05:07:19PM +0100, Tvrtko Ursulin wrote:
> * Allowing per DRM card configuration and queries is deliberatly
Hello,
On Mon, Dec 04, 2023 at 04:03:47PM +, Naohiro Aota wrote:
> Recently, commit 636b927eba5b ("workqueue: Make unbound workqueues to use
> per-cpu pool_workqueues") changed WQ_UNBOUND workqueue's behavior. It
> changed the meaning of alloc_workqueue()'s max_active from an upper limit
> imp
(cc'ing Roman)
Hello,
On Tue, Mar 06, 2018 at 03:46:56PM -0800, Matt Roper wrote:
> +static inline struct cgroup *
> +task_get_dfl_cgroup(struct task_struct *task)
> +{
> + struct cgroup *cgrp;
> +
> + mutex_lock(&cgroup_mutex);
> + cgrp = task_dfl_cgroup(task);
> + cgroup_get(cgr
On Tue, Mar 06, 2018 at 03:46:57PM -0800, Matt Roper wrote:
> Non-controller kernel subsystems may base access restrictions for
> cgroup-related syscalls/ioctls on a process' access to the cgroup.
> Let's make it easy for other parts of the kernel to check these cgroup
> permissions.
I'm not sure
Hello, Matt.
cc'ing Roman and Alexei.
On Tue, Mar 06, 2018 at 03:46:55PM -0800, Matt Roper wrote:
> There are cases where other parts of the kernel may wish to store data
> associated with individual cgroups without building a full cgroup
> controller. Let's add interfaces to allow them to regis