e to take this through the drm-fixes directly.
This is indeed the case.
I was going to push this patch through drm-misc-next, but the original/base
patch (<20240124210811.1639040-1-matthew.br...@intel.com>) isn't there.
If drm-misc maintainers back merge drm-fixes into
er")
> Signed-off-by: Matthew Brost
Indeed, we cannot have any loops in the GPU scheduler work items, as we need
to bounce between submit and free in the same work queue. (Coming from the
original design before work items/queues were introduced.)
Thanks for fixing this, Matt!
Reviewed-b
On 2024-02-05 19:06, Luben Tuikov wrote:
> On 2024-02-01 07:56, Christian König wrote:
>> Am 31.01.24 um 18:11 schrieb Daniel Vetter:
>>> On Tue, Jan 30, 2024 at 07:03:02PM -0800, Matthew Brost wrote:
>>>> Add Matthew Brost to DRM scheduler maintainers.
>>>&
On 2024-02-01 07:56, Christian König wrote:
> Am 31.01.24 um 18:11 schrieb Daniel Vetter:
>> On Tue, Jan 30, 2024 at 07:03:02PM -0800, Matthew Brost wrote:
>>> Add Matthew Brost to DRM scheduler maintainers.
>>>
>>> Cc: Luben Tuikov
>>> Cc: Daniel V
On 2024-01-29 02:44, Christian König wrote:
> Am 26.01.24 um 17:29 schrieb Matthew Brost:
>> On Fri, Jan 26, 2024 at 11:32:57AM +0100, Christian König wrote:
>>> Am 25.01.24 um 18:30 schrieb Matthew Brost:
On Thu, Jan 25, 2024 at 04:12:58PM +0100, Christian König wrote:
> Am 24.01.24 um 22
all still driven by a
>> kthread.
>>
>> It can perfectly be that we messed this up when switching from kthread to a
>> work item.
>>
>
> Right, that's what this patch does - the run worker does not go idle until
> no ready entities are found. That was incorre
On 2024-01-24 16:08, Matthew Brost wrote:
> All entities must be drained in the DRM scheduler run job worker to
> avoid the following case: an entity is found that is ready, no job is
> found ready on that entity, and the run job worker goes idle while other
> entities + jobs are ready. Draining all ready entities (i.
On 2023-12-26 10:58, Markus Elfring wrote:
> From: Markus Elfring
> Date: Tue, 26 Dec 2023 16:37:37 +0100
>
> Return an error code without storing it in an intermediate variable.
>
> Signed-off-by: Markus Elfring
Thank you Markus for this patch.
Reviewed-by: Luben Tuikov
> This issue was detected by using the Coccinelle software.
>
> Thus adjust a jump target.
>
> Signed-off-by: Markus Elfring
Thank you Markus for this patch.
Reviewed-by: Luben Tuikov
Pushed to drm-misc-next.
--
Regards,
Luben
> ---
> drivers/gpu/drm/scheduler/sche
Hi,
On 2023-12-05 14:02, Rob Clark wrote:
> From: Rob Clark
>
> Container fences have burner contexts, which makes the trick to store at
> most one fence per context somewhat useless if we don't unwrap array or
> chain fences.
>
> Signed-off-by: Rob Clark
Link: https://lore.kernel.org/all/202
On 2023-11-29 22:36, Luben Tuikov wrote:
> On 2023-11-29 15:49, Alex Deucher wrote:
>> On Wed, Nov 29, 2023 at 3:10 PM Alex Deucher wrote:
>>>
>>> Actually I think I see the problem. I'll try and send out a patch
>>> later today to test.
>>
>>
ktop.org/archives/amd-gfx/2023-November/101197.html
Link: https://lore.kernel.org/r/87edgv4x3i@vps.thesusis.net
Let's link the start of the thread.
Regards,
Luben
> Signed-off-by: Alex Deucher
> Cc: Phillip Susi
> Cc: Luben Tuikov
> ---
> drivers/gpu/drm/amd/amdgp
On 2023-11-29 10:22, Alex Deucher wrote:
> On Wed, Nov 29, 2023 at 8:50 AM Alex Deucher wrote:
>>
>> On Tue, Nov 28, 2023 at 11:45 PM Luben Tuikov wrote:
>>>
>>> On 2023-11-28 17:13, Alex Deucher wrote:
>>>> On Mon, Nov 27, 2023 at 6:24 PM Phillip
On 2023-11-29 08:50, Alex Deucher wrote:
> On Tue, Nov 28, 2023 at 11:45 PM Luben Tuikov wrote:
>>
>> On 2023-11-28 17:13, Alex Deucher wrote:
>>> On Mon, Nov 27, 2023 at 6:24 PM Phillip Susi wrote:
>>>>
>>>> Alex Deucher writes:
>>>&
On 2023-11-28 17:13, Alex Deucher wrote:
> On Mon, Nov 27, 2023 at 6:24 PM Phillip Susi wrote:
>>
>> Alex Deucher writes:
>>
In that case those are the already known problems with the scheduler
changes, aren't they?
>>>
>>> Yes. Those changes went into 6.7 though, not 6.6 AFAIK. Maybe
struct drm_sched_entity *entity)
> {
> - if (drm_sched_entity_is_ready(entity))
> - if (drm_sched_can_queue(sched, entity))
> - drm_sched_run_job_queue(sched);
> + if (drm_sched_can_queue(sched, entity))
> + drm_sched_run_job_q
Hi Bert,
# The title of the patch should be:
drm/sched: Partial revert of "Qualify drm_sched_wakeup() by
drm_sched_entity_is_ready()"
On 2023-11-27 08:30, Bert Karwatzki wrote:
> Commit f3123c25 (in combination with the use of work queues by the gpu
Commit f3123c2590005c, in combination with t
On 2023-11-27 09:20, Christian König wrote:
> Am 27.11.23 um 15:13 schrieb Luben Tuikov:
>> On 2023-11-27 08:55, Christian König wrote:
>>> Hi Luben,
>>>
>>> Am 24.11.23 um 08:57 schrieb Christian König:
>>>> Am 24.11.23 um 06:27 schrieb Lube
On 2023-11-27 08:55, Christian König wrote:
> Hi Luben,
>
> Am 24.11.23 um 08:57 schrieb Christian König:
>> Am 24.11.23 um 06:27 schrieb Luben Tuikov:
>>> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW.
>>>
>>> This mirrors DRM_SCHED_
On 2023-11-26 18:38, Stephen Rothwell wrote:
> Hi all,
>
> After merging the drm-misc tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
>
> drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error:
> 'DRM_SCHED_PRIORITY_MIN' undeclared here (not in a function); did you mean
>
On 2023-11-25 14:22, Luben Tuikov wrote:
> Fix compilation issues with DRM scheduler priority rename MIN to LOW.
>
> Signed-off-by: Luben Tuikov
> Reported-by: kernel test robot
> Closes:
> https://lore.kernel.org/oe-kbuild-all/202311252109.wgbjsskg-...@intel.com/
> Cc: D
On 2023-11-24 04:38, Bert Karwatzki wrote:
> Am Mittwoch, dem 22.11.2023 um 18:02 -0500 schrieb Luben Tuikov:
>> On 2023-11-21 04:00, Bert Karwatzki wrote:
>>> Since linux-next-20231115 my linux system (debian sid on msi alpha 15 laptop)
>>> suffers from ra
Fix compilation issues with DRM scheduler priority rename MIN to LOW.
Signed-off-by: Luben Tuikov
Reported-by: kernel test robot
Closes:
https://lore.kernel.org/oe-kbuild-all/202311252109.wgbjsskg-...@intel.com/
Cc: Danilo Krummrich
Cc: Frank Binns
Cc: Donald Robson
Cc: Matt Coster
Cc
On 2023-11-24 04:38, Christian König wrote:
> Am 24.11.23 um 09:22 schrieb Luben Tuikov:
>> On 2023-11-24 03:04, Christian König wrote:
>>> Am 24.11.23 um 06:27 schrieb Luben Tuikov:
>>>> Reverse run-queue priority enumeration such that the highest priority is
On 2023-11-24 08:20, Jani Nikula wrote:
> On Wed, 22 Nov 2023, Luben Tuikov wrote:
>> On 2023-11-22 07:00, Maxime Ripard wrote:
>>> Hi Luben,
>>>
>>> On Thu, Nov 16, 2023 at 09:27:58AM +0100, Daniel Vetter wrote:
>>>> On Thu, Nov 16, 2023 at 09:
On 2023-11-24 03:04, Christian König wrote:
> Am 24.11.23 um 06:27 schrieb Luben Tuikov:
>> Reverse run-queue priority enumeration such that the highest priority is
>> now 0, and for each consecutive integer the priority diminishes.
>>
>> Run-queues corresp
Kumar
Cc: Dmitry Baryshkov
Cc: Danilo Krummrich
Cc: Alex Deucher
Cc: Christian König
Cc: linux-arm-...@vger.kernel.org
Cc: freedr...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++--
drivers/gpu/drm/amd
freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +-
> drivers/gpu/drm/msm/msm_gpu.h | 2 +-
drivers/gpu/drm/scheduler/sched_entity.c | 7 ---
drivers/gpu/drm/scheduler/sched_main.c | 15 +++
The first patch renames priority MIN to LOW.
The second patch makes the "priority" of the same run-queue index the same in
any two schedulers.
This series sits on top on this fix
https://patchwork.freedesktop.org/patch/568723/ which I sent yesterday.
Luben Tuikov (2):
drm/sch
If we're given a malformed entity in drm_sched_entity_init()--shouldn't
happen, but we verify--with an out-of-bounds priority value, we set it to an
allowed value. Fix the expression which sets this limit.
Signed-off-by: Luben Tuikov
Fixes: 56e449603f0ac5 ("drm/sched: Convert the G
On 2023-11-21 17:05, Phillip Susi wrote:
> Alex Deucher writes:
>
>> Does reverting 56e449603f0ac580700621a356d35d5716a62ce5 alone fix it?
>> Can you also attach your full dmesg log for the failed suspend?
>
> No, it doesn't. Here is the full syslog from the boot with only that
> revert:
>
Th
On 2023-11-21 04:00, Bert Karwatzki wrote:
> Since linux-next-20231115 my linux system (debian sid on msi alpha 15 laptop)
> suffers from random deadlocks which can occur after 30 - 180min of usage. These
> deadlocks can be actively provoked by creating high system load (usually by
> compiling
On 2023-11-22 07:00, Maxime Ripard wrote:
> Hi Luben,
>
> On Thu, Nov 16, 2023 at 09:27:58AM +0100, Daniel Vetter wrote:
>> On Thu, Nov 16, 2023 at 09:11:43AM +0100, Maxime Ripard wrote:
>>> On Tue, Nov 14, 2023 at 06:46:21PM -0500, Luben Tuikov wrote:
>>>> O
Hi,
On 2023-11-16 09:15, Christian König wrote:
> Start to improve the scheduler document. Especially document the
> lifetime of each of the objects as well as the restrictions around
> DMA-fence handling and userspace compatibility.
>
> v2: Some improvements suggested by Danilo, add section abou
is
identical to if drm->dev had been NULL.
Signed-off-by: Luben Tuikov
---
include/drm/drm_print.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
index a93a387f8a1a15..dd4883df876a6d 100644
--- a/include/drm/drm_print.h
++
On 2023-11-16 04:22, Maxime Ripard wrote:
> Hi,
>
> On Mon, Nov 13, 2023 at 09:56:32PM -0500, Luben Tuikov wrote:
>> On 2023-11-13 21:45, Stephen Rothwell wrote:
>>> Hi Luben,
>>>
>>> On Mon, 13 Nov 2023 20:32:40 -0500 Luben Tuikov wrote:
>>
On 2023-11-15 03:24, Jani Nikula wrote:
> On Tue, 14 Nov 2023, Luben Tuikov wrote:
>> diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
>> index a93a387f8a1a15..ce784118e4f762 100644
>> --- a/include/drm/drm_print.h
>> +++ b/include/drm/drm_print.h
On 2023-11-13 07:38, Christian König wrote:
> Start to improve the scheduler document. Especially document the
> lifetime of each of the objects as well as the restrictions around
> DMA-fence handling and userspace compatibility.
Thanks Christian for doing this--much needed.
>
> Signed-off-by: C
On 2023-11-14 07:20, Jani Nikula wrote:
> On Mon, 13 Nov 2023, Luben Tuikov wrote:
>> Hi Jani,
>>
>> On 2023-11-10 07:40, Jani Nikula wrote:
>>> On Thu, 09 Nov 2023, Luben Tuikov wrote:
>>>> Define pr_fmt() as "[drm] " for DRM code using pr_*
On 2023-11-13 22:08, Stephen Rothwell wrote:
> Hi Luben,
>
> BTW, cherry picking commits does not avoid conflicts - in fact it can
> cause conflicts if there are further changes to the files affected by
> the cherry picked commit in either the tree/branch the commit was
> cherry picked from or the
On 2023-11-13 21:45, Stephen Rothwell wrote:
> Hi Luben,
>
> On Mon, 13 Nov 2023 20:32:40 -0500 Luben Tuikov wrote:
>>
>> On 2023-11-13 20:08, Luben Tuikov wrote:
>>> On 2023-11-13 15:55, Stephen Rothwell wrote:
>>>> Hi all,
>>>&g
On 2023-11-13 20:08, Luben Tuikov wrote:
> On 2023-11-13 15:55, Stephen Rothwell wrote:
>> Hi all,
>>
>> Commit
>>
>> 0da611a87021 ("dma-buf: add dma_fence_timestamp helper")
>>
>> is missing a Signed-off-by from its committer.
>>
>
On 2023-11-13 15:55, Stephen Rothwell wrote:
> Hi all,
>
> Commit
>
> 0da611a87021 ("dma-buf: add dma_fence_timestamp helper")
>
> is missing a Signed-off-by from its committer.
>
In order to merge the scheduler changes necessary for the Xe driver, those
changes were based on drm-tip, which
Hi Jani,
On 2023-11-10 07:40, Jani Nikula wrote:
> On Thu, 09 Nov 2023, Luben Tuikov wrote:
>> Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially
>> when no devices are available. This makes it easier to browse kernel logs.
>
> Please do
On 2023-11-11 06:33, Jani Nikula wrote:
> On Sat, 11 Nov 2023, Luben Tuikov wrote:
>> From Jani:
>> The drm_print.[ch] facilities use very few pr_*() calls directly. The
>> users of pr_*() calls do not necessarily include at
>> all, and really don't have to.
>
defines pr_fmt() itself if not already
defined.
No, it's encouraged not to use pr_*() at all, and to prefer DRM-device-based
logging, or device-based logging.
This reverts commit 36245bd02e88e68ac5955c2958c968879d7b75a9.
Signed-off-by: Luben Tuikov
Link: https://patchwork.freedesktop.or
This reverts commit 36245bd02e88e68ac5955c2958c968879d7b75a9.
Signed-off-by: Luben Tuikov
---
include/drm/drm_print.h | 14 --
1 file changed, 14 deletions(-)
diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
index e8fe60d0eb8783..a93a387f8a1a15 100644
--- a/include
On 2023-11-10 07:40, Jani Nikula wrote:
> On Thu, 09 Nov 2023, Luben Tuikov wrote:
>> Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially
>> when no devices are available. This makes it easier to browse kernel logs.
>
> Please do not m
On 2023-11-09 19:57, Luben Tuikov wrote:
> On 2023-11-09 19:16, Danilo Krummrich wrote:
[snip]
>> @@ -667,6 +771,8 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs);
>> * drm_sched_job_init - init a scheduler job
>> * @job: scheduler job to init
>> * @entity: scheduler en
On 2023-11-09 19:16, Danilo Krummrich wrote:
> Currently, job flow control is implemented simply by limiting the number
> of jobs in flight. Therefore, a scheduler is initialized with a credit
> limit that corresponds to the number of jobs which can be sent to the
> hardware.
>
> This implies that
Define pr_fmt() as "[drm] " for DRM code using pr_*() facilities, especially
when no devices are available. This makes it easier to browse kernel logs.
Signed-off-by: Luben Tuikov
---
include/drm/drm_print.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/i
Don't "wake up" the GPU scheduler unless the entity is ready and we can queue
to the scheduler, i.e. there is no point in waking up the scheduler for an
entity which isn't ready.
Signed-off-by: Luben Tuikov
Fixes: bc8d6a9df99038 ("drm/sched: Don't dist
On 2023-11-09 18:41, Danilo Krummrich wrote:
> On 11/9/23 20:24, Danilo Krummrich wrote:
>> On 11/9/23 07:52, Luben Tuikov wrote:
>>> Hi,
>>>
>>> On 2023-11-07 19:41, Danilo Krummrich wrote:
>>>> On 11/7/23 05:10, Luben Tuikov wrot
On 2023-11-09 14:55, Danilo Krummrich wrote:
> On 11/9/23 01:09, Danilo Krummrich wrote:
>> On 11/8/23 06:46, Luben Tuikov wrote:
>>> Hi,
>>>
>>> Could you please use my gmail address, the one I'm responding from--I
>>> don't want
&g
Hi,
On 2023-11-07 19:41, Danilo Krummrich wrote:
> On 11/7/23 05:10, Luben Tuikov wrote:
>> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact,
>> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let
>> it do just that,
On 2023-11-08 19:09, Danilo Krummrich wrote:
> On 11/8/23 06:46, Luben Tuikov wrote:
>> Hi,
>>
>> Could you please use my gmail address, the one I'm responding from--I
>> don't want
>> to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com
On 2023-11-08 00:46, Luben Tuikov wrote:
> Hi,
>
> Could you please use my gmail address, the one I'm responding from--I
> don't want
> to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com email should
> bounce
> as undeliverable.
>
> On 202
Hi,
Could you please use my gmail address, the one I'm responding from--I don't
want to miss any DRM scheduler patches. BTW, the luben.tui...@amd.com email
should bounce as undeliverable.
On 2023-11-07 21:26, Danilo Krummrich wrote:
> Commit 56e449603f0a ("drm/sched: Convert the GPU schedul
On 2023-11-07 12:53, Danilo Krummrich wrote:
> On 11/7/23 05:10, Luben Tuikov wrote:
>> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact,
>> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let
>> it do just that,
On 2023-11-07 06:48, Matthew Brost wrote:
> On Mon, Nov 06, 2023 at 11:10:21PM -0500, Luben Tuikov wrote:
>> Don't call drm_sched_select_entity() in drm_sched_run_job_queue(). In fact,
>> rename __drm_sched_run_job_queue() to just drm_sched_run_job_queue(), and let
>>
This commit fixes this by eliminating the call to
drm_sched_select_entity() from drm_sched_run_job_queue(), and leaves it only
in drm_sched_run_job_work().
v2: Rebased on top of Tvrtko's renames series of patches. (Luben)
Add fixes-tag. (Tvrtko)
Signed-off-by: Luben Tuikov
Fixes: f7fe64ad0f
On 2023-11-06 07:41, Tvrtko Ursulin wrote:
>
> On 05/11/2023 01:51, Luben Tuikov wrote:
>> On 2023-11-02 06:55, Tvrtko Ursulin wrote:
>>> From: Tvrtko Ursulin
>>>
>>> I found some of the naming a bit inconsistent and unclear so just a small
>>> at
On 2023-11-02 06:55, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin
>
> I found some of the naming a bit inconsistent and unclear so just a small
> attempt to clarify and tidy some of them. See what people think if my first
> stab improves things or not.
>
> Cc: Luben Tuikov
: Tvrtko Ursulin
>
> I found some of the naming a bit inconsistent and unclear so just a small
> attempt to clarify and tidy some of them. See what people think if my first
> stab improves things or not.
>
> Cc: Luben Tuikov
> Cc: Matthew Brost
>
> Tvrtko
Hi Tvrtko,
On 2023-11-03 06:39, Tvrtko Ursulin wrote:
>
> On 02/11/2023 22:46, Luben Tuikov wrote:
>> Eliminate drm_sched_run_job_queue_if_ready() and instead just call
>> drm_sched_run_job_queue() in drm_sched_free_job_work(). The problem is that
>>
Hi Matt, :-)
On 2023-11-03 11:13, Matthew Brost wrote:
> On Thu, Nov 02, 2023 at 06:46:54PM -0400, Luben Tuikov wrote:
>> Eliminate drm_sched_run_job_queue_if_ready() and instead just call
>> drm_sched_run_job_queue() in drm_sched_free_job_work(). The problem is that
>> the
On 2023-11-02 07:13, Tvrtko Ursulin wrote:
>
> On 31/10/2023 03:24, Matthew Brost wrote:
>> Rather than call free_job and run_job in same work item have a dedicated
>> work item for each. This aligns with the design and intended use of work
>> queues.
>>
>> v2:
>> - Test for DMA_FENCE_FLAG_TIM
drm_sched_select_entity(), then
in the case of RR scheduling, that would result in calling select_entity()
twice, which may result in skipping a ready entity if more than one entity is
ready. This commit fixes this by eliminating the if_ready() variant.
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/scheduler
On 2023-10-30 23:24, Matthew Brost wrote:
> As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
> have been asked to merge our common DRM scheduler patches first.
>
> This a continuation of a RFC [3] with all comments addressed, ready for
> a full review, and hopefully in st
- Do not move drm_sched_select_entity in file (Luben)
>
> Signed-off-by: Matthew Brost
Reviewed-by: Luben Tuikov
Regards,
Luben
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 146 +
> include/drm/gpu_scheduler.h | 4 +-
> 2 files ch
's
> credit count, which represents the number of credits a job contributes
> to the scheduler's credit limit.
>
> Signed-off-by: Danilo Krummrich
Reviewed-by: Luben Tuikov
Regards,
Luben
> ---
> Changes in V2:
> ==
> - fixed up influence on scheduling
On 2023-10-31 22:23, Danilo Krummrich wrote:
> Hi Luben,
>
[snip]
>>> @@ -187,12 +251,14 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq
>>> *rq,
>>> /**
>>> * drm_sched_rq_select_entity_rr - Select an entity which could provide a
>>> job to run
>>> *
>>> + * @sched: the gpu schedul
On 2023-10-31 09:33, Danilo Krummrich wrote:
>
> On 10/26/23 19:25, Luben Tuikov wrote:
>> On 2023-10-26 12:39, Danilo Krummrich wrote:
>>> On 10/23/23 05:22, Luben Tuikov wrote:
>>>> The GPU scheduler has now a variable number of run-queues, which are set
>
Hi,
(PSA: luben.tui...@amd.com should've bounced :-) I'm removing it from the To:
field.)
On 2023-10-30 20:26, Danilo Krummrich wrote:
> Currently, job flow control is implemented simply by limiting the number
> of jobs in flight. Therefore, a scheduler is initialized with a credit
> limit that
On 2023-10-27 12:41, Boris Brezillon wrote:
> On Fri, 27 Oct 2023 10:32:52 -0400
> Luben Tuikov wrote:
>
>> On 2023-10-27 04:25, Boris Brezillon wrote:
>>> Hi Danilo,
>>>
>>> On Thu, 26 Oct 2023 18:13:00 +0200
>>> Danilo Krummrich wrote:
>
On 2023-10-27 12:31, Boris Brezillon wrote:
> On Fri, 27 Oct 2023 16:23:24 +0200
> Danilo Krummrich wrote:
>
>> On 10/27/23 10:25, Boris Brezillon wrote:
>>> Hi Danilo,
>>>
>>> On Thu, 26 Oct 2023 18:13:00 +0200
>>> Danilo Krummrich wrote:
>>>
Currently, job flow control is implemented s
Hi,
On 2023-10-27 12:26, Boris Brezillon wrote:
> On Fri, 27 Oct 2023 16:34:26 +0200
> Danilo Krummrich wrote:
>
>> On 10/27/23 09:17, Boris Brezillon wrote:
>>> Hi Danilo,
>>>
>>> On Thu, 26 Oct 2023 18:13:00 +0200
>>> Danilo Krummrich wrote:
>>>
+
+ /**
+ * @update_job_cr
Hi Danilo,
On 2023-10-27 10:45, Danilo Krummrich wrote:
> Hi Luben,
>
> On 10/26/23 23:13, Luben Tuikov wrote:
>> On 2023-10-26 12:13, Danilo Krummrich wrote:
>>> Currently, job flow control is implemented simply by limiting the number
>>> of jobs in flight. Ther
On 2023-10-27 04:25, Boris Brezillon wrote:
> Hi Danilo,
>
> On Thu, 26 Oct 2023 18:13:00 +0200
> Danilo Krummrich wrote:
>
>> Currently, job flow control is implemented simply by limiting the number
>> of jobs in flight. Therefore, a scheduler is initialized with a credit
>> limit that correspo
On 2023-10-26 17:13, Luben Tuikov wrote:
> On 2023-10-26 12:13, Danilo Krummrich wrote:
>> Currently, job flow control is implemented simply by limiting the number
>> of jobs in flight. Therefore, a scheduler is initialized with a credit
>> limit that corresponds to the number
On 2023-10-26 12:13, Danilo Krummrich wrote:
> Currently, job flow control is implemented simply by limiting the number
> of jobs in flight. Therefore, a scheduler is initialized with a credit
> limit that corresponds to the number of jobs which can be sent to the
> hardware.
>
> This implies that
Update the GPU Scheduler maintainer email.
Cc: Alex Deucher
Cc: Christian König
Cc: Daniel Vetter
Cc: Dave Airlie
Cc: AMD Graphics
Cc: Direct Rendering Infrastructure - Development
Signed-off-by: Luben Tuikov
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
On 2023-10-26 12:39, Danilo Krummrich wrote:
> On 10/23/23 05:22, Luben Tuikov wrote:
>> The GPU scheduler has now a variable number of run-queues, which are set up
>> at
>> drm_sched_init() time. This way, each driver announces how many run-queues it
>> requires (suppo
Hi,
I've pushed this commit as I got a verbal Acked-by from Christian in our kernel
meeting this morning.
Matt, please rebase your patches to drm-misc-next.
Regards,
Luben
On 2023-10-26 11:20, Luben Tuikov wrote:
> Ping!
>
> On 2023-10-22 23:22, Luben Tuikov wrote:
>> T
Ping!
On 2023-10-22 23:22, Luben Tuikov wrote:
> The GPU scheduler has now a variable number of run-queues, which are set up at
> drm_sched_init() time. This way, each driver announces how many run-queues it
> requires (supports) per each GPU scheduler it creates. Note, that r
Also note that there were no complaints from "kernel test robot"
when I posted my patch (this patch), but there is now, which further shows
that there are unwarranted changes. Just follow the steps I outlined below,
and we should all be good.
Thanks!
Regards,
Luben
On 2023-10-
Hi,
On 2023-10-26 02:33, kernel test robot wrote:
> Hi Matthew,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on 201c8a7bd1f3f415920a2df4b8a8817e973f42fe]
>
> url:
> https://github.com/intel-lab-lkp/linux/commits/Matthew-Brost/drm-sched-Add-drm_sch
On 2023-10-26 00:12, Matthew Brost wrote:
> Rather than call free_job and run_job in same work item have a dedicated
> work item for each. This aligns with the design and intended use of work
> queues.
>
> v2:
>- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
> timestamp in free_job
> - Adjust comment for drm_sched_tdr_queue_imm (Luben)
> v4:
> - Adjust commit message (Luben)
>
> Cc: Luben Tuikov
> Signed-off-by: Matthew Brost
> Reviewed-by: Luben Tuikov
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 18 +-
> include/drm/gpu_scheduler.h
On 2023-10-26 00:12, Matthew Brost wrote:
> Rather than call free_job and run_job in same work item have a dedicated
> work item for each. This aligns with the design and intended use of work
> queues.
>
> v2:
>- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
> timestamp in free_job
On 2023-10-26 00:12, Matthew Brost wrote:
> From: Luben Tuikov
>
> The GPU scheduler has now a variable number of run-queues, which are set up at
> drm_sched_init() time. This way, each driver announces how many run-queues it
> requires (supports) per each GPU scheduler it crea
for free_job work item patch
>
> Matt
>
> [1] https://gitlab.freedesktop.org/drm/xe/kernel
> [2] https://patchwork.freedesktop.org/series/112188/
> [3] https://patchwork.freedesktop.org/series/116055/
>
> Luben Tuikov (1):
> drm/sched: Convert the GPU scheduler to variabl
Hi Matt,
On 2023-10-25 11:13, Matthew Brost wrote:
> On Mon, Oct 23, 2023 at 11:50:26PM -0400, Luben Tuikov wrote:
>> Hi,
>>
>> On 2023-10-17 11:09, Matthew Brost wrote:
>>> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between
>>> scheduler
Hi,
On 2023-10-17 11:09, Matthew Brost wrote:
> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between
> scheduler and entity. No priorities or run queue used in this mode.
> Intended for devices with firmware schedulers.
>
> v2:
> - Drop sched / rq union (Luben)
> v3:
> - Don't
v3d build (CI)
> - s/bad_policies/drm_sched_policy_mismatch/ (Luben)
> - Don't update modparam doc (Luben)
> v4:
> - Fix alignment in msm_ringbuffer_new (Luben / checkpatch)
>
> Signed-off-by: Matthew Brost
> Reviewed-by: Luben Tuikov
> ---
> drivers/gpu/drm/amd/amdg
On 2023-10-23 18:35, Danilo Krummrich wrote:
> On Wed, Oct 11, 2023 at 09:52:36PM -0400, Luben Tuikov wrote:
>> Hi,
>>
>> Thanks for fixing the title and submitting a v2 of this patch. Comments
>> inlined below.
>>
>> On 2023-10-09 18:35, Danilo Krummrich wr
On 2023-10-23 18:57, Danilo Krummrich wrote:
> On Tue, Oct 10, 2023 at 09:41:51AM +0200, Boris Brezillon wrote:
>> On Tue, 10 Oct 2023 00:35:53 +0200
>> Danilo Krummrich wrote:
>>
>>> Currently, job flow control is implemented simply by limiting the number
>>> of jobs in flight. Therefore, a sched
ernel.org
Cc: freedr...@lists.freedesktop.org
Cc: nouv...@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c| 4 +-
drivers/gpu/drm/etnaviv/etnaviv_sched.c| 1 +
Hi,
On 2023-10-19 12:55, Matthew Brost wrote:
> On Wed, Oct 18, 2023 at 09:25:36PM -0400, Luben Tuikov wrote:
>> Hi,
>>
>> On 2023-10-17 11:09, Matthew Brost wrote:
>>> Rather than call free_job and run_job in same work item have a dedicated
>>> work item fo
On 2023-10-20 12:37, Alex Deucher wrote:
> On Tue, Oct 17, 2023 at 9:22 PM Luben Tuikov wrote:
>>
>> Remove a redundant call to amdgpu_ctx_priority_is_valid() from
>> amdgpu_ctx_priority_permit(), which is called from amdgpu_ctx_init() which is
>> called from amdgpu_