commit d6c650c0a8f6f671e49553725e1db541376d95f2
Author: Nicolai Hähnle
@@ -611,6 +611,10 @@ static int amd_sched_main(void *param)
fence = sched->ops->run_job(sched_job);
amd_sched_fence_scheduled(s_fence);
+
+ /* amd_sched_process_job drops the job's
On 12/10/17 07:49 PM, Alex Deucher wrote:
> On Thu, Oct 12, 2017 at 1:02 PM, Christian König
> wrote:
>> On 12.10.2017 at 18:20, Michel Dänzer wrote:
>>> On 12/10/17 05:58 PM, Alex Deucher wrote:
Hi Dave,
One memory management regression fix.
The following changes si
On 13.10.2017 05:57, Liu, Monk wrote:
That sounds sane, but unfortunately might not be possible with the existing
IOCTL. Keep in mind that we need to keep backward compatibility here.
unfortunately the current scheme of amdgpu_ctx_query() won't work with the
TDR feature, which aims to support v
On 12/10/17 10:54 PM, Alex Deucher wrote:
> Signed-off-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index b9a3258.
On 12/10/17 07:54 PM, Harry Wentland wrote:
> We're overflowing the last bit. Cast it explicitly
>
> Signed-off-by: Harry Wentland
> ---
> drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/b
Yeah, that change is actually incorrect and should be reverted.
What we really need to do is remove dropping sched_job->s_fence from
amd_sched_process_job() into amd_sched_job_finish() directly before the
call to free_job().
Regards,
Christian.
On 13.10.2017 at 09:24, Liu, Monk wrote:
comm
Am 13.10.2017 um 09:41 schrieb Michel Dänzer:
On 12/10/17 07:49 PM, Alex Deucher wrote:
On Thu, Oct 12, 2017 at 1:02 PM, Christian König
wrote:
On 12.10.2017 at 18:20, Michel Dänzer wrote:
On 12/10/17 05:58 PM, Alex Deucher wrote:
Hi Dave,
One memory management regression fix.
The followi
On 13.10.2017 at 09:48, Michel Dänzer wrote:
On 12/10/17 10:54 PM, Alex Deucher wrote:
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd
On 13.10.2017 at 10:08, Michel Dänzer wrote:
On 12/10/17 07:54 PM, Harry Wentland wrote:
We're overflowing the last bit. Cast it explicitly
Signed-off-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On 13/10/17 10:24 AM, Christian König wrote:
> On 13.10.2017 at 10:08, Michel Dänzer wrote:
>> On 12/10/17 07:54 PM, Harry Wentland wrote:
>>> We're overflowing the last bit. Cast it explicitly
>>>
>>> Signed-off-by: Harry Wentland
>>> ---
>>> drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
From: pding
Only report the fence for the GFX ring. This can help with checking the MCBP feature.
Signed-off-by: pding
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
On 13.10.2017 at 10:26, Pixel Ding wrote:
From: pding
Only report the fence for the GFX ring. This can help with checking the MCBP feature.
Signed-off-by: pding
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.
I doubt it would always work fine…
First, FENCE_TRACE references s_fence->finished after
“fence_signal(&fence->finished)”.
Second, we have trace_amd_sched_process_job(s_fence) after
“amd_sched_fence_finished()”.
If you put the finished before free_job() and by coincidence the job_finish()
Thanks Christian,
I’m not sure if I get your point, but yes the preemption fence offset could be
changed.
Is it OK to limit this information to SRIOV VFs on Tonga and Vega, whose
format is known? It can help us identify whether MCBP is working correctly or
not.
—
Sincerely Yours,
Pixel
This is the first patch series to make the latest staging driver
stable for SRIOV VFs on both Tonga and Vega. Patches are merged
from SRIOV branches or reimplemented, including bug fixes and
small features requested by SRIOV users.
Please help review. Thanks.
[PATCH 1/4] drm/amdgpu: always consid
From: pding
Register access is performed while IRQs are disabled. Never sleep in
this function.
Known issue: dead sleep in many use cases of index/data registers.
Signed-off-by: pding
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8 ++---
From: pding
The post checking on scratch registers isn't reliable for virtual
function.
Signed-off-by: pding
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/
The free_job() callback is only called long after the job has finished.
That is actually a change you made to the code yourself :)
Christian.
On 13.10.2017 at 10:39, Liu, Monk wrote:
I doubt it would always work fine…
First, FENCE_TRACE references s_fence->finished after
“fence_signal(&f
From: pding
The polling memory was standalone in VRAM before, so the HDP flush
introduced latency that hid a VM fault issue. Now that the polling memory
leverages the WB in system memory and the HDP flush is not required, the
VM fault at the same page happens.
Add the delay back as a workaround until the root cause
Just revert Nicolai’s patch. If another routine wants to reference s_fence, it
should get the finished fence in the first place.
The gpu_reset routine refers to s_fence only for unfinished jobs in
sched_hw_job_reset, so it is totally safe to refer to the s_fence pointer.
I wonder what issue Nicol
No, that’s not true.
free_job() is called from sched_job_finish(), which is queued as a work item and
scheduled from “amd_sched_fence_finished()”.
So the timing of free_job() is asynchronous with respect to sched_process_job().
How can you be sure free_job() must run before “trace_amd_sched_process_job”?
free_job() is called from sched_job_finish(), which is queued as a work item and
scheduled from “amd_sched_fence_finished()”.
So the timing of free_job() is asynchronous with respect to sched_process_job().
There is a chance that free_job() is called before
“trace_amd_sched_process_job”, correct
Is it OK to limit this information to SRIOV VFs on Tonga and Vega, whose
format is known? It can help us identify whether MCBP is working correctly or
not.
The question is where this code is for Tonga and Vega. I can't find a
reference to fence_offs in either the GFX8 or the GFX9 code we have in
amd-
There is a chance that free_job() is called before
“trace_amd_sched_process_job”, correct?
Correct, but that is harmless.
Take a look at what trace_amd_sched_process_job actually does: it just
prints the pointer of the fence structure (though the pointer might be
stale at this point).
Neverthele
My understanding is that the CP will write the seqno back to the preempted fence offset
when preemption occurs. Then, if there is a value there, we can generally tell
which fence's packet was preempted. Should the driver handle anything else for
this?
Your patch looks good. Do you think we should also do this:
void amd_sched_fence_scheduled(struct amd_sched_fence *fence)
{
- int ret = fence_signal(&fence->scheduled);
+ int ret;
+
+ fence_get(&fence->scheduled);
+ ret = fence_signal(&fence->scheduled);
if (!r
On 13.10.2017 at 10:26, Pixel Ding wrote:
From: pding
Register access is performed while IRQs are disabled. Never sleep in
this function.
Known issue: dead sleep in many use cases of index/data registers.
Signed-off-by: pding
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h| 1 +
drivers
On 13.10.2017 at 10:26, Pixel Ding wrote:
From: pding
The polling memory was standalone in VRAM before, so the HDP flush
introduced latency that hid a VM fault issue. Now that the polling memory
leverages the WB in system memory and the HDP flush is not required, the
VM fault at the same page happens.
Add d
Alright, if Mesa can handle the clone context's VRAM_LOST_COUNTER mismatch issue,
there is no need to introduce one more U/K interface in the KMD.
So we have only one unresolved issue that needs to be determined ASAP:
how to modify amdgpu_ctx_query()?
The current design won't work later with our discussion on the TDR (v2), right?
Pixel,
On drm-next, we always allocate 8 DW for all WB; you can check the wb_get
routine for details.
BR Monk
-Original Message-
From: Ding, Pixel
Sent: 2017-10-13 17:19
To: Koenig, Christian ;
amd-gfx@lists.freedesktop.org; Liu, Monk
Cc: Li, Bingley
Subject: Re: [PATCH 3/4] drm/amdgpu
Yes, I tried smp_mb but it doesn’t help…
We will keep following up on this issue until we find the root cause.
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:17 PM, "Christian König"
wrote:
>On 13.10.2017 at 10:26, Pixel Ding wrote:
>> From: pding
>>
>> The polling memory was standalone in VRA
This patch is not an implementation of MCBP handled by the driver itself. In the SRIOV
use case the preemption is handled in the host layer. What I do here is just check
whether the preemption occurs, so there’s no code to handle it.
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:03 PM, "Ding, Pixel" wrote:
Yes I see the “drm/amdgpu: use 256 bit buffers for all wb allocations (v2)”.
Hi Christian,
So it seems all good, right?
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:19 PM, "Liu, Monk" wrote:
>Pixel,
>
>On drm-next, we always allocate 8DW for all WB, you can check the wb_get
>routine on
Hi Pixel,
My understanding is that the CP will write the seqno back to the preempted
fence offset when preemption occurs.
That is correct.
But my question is why you want to print a different value here:
+ seq_printf(m, "Last preempted 0x%08x\n",
+ le32_to_c
Yes,
the CP hardware assumes each fence is 64 bits long, so the 4 fence types need
256 bits in total.
The preempted fence is at a 64-bit offset,
and I assume we print all of those fence types when under SRIOV mode
BR Monk
-Original Message-
From: Ding, Pixel
Sent: 2017-10-13 17:19
T
WB init should clear this buffer at a very early stage, so it should output 0
on bare metal or when no preemption has occurred
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of
Christian König
Sent: 2017-10-13 17:10
To: Ding, Pixel ; amd-gfx@lists.f
@Ding, Pixel
+ seq_printf(m, "Last preempted 0x%08x\n",
+ le32_to_cpu(*(ring->fence_drv.cpu_addr + 2)));
Please handle the other fence types as well:
Preempted fence
Reset fence
Reset and preempted fence
BR Monk
-Original Message-
From: amd-gfx [m
OK, I get it…
When we use the fence_offs to submit a fence to the HW, it’s in fact an 8 DW
fence, not a 2 DW one.
The format is:
Completed Fence 0x0 Fence written here if frame completed normally
Preempted Fence 0x2 Bit set in CP_VMID_PREEMPT and preemption occurred
Reset Fence 0x4 Bit is s
Pixel
I don't think this will work well. My suggestion is that you add a new function like
amdgpu_wreg_kiq_busy(),
which writes registers through the KIQ using polling/busy waiting, while the
original amdgpu_wreg_no_kiq() can stay as it is.
When you need to avoid sleeping, like in IRQ context, you can c
On 13/10/2017, 5:16 PM, "Christian König"
wrote:
>On 13.10.2017 at 10:26, Pixel Ding wrote:
>> From: pding
>>
>> Register access is performed while IRQs are disabled. Never sleep in
>> this function.
>>
>> Known issue: dead sleep in many use cases of index/data registers.
>>
>> Signed-off-by
Got it.
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:28 PM, "Liu, Monk" wrote:
>@Ding, Pixel
>
>+ seq_printf(m, "Last preempted 0x%08x\n",
>+ le32_to_cpu(*(ring->fence_drv.cpu_addr + 2)));
>
>Please handle other type fence as well:
>
>Preempted fenc
I’m afraid there’s a racing issue if the polling and IRQ use cases are mixed at the
same time.
The original implementation is as you suggested. Is there any benefit to keeping
the sleeping version?
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:34 PM, "Liu, Monk" wrote:
>Pixel
>
>I don't think this w
The busy-waiting method somehow makes performance worse: considering 16 VFs,
VF0 needs to wait 15 * time_slice when it tries to access registers.
BR
-Original Message-
From: Ding, Pixel
Sent: 2017-10-13 17:39
To: Liu, Monk ; amd-gfx@lists.freedesktop.org
Cc: Li, Bingley
Subject: Re: [PA
Why would there be a racing issue?
Polling or sleeping waits only give a different result for the caller, not for the job
scheduled to the KIQ.
The sleeping wait is a synchronous sleep; it just releases CPU resources to other
processes/threads, so ordering is guaranteed.
BR Monk
-Original Message-
From: Din
Good point as well. How about the attached version?
This time we keep an extra reference in amd_sched_process_job() until we
are sure that we don't need the s_fence any more.
Regards,
Christian.
On 13.10.2017 at 11:13, Liu, Monk wrote:
your patch looks good, do you think we should also do
Sounds logical to me as well.
Please also add a document describing what the CP does here.
BTW: Is that limited to the GFX pipeline, or do the compute pipes do the same?
Regards,
Christian.
On 13.10.2017 at 11:36, Ding, Pixel wrote:
Got it.
—
Sincerely Yours,
Pixel
On 13/10/2017, 5:28 PM
On 13.10.2017 at 11:35, Ding, Pixel wrote:
On 13/10/2017, 5:16 PM, "Christian König"
wrote:
On 13.10.2017 at 10:26, Pixel Ding wrote:
From: pding
Register access is performed while IRQs are disabled. Never sleep in
this function.
Known issue: dead sleep in many use cases of index/data
Yeah, this one looks good
You can put my reviewed-by on it
From: Koenig, Christian
Sent: 2017-10-13 18:14
To: Liu, Monk ; Nicolai Hähnle ;
amd-gfx@lists.freedesktop.org
Subject: Re: regression with d6c650c0a8f6f671e49553725e1db541376d95f2
Good point as well. How about the attached version?
Th
At the very least I don't get the hangs like I used to before the
patchset so this patch isn't regressing any of that behaviour.
Tom
On 13/10/17 07:06 AM, Liu, Monk wrote:
Yeah, this one looks good
You can put my reviewed-by on it
*From:*Koenig, Christian
*Sent:* 2017-10-13 18:14
*To:* Liu,
Only the gfx engine has such a preemption fence mechanism in the CP
-Original Message-
From: Koenig, Christian
Sent: 2017-10-13 18:15
To: Ding, Pixel ; Liu, Monk ;
amd-gfx@lists.freedesktop.org
Cc: Li, Bingley
Subject: Re: [PATCH 3/4] drm/amdgpu: report preemption fence via
amdgpu_fence_in
Because you didn't try the GPU reset feature
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Tom
St Denis
Sent: 2017-10-13 19:11
To: amd-gfx@lists.freedesktop.org
Subject: Re: regression with d6c650c0a8f6f671e49553725e1db541376d95f2
At the very l
Ping Christian & Nicolai
This ctx_query() is a little annoying to me
BR Monk
-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf Of Liu,
Monk
Sent: 2017-10-13 17:19
To: Haehnle, Nicolai ; Koenig, Christian
; Michel Dänzer ; Olsak, Marek
; Deuche
I think the best approach is to keep it as it is right now and don't
change a thing.
And we add a new IOCTL with a bit more sane return values. E.g. guilty
status and VRAM lost status as flags.
Regards,
Christian.
On 13.10.2017 at 13:51, Liu, Monk wrote:
Ping Christian & Nicolai
This ctx_
Michel, gentle ping to you.
With that patch applied piglit seems to be stable on my Tonga (with a
bit older Mesa).
Christian.
On 12.10.2017 at 19:30, Christian König wrote:
From: Christian König
We don't use compound pages at the moment. Take this into account when
freeing them.
Signed-o
That's what I suggested, good to know it's agreed
BR Monk
-Original Message-
From: Koenig, Christian
Sent: 2017-10-13 20:01
To: Liu, Monk ; Haehnle, Nicolai ;
Michel Dänzer ; Olsak, Marek ;
Deucher, Alexander ; Zhou, David(ChunMing)
; Mao, David
Cc: Ramirez, Alejandro ;
amd-gfx@li
Hi,
This commit breaks suspend/resume on my Carrizo A12-9800 system.
[root@carrizo linux2]# git bisect bad
7ae4acd21e9e264afb079e23d43bcf2238c7dbea is the first bad commit
commit 7ae4acd21e9e264afb079e23d43bcf2238c7dbea
Author: Leo (Sunpeng) Li
Date: Thu Sep 7 17:05:38 2017 -0400
drm/amd
For what it's worth, this commit also breaks resume on my Tonga-only
system, so it's not specific to Carrizo.
On 13/10/17 08:16 AM, Tom St Denis wrote:
Hi,
This commit breaks suspend/resume on my Carrizo A12-9800 system.
[root@carrizo linux2]# git bisect bad
7ae4acd21e9e264afb079e23d43bcf2
On 12/10/17 07:30 PM, Christian König wrote:
> From: Christian König
>
> We don't use compound pages at the moment. Take this into account when
> freeing them.
>
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/ttm/ttm_page_alloc.c | 21 -
> 1 file changed, 16 inser
On 12/10/17 07:11 PM, Christian König wrote:
> On 12.10.2017 at 18:49, Michel Dänzer wrote:
>> On 12/10/17 01:00 PM, Michel Dänzer wrote:
>>> [0] I also got this, but I don't know yet if it's related:
>> No, that seems to be a separate issue; I can still reproduce it with the
>> huge page related
Done with following coccinelle patch
@r@
expression x;
void* e;
type T;
identifier f;
@@
(
*((T *)e)
|
((T *)x)[...]
|
((T*)x)->f
|
- (T*)
e
)
Signed-off-by: Harsha Sharma
---
drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c| 6 +++---
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
Hi Tom,
This is a known issue, and we're currently tracking it on ticket
SWDEV-135329. It's reported using Vega10, but we reproduced it on
Carrizo as well.
Thanks,
Leo
On 2017-10-13 09:30 AM, Tom St Denis wrote:
For what it's worth this commit also breaks resume on my Tonga only
system so it's
On 13/10/17 10:53 AM, Leo wrote:
Hi Tom,
This is a known issue, and we're currently tracking it on ticket
SWDEV-135329. It's reported using Vega10, but we reproduced it on
Carrizo as well.
Thanks. Given this regression is in a new set of display patches I
didn't think it was in the amd-gfx l
From: Shirish S
For SoCs that have a software-designed cursor plane,
it should be treated differently than hardware cursor planes.
The DRM core initializes the cursor plane by default with
legacy_cursor_update set.
Hence legacy_cursor_update can be used effectively
to handle software cursor planes' update
On 10/12/2017 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
List of affected functions:
amdgpu_dm_find_first_crtc_matching_connector: use for_each_new
- Old from_state_var flag was always choosing the new
On 13.10.2017 at 16:34, Michel Dänzer wrote:
On 12/10/17 07:11 PM, Christian König wrote:
On 12.10.2017 at 18:49, Michel Dänzer wrote:
On 12/10/17 01:00 PM, Michel Dänzer wrote:
[0] I also got this, but I don't know yet if it's related:
No, that seems to be a separate issue; I can still rep
From: Christian König
Otherwise somebody could try to evict it at the same time and try to use
halve torn down structures.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13 +++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/dr
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Christian König
> Sent: Friday, October 13, 2017 11:26 AM
> To: amd-gfx@lists.freedesktop.org
> Subject: [PATCH] drm/amdgpu: reserve root PD while releasing it
>
> From: Christian König
>
>
On 2017-10-13 11:03 AM, Andrey Grodzovsky wrote:
On 10/12/2017 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
List of affected functions:
amdgpu_dm_find_first_crtc_matching_connector: use for_each_new
-
On 10/13/2017 11:41 AM, Leo wrote:
On 2017-10-13 11:03 AM, Andrey Grodzovsky wrote:
On 10/12/2017 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
List of affected functions:
amdgpu_dm_find_first_crtc_matchi
On 13/10/17 05:26 PM, Christian König wrote:
> From: Christian König
>
> Otherwise somebody could try to evict it at the same time and try to use
> halve torn down structures.
Typo: "half torn down"
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13
On Fri, 13 Oct 2017, Harsha Sharma wrote:
> Done with following coccinelle patch
>
> @r@
> expression x;
> void* e;
> type T;
> identifier f;
> @@
> (
> *((T *)e)
> |
> ((T *)x)[...]
> |
> ((T*)x)->f
> |
>
> - (T*)
> e
> )
>
> Signed-off-by: Harsha Sharma
> ---
> drivers/gpu/drm/amd/po
On 2017-10-12 05:15 PM, sunpeng...@amd.com wrote:
> From: "Leo (Sunpeng) Li"
>
> To conform to DRM's new API, we should not be accessing a DRM object's
> internal state directly. Rather, the DRM for_each_old/new_* iterators,
> and drm_atomic_get_old/new_* interface should be used.
>
> This is an
On 2017-10-13 04:26 AM, Michel Dänzer wrote:
> On 13/10/17 10:24 AM, Christian König wrote:
>> On 13.10.2017 at 10:08, Michel Dänzer wrote:
>>> On 12/10/17 07:54 PM, Harry Wentland wrote:
We're overflowing the last bit. Cast it explicitly
Signed-off-by: Harry Wentland
---
On Thu, Oct 12, 2017 at 5:15 PM, wrote:
> From: "Leo (Sunpeng) Li"
>
> undersacn -> underscan
>
> Signed-off-by: Leo (Sunpeng) Li
Reviewed-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/
On Thu, Oct 12, 2017 at 5:15 PM, wrote:
> From: "Leo (Sunpeng) Li"
>
> in amdgpu_dm_atomic_commit_tail. Just use crtc instead.
>
> Signed-off-by: Leo (Sunpeng) Li
Reviewed-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 +++---
> 1 file changed, 3 insertions(+
Patches 3-6 are
Reviewed-by: Harry Wentland
Harry
On 2017-10-12 05:15 PM, sunpeng...@amd.com wrote:
> From: "Leo (Sunpeng) Li"
>
> Hi Dave,
>
> This series reworks the previous patch. Patch 1 is a v2 of the previous,
> and additional patches are from the feedback received. They apply on top
>
Acquire_first_split_pipe only makes sense for DCN.
Signed-off-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
inde
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Harry Wentland
> Sent: Friday, October 13, 2017 12:44 PM
> To: amd-gfx@lists.freedesktop.org; die...@nuetzel-hh.de; Deucher,
> Alexander
> Cc: Wentland, Harry
> Subject: [PATCH] drm/amd/displa
On 2017-10-13 11:56 AM, Andrey Grodzovsky wrote:
On 10/13/2017 11:41 AM, Leo wrote:
On 2017-10-13 11:03 AM, Andrey Grodzovsky wrote:
On 10/12/2017 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
List of a
On 10/13/2017 12:18 PM, Harry Wentland wrote:
On 2017-10-12 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
To conform to DRM's new API, we should not be accessing a DRM object's
internal state directly. Rather, the DRM for_each_old/new_* iterators,
and drm_atomic_get_old/new_*
On 12-10-17 at 23:15, sunpeng...@amd.com wrote:
> From: "Leo (Sunpeng) Li"
>
> Hi Dave,
>
> This series reworks the previous patch. Patch 1 is a v2 of the previous,
> and additional patches are from the feedback received. They apply on top
> of your drm-next-amd-dc-staging branch.
>
> Thanks,
> L
On 10/13/2017 12:35 PM, Leo wrote:
On 2017-10-13 11:56 AM, Andrey Grodzovsky wrote:
On 10/13/2017 11:41 AM, Leo wrote:
On 2017-10-13 11:03 AM, Andrey Grodzovsky wrote:
On 10/12/2017 05:15 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iter
> @@ -3400,7 +3400,7 @@ static int smu7_read_sensor(struct pp_hwmgr *hwmgr, int
> idx,
> static int smu7_find_dpm_states_clocks_in_dpm_table(struct pp_hwmgr *hwmgr,
> const void *input)
> {
> const struct phm_set_power_state_input *states =
> - (const struct phm_set_po
Done with following coccinelle patch
@r@
expression x;
void* e;
type T;
identifier f;
@@
(
*((T *)e)
|
((T *)x)[...]
|
((T*)x)->f
|
- (T*)
e
)
Signed-off-by: Harsha Sharma
---
Changes in v2:
-Remove unnecessary parentheses
-Remove one more useless cast
drivers/gpu/drm/amd/powerplay
On 2017-10-13 01:26 PM, Andrey Grodzovsky wrote:
>
>
> On 10/13/2017 12:18 PM, Harry Wentland wrote:
>> On 2017-10-12 05:15 PM, sunpeng...@amd.com wrote:
>>> From: "Leo (Sunpeng) Li"
>>>
>>> To conform to DRM's new API, we should not be accessing a DRM object's
>>> internal state directly. Rathe
Don't leak implementation details about how each priority behaves to
usermode. This allows greater flexibility in the future.
Squash into c2636dc53abd8269a0930bccd564f2f195dba729
Signed-off-by: Andres Rodriguez
---
Hey Alex,
From some of the IRC discussions, I thought this would be
appropriate.
v2: convert value to bool using !!
Signed-off-by: Harry Wentland
---
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
b/drivers/gpu/drm/amd/display/dc/bios/bios_pa
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Harry Wentland
> Sent: Friday, October 13, 2017 3:03 PM
> To: amd-gfx@lists.freedesktop.org; mic...@daenzer.net; Daenzer, Michel;
> Koenig, Christian
> Cc: Wentland, Harry
> Subject: [PATCH v2
On 2017-10-12 08:22 PM, Dieter Nützel wrote:
> Hello,
>
> next (regression) compilation error:
>
> drivers/gpu/drm/amd/amdgpu/../display/dc/core/dc_resource.c: In function
> ‘resource_map_pool_resources’:
> drivers/gpu/drm/amd/amdgpu/../display/dc/core/dc_resource.c:1688:14: error:
> implicit d
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
The following functions were considered:
amdgpu_dm_find_first_crtc_matching_connector: use for_each_new
- Old from_state_var flag was always choosing the new state
amdgpu_dm_display_resume: use for_
On 2017-10-13 03:29 PM, sunpeng...@amd.com wrote:
> From: "Leo (Sunpeng) Li"
>
> Use the correct for_each_new/old_* iterators instead of for_each_*
>
> The following functions were considered:
>
> amdgpu_dm_find_first_crtc_matching_connector: use for_each_new
> - Old from_state_var flag was
Hi Dave,
Updates for DC against your drm-next-amd-dc-staging branch.
- Fix for iterator changes
- Misc cleanups
The following changes since commit e7b8e99bed73e9c42f1c074ad6009cb59a79bd52:
amdgpu/dc: fixup for new apis - probably wrong (2017-10-09 11:22:07 +1000)
are available in the git repo
On 10/13/2017 03:29 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
The following functions were considered:
amdgpu_dm_find_first_crtc_matching_connector: use for_each_new
- Old from_state_var flag was always cho
On 2017-10-13 04:36 PM, Andrey Grodzovsky wrote:
On 10/13/2017 03:29 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
The following functions were considered:
amdgpu_dm_find_first_crtc_matching_connector: use for_ea
On Sat, 14 Oct 2017, Harsha Sharma wrote:
> Done with following coccinelle patch
>
> @r@
> expression x;
> void* e;
> type T;
> identifier f;
> @@
> (
> *((T *)e)
> |
> ((T *)x)[...]
> |
> ((T*)x)->f
> |
>
> - (T*)
> e
> )
>
> Signed-off-by: Harsha Sharma
> ---
> Changes in v3:
> -Remo
Done with following coccinelle patch
@r@
expression x;
void* e;
type T;
identifier f;
@@
(
*((T *)e)
|
((T *)x)[...]
|
((T*)x)->f
|
- (T*)
e
)
Signed-off-by: Harsha Sharma
---
Changes in v3:
-Removed unnecessary lines
-Remove more useless casts
Changes in v2:
-Remove unnecessary pare
On 10/13/2017 05:01 PM, Leo wrote:
On 2017-10-13 04:36 PM, Andrey Grodzovsky wrote:
On 10/13/2017 03:29 PM, sunpeng...@amd.com wrote:
From: "Leo (Sunpeng) Li"
Use the correct for_each_new/old_* iterators instead of for_each_*
The following functions were considered:
amdgpu_dm_find_fir
On 13.10.2017 18:44, Harry Wentland wrote:
Acquire_first_split_pipe only makes sense for DCN.
Signed-off-by: Harry Wentland
Tested-by: Dieter Nützel
---
drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc
On 13.10.2017 14:04, Liu, Monk wrote:
That's what I suggested, good to know it's agreed
Yeah, that's fine with me as well.
Cheers,
Nicolai
BR Monk
-Original Message-
From: Koenig, Christian
Sent: 2017-10-13 20:01
To: Liu, Monk ; Haehnle, Nicolai ; Michel Dänzer
; Olsak, Marek ; D
On 13.10.2017 21:22, Harry Wentland wrote:
On 2017-10-12 08:22 PM, Dieter Nützel wrote:
Hello,
next (regression) compilation error:
drivers/gpu/drm/amd/amdgpu/../display/dc/core/dc_resource.c: In
function ‘resource_map_pool_resources’:
drivers/gpu/drm/amd/amdgpu/../display/dc/core/dc_resour