On 2020-02-29 8:46 p.m., Nicolas Dufresne wrote:
> On Saturday, 29 February 2020 at 19:14 +0100, Timur Kristóf wrote:
>>
>> 1. I think we should completely disable running the CI on MRs which are
>> marked WIP. Speaking from personal experience, I usually make a lot of
>> changes to my MRs before the
One idea for Marge-bot (don't know if you already do this):
Rust-lang has their bot (bors) automatically group together a few merge
requests into a single merge commit, which it then tests; then, when the
tests pass, it merges. This could help reduce CI runs to once a day (or
some other rate). If t
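For illustration only, here is a toy, self-contained sketch of that batching
idea; every name in it is made up and has nothing to do with Marge-bot's or
bors' real implementation -- it only shows how a single CI run can cover a
whole batch of MRs:

#include <stdbool.h>
#include <stdio.h>

struct merge_request { int id; };

/* Stand-in for a full CI run over a candidate branch containing the batch. */
static bool ci_passes(const struct merge_request *batch, int n)
{
	(void)batch;
	return n > 0;	/* pretend the combined CI run passed */
}

int main(void)
{
	struct merge_request queue[] = { {101}, {102}, {103} };
	int n = (int)(sizeof(queue) / sizeof(queue[0]));

	/* One CI run covers the whole batch instead of one run per MR. */
	if (ci_passes(queue, n)) {
		for (int i = 0; i < n; i++)
			printf("merging !%d as part of one batch\n", queue[i].id);
	} else {
		/* A real bot would split the batch to find the culprit. */
		printf("batch failed, retesting MRs individually\n");
	}
	return 0;
}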
I don't think we need to worry so much about the cost of CI that we need to
micro-optimize to get the minimal number of CI runs. We especially
shouldn't if it begins to impact code quality, people's ability to merge
patches in a timely manner, or visibility into what went wrong when CI
fails.
The one suggestion I saw that definitely seemed worth looking at was adding
download caches if the larger CI systems didn't already have them.
Then again, do we know that CI traffic is generating the bulk of the costs? My
guess would have been
Hi Samir,
Looks like it is your first upstream patch.
The format of your description needs to change:
Modify:
[PATCH] drm/amdgpu: Rearm IRQ in Navi10 SR-IOV if IRQ lost
To:
drm/amdgpu: Rearm IRQ in Navi10 SR-IOV if IRQ lost
With that changed you can get my RB
(meaning you can put "Reviewed-by: Monk Liu
From: "Tianci.Yin"
[why]
CP firmware decides to skip setting the state for 3D pipe 1 on Navi1x as there
is no use case.
[how]
Disable 3D pipe 1 on Navi1x.
Change-Id: I6898bdfe31d4e7908bd9bcfa82b6a75e118e8727
Reviewed-by: Hawking Zhang
Signed-off-by: Tianci.Yin
---
drivers/gpu/drm/amd/amdgpu/
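Since the diff above is cut off, here is a hypothetical sketch of what limiting
Navi1x to 3D pipe 0 could look like in gfx_v10_0.c; GFX10_NUM_GFX_RINGS and
adev->gfx.num_gfx_rings do exist in the driver, but whether this is where and
how the actual patch makes the change is an assumption:

	switch (adev->asic_type) {
	case CHIP_NAVI10:
	case CHIP_NAVI14:
	case CHIP_NAVI12:
		/* CP firmware no longer initializes 3D pipe 1 state on Navi1x,
		 * so only expose gfx rings that map to 3D pipe 0. */
		adev->gfx.num_gfx_rings = 1;
		break;
	default:
		adev->gfx.num_gfx_rings = GFX10_NUM_GFX_RINGS;
		break;
	}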
On Sun, Mar 1, 2020 at 2:49 PM Nicolas Dufresne wrote:
>
> Hi Jason,
>
> I personally think the suggestions are still relatively good
> brainstorm data for those involved. Of course, for those not involved
> in the CI scripting itself, I'd say just keep in mind that nothing is
> black and white a
SPM accesses the video memory according to SPM_VMID. It should be updated
with the job's vmid right before the job is scheduled. SPM_VMID is a
global resource
Change-Id: Id3881908960398f87e7c95026a54ff83ff826700
Signed-off-by: Jacob He
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4
1 file ch
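For readers without the driver source handy, a minimal sketch of the idea; the
callback name update_spm_vmid is an assumption here, not necessarily the hook
the real patch adds:

/* Sketch only: because SPM_VMID is a single global register, it has to be
 * rewritten with the VMID of the job that is about to run, at the last point
 * before that job reaches the ring (the VM flush path), rather than at
 * command submission time. */
static void sketch_update_spm_vmid(struct amdgpu_device *adev,
				   struct amdgpu_job *job)
{
	/* hypothetical per-ASIC RLC callback that programs SPM_VMID */
	if (adev->gfx.rlc.funcs && adev->gfx.rlc.funcs->update_spm_vmid)
		adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
}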
Commit 16f17eda8bad ("drm/amd/display: Send vblank and user
events at vsartup for DCN") introduces a new way of pageflip
completion handling for DCN, and some trouble.
The current implementation introduces a race condition, which
can cause pageflip completion events to be sent out one vblank
too
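Purely to illustrate the class of race (this is not the DC/DM code or the
eventual fix, and every name below is made up): the usual way to avoid sending
a flip-done event one vblank too early is to complete the flip only once the
hardware is confirmed to be scanning out the new surface:

#include <stdbool.h>
#include <stdint.h>

struct flip_state {
	bool     flip_pending;    /* armed when the flip is programmed */
	uint64_t programmed_addr; /* surface address written to the plane regs */
};

static void vblank_irq_handler(struct flip_state *st, uint64_t scanout_addr,
			       void (*send_flip_done)(void))
{
	/* Only signal completion once the new buffer is really scanning out;
	 * otherwise wait for the next vblank. */
	if (st->flip_pending && scanout_addr == st->programmed_addr) {
		st->flip_pending = false;
		send_flip_done();
	}
}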
Some framework tests will fail if runpm is enabled on Vega10.
Disable it until the issue is fixed.
Signed-off-by: Feifei Xu
Tested-by: Kyle Chen
---
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
b/drivers/gpu/drm/amd/am
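Since the diff is cut off above, a sketch of the kind of one-line change this
describes, inside amdgpu_driver_load_kms(); the helper and field names
(amdgpu_device_supports_boco(), adev->runpm) are taken from the driver of that
era and should be treated as assumptions, not the verbatim patch:

	if (amdgpu_device_supports_boco(dev) &&
	    (amdgpu_runtime_pm != 0) &&
	    (adev->asic_type != CHIP_VEGA10))	/* keep runpm off on Vega10 for now */
		adev->runpm = true;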
With the new L1 policy, some regs are blocked at the guest and are
programmed at the host side, so skip programming those regs under SR-IOV.
The regs are:
GCMC_VM_FB_LOCATION_TOP
GCMC_VM_FB_LOCATION_BASE
MMMC_VM_FB_LOCATION_TOP
MMMC_VM_FB_LOCATION_BASE
GCMC_VM_SYSTEM_APERTURE_HIGH_ADDR
GCMC_VM_SYSTEM_APERTURE
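A sketch of the pattern (gfxhub_v2_0.c style); the exact call site and shift
values are assumptions rather than the verbatim patch:

static void sketch_program_fb_location(struct amdgpu_device *adev)
{
	/* Under the L1 security policy the host owns these registers,
	 * so an SR-IOV guest (VF) must not write them. */
	if (amdgpu_sriov_vf(adev))
		return;

	WREG32_SOC15(GC, 0, mmGCMC_VM_FB_LOCATION_BASE,
		     adev->gmc.vram_start >> 24);
	WREG32_SOC15(GC, 0, mmGCMC_VM_FB_LOCATION_TOP,
		     adev->gmc.vram_end >> 24);
}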