On 02.08.2016 14:57, Alex Deucher wrote:
On Tue, Aug 2, 2016 at 4:55 AM, Marek Olšák <mar...@gmail.com> wrote:
On Tue, Aug 2, 2016 at 3:13 AM, Michel Dänzer <mic...@daenzer.net> wrote:
On 01.08.2016 16:35, Michel Dänzer wrote:
On 30.07.2016 06:42, Marek Olšák wrote:
From: Marek Olšák <marek.ol...@amd.com>
This is controversial, but I don't see a better way out of this.
Tonga has 2 GB of VRAM and 2 GB of GTT. amdgpu is not capable of submitting
an IB referencing 1 GB of VRAM and 1 GB of GTT. The CS ioctl never succeeds
even though the submission is far below the limits.
Without this, "dEQP-GLES2.functional.color_clear.single_rgb" fails to
submit an IB. With this, dEQP throws a framebuffer-incomplete exception
and kills the process.
IMO, failing the CS ioctl is worse for stability than failing big
allocations.
I can agree with that, but this change can't reliably prevent CS ioctl
failures:
I believe the problem is mostly due to BOs which are pinned for scanout.
Since up to 6 CRTCs can scan out different buffers at any time, in the
worst case it may not be possible to place any BOs whose size is >= ~1/7
of the VRAM size.
At the end of the day, this needs to be solved in the kernel one way or
another.
Or if you do want to avoid the problem in userspace for now, maybe use 1
/ (number of CRTCs + 1) instead of hardcoding 1/3?
I don't know.
I could avoid the problem by splitting buffers into 128 MB blocks
mapped consecutively into the GPU VM, but that would prevent easy
DMABUF sharing, and CPU mappings would not be contiguous.
I think it could be fixed to a certain extent by handling migrations
to/from VRAM iteratively rather than trying to move the whole buffer
at once. E.g., we may have enough room in VRAM, but not enough
contiguous GTT aperture or system pages to map the whole buffer to do
the transfer in one shot.
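The iterative migration idea can be illustrated with a short sketch. This is an assumption about the approach, not the actual TTM/amdgpu code; `memcpy` stands in for the per-chunk DMA transfer, and the function name is illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of chunked migration: copy a buffer between
 * domains in fixed-size pieces, so only one chunk's worth of contiguous
 * GTT aperture (or mapped system pages) is needed at a time, instead of
 * enough to map the whole buffer in one shot. */
static void migrate_iteratively(void *dst, const void *src,
                                size_t size, size_t chunk_size)
{
    size_t offset = 0;
    while (offset < size) {
        size_t chunk = size - offset;
        if (chunk > chunk_size)
            chunk = chunk_size;
        /* In a real driver this would be a DMA copy staged through a
         * small GTT window; memcpy is a stand-in for that transfer. */
        memcpy((char *)dst + offset, (const char *)src + offset, chunk);
        offset += chunk;
    }
}
```

In the real driver the chunk size would be something like the 128 MB split size mentioned earlier in the thread, so the staging window stays small while the whole buffer still ends up migrated.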
In addition to that, VRAM paging should help a lot with this as well.
Anyway I have both VRAM paging and splitting moves on my TODO list.
Good to know that I can use
"dEQP-GLES2.functional.color_clear.single_rgb" as a test case.
Regards,
Christian.
Alex
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev