On Fri, Sep 2, 2016 at 12:56 PM, Christian König <deathsim...@vodafone.de> wrote:
> Am 02.09.2016 um 12:33 schrieb Marek Olšák:
>>
>> On Fri, Sep 2, 2016 at 9:38 AM, Christian König <deathsim...@vodafone.de> wrote:
>>>
>>> Am 01.09.2016 um 16:33 schrieb Deucher, Alexander:
>>>
>>>>         return r;
>>>> +
>>>> +       return -ENOMEM;
>>>> +}
>>>
>>> Unreachable code. Other than that looks good.
>>>
>>>
>>> Ups, indeed just a merge error.
>>>
>>> With that fixed: Reviewed-by: Alex Deucher <alexander.deuc...@amd.com>
>>>
>>> Thanks for the review, I've just pushed the resulting patches to
>>> amd-staging-4.6.
>>>
>>> Marek, you might want to try raising the limits now on how much VRAM can
>>> be used by a single command submission. If I remember correctly, that was
>>> chosen rather conservatively.
>>
>> I would beg to differ. Just to simplify the calculation, let's assume
>> there are no allocations in GTT (in practice there are very few, so
>> it's very close to reality). If you take the memory used by all
>> VRAM buffers in a CS, the limit for that is "VRAM size + (GTT size *
>> 0.7)".
>>
>> If you have 4GB VRAM and 4GB GTT, Mesa allows per-CS VRAM usage to be
>> 6.8 GB.
>>
>> If you have any GTT buffers there, you just subtract from that. For
>> example, if you have 100MB GTT usage, the VRAM limit is 6.7 GB.
>>
>> Does that sound conservative to you?
>
>
> Not at all :)
>
> But I clearly remember that you noted that you can't allocate more than 75%
> of VRAM in one CS without running into problems sometimes. Is that already
> fixed?
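Just to make the accounting from my last mail concrete, the per-CS check
boils down to roughly this (a simplified sketch with made-up names, not
the actual Mesa winsys code):

    /* Simplified sketch of the per-CS accounting above; the names are
     * made up, this is not the real Mesa winsys code. */
    #include <stdbool.h>
    #include <stdint.h>

    struct heaps {
        uint64_t vram_size; /* total VRAM, bytes */
        uint64_t gtt_size;  /* total GTT, bytes */
    };

    /* A CS is accepted as long as its VRAM buffers fit into
     * "VRAM + 70% of GTT", minus whatever the same CS uses in GTT. */
    static bool cs_vram_usage_ok(const struct heaps *h,
                                 uint64_t cs_vram, uint64_t cs_gtt)
    {
        uint64_t limit = h->vram_size + h->gtt_size * 7 / 10 - cs_gtt;
        return cs_vram <= limit;
    }

With 4GB VRAM and 4GB GTT the limit is 4 + 2.8 = 6.8 GB, and 100MB of GTT
usage drops it to 6.7 GB, exactly the numbers above.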
No, I can allocate all of VRAM (actually Mesa can't allocate VRAM
"physically"; only the CS ioctl can reserve it for its time window). The
problem I saw was the following: GEM_CREATE or the CS ioctl (I don't
remember which one) had a high chance of failing if a single buffer was
50% of the VRAM size. You can have thousands of tiny buffers that
together use close to 100% of VRAM, but you can't have one buffer whose
size is only 50%. After that discovery, we speculated that the number of
pinned buffers (one per CRTC) may have had something to do with it.

Marek
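P.S.: If anyone wants to reproduce the single-large-buffer failure, a
libdrm test along these lines should do it (written from memory and
untested, so treat the details as approximate):

    /* Untested sketch: try to allocate one VRAM buffer of ~50% of VRAM
     * via libdrm's amdgpu wrapper (GEM_CREATE underneath). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <amdgpu.h>
    #include <amdgpu_drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR); /* adjust the node */
        uint32_t major, minor;
        amdgpu_device_handle dev;
        struct drm_amdgpu_info_vram_gtt mem = {0};
        struct amdgpu_bo_alloc_request req = {0};
        amdgpu_bo_handle bo;
        int r;

        if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
            return 1;

        amdgpu_query_info(dev, AMDGPU_INFO_VRAM_GTT, sizeof(mem), &mem);

        req.alloc_size = mem.vram_size / 2; /* one buffer, 50% of VRAM */
        req.phys_alignment = 4096;
        req.preferred_heap = AMDGPU_GEM_DOMAIN_VRAM;

        r = amdgpu_bo_alloc(dev, &req, &bo);
        printf("allocating %llu bytes: %s (%d)\n",
               (unsigned long long)req.alloc_size,
               r ? "FAILED" : "ok", r);

        if (!r)
            amdgpu_bo_free(bo);
        amdgpu_device_deinitialize(dev);
        return r ? 1 : 0;
    }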