[PATCH 4/9] drm: pre allocate node for create_block

2013-07-05 Thread Ben Widawsky
For an upcoming patch where we introduce the i915 VMA, it's ideal to have the drm_mm_node as part of the VMA struct (i.e. it's pre-allocated). Part of the conversion to VMAs is to kill off obj->gtt_space. Doing this will break a bunch of code, but amongst them are 2 callers of drm_mm_create_block(),
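
A minimal sketch of the idea: the drm_mm_node is embedded in the higher-level VMA object so the caller pre-allocates it, rather than having the allocator hand one back. The struct and comments below are illustrative, not the actual i915 definitions from the series.

#include <drm/drm_mm.h>

/* Illustrative VMA-style wrapper; not the real struct i915_vma. */
struct example_vma {
	struct drm_mm_node node;	/* pre-allocated; drm_mm does no kmalloc of its own */
	/* ... backing object pointer, GTT view, etc. would live here ... */
};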

[PATCH 5/9] drm: Change create block to reserve node

2013-07-05 Thread Ben Widawsky
With the previous patch we no longer actually create a node, we simply find the correct hole and occupy it. This very well could have been squashed with the last patch, but since I already had David's review, I figured it's easiest to keep it distinct. Also update the users in i915. Conveniently t
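
Roughly, the reserve pattern looks like this (a sketch written against the current drm_mm API, not the exact code from the series): the caller fills in the placement of its pre-allocated node and asks drm_mm to occupy the matching hole.

#include <drm/drm_mm.h>

/* Sketch: reserve a known range for a node the caller already owns. */
static int example_reserve(struct drm_mm *mm, struct drm_mm_node *node,
			   u64 start, u64 size)
{
	node->start = start;
	node->size = size;
	return drm_mm_reserve_node(mm, node);	/* fails if the range is not a free hole */
}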

[PATCH 9/9] drm: Optionally create mm blocks from top-to-bottom

2013-07-05 Thread Ben Widawsky
From: Chris Wilson Clients like i915 need to segregate cache domains within the GTT which can lead to small amounts of fragmentation. By allocating the uncached buffers from the bottom and the cacheable buffers from the top, we can reduce the amount of wasted space and also optimize allocation o
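
In today's drm_mm API the same idea is expressed as an insertion mode; the sketch below uses the present-day names (drm_mm_insert_node_in_range, DRM_MM_INSERT_HIGH/LOW), which may differ from the flags this 2013 patch introduced.

#include <drm/drm_mm.h>

/* Sketch: uncached buffers fill from the bottom, cacheable ones from the top. */
static int example_place(struct drm_mm *mm, struct drm_mm_node *node,
			 u64 size, u64 start, u64 end, bool cacheable)
{
	return drm_mm_insert_node_in_range(mm, node, size, 0, 0, start, end,
					   cacheable ? DRM_MM_INSERT_HIGH
						     : DRM_MM_INSERT_LOW);
}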

[pull] radeon drm-next-3.11

2013-07-05 Thread alexdeuc...@gmail.com
From: Alex Deucher Hi Dave, A few more DPM patches and some bug fixes. Adds a sysfs interface to force dpm performance levels. The following changes since commit 338a95a95508537e23c82d59a2d87be6fde4b6ff: drm/radeon/sumo: implement support for disable_gfx_power_gating_in_uvd flag (2013-07-0

[Bug 60182] X.Org Server terminate when I close video player

2013-07-05 Thread bugzilla-dae...@freedesktop.org

[Bug 66632] New: Very low FPS when video memory is full (GART & ram <-> vram swapping)

2013-07-05 Thread bugzilla-dae...@freedesktop.org

Best practice device tree design for display subsystems/DRM

2013-07-05 Thread Sebastian Hesselbarth
On 07/05/13 11:51, Grant Likely wrote: > On Fri, Jul 5, 2013 at 10:34 AM, Sebastian Hesselbarth > wrote: >> So for the discussion, I can see that there have been some voting for >> super-node, some for node-to-node linking. Although I initially proposed >> super-nodes, I can also happily live with

exynos-drm-next todo work.

2013-07-05 Thread Mark Brown
HDMI support in mainline, whatever they're doing seems like a good place to start.

[PATCH 09/10] idr: Remove unneeded idr locking, idr_preload() usage

2013-07-05 Thread Kent Overstreet
From: Kent Overstreet Our new idr implementation does its own locking, instead of forcing it onto the callers like the old implementation. Many of the existing idr users need locking for more than just idr_alloc()/idr_remove()/idr_find() - they're taking refcounts and such under their locks and
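
The refcounting point is the crux: a sketch of why callers still need their own lock even if the idr locks itself internally. Everything except the idr/kref/spinlock API is an illustrative name, not code from the patch.

#include <linux/idr.h>
#include <linux/kref.h>
#include <linux/spinlock.h>

struct my_obj {
	struct kref ref;
	/* ... */
};

/* Lookup and refcount must happen atomically under the caller's lock,
 * otherwise the object could be freed between idr_find() and kref_get(). */
static struct my_obj *my_obj_lookup(struct idr *idr, spinlock_t *lock, int id)
{
	struct my_obj *obj;

	spin_lock(lock);
	obj = idr_find(idr, id);
	if (obj)
		kref_get(&obj->ref);
	spin_unlock(lock);
	return obj;
}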

[PATCH 10/10] idr: Rework idr_preload()

2013-07-05 Thread Kent Overstreet
The old idr_preload() used percpu buffers - since the bitmap/radix/whatever tree only grew by fixed sized nodes, it only had to ensure there was a node available in the percpu buffer and disable preemption. This conveniently meant that you didn't have to pass the idr you were going to allocate from
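
For reference, the mainline idr_preload() pattern being discussed looks roughly like this; the surrounding lock and function names are illustrative.

#include <linux/idr.h>
#include <linux/spinlock.h>

static DEFINE_IDR(my_idr);
static DEFINE_SPINLOCK(my_lock);

/* Preallocate with GFP_KERNEL outside the lock, then allocate the id
 * atomically under the caller's own lock. Note idr_preload() takes no idr
 * argument - the preallocation is per-cpu, not tied to a particular idr. */
static int my_obj_register(void *obj)
{
	int id;

	idr_preload(GFP_KERNEL);
	spin_lock(&my_lock);
	id = idr_alloc(&my_idr, obj, 0, 0, GFP_NOWAIT);
	spin_unlock(&my_lock);
	idr_preload_end();

	return id;	/* id on success, negative errno on failure */
}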
