On Mon, Jun 23, 2014 at 3:17 AM, Maarten Lankhorst
<maarten.lankho...@canonical.com> wrote:
> On 21-06-14 14:12, Ilia Mirkin wrote:
>> On Tue, Jun 17, 2014 at 2:34 AM, Maarten Lankhorst
>> <maarten.lankho...@canonical.com> wrote:
>>> nv30 seems not to support dma objects with an offset, so simply extend
>>> the query_heap to cover the entire notifier, and use an offset in
>>> nv30_context_kick_notify.
>>
>> It would be great if you could detail the list of transformations that
>> were done in the commit description, as well as what the "new way" is
>> (if any) for the various concepts.
>
> I moved the pushbuf and fences to each context separately. The PUSH_KICK
> on context switch ensures that the previous context is flushed.
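If I'm following, the per-context arrangement is roughly the following --
just a sketch with placeholder struct/function names, not the actual
patch, assuming the usual PUSH_KICK() helper from the winsys headers:

struct nvxx_context {
   struct nouveau_pushbuf *pushbuf;   /* now one pushbuf per context */
   /* ... per-context fence state would live here too ... */
};

struct nvxx_screen {
   struct nvxx_context *cur_ctx;      /* context that last owned the channel */
};

static void
nvxx_switch_context(struct nvxx_screen *screen, struct nvxx_context *ctx)
{
   /* Flush whatever the previously-active context still has buffered
    * before the new context starts emitting on the channel. */
   if (screen->cur_ctx && screen->cur_ctx != ctx)
      PUSH_KICK(screen->cur_ctx->pushbuf);
   screen->cur_ctx = ctx;
}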
I meant in the commit log :)

>> This change doesn't have any of the locking -- is that coming in a
>> future change? Otherwise we're still vulnerable to multiple threads
>> trying to render at the same time. (Now with screen sharing, even if
>> they end up with separate screens, we'd still be in trouble.)
>
> I haven't done any locking changes, because I don't believe locking is
> the answer here. With each context having its own pushbuf we can ensure
> that things aren't flushed, so on flush it should assume all state is
> dirty. After this is done the PUSH_KICK of the old context on context
> switch can be removed, and suddenly the driver is thread-safe because
> only the pushbuf-to-kernel command submission could race. ;-)

OK. I'm concerned that PUSH_SPACE could let us down here. We'd have to
make sure enough space were available for the whole pushbuf, which, if an
inline vertex transfer is involved, could be a tricky proposition.

>> I'm still a bit concerned with moving the fence stuff to the
>> context... there might be an assumption in gallium that fences are
>> context-independent, in which case you need to make sure to have just
>> a single fence shared by everything...
>
> I don't think that this is the case, because it's very rare that gallium
> uses multiple contexts simultaneously. (Except vdpau interop, which does
> flush explicitly.)

Agreed that it's rare. vdpau interop is the main case; the minor case is
2 screens (which will now share one pipe_screen). This is the issue the
other user was having (with the fd closing thing).

>> Have you done a full piglit run on this (with the glx tests, for good
>> measure) on nv30/nv50/nvc0? If so, can you share the results files
>> somewhere?
>
> No, not yet. But I did confirm that glxgears and glxinfo didn't regress
> on my nv43, nv96 and nvc0. :-)

Well, it would be good to get those results, and make sure there are no
unexpected regressions. (*-struct-pad seems to flip between fail and pass
at random, I think based on which tests were run prior... some bit of
state we're missing somewhere. One or two others like that.)

  -ilia
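P.S. For concreteness, the PUSH_SPACE pattern I'm worried about looks
roughly like this -- a sketch only, with made-up function and parameter
names, assuming the PUSH_SPACE()/PUSH_DATAp() helpers from the winsys
headers:

static bool
emit_inline_vertices(struct nouveau_pushbuf *push,
                     const uint32_t *vtx, unsigned dwords)
{
   /* PUSH_SPACE() may kick the buffer to make room, so space for the
    * draw setup *and* the entire inline vertex payload has to be
    * reserved up front; otherwise the implicit flush can land in the
    * middle of the transfer, which is exactly where another context
    * could slip in once the explicit PUSH_KICK on context switch goes
    * away. The "+ 8" is arbitrary slack for the method headers. */
   if (!PUSH_SPACE(push, dwords + 8))
      return false;

   /* BEGIN/PUSH_DATA calls for the draw setup would go here. */
   PUSH_DATAp(push, vtx, dwords);   /* the inline vertex data itself */
   return true;
}

Knowing "dwords" up front is the tricky part when the vertex data is
generated on the fly.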