On Sat, Aug 16, 2014 at 09:01:27AM +0200, Thomas Hellstrom wrote:
> On 08/15/2014 04:52 PM, Jerome Glisse wrote:
> > On Fri, Aug 15, 2014 at 08:54:38AM +0200, Thomas Hellstrom wrote:
> >> On 08/14/2014 09:15 PM, Jerome Glisse wrote:
On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
On 08/15/2014 04:52 PM, Jerome Glisse wrote:
> On Fri, Aug 15, 2014 at 08:54:38AM +0200, Thomas Hellstrom wrote:
>> On 08/14/2014 09:15 PM, Jerome Glisse wrote:
>>> On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse
wrote:
>
On Fri, Aug 15, 2014 at 10:07:39AM +0200, Daniel Vetter wrote:
> On Thu, Aug 14, 2014 at 07:03:44PM -0400, Jerome Glisse wrote:
> > On Thu, Aug 14, 2014 at 11:23:01PM +0200, Daniel Vetter wrote:
> > > On Thu, Aug 14, 2014 at 9:15 PM, Jerome Glisse
> > > wrote:
> > > > Cost 1 uint32 per buffer and
On Fri, Aug 15, 2014 at 08:54:38AM +0200, Thomas Hellstrom wrote:
> On 08/14/2014 09:15 PM, Jerome Glisse wrote:
> > On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
> >> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse
> >> wrote:
> >>> Sucks because you can not do weird synchronizat
On Thu, Aug 14, 2014 at 07:03:44PM -0400, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 11:23:01PM +0200, Daniel Vetter wrote:
> > On Thu, Aug 14, 2014 at 9:15 PM, Jerome Glisse
> > wrote:
> > > Costs 1 uint32 per buffer and a simple if, without locking, to check the
> > > status of a buffer.
> >
> > Ye
On 08/14/2014 09:15 PM, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
>> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse wrote:
>>> Sucks because you cannot do weird synchronization like the one I depicted in
>>> another mail in this thread, and for as long
On Thu, Aug 14, 2014 at 9:15 PM, Jerome Glisse wrote:
> Yes, preemption and gpu scheduling would break such a scheme, but my point is
> that when you have such a gpu you want to implement a proper solution, which
> of course requires quite some work across the stack. So the past can live
> on but the f
On Thu, Aug 14, 2014 at 9:15 PM, Jerome Glisse wrote:
> Costs 1 uint32 per buffer and a simple if, without locking, to check the
> status of a buffer.
Yeah, well, except it doesn't, and that's why we switched to full-blown
fence objects internally instead of smacking seqno values all over the
place. At least i
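The lockless check Jerome describes can be sketched in C. This is a toy model with invented names, not any driver's actual code; the one subtlety it shows is that a 32-bit seqno wraps around, so the comparison has to be done as a signed difference rather than a plain `>=`:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-ring status page: the hardware writes the seqno of the
 * last completed command here; the CPU only ever reads it. */
struct ring_status {
    uint32_t last_signaled;   /* written by the GPU, read by the CPU */
};

/* Each buffer remembers the seqno of the last command that touched it. */
struct buffer {
    uint32_t fence_seqno;
};

/* The "1 uint32 per buffer and a simple if" check: no locking, just a
 * wraparound-safe signed-difference compare against the ring's seqno. */
static bool buffer_is_idle(const struct buffer *bo,
                           const struct ring_status *ring)
{
    return (int32_t)(ring->last_signaled - bo->fence_seqno) >= 0;
}
```

The signed cast keeps the check correct across a seqno wrap as long as no buffer's fence lags the ring by more than 2^31 commands, which is the usual assumption behind this trick.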
On Thu, Aug 14, 2014 at 9:56 PM, Jerome Glisse wrote:
> Android fences are not, in my mind, a nice thing :)
Well, I'll have a very close look at the proposed ioctl interface on
top of these fence fds to make sure it's sane. But it will come, that's
pretty much for sure. And similar fence integration w
On 14-08-14 21:15, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
>> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse wrote:
>>> Sucks because you cannot do weird synchronization like the one I depicted in
>>> another mail in this thread, and for as long as
On 14-08-14 20:26, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 05:58:48PM +0200, Daniel Vetter wrote:
>> On Thu, Aug 14, 2014 at 10:12:06AM -0400, Jerome Glisse wrote:
>>> On Thu, Aug 14, 2014 at 09:16:02AM -0400, Rob Clark wrote:
On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse
wrote:
On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse wrote:
> Sucks because you cannot do weird synchronization like the one I depicted in
> another mail in this thread, and for as long as cmdbuf_ioctl does not give
> you a fence|syncpt you cannot do such a thing cleanly in a non-hackish way.
Actually i915 can
On Thu, Aug 14, 2014 at 11:23:01PM +0200, Daniel Vetter wrote:
> On Thu, Aug 14, 2014 at 9:15 PM, Jerome Glisse wrote:
> > Costs 1 uint32 per buffer and a simple if, without locking, to check the
> > status of a buffer.
>
> Yeah, well, except it doesn't, and that's why we switched to full-blown
> fence objects
On Thu, Aug 14, 2014 at 10:12 AM, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 09:16:02AM -0400, Rob Clark wrote:
>> On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse wrote:
>> > So this is fundamentally different: fences as they are now allow random
>> > driver callbacks, and this is bound to g
On Thu, Aug 14, 2014 at 10:12:06AM -0400, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 09:16:02AM -0400, Rob Clark wrote:
> > On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse
> > wrote:
> > > So this is fundamentally different: fences as they are now allow random
> > > driver callbacks, and th
On Thu, Aug 14, 2014 at 10:23:30AM -0400, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 11:08:34AM +0200, Daniel Vetter wrote:
> > On Wed, Aug 13, 2014 at 01:07:20PM -0400, Jerome Glisse wrote:
> > > Let me make this crystal clear: this must be a valid kernel page that has
> > > a valid ker
On 14.08.2014 at 14:37, Maarten Lankhorst wrote:
> On 14-08-14 at 13:53, Christian König wrote:
>>> But because of driver differences I can't implement it as a straight wait
>>> queue. Some drivers may not have a reliable interrupt, so they need a
>>> custom wait function. (qxl)
>>> Some may ne
On Thu, Aug 14, 2014 at 09:40:08PM +0200, Maarten Lankhorst wrote:
>
>
> On 14-08-14 21:15, Jerome Glisse wrote:
> > On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
> >> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse
> >> wrote:
> >>> Sucks because you can not do weird synchroniz
On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse wrote:
> > Sucks because you can not do weird synchronization like one i depicted in
> > another
> > mail in this thread and for as long as cmdbuf_ioctl do not give you
> > fence|syncp
On 14-08-14 at 13:53, Christian König wrote:
>> But because of driver differences I can't implement it as a straight wait
>> queue. Some drivers may not have a reliable interrupt, so they need a custom
>> wait function. (qxl)
>> Some may need to do extra flushing to get fences signaled (vmwgfx),
On Thu, Aug 14, 2014 at 05:58:48PM +0200, Daniel Vetter wrote:
> On Thu, Aug 14, 2014 at 10:12:06AM -0400, Jerome Glisse wrote:
> > On Thu, Aug 14, 2014 at 09:16:02AM -0400, Rob Clark wrote:
> > > On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse
> > > wrote:
> > > > So this is fundamentaly differen
On Thu, Aug 14, 2014 at 05:55:51PM +0200, Daniel Vetter wrote:
> On Thu, Aug 14, 2014 at 10:23:30AM -0400, Jerome Glisse wrote:
> > On Thu, Aug 14, 2014 at 11:08:34AM +0200, Daniel Vetter wrote:
> > > On Wed, Aug 13, 2014 at 01:07:20PM -0400, Jerome Glisse wrote:
> > > > Let me make this crystal cl
> But because of driver differences I can't implement it as a straight wait
> queue. Some drivers may not have a reliable interrupt, so they need a custom
> wait function (qxl).
> Some may need to do extra flushing to get fences signaled (vmwgfx), others
> need some locking to protect against gp
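The driver differences Maarten lists are what push the design toward a table of per-driver callbacks rather than one shared wait queue. A minimal sketch of that shape, with invented names (this is not the actual fence API under review):

```c
#include <stdbool.h>
#include <stddef.h>

struct fence;

/* Per-driver callback table; NULL entries mean the generic path suffices. */
struct fence_ops {
    /* For drivers without a reliable interrupt (the qxl case): a custom
     * wait, e.g. one that polls the hardware. */
    bool (*custom_wait)(struct fence *f);
    /* For drivers that must flush before fences signal (the vmwgfx case). */
    void (*flush)(struct fence *f);
};

struct fence {
    const struct fence_ops *ops;
    bool signaled;
};

/* Generic wait path: flush if the driver asks for it, then either defer to
 * the driver's custom wait or fall back to the generic check. */
static bool fence_wait(struct fence *f)
{
    if (f->ops && f->ops->flush)
        f->ops->flush(f);
    if (f->ops && f->ops->custom_wait)
        return f->ops->custom_wait(f);
    return f->signaled;   /* stand-in for sleeping on a wait queue */
}

/* A driver in the vmwgfx mold: its fences only signal once flushed. */
static void flush_then_signal(struct fence *f) { f->signaled = true; }
static const struct fence_ops flushing_driver_ops = {
    .custom_wait = NULL,
    .flush       = flush_then_signal,
};
```

This is exactly the kind of "random driver callback" shape Jerome objects to elsewhere in the thread: the generic wait path ends up executing driver code whose locking and context requirements the core cannot see.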
On 13-08-14 at 19:07, Jerome Glisse wrote:
> On Wed, Aug 13, 2014 at 05:54:20PM +0200, Daniel Vetter wrote:
>> On Wed, Aug 13, 2014 at 09:36:04AM -0400, Jerome Glisse wrote:
>>> On Wed, Aug 13, 2014 at 10:28:22AM +0200, Daniel Vetter wrote:
On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Gli
On Wed, Aug 13, 2014 at 01:07:20PM -0400, Jerome Glisse wrote:
> Let me make this crystal clear: this must be a valid kernel page that has a
> valid kernel mapping for the lifetime of the device. Hence there is no access
> to mmio space or anything, just a regular kernel page. If can not rely on th
On Thu, Aug 14, 2014 at 11:08:34AM +0200, Daniel Vetter wrote:
> On Wed, Aug 13, 2014 at 01:07:20PM -0400, Jerome Glisse wrote:
> > Let me make this crystal clear: this must be a valid kernel page that has a
> > valid kernel mapping for the lifetime of the device. Hence there is no
> > access
> >
On Thu, Aug 14, 2014 at 09:16:02AM -0400, Rob Clark wrote:
> On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse wrote:
> > So this is fundamentally different: fences as they are now allow random driver
> > callbacks, and this is bound to get ugly; this is bound to lead to one driver
> > doing something tha
On Thu, Aug 14, 2014 at 11:15:11AM +0200, Maarten Lankhorst wrote:
> On 13-08-14 at 19:07, Jerome Glisse wrote:
> > On Wed, Aug 13, 2014 at 05:54:20PM +0200, Daniel Vetter wrote:
> >> On Wed, Aug 13, 2014 at 09:36:04AM -0400, Jerome Glisse wrote:
> >>> On Wed, Aug 13, 2014 at 10:28:22AM +0200, Dan
On Wed, Aug 13, 2014 at 1:07 PM, Jerome Glisse wrote:
> So this is fundamentally different: fences as they are now allow random driver
> callbacks, and this is bound to get ugly; this is bound to lead to one driver
> doing something that seems innocuous but turns out to wreak havoc when called
> from som
On Wed, Aug 13, 2014 at 09:36:04AM -0400, Jerome Glisse wrote:
> On Wed, Aug 13, 2014 at 10:28:22AM +0200, Daniel Vetter wrote:
> > On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Glisse wrote:
> > > Hi,
> > >
> > > So I went over the whole fence and sync point stuff as it's becoming a
> > > pre
> The whole issue is that today the cs ioctl assumes implied synchronization, so
> this cannot change. For now, anything that goes through the cs ioctl would need
> to use an implied timeline and have all rings that use a common buffer
> synchronize on it. As long as those rings use different buffers there is
On Wed, Aug 13, 2014 at 05:54:20PM +0200, Daniel Vetter wrote:
> On Wed, Aug 13, 2014 at 09:36:04AM -0400, Jerome Glisse wrote:
> > On Wed, Aug 13, 2014 at 10:28:22AM +0200, Daniel Vetter wrote:
> > > On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Glisse wrote:
> > > > Hi,
> > > >
> > > > So i w
On Wed, Aug 13, 2014 at 04:08:14PM +0200, Christian König wrote:
> >The whole issue is that today the cs ioctl assumes implied synchronization. So
> >this cannot change, so for now anything that goes through the cs ioctl would
> >need to use an implied timeline and have all rings that use a common buffer
On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Glisse wrote:
> Hi,
>
> So I went over the whole fence and sync point stuff as it's becoming a
> pressing issue. I think we first need to agree on what problem we want to
> solve and what the requirements to solve it would be.
>
> Problem
Hi Jerome,
first of all, that finally sounds like somebody is starting to draw the whole
picture for me.
So far all I have seen was a bunch of specialized requirements and some
not-so-obvious design decisions based on those requirements.
So thanks a lot for finally summarizing the requirements from
On Wed, Aug 13, 2014 at 09:59:26AM +0200, Christian König wrote:
> Hi Jerome,
>
> first of all, that finally sounds like somebody is starting to draw the whole
> picture for me.
>
> So far all I have seen was a bunch of specialized requirements and some
> not-so-obvious design decisions based on those
On Wed, Aug 13, 2014 at 10:28:22AM +0200, Daniel Vetter wrote:
> On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Glisse wrote:
> > Hi,
> >
> > So I went over the whole fence and sync point stuff as it's becoming a
> > pressing issue. I think we first need to agree on what is the problem we w
On Tue, Aug 12, 2014 at 06:13:41PM -0400, Jerome Glisse wrote:
> Hi,
>
> So I went over the whole fence and sync point stuff as it's becoming a
> pressing issue. I think we first need to agree on what problem we want to
> solve and what the requirements to solve it would be.
>
> Problem
Hi,
So I went over the whole fence and sync point stuff as it's becoming a pressing
issue. I think we first need to agree on what problem we want to solve and what
the requirements to solve it would be.
Problem:
Explicit synchronization between different hardware blocks over a buffer object.
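The problem statement that opens the thread can be made concrete with a toy model: one engine writes a buffer and hands back a fence, and a second engine must explicitly wait on that fence before reading. Every name here is invented for illustration; it is the shape of the problem, not anyone's proposed solution:

```c
#include <stdbool.h>
#include <stddef.h>

struct fence { bool signaled; };

struct buffer {
    struct fence *last_write;   /* fence of the most recent writer, if any */
};

/* Engine A: queue a write and hand the fence back to the caller, who is now
 * responsible for passing it to whoever consumes the buffer next. */
static struct fence *engine_a_write(struct buffer *bo, struct fence *f)
{
    f->signaled = false;
    bo->last_write = f;
    return f;
}

/* Engine B: explicit synchronization means the read is only allowed once
 * the producer's fence, handed over explicitly, has signaled. */
static bool engine_b_can_read(const struct fence *wait_on)
{
    return wait_on == NULL || wait_on->signaled;
}
```

The whole thread is about what that `struct fence` should be: a bare seqno, a full fence object with driver callbacks, or an fd-based primitive in the Android mold.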