On 17.10.23 09:32, Daniel Vetter wrote:
On Fri, Oct 13, 2023 at 12:22:52PM +0200, Michel Dänzer wrote:
On 10/13/23 11:41, Daniel Vetter wrote:
On Thu, Oct 12, 2023 at 02:19:41PM -0400, Ray Strode wrote:
On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
To be clear, my take
On Fri, Oct 13, 2023 at 10:04:02AM -0400, Ray Strode wrote:
> Hi
>
> On Fri, Oct 13, 2023 at 5:41 AM Daniel Vetter wrote:
> > > I mean we're not talking about scientific computing, or code
> > > compilation, or seti@home. We're talking about nearly the equivalent
> > > of `while (1) __asm__ ("nop");`
On Fri, Oct 13, 2023 at 12:22:52PM +0200, Michel Dänzer wrote:
> On 10/13/23 11:41, Daniel Vetter wrote:
> > On Thu, Oct 12, 2023 at 02:19:41PM -0400, Ray Strode wrote:
> >> On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
> >> To be clear, my take is, if driver code is running
Hi
On Fri, Oct 13, 2023 at 5:41 AM Daniel Vetter wrote:
> > I mean we're not talking about scientific computing, or code
> > compilation, or seti@home. We're talking about nearly the equivalent
> > of `while (1) __asm__ ("nop");`
>
> I don't think anyone said this shouldn't be fixed or improved.
On 10/13/23 11:41, Daniel Vetter wrote:
> On Thu, Oct 12, 2023 at 02:19:41PM -0400, Ray Strode wrote:
>> On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
>> To be clear, my take is, if driver code is running in process context
>> and needs to wait for periods of time on the
On Thu, Oct 12, 2023 at 02:19:41PM -0400, Ray Strode wrote:
> Hi,
>
> On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
> > > > > To be clear, my take is, if driver code is running in process context
> > > > > and needs to wait for periods of time on the order of or in excess of
> >
Hi,
On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
> > > > To be clear, my take is, if driver code is running in process context
> > > > and needs to wait for periods of time on the order of or in excess of
> > > > a typical process time slice it should be sleeping during the wait.
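
[The distinction Ray is drawing is the usual one between busy-waiting and
sleeping in process context. A minimal kernel-style sketch of the two
patterns; hw_done(), struct my_dev and the dev->hw_wq wait queue are
hypothetical names, not from any real driver:

  /* Busy wait: spins on the CPU; all of this time is charged to the
   * calling task as CPU time. */
  static void wait_badly(struct my_dev *dev)
  {
          while (!hw_done(dev))
                  cpu_relax();
  }

  /* Sleeping wait: the task is descheduled until an interrupt handler
   * calls wake_up(&dev->hw_wq); the wait itself consumes no CPU time. */
  static void wait_properly(struct my_dev *dev)
  {
          wait_event_timeout(dev->hw_wq, hw_done(dev),
                             msecs_to_jiffies(100));
  }
]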
On Mon, Oct 09, 2023 at 02:36:17PM +0200, Christian König wrote:
> On 09.10.23 14:19, Ville Syrjälä wrote:
> > On Mon, Oct 09, 2023 at 08:42:24AM +0200, Christian König wrote:
> > > On 06.10.23 20:48, Ray Strode wrote:
> > > > Hi,
> > > >
> > > > On Fri, Oct 6, 2023 at 3:12 AM Christian König wrote:
On Thu, Oct 05, 2023 at 01:16:27PM +0300, Ville Syrjälä wrote:
> On Thu, Oct 05, 2023 at 11:57:41AM +0200, Daniel Vetter wrote:
> > On Tue, Sep 26, 2023 at 01:05:49PM -0400, Ray Strode wrote:
> > > From: Ray Strode
> > >
> > > A drm atomic commit can be quite slow on some hardware. It can lead
>
On 09.10.23 14:19, Ville Syrjälä wrote:
On Mon, Oct 09, 2023 at 08:42:24AM +0200, Christian König wrote:
On 06.10.23 20:48, Ray Strode wrote:
Hi,
On Fri, Oct 6, 2023 at 3:12 AM Christian König wrote:
When the operation busy waits then that *should* get accounted to the
CPU time of the current process.
On Mon, Oct 09, 2023 at 08:42:24AM +0200, Christian König wrote:
> On 06.10.23 20:48, Ray Strode wrote:
> > Hi,
> >
> > On Fri, Oct 6, 2023 at 3:12 AM Christian König
> > wrote:
> >> When the operation busy waits then that *should* get accounted to the
> >> CPU time of the current process. When the operation sleeps and waits for
> >> some interrupt for example it should not get accounted.
On 06.10.23 20:48, Ray Strode wrote:
Hi,
On Fri, Oct 6, 2023 at 3:12 AM Christian König wrote:
When the operation busy waits then that *should* get accounted to the
CPU time of the current process. When the operation sleeps and waits for
some interrupt for example it should not get accounted.
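
[This accounting difference is observable from user space: CLOCK_MONOTONIC
covers the whole wait, while CLOCK_PROCESS_CPUTIME_ID only advances while
the process is actually running. A small illustrative sketch for checking
where the time of a blocking call goes; measure() and blocking_op are
hypothetical names:

  #include <stdio.h>
  #include <time.h>

  static double ts_delta(struct timespec a, struct timespec b)
  {
          return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
  }

  /* Call with a function that performs the blocking operation under test. */
  void measure(void (*blocking_op)(void))
  {
          struct timespec w0, w1, c0, c1;

          clock_gettime(CLOCK_MONOTONIC, &w0);
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c0);
          blocking_op();
          clock_gettime(CLOCK_MONOTONIC, &w1);
          clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c1);

          /* If the kernel busy-waits, CPU time tracks wall time;
           * if it sleeps, CPU time stays near zero. */
          printf("wall: %.3fs, cpu: %.3fs\n",
                 ts_delta(w0, w1), ts_delta(c0, c1));
  }
]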
On 10/6/23 20:48, Ray Strode wrote:
>
> Note, a point that I don't think has been brought up yet is that
> the system unbound workqueue doesn't run with real-time priority.
> Given that the lion's share of mutter's drmModeAtomicCommit calls are
> nonblock, and so are using the system unbound workqueue
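
[For reference, workers of system_unbound_wq run at normal scheduling
priority. If commit work were ever to run at elevated (though still not
real-time) priority, the kernel side would need its own queue; a purely
illustrative sketch of what that allocation would look like, nothing in
this thread actually posts such a patch:

  /* A dedicated high-priority unbound workqueue for commit work.
   * WQ_HIGHPRI lowers the workers' nice value; it is still not
   * real-time scheduling. */
  static struct workqueue_struct *commit_wq;

  static int commit_wq_init(void)
  {
          commit_wq = alloc_workqueue("drm-commit",
                                      WQ_UNBOUND | WQ_HIGHPRI, 0);
          return commit_wq ? 0 : -ENOMEM;
  }
]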
Hi,
On Fri, Oct 6, 2023 at 3:12 AM Christian König wrote:
> When the operation busy waits then that *should* get accounted to the
> CPU time of the current process. When the operation sleeps and waits for
> some interrupt for example it should not get accounted.
> What you suggest is to put the p
On 05.10.23 23:04, Ray Strode wrote:
Hi,
On Thu, Oct 5, 2023 at 5:57 AM Daniel Vetter wrote:
So imo the trouble with this is that we suddenly start to make
realtime/cpu usage guarantees in the atomic ioctl. That's a _huge_ uapi
change, because even limited to the case of !ALLOW_MODESET we do best
effort guarantees at best.
Hi,
On Thu, Oct 5, 2023 at 5:57 AM Daniel Vetter wrote:
> So imo the trouble with this is that we suddenly start to make
> realtime/cpu usage guarantees in the atomic ioctl. That's a _huge_ uapi
> change, because even limited to the case of !ALLOW_MODESET we do best
> effort guarantees at best.
On 05.10.23 11:57, Daniel Vetter wrote:
On Tue, Sep 26, 2023 at 01:05:49PM -0400, Ray Strode wrote:
From: Ray Strode
A drm atomic commit can be quite slow on some hardware. It can lead
to a lengthy queue of commands that need to get processed and waited
on before control can go back to user space.
On Thu, Oct 05, 2023 at 11:57:41AM +0200, Daniel Vetter wrote:
> On Tue, Sep 26, 2023 at 01:05:49PM -0400, Ray Strode wrote:
> > From: Ray Strode
> >
> > A drm atomic commit can be quite slow on some hardware. It can lead
> > to a lengthy queue of commands that need to get processed and waited
>
On Tue, Sep 26, 2023 at 01:05:49PM -0400, Ray Strode wrote:
> From: Ray Strode
>
> A drm atomic commit can be quite slow on some hardware. It can lead
> to a lengthy queue of commands that need to get processed and waited
> on before control can go back to user space.
>
> If user space is a real-time thread, that delay can have severe consequences.
Hi,
On Wed, Oct 4, 2023 at 1:28 PM Ville Syrjälä
wrote:
> No one really seemed all that interested in it. I'd still like to get
> it in, if for no other reason than to make things operate more uniformly.
> Though there are lots of legacy codepaths left that still hold the locks
> over the whole commit.
On Thu, Sep 28, 2023 at 03:33:46PM -0400, Ray Strode wrote:
> Hi,
>
> On Thu, Sep 28, 2023 at 11:05 AM Ville Syrjälä
> wrote:
> > Here's my earlier take on this:
> > https://patchwork.freedesktop.org/series/108668/
>
> Nice. Was there push back? Why didn't it go in?
No one really seemed all that interested in it.
Hi,
On Thu, Sep 28, 2023 at 11:05 AM Ville Syrjälä
wrote:
> Here's my earlier take on this:
> https://patchwork.freedesktop.org/series/108668/
Nice. Was there push back? Why didn't it go in?
> except I went further and moved the flush past the unlock in the end.
Is that necessary? I was wondering
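
["Moving the flush past the unlock" refers, roughly, to waiting for the
queued commit work only after the modeset locks have been dropped, so the
sleeping wait cannot stall other commits on those locks. Schematically,
as a paraphrase rather than Ville's actual series, with commit_work as a
hypothetical handle to the queued work:

  ret = drm_atomic_commit(state);   /* queues the commit work */
  drm_modeset_drop_locks(&ctx);     /* release modeset locks first */
  if (!ret)
          flush_work(&commit_work); /* sleep until the commit finishes */
]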
On 9/28/23 16:51, Christian König wrote:
> On 28.09.23 15:37, Michel Dänzer wrote:
>> On 9/28/23 14:59, Ray Strode wrote:
>>> On Thu, Sep 28, 2023 at 5:43 AM Michel Dänzer
>>> wrote:
>>> When it's really not desirable to account the CPU overhead to the
>>> process initiating it then you probably rather want to use a non-blocking
>>> commit plus a dma_fence to wait for the work to end from userspace.
On Tue, Sep 26, 2023 at 01:05:49PM -0400, Ray Strode wrote:
> From: Ray Strode
>
> A drm atomic commit can be quite slow on some hardware. It can lead
> to a lengthy queue of commands that need to get processed and waited
> on before control can go back to user space.
>
> If user space is a real-time thread, that delay can have severe consequences.
On 28.09.23 15:58, Michel Dänzer wrote:
On 9/28/23 15:23, Christian König wrote:
What you need to do here is to report those problems to the driver teams and
not try to hide them this way.
See the linked issue: https://gitlab.freedesktop.org/drm/amd/-/issues/2861
(BTW, the original reporter of that issue
On 28.09.23 15:37, Michel Dänzer wrote:
On 9/28/23 14:59, Ray Strode wrote:
On Thu, Sep 28, 2023 at 5:43 AM Michel Dänzer
wrote:
When it's really not desirable to account the CPU overhead to the
process initiating it then you probably rather want to use a non-blocking
commit plus a dma_fence to wait for the work to end from userspace.
Hi,
On Thu, Sep 28, 2023 at 9:24 AM Christian König
wrote:
> If you see a large delay in the dpms off case then we probably have a driver
> bug somewhere.
This is something we both agree on, I think.
>> I'm getting the idea that you think there is some big bucket of kernel
>> syscalls that block
On 9/28/23 15:23, Christian König wrote:
>
> What you need to do here is to report those problems to the driver teams and
> not try to hide them this way.
See the linked issue: https://gitlab.freedesktop.org/drm/amd/-/issues/2861
(BTW, the original reporter of that issue isn't hitting it with D
On 9/28/23 14:59, Ray Strode wrote:
> On Thu, Sep 28, 2023 at 5:43 AM Michel Dänzer
> wrote:
> When it's really not desirable to account the CPU overhead to the
> process initiating it then you probably rather want to use a non-blocking
> commit plus a dma_fence to wait for the work to end from userspace.
Hi,
On 28.09.23 14:46, Ray Strode wrote:
Hi,
On Thu, Sep 28, 2023 at 2:56 AM Christian König
wrote:
To say the "whole point" is about CPU overhead accounting sounds
rather absurd to me. Is that really what you meant?
Yes, absolutely. See the functionality you try to implement already exists.
Hi,
On Thu, Sep 28, 2023 at 5:43 AM Michel Dänzer
wrote:
> >>> When it's really not desirable to account the CPU overhead to the
> >>> process initiating it then you probably rather want to use a non-blocking
> >>> commit plus a dma_fence to wait for the work to end from
> >>> userspace.
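
[The pattern Christian keeps suggesting looks roughly like this with
libdrm: commit nonblocking, let the kernel hand back a sync_file fd via
the CRTC's OUT_FENCE_PTR property, and sleep on it. In this sketch, req
is assumed to be already populated and out_fence_prop_id already looked
up (e.g. via drmModeObjectGetProperties()); the function name is made up:

  #include <poll.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <xf86drmMode.h>

  int commit_and_wait(int fd, drmModeAtomicReq *req,
                      uint32_t crtc_id, uint32_t out_fence_prop_id)
  {
          int32_t fence_fd = -1;
          int ret;

          /* Ask the kernel to return a sync_file fd that signals when
           * the commit has completed. */
          drmModeAtomicAddProperty(req, crtc_id, out_fence_prop_id,
                                   (uint64_t)(uintptr_t)&fence_fd);

          /* Nonblocking: the ioctl returns once the commit is queued. */
          ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
          if (ret)
                  return ret;
          if (fence_fd < 0)
                  return -1;

          /* Sleep in poll() until the fence signals; this wait is not
           * accounted as CPU time of the calling process. */
          struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
          poll(&pfd, 1, -1);
          close(fence_fd);
          return 0;
  }
]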
Hi,
On Thu, Sep 28, 2023 at 2:56 AM Christian König
wrote:
> > To say the "whole point" is about CPU overhead accounting sounds
> > rather absurd to me. Is that really what you meant?
>
> Yes, absolutely. See the functionality you try to implement already exists.
You say lower in this same message
On 9/28/23 08:56, Christian König wrote:
> On 27.09.23 22:25, Ray Strode wrote:
>> On Wed, Sep 27, 2023 at 4:05 AM Christian König
>> wrote:
>
>>> When it's really not desirable to account the CPU overhead to the
>>> process initiating it then you probably rather want to use a non-blocking
>>> commit plus a dma_fence to wait for the work to end from userspace.
Hi Ray,
On 27.09.23 22:25, Ray Strode wrote:
Hi,
On Wed, Sep 27, 2023 at 4:05 AM Christian König
wrote:
I'm not an expert for that stuff, but as far as I know the whole purpose
of the blocking functionality is to make sure that the CPU overhead
caused by the commit is accounted to the right process.
Hi,
On Wed, Sep 27, 2023 at 4:05 AM Christian König
wrote:
> I'm not an expert for that stuff, but as far as I know the whole purpose
> of the blocking functionality is to make sure that the CPU overhead
> caused by the commit is accounted to the right process.
I'm not an expert either, but that's
On 26.09.23 19:05, Ray Strode wrote:
From: Ray Strode
A drm atomic commit can be quite slow on some hardware. It can lead
to a lengthy queue of commands that need to get processed and waited
on before control can go back to user space.
If user space is a real-time thread, that delay can have severe consequences.
From: Ray Strode
A drm atomic commit can be quite slow on some hardware. It can lead
to a lengthy queue of commands that need to get processed and waited
on before control can go back to user space.
If user space is a real-time thread, that delay can have severe
consequences, leading to the proc
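
[The mechanism the patch proposes, stripped to its core: run the commit
from a worker and let the ioctl path sleep in flush_work(), so the wait
is a proper sleep no matter what the driver's commit code does. A sketch
of the idea, not the actual diff; commit_tail() stands in for the
driver's commit path:

  #include <linux/workqueue.h>

  static void commit_tail(struct drm_atomic_state *state); /* stand-in */

  struct commit_work_ctx {
          struct work_struct work;
          struct drm_atomic_state *state;
  };

  static void commit_work_func(struct work_struct *work)
  {
          struct commit_work_ctx *ctx =
                  container_of(work, struct commit_work_ctx, work);

          /* Any busy-waiting the driver does here is charged to the
           * kworker, not to the process that issued the ioctl. */
          commit_tail(ctx->state);
  }

  static int blocking_commit_via_wq(struct drm_atomic_state *state)
  {
          struct commit_work_ctx ctx = { .state = state };

          INIT_WORK_ONSTACK(&ctx.work, commit_work_func);
          queue_work(system_unbound_wq, &ctx.work);
          flush_work(&ctx.work);   /* sleeps; not accounted as CPU time */
          destroy_work_on_stack(&ctx.work);
          return 0;
  }
]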