On 16. 05. 24 16:50, Jaroslav Kysela wrote:
On 15. 05. 24 22:33, Nicolas Dufresne wrote:
In GFX, they solve this issue with fences. That allows setting up the next
operation in the chain before the data has been produced.
The fences look really nice and seem more modern. It should be possib
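The fence idea above can be illustrated in userspace terms. This is a toy sketch, not the kernel dma-fence API: a consumer registers the next pipeline step before the producer has generated any data, and the step runs when the fence signals.

```python
import threading

class Fence:
    """Toy fence: follow-up work can be queued before the data exists."""
    def __init__(self):
        self._signaled = threading.Event()
        self._callbacks = []
        self._lock = threading.Lock()

    def add_callback(self, cb):
        # Queue the next operation; run it immediately if already signaled.
        with self._lock:
            if not self._signaled.is_set():
                self._callbacks.append(cb)
                return
        cb()

    def signal(self):
        # Producer marks completion; queued operations run now.
        with self._lock:
            self._signaled.set()
            callbacks, self._callbacks = self._callbacks, []
        for cb in callbacks:
            cb()

# Set up the next operation in the chain before the data is produced ...
results = []
fence = Fence()
fence.add_callback(lambda: results.append("convert"))
# ... then the producer signals when its buffer is ready.
fence.signal()
print(results)  # ['convert']
```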
On 15. 05. 24 15:34, Shengjiu Wang wrote:
On Wed, May 15, 2024 at 6:46 PM Jaroslav Kysela wrote:
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200,
Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
Hi,
GStreamer hat on ...
On Wednesday, May 15, 2024 at 12:46 +0200, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline, which is
one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams,
without physical endpoint, something like:
compress playback (to feed data from
On Thu, May 9, 2024 at 6:28 PM Amadeusz Sławiński wrote:
> On 5/9/2024 12:12 PM, Shengjiu Wang wrote:
> > On Thu, May 9, 2024 at 5:50 PM Amadeusz Sławiński wrote:
> > > On 5/9/2024 11:36 AM, Shengjiu Wang wrote:
> > > > On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab wrote:
On Fri, 3 May 2024 10:47:19 +0900, Mark Brown wrote:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab wrote:
On 06. 05. 24 10:49, Shengjiu Wang wrote:
Even now I still think V4L2 is the best option, but it looks like there
are a lot of rejections. If we develop a new ALSA mem2mem, it is also
a duplication of code (a bigger duplication than just adding audio support
in V4L2, I think).
Maybe not. Could you try to
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
> Mauro Carvalho Chehab wrote:
> > There is still time control associated with it, as audio and video
> > need to be in sync. This is done by controlling the buffer sizes
> > and could be fine-tuned by checking when the
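The buffer-size control mentioned in the quote above comes down to simple arithmetic: the ring buffer bounds the worst-case latency between audio and video. A rough sketch, with assumed (not from the thread) stream parameters:

```python
# Assumed stream parameters for illustration only.
RATE = 48000          # frames per second
PERIOD_FRAMES = 1024  # frames per period
PERIODS = 4           # periods in the ring buffer

# A larger buffer tolerates more scheduling jitter but adds latency,
# which is the trade-off being tuned for A/V sync.
buffer_frames = PERIOD_FRAMES * PERIODS
latency_ms = buffer_frames / RATE * 1000
print(f"buffer = {buffer_frames} frames -> worst-case latency = {latency_ms:.1f} ms")
# buffer = 4096 frames -> worst-case latency = 85.3 ms
```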
On Thu, 2 May 2024 09:59:56 +0100,
Mauro Carvalho Chehab wrote:
> On Thu, 02 May 2024 09:46:14 +0200,
> Takashi Iwai wrote:
>
> > On Wed, 01 May 2024 03:56:15 +0200,
> > Mark Brown wrote:
> > >
> > > On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
> > > > Mark Brown wrote:
On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
> Mark Brown wrote:
> > On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
> > The discussion around this originally was that all the audio APIs are
> > very much centered around real time operations rather than
On 30. 04. 24 16:46, Mark Brown wrote:
So instead of hammering a driver into the wrong destination, I would
suggest bundling our forces and implementing a general memory-to-memory
framework that both the media and the audio subsystem can use, that
addresses the current shortcomings of the implementation
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
> first of all thanks for all of this work and I am very sorry for only
> emerging this late into the series, I sadly didn't notice it earlier.
It might be worth checking out the discussion on earlier versions...
> 1. The biggest
On Tue, 30 Apr 2024 10:47:13 +0200,
Hans Verkuil wrote:
> On 30/04/2024 10:21, Sebastian Fricke wrote:
On 30/04/2024 10:21, Sebastian Fricke wrote:
> Hey Shengjiu,
>
> first of all thanks for all of this work and I am very sorry for only
> emerging this late into the series, I sadly didn't notice it earlier.
>
> I would like to voice a few concerns about the general idea of adding
> Audio support
Audio signal processing also has a requirement for memory-to-memory
operation, similar to video.
This ASRC memory-to-memory (memory -> asrc -> memory) case is a
non-real-time use case.
The user fills the input buffer for the asrc module; after conversion, asrc
sends the output buffer back to the user. So it is n
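The memory -> asrc -> memory flow described above can be sketched in userspace terms. This is a toy illustration under stated assumptions, not the driver interface: a real ASRC is a hardware block fed through queued buffers, and here the conversion itself is stubbed with naive linear interpolation.

```python
# Toy sketch of memory -> asrc -> memory: the user hands in a filled input
# buffer and receives a converted output buffer at the target rate.
def asrc_convert(samples, in_rate, out_rate):
    """Naive linear-interpolation rate conversion (illustration only)."""
    if not samples:
        return []
    out_len = int(len(samples) * out_rate / in_rate)
    out = []
    for i in range(out_len):
        pos = i * in_rate / out_rate            # fractional position in input
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# User fills the input buffer ...
inp = [0.0, 1.0, 0.0, -1.0] * 12               # 48 input samples
# ... and gets the converted output buffer back.
outp = asrc_convert(inp, in_rate=48000, out_rate=44100)
print(len(inp), "->", len(outp))               # 48 -> 44
```

Because there is no capture or render deadline, the conversion can run as fast as the hardware allows, which is what makes this a non-real-time, mem2mem-style use case.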