On 2007/11/03 13:40, Karel Kulhavy wrote:
> On Wed, Oct 31, 2007 at 05:48:20PM +0100, Alexandre Ratchov wrote:
> > no; character devices (such as /dev/audio) keep per-unit state
> > (encoding, rate, ...). To mix multiple audio streams per-stream
> > state must be kept. That's why arts/esd/jack/... exist.
> 
> You don't need arts/esd/jack because of this. This can be solved in kernel.

so, the kernel would mix multiple sound sources coming at
different rates?

but /dev/audio is a character device. as ratchov@ (the new audio
developer) wrote, character devices keep per-unit state, not
per-stream state.

even ignoring whether it's actually desirable to have the kernel
do resampling (a quick perusal of the list archives should reveal
what developers think about that), with a character device you
*can't track the state of each stream*, e.g. which sample rate or
encoding each one uses.
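to make that concrete, here's a rough sketch (hypothetical names,
not the actual audio(4) structures): the device node carries exactly
one set of parameters for the whole unit, while a userspace mixer
has to keep a record per connected client:

/* one per device unit: what /dev/audioN itself knows */
struct audio_unit_state {
	int sample_rate;	/* e.g. 48000 */
	int channels;
	int encoding;		/* e.g. signed 16-bit linear */
};

/* one per client: what a mixing server must keep for each stream */
struct stream_state {
	int fd;			/* the client's connection */
	int sample_rate;	/* the client's native rate */
	int channels;
	int encoding;
	long resample_pos;	/* per-stream resampler position */
	struct stream_state *next;
};

a mixing server walks its list of stream_state records, converts
each stream into the single unit format, and writes the mixed
result to the device. the character device interface alone gives
you nowhere to hang that per-client data.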

> mplayer has to do this stuff all the time so it's full of this code.
> It does it not only to accommodate various sample rates, but also
> when you slow down or speed up your video.  Maybe the code could be
> taken from mplayer.

no, it couldn't.

from your other mail:
> For example if I have music playing in the background, Audacity cannot open
> the soundcard for recording. Can you imagine an operating system where if
> Firefox was writing a webpage to the disk, your e-mail client couldn't read
> a mail folder from the disk?

can you imagine an operating system where if Firefox was printing a
web page, your e-mail client could print an email on the same printer
at the same time?

just because some software that can mix audio sucks doesn't mean
that all software that can mix audio has to suck.
