Our program is CorePlayer, and it is well known as a great
audio/video player for mobile platforms.

As the title of the thread says, I use AudioTrack which is a *java*
API. Whether I call it from JNI or from within Dalvik doesn't make any
difference.

On Wed, Feb 17, 2010 at 10:10 PM, niko20 <nikolatesl...@yahoo.com> wrote:
> What is the name of your program again? Because just hearing that you
> are calling the AudioTrack from inside the NDK makes me NOT want to
> ever grab a copy of your program. The headers aren't stable and it
> will break in the future (high risk anyway).
>
> There's no need to do that anyway, you can just pass the buffer back
> out from JNI code; you really won't get that much of a speed hit.
>
>
> -niko
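niko's suggestion — keep the decoding in native code, but let Java own the AudioTrack and its write() call — could be sketched roughly like this. All names here are illustrative; in a real app `fillPcm` would be declared `native` and implemented via JNI, but it is simulated in plain Java so the sketch stands alone, and the AudioTrack call itself is only indicated in a comment:

```java
public class JniPlaybackSketch {
    // In a real app this would be:
    //   private static native int fillPcm(short[] buf);
    // Simulated here in plain Java so the sketch is self-contained.
    static int fillPcm(short[] buf) {
        for (int i = 0; i < buf.length; i++) buf[i] = (short) (i & 0x7fff);
        return buf.length;
    }

    public static void main(String[] args) {
        short[] pcm = new short[1024];   // one reusable buffer, no per-chunk allocation
        int written = 0;
        for (int chunk = 0; chunk < 4; chunk++) {
            int n = fillPcm(pcm);        // native code fills the buffer...
            // ...and Java hands it to the platform:
            // track.write(pcm, 0, n);   // AudioTrack.write, on the Java side
            written += n;
        }
        System.out.println(written);     // 4096
    }
}
```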
>
> On Feb 17, 2:04 pm, Steve Lhomme <rob...@gmail.com> wrote:
>> I already use a separate thread to feed the audio chunks. For
>> efficiency it's writing many chunks at once before going to sleep. It
>> works well on all the platforms I mentioned because they all support
>> non-blocking calls, in fact I'm not sure any of them support blocking
>> calls at all. As for writing directly to hardware buffers, the last
>> audio sink I worked on is the AudioQueue one on iPhone/OS X and you
>> have direct access to the hardware memory (or very close).
>>
>> I can (and I did) tweak our buffer feeding system to feed smaller
>> chunks and return more often to the top of the thread to see if it has
>> to be paused or not. This is not very efficient in terms of memory
>> allocation (allocate/release is called a lot, which takes time in
>> something that is supposed to be time-critical) or thread/semaphore
>> locking/unlocking. In the end it works, but it's not as efficient as on
>> all the other platforms. There, either the sink can write, in which case
>> it writes as much as it can and then sleeps for a while, or it just goes
>> to sleep because it cannot write. Here the sleeping is done on the
>> AudioTrack side of the code.
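The feeding pattern Steve describes — small reused chunks, with a pause check between writes so UI actions are noticed quickly — might look roughly like this. This is a hedged sketch: `paused`, `feed`, and the mock source are illustrative names, and the actual AudioTrack.write call is left as a comment:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FeederSketch {
    static final AtomicBoolean paused = new AtomicBoolean(false);

    // Feed small chunks and re-check the pause flag between chunks, so a
    // UI action is noticed within roughly one chunk's duration. The chunk
    // buffer is allocated once and reused, avoiding allocate/release churn
    // in the time-critical path.
    static int feed(short[] source, int chunkFrames) {
        short[] chunk = new short[chunkFrames];   // allocated once, reused
        int fed = 0;
        while (fed < source.length && !paused.get()) {
            int n = Math.min(chunkFrames, source.length - fed);
            System.arraycopy(source, fed, chunk, 0, n);
            // track.write(chunk, 0, n);  // would block until the chunk is accepted
            fed += n;
        }
        return fed;
    }

    public static void main(String[] args) {
        System.out.println(feed(new short[1000], 256)); // 1000
    }
}
```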
>>
>>
>>
>> On Wed, Feb 17, 2010 at 7:27 PM, Bob Kerns <r...@acm.org> wrote:
>> > The 70 ms here isn't due to the blocking nature, but due to the buffer
>> > size. With a 2.5 ms buffer size, you'd be able to stop the sound in
>> > 5 ms even when both buffers were full. It really has nothing to do
>> > with blocking vs. non-blocking, which only determines who does the
>> > blocking and who checks whether buffers are full or available.
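Bob's arithmetic can be checked in a couple of lines (a rough sketch; the 44.1 kHz rate is assumed purely for illustration):

```java
public class LatencyMath {
    // Per-buffer latency in milliseconds for a given frame count and sample rate.
    static double bufferMs(int frames, int sampleRate) {
        return 1000.0 * frames / sampleRate;
    }

    public static void main(String[] args) {
        // A ~2.5 ms buffer at 44.1 kHz is about 110 frames; with two buffers
        // in flight, the sound stops within two buffer periods, i.e. ~5 ms.
        double oneBuffer = bufferMs(110, 44100);
        System.out.printf("one buffer: %.2f ms, worst-case stop: %.2f ms%n",
                oneBuffer, 2 * oneBuffer);
    }
}
```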
>>
>> > I take it 70 ms is the minimum your hardware supports? If so, non-
>> > blocking won't solve it, and you probably need different hardware. The
>> > fact that they even HAVE a minimum suggests to me that we're talking
>> > about transferring to hardware buffers. Except for embedded devices,
>> > it's been a long time since software wrote the DAC registers directly.
>>
>> > But what about that 500 ms? That would seem to be more under your
>> > control. You can do your work in smaller chunks.
>>
>> > The scheduler is the other thing that'll kill you -- especially if you
>> > had smaller buffer sizes. If you're doing work in other threads, you'd
>> > want to tune it so you're doing work in small enough chunks that your
>> > output thread can run in a timely way.
>>
>> > A non-blocking protocol does let you be more explicit about this --
>> > essentially write your own scheduler.  But you can get the same result
>> > with an "audio pipeline" approach, where you move small buffers of
>> > data through each stage of your processing, in a single thread, and if
>> > the UI sets a flag that you should be doing something different, you
>> > just exit out of that pipeline wherever you are in the processing, and
>> > start up the new pipeline.
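Bob's single-thread pipeline idea could be sketched like so. The stage names `decode` and `mix` are hypothetical stand-ins, and the final AudioTrack.write is only indicated in a comment:

```java
public class PipelineSketch {
    static volatile boolean stopRequested = false;

    // One thread moves a small buffer through each stage; if the UI sets
    // stopRequested, we bail out of the pipeline wherever we are, and the
    // caller can start up a new pipeline.
    static int runPipeline(int chunks) {
        int processed = 0;
        for (int i = 0; i < chunks; i++) {
            if (stopRequested) break;
            short[] buf = decode();                // stage 1: produce a small buffer
            mix(buf);                              // stage 2: process in place
            // track.write(buf, 0, buf.length);    // stage 3: hand off to audio
            processed++;
        }
        return processed;
    }

    static short[] decode() { return new short[64]; }
    static void mix(short[] buf) {
        for (int i = 0; i < buf.length; i++) buf[i] = (short) (buf[i] / 2);
    }

    public static void main(String[] args) {
        System.out.println(runPipeline(8)); // 8
        stopRequested = true;
        System.out.println(runPipeline(8)); // 0
    }
}
```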
>>
>> > If you want to try to use up more processor on the earlier parts of
>> > your task, to protect against underruns, you can use two threads, with
>> > a larger number of small buffers mediating between them. The smaller
>> > buffers keep the initial latency small, while the larger number of
>> > buffers still allows the upstream processing to get further ahead.
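The two-thread variant with many small buffers might look like this — a sketch using java.util.concurrent, where the buffer counts and sizes are arbitrary and the consumer's AudioTrack.write is shown only as a comment:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    // Producer decodes ahead into many small buffers; the consumer (the
    // audio output thread) drains them. Small buffers keep initial latency
    // low, while a deep queue lets the producer get further ahead.
    static int transfer(int nBuffers, int framesPerBuffer, int depth) {
        BlockingQueue<short[]> q = new ArrayBlockingQueue<>(depth);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < nBuffers; i++) q.put(new short[framesPerBuffer]);
                q.put(new short[0]);               // end-of-stream marker
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int frames = 0;
        try {
            for (short[] buf; (buf = q.take()).length != 0; ) {
                // track.write(buf, 0, buf.length);  // real consumer writes here
                frames += buf.length;
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return frames;
    }

    public static void main(String[] args) {
        System.out.println(transfer(100, 128, 32)); // 12800
    }
}
```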
>>
>> > On Feb 16, 11:14 pm, Steve Lhomme <rob...@gmail.com> wrote:
>> >> I can also say that a blocking AudioTrack would suck for a DJ software
>> >> where 70 ms of latency to do an action is terrible. 5 ms would be
>> >> acceptable, and that's also about as much time we use for polling.
>>
>> >> ...and because of the way our "feeding"
>> >> threads work, it can take up to 500 ms between the time the user
>> >> presses pause and the time the thread using AudioTrack is actually
>> >> able to handle it.
>>
>> > --
>> > You received this message because you are subscribed to the Google
>> > Groups "Android Developers" group.
>> > To post to this group, send email to android-developers@googlegroups.com
>> > To unsubscribe from this group, send email to
>> > android-developers+unsubscr...@googlegroups.com
>> > For more options, visit this group at
>> > http://groups.google.com/group/android-developers?hl=en
>
