Hi Robin, I have the same problems. If you find a solution for how to
use OpenCore from native code or Java, please post it.

On 7 Aug, 13:40, Robin Marx <marx.ro...@gmail.com> wrote:
> First of all, we have no idea whatsoever how to access the OpenCore
> functionality from our native code (or from Java). We found a lot of
> posts from people saying they are using the encoders/decoders
> directly, or trying to add extra format support, etc., but nowhere do
> we find documentation on even the most basic things for using the
> native OpenCore API. If you could tell us how to directly access the
> encoders, for example, that would take us a LONG way already.
>
> As for RTP, we really need to be in control of what is in the packets
> and how they are sent. Part of the research is working with a proxy
> that'll filter certain packets based on the available bandwidth and
> other resources of the network, so we really want full control over
> the RTP functionality on all ends. This means other RTP
> implementations are out of the question for us.
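>
> For reference, the fixed part of an RTP header is only 12 bytes, so
> building the packets in our own code is quite doable. Roughly, our
> packetizer does something like this (a simplified sketch redone in
> Java; the class and parameter names are made up for illustration):
>
> import java.nio.ByteBuffer;
>
> public class RtpPacketizer {
>     // Build one RTP packet: the 12-byte fixed header from RFC 3550
>     // followed by the payload. No extension header, no CSRC list.
>     // ByteBuffer is big-endian by default, i.e. network byte order.
>     public static byte[] buildPacket(int payloadType, int seq,
>                                      long timestamp, long ssrc,
>                                      boolean marker, byte[] payload) {
>         ByteBuffer buf = ByteBuffer.allocate(12 + payload.length);
>         buf.put((byte) 0x80);                 // V=2, P=0, X=0, CC=0
>         buf.put((byte) ((marker ? 0x80 : 0) | (payloadType & 0x7F)));
>         buf.putShort((short) seq);            // sequence number
>         buf.putInt((int) timestamp);          // media timestamp
>         buf.putInt((int) ssrc);               // stream identifier
>         buf.put(payload);                     // encoded media data
>         return buf.array();
>     }
> }
>
> The proxy can then rewrite or drop packets purely by looking at these
> header fields, which is why we want to own this layer ourselves.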
>
> We have been making some progress though. At the moment we are able
> to stream Speex-encoded audio to the device, decode it (this step is
> very slow on the emulator, but fast enough on the device itself) and
> play it as a .wav file (putting a dummy WAV header before the actual
> data seems to work just fine). For well-buffered playback from a
> socket we use the method described here:
> http://blog.pocketjourney.com/2008/04/04/tutorial-custom-media-stream...
> (so the InputStream is not from a URL but from our native socket, and
> we don't play .mp3 but .wav).
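>
> The dummy header we prepend is just the canonical 44-byte RIFF/WAVE
> header for uncompressed PCM. A simplified sketch in Java (assuming
> 16-bit mono; for a live stream the two length fields can only hold
> dummy values anyway, which the player seems to tolerate):
>
> import java.nio.ByteBuffer;
> import java.nio.ByteOrder;
>
> public class WavHeader {
>     public static byte[] build(int dataLen, int sampleRate) {
>         int channels = 1, bits = 16;               // mono, 16-bit PCM
>         int byteRate = sampleRate * channels * bits / 8;
>         ByteBuffer b = ByteBuffer.allocate(44)
>                                  .order(ByteOrder.LITTLE_ENDIAN);
>         b.put("RIFF".getBytes()).putInt(36 + dataLen);
>         b.put("WAVE".getBytes());
>         b.put("fmt ".getBytes()).putInt(16);       // fmt chunk size
>         b.putShort((short) 1);                     // format 1 = PCM
>         b.putShort((short) channels);
>         b.putInt(sampleRate).putInt(byteRate);
>         b.putShort((short) (channels * bits / 8)); // block align
>         b.putShort((short) bits);                  // bits per sample
>         b.put("data".getBytes()).putInt(dataLen);  // data chunk header
>         return b.array();
>     }
> }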
>
> As for video, we can now transmit from the device to the desktop
> using the ParcelFileDescriptor method. Native code on the device wraps
> the data in RTP video packets. On the desktop we buffer the packets
> and parse the H.263 headers until a full frame has arrived, and then
> show it (the first 32 bytes Android writes are the 3GPP header; just
> ignore them, the H.263 frames start right after that, with no 3GPP
> structures mixed into the video data). While this gives us a live
> video feed (with minor delay), it is far from optimal for our research
> (no correct timing information in the RTP packets, for instance,
> making lip sync difficult, etc.).
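>
> Concretely, the desktop side finds frame boundaries by scanning for
> the H.263 picture start code, which is the 22-bit pattern
> 0000 0000 0000 0000 1000 00. A minimal sketch (method names made up;
> the socket buffering around it is omitted):
>
> public class H263FrameScanner {
>     // True if a picture start code begins at offset i: two zero bytes
>     // followed by a byte whose top six bits are 100000 (0x80..0x83).
>     static boolean isPictureStart(byte[] d, int i) {
>         return d[i] == 0 && d[i + 1] == 0 && (d[i + 2] & 0xFC) == 0x80;
>     }
>
>     // Index of the next picture start code at or after 'from', or -1
>     // if a complete frame has not been buffered yet.
>     static int nextStartCode(byte[] d, int from, int len) {
>         for (int i = from; i + 2 < len; i++)
>             if (isPictureStart(d, i)) return i;
>         return -1;
>     }
> }
>
> Everything between two consecutive start codes is one frame that can
> be handed to the decoder.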
>
> We expect that sending audio wouldn't be that different from sending
> video, but for the moment we don't have a working AMR decoder on the
> desktop, so we can't test it yet.
>
> Receiving video is going to be the hard one with these methods. For
> audio we can use the simple .wav format, but .3gpp is considerably
> more complicated, and we think it's almost impossible to replicate for
> a live stream such as ours (if you can, please prove us wrong; our
> main concern at the moment is the stsz and stts atoms with the
> sample-size and timing info).
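>
> To illustrate why: a .3gpp file is a tree of length-prefixed atoms,
> and the moov atom holding those sample tables is normally written only
> once recording has finished, which is exactly what a live stream
> cannot provide. A quick sketch of walking the top-level atoms of a
> finished file (simplified; it ignores the 64-bit size variant):
>
> import java.io.DataInputStream;
> import java.io.FileInputStream;
> import java.io.IOException;
>
> public class AtomWalker {
>     // Each atom starts with a 32-bit big-endian length (including the
>     // 8-byte header itself) followed by a four-character type code.
>     public static void listAtoms(String path) throws IOException {
>         DataInputStream in =
>                 new DataInputStream(new FileInputStream(path));
>         while (in.available() >= 8) {
>             int size = in.readInt();      // atom length
>             byte[] type = new byte[4];
>             in.readFully(type);           // e.g. "ftyp", "moov", "mdat"
>             System.out.println(new String(type) + " " + size);
>             in.skipBytes(size - 8);       // jump to the next atom
>         }
>         in.close();
>     }
> }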
>
> I hope this can help some others working on the same problem, but
> these solutions are NOT optimal. If anyone can tell us how to use the
> OpenCore functionality, we will probably be able to find much better
> methods and share them.
>
> Thanks in advance and thanks to those who replied.
