Hi,

On 24.03.2011 23:57, Dominig Ar Foll wrote:
> Apologies for the dual posting. I would like as much as possible to
> concentrate this type of discussion on the TV mailing list, but I did
> not want to leave a question open.
>
>     Regarding the already existing projects: are you aware of MAFW on
>     Maemo5?
>     http://www.grancanariadesktopsummit.org/node/219
>     The implementation might not be perfect, but the concept behind
>     it is sane.
>
> No, I did not know it, and I thank you for the link. So far I have
> only written a requirement specification, and any ideas that move us
> toward a good implementation specification are welcome. I will dig
> into their documentation.

MAFW on Fremantle was mostly done by Igalia. Some of those people now
work on the Grilo open source project, where you might be able to talk
to them (on IRC).

>
>     The picture in "6 Position in MeeGo" looks quite arbitrary to me.
>     Do the colors have special semantics (maybe add a small legend
>     below)?
>
> No, the colours were just imported from a slide and were only there
> to help identify the blocks. The main idea of that graph is to make
> very clear that the proposed concept does not plan to create a
> universal audio/video pipeline, but has the goal of integrating
> multiple video pipelines under a unified umbrella. In particular, it
> aims at enabling non open source pipelines to coexist with public
> pipelines.
>
>
>     In "7 Transparency" you need to highlight what your proposal adds
>     to the
>     existing features.
>
> Chapter 7, "Transparency", groups the requirements to provide certain
> types of services in a transparent manner to the application. My goal
> is to enable applications to play multimedia content without knowing
> much about that content. E.g. if you write an application which needs
> to access a live TV service while you live in the US, you will have a
> different pipeline once that same application is run in Europe. The
> requirement of transparency applies to the type of source and target,
> in a very similar manner to printing on Linux today: your application
> knows very little about the printer but can still print.
Which part of the pipeline do you think is not well handled right now?
If you have concrete examples for illustration, I would encourage you
to add them. I believe architecturally we are not missing anything
major here.

>
>     * Transport protocol: handled e.g. by gstreamer already; standards
>     like DLNA specify subsets for interoperability already
>
>
> I am afraid that GStreamer cannot do today everything that I would
> love it to do. It does pretty well on most internet formats, but
> playbin2 has a very limited set of supported services when it comes
> to broadcast or IPTV. Furthermore, by default it does not support any
> smooth streaming feature or content protection.
The GStreamer people already have a smooth streaming implementation.
There are two separate things:
1) missing architecture to implement a feature
2) missing implementation of a certain feature
I believe the GStreamer architecture is pretty solid for adding extra
streaming protocols, containers, codecs etc. (see the sketch below).
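
To illustrate: a new streaming protocol enters the stack as just
another plugin registering a source element for its URI scheme, and
playbin2 then picks it up automatically. A rough sketch (the element,
the "myproto" scheme and GST_TYPE_MYPROTO_SRC are made up for
illustration):

  /* sketch of a plugin registering a hypothetical protocol source;
   * GST_TYPE_MYPROTO_SRC would be a GstBaseSrc subclass implementing
   * the GstURIHandler interface for "myproto://" URIs */
  #include <gst/gst.h>

  static gboolean
  plugin_init (GstPlugin * plugin)
  {
    /* PRIMARY rank so autopluggers (uridecodebin/playbin2) pick it */
    return gst_element_register (plugin, "myprotosrc",
        GST_RANK_PRIMARY, GST_TYPE_MYPROTO_SRC);
  }

  GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
      "myproto", "example streaming protocol source",
      plugin_init, "0.1", "LGPL", "example", "http://example.org/")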
Regarding content protection, I believe it should be done outside of
gstreamer; as I said, it is not media specific. One idea would be to
implement a virtual file system with the related access rights and
process isolation. This would allow running an unmodified media
pipeline.

> But I agree that GStreamer is a great tool, and I would certainly see
> it as one of the strong candidates to implement the first open source
> audio/video pipeline under a UMMS framework.
Just to be clear - I am not saying that gstreamer is the tool for
everything. But integrating too many things in parallel might not be
beneficial either. Thus your document needs to do a better job of
pointing out the missing parts (explicitly). Then people can help you
identify existing implementations (or where they believe the feature
should be added). Then we can also identify things that are completely
missing.

We also have to keep in mind that people need to be able to understand
our multimedia stack. Right now I think the layering makes sense:

QtMultimediaKit
* high level qt/c++ api that focuses on particular use cases
* might apply constraints to keep the api simple
QtGStreamer
* the full feature set of gstreamer bound to a qt style api

GStreamer
* high level api (playbin2, camerabin(2), decodebin2, gnonlin, rtpbin, ...)
* open and closed multimedia components

Kernel
* audio/video i/o, network, accelerated codecs, ...

>  
>
>     * Transparent Encapsulation and Multiplexing: could you please
>     elaborate on why one would need the non-automatic mode? I think
>     it does not make sense to let the application specify what format
>     the stream is in if the media framework can figure it out (in
>     almost all cases). In some corner cases one can e.g. use custom
>     pipelines and specify the format (e.g. a ringtone playback
>     service might do that if it knows the format already).
>
>  
> Multimedia assets come in multiple modes of transport and
> multiplexing (from HTTP to live DVB), in MPEG2-TS, MP4, QuickTime or
> Flash. The automatic detection is sometimes possible and sometimes
> not.
When would the detection not work?
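
For reference, this is what typefinding already gives us; a minimal
sketch using GstDiscoverer from gst-pbutils (the five second timeout
is chosen arbitrarily):

  /* probe an asset before deciding anything about pipelines */
  #include <gst/pbutils/pbutils.h>

  static gboolean
  can_detect (const gchar * uri)
  {
    GError *err = NULL;
    GstDiscoverer *dc = gst_discoverer_new (5 * GST_SECOND, &err);
    GstDiscovererInfo *info;
    gboolean ok;

    info = gst_discoverer_discover_uri (dc, uri, &err);
    ok = (info != NULL) &&
        (gst_discoverer_info_get_result (info) == GST_DISCOVERER_OK);
    if (info)
      g_object_unref (info);
    g_object_unref (dc);
    if (err)
      g_error_free (err);
    return ok;
  }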
> Furthermore, some video pipelines can do many formats well, while
> some other formats will impose an alternative pipeline (Blu-ray is a
> good example).
There is no architectural issue regarding Blu-ray in our stack; it is
more a legal mess. I believe that given the time and money, you could
write Blu-ray support for GStreamer in a similar fashion to the DVD
support we have now.
> The idea presented here is that the UMMS can decide which pipeline to
> call on depending on the URL or the detected stream type, without
> requiring prior knowledge from the application about the pipeline
> configuration/selection.
That means you would like to have a bunch of different multimedia
frameworks on a device and then use the appropriate one depending on
the URI, e.g. use gstreamer for some formats and mplayer for some
others. While that might sound like a good idea, I don't think it is
one, for several reasons:
- you will need to abstract the different APIs (well, that's actually
your proposal)
- you increase the size and complexity of the multimedia stack
    - more difficult to debug (e.g. different tools needed)
    - testing is a lot more difficult
- users might get annoyed by small incompatibilities (seeking works
differently depending on the media)
- you need to do the base adaptation several times (integrate codecs,
rendering etc. in several frameworks)

There might be more reasons that speak against such an approach, but
the last one alone would be major enough for me.
>  
>
>     * Transparent Target: What's the role of the UMMS here? How does
>     the URI make sense here? Are you suggesting to use something like
>     opengl://localdisplay/0/0/854/480? MAFW introduced renderers,
>     where a local renderer would render locally and one could e.g.
>     have a UPnP DLNA renderer or a media recorder.
>
>
> Once again, the goal here is to decouple the application from any
> prior knowledge of the video pipeline. I am proposing to add, beyond
> the traditional targets for playing video (an XVideo or OpenGL
> texture), also a DLNA target and video in overlay. The latter is a
> speciality of SoCs, but is mandatory when it comes to running HD
> video on low energy systems or respecting tight security
> requirements.
Again, how do you think this is not possible right now? When the
application uses gstreamer to play video, it would just use playbin2,
set the uri and press play. Everything else is handled by playbin2,
including picking up the right platform optimized codecs. Having
flexible video-output routing is important, and I agree that the
xserver does not offer enough here. Sometimes I wonder if we want a
counterpart to PulseAudio; it would not need to do mixing, but it
would handle video routing/cloning, cover hardware differences
(overlays, texture streaming) and provide a policy enforcement point.
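
To make the playbin2 point concrete, a complete playback application
is essentially the following (minimal sketch; bus watching and error
handling omitted, and the URI is made up):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *play;
    GMainLoop *loop;

    gst_init (&argc, &argv);
    loop = g_main_loop_new (NULL, FALSE);

    play = gst_element_factory_make ("playbin2", NULL);
    g_object_set (play, "uri", "http://example.org/clip.mp4", NULL);

    /* playbin2 autoplugs source, demuxer, decoders and sinks,
     * including platform optimized elements, based on element rank */
    gst_element_set_state (play, GST_STATE_PLAYING);
    g_main_loop_run (loop);

    gst_element_set_state (play, GST_STATE_NULL);
    gst_object_unref (play);
    return 0;
  }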

That said, please do add more details on the missing use cases. For me
it would be totally fine to include hand-drawn, camera-captured
diagrams in the document.
>  
>
>     * Transparent Resource Management: That makes a lot of sense and
>     so far was planned to be done in Qt MultimediaKit
>
>
> Yes. It makes sense, and on SoCs it's even more critical.
>  
>
>     * Attended and Non Attended execution: This sounds like having a media
>     recording service in the platform.
>
>
> Yes, that's exactly what it is.
>
>
>     "8 Audio Video Control"
>     This is a media player interface. Most of the things make sense. Below
>     those that might need more thinking
>     * Codec Selection: please don't. This is something that we need to
>     solve
>     below and not push to the application or even to the user.
>
>
> In general I do agree, but sometimes you need to specify. In
> particular when you have multiple streams in the same multiplex
> (e.g. Dolby 7.1 and simple PCM audio).
But that's not codec selection. Of course we want to allow users to
pick which stream in a media file they would like to render.
>  
>
>     * Buffer Strategy: same as before. Buffering strategy depends on the
>     use-case and media. The application needs to express whether its a
>     media-player/media-editor/.. and from that we need to derive this.
>
>
> I would have agreed with you before doing a real deployment of the
> Cubovision system at Telecom Italia. When you do HD video on the
> internet, there are a few things that you have to live with, and
> buffering strategy is one of these. But as you noticed, by default I
> only proposed classes.
In case you have been trying to implement those using gstreamer and
found deficiencies, I would encourage you to file those stories as
bugs with enough background and link to the bugs here. That would help
to have a focused discussion on each single item. The whole problem
space is huge and not solvable in one step. We can only lay out a path
towards what we would like to have in the future and work on the
missing bits one by one.
>
>
>     "9 Restricted Access Mode"
>     Most of those are needed as platform wide services. E.g. Parental
>     Control would also be needed for Internet access.
>
>
> I don't disagree with you, but my job is TV :-) If the same concept
> can be reused, that's nice. But I do not know of tight regulations
> imposed on parental control on the Internet, while on TV devices it's
> a mandatory requirement in many countries.
That's good to know. I just feel that having it inside the media
framework would be a bit awkward.
>
>
>     "11 Developer and Linux friendly"
>     * Backwards compatible ...: My suggestion is to take inspiration
>     from existing components, but only do any emulation if someone
>     really needs that. It is usually possible to some extent, but
>     what's the point?
>
>
> MeeGo people who are developing applications today with Qt should
> have their effort protected. Because the UMMS is designed to support
> TV requirements, it goes further than existing multimedia frameworks,
> so providing compatibility to existing applications is a simple way
> to get accepted.
>  
>
>     * Device and Domain independence: Again, how does UMMS improve the
>     situation here?
>
>
> On TV our pixels are rectangular, while on other devices they are
> square. We also have a zone (called the safe zone) where we cannot
> display anything. It's very safe indeed. I do not want applications
> to need that knowledge of the domain (TV or non-TV).
> On the device side, embedded SoCs and PCs treat video and graphics
> very differently. Once again, I want to hide that complexity from
> the application.
Nothing against such requirements, but then we probably want to solve
this even at the driver level.
>
>
>     "12 Typical use cases"
>     I think it would be helpful to have before and after stories here to
>     highlight the benefits of your concept.
>
>
> A good hint for a further release.
>
>
>     "13 D-Bus"
>     Be careful with generic statements like "D-Bus can be a bit slow ...".
>     Stick with facts and avoid myths.
>
>
> Correct. Only numbers count. The Cubovision system delivered to
> Telecom Italia uses D-Bus, and performance has never been an issue,
> but there is a perception that it might be slow. Before deciding on a
> final technology, real measurements will be required.
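
Agreed, only numbers count. A first-order figure is cheap to obtain; a
minimal sketch using GDBus (GLib >= 2.26) against the bus daemon
itself - the loop count is arbitrary, and a real measurement should of
course use representative payloads:

  #include <gio/gio.h>

  int
  main (void)
  {
    GDBusConnection *bus;
    GTimer *timer;
    gint i, n = 1000;

    g_type_init ();
    bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
    timer = g_timer_new ();

    /* n synchronous round trips to org.freedesktop.DBus.GetId */
    for (i = 0; i < n; i++) {
      GVariant *ret = g_dbus_connection_call_sync (bus,
          "org.freedesktop.DBus", "/org/freedesktop/DBus",
          "org.freedesktop.DBus", "GetId", NULL, NULL,
          G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);
      if (ret)
        g_variant_unref (ret);
    }

    g_print ("%d calls in %.3f s\n", n,
        g_timer_elapsed (timer, NULL));
    g_timer_destroy (timer);
    g_object_unref (bus);
    return 0;
  }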
>
>
>     "14 QT-Multimedia"
>     Seriously, don't even consider to stack it on top of qt-multimedia.
>     We're still embedded. You could propose to implement it as part of QT
>     multimedia though (or having it at the same level).
>
>
> I would do a lot to keep existing applications running. But if that
> is not required, I will be happy to ditch it.
I would rather suggest working with the Qt Multimedia Kit people and
seeing if they can cover your use cases.
>
>
>     "15 GStreamer"
>     It is GStreamer (with an upper case 'S') :) In general, please
>     spell check the section.
>     Regarding the three weak points:
>     * smooth fast forward is a seek_event with a rate>1.0. There might be
>     elements not properly implementing that, but I fail to understand how
>     you can fix that on higher layers instead of in the elements. It might
>     make sense to define extended compliance criteria for base adaptation
>     vendors to ensure consistent behavior and features.
>
>
> I do not plan to correct that at a higher level; I just want to point
> out that GStreamer, which is used by default in MeeGo, has
> weaknesses.
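
To illustrate my earlier point: from the application side, smooth fast
forward is no more than the following (sketch reusing the playbin2
element from above; whether playback is actually smooth depends on the
elements honoring the rate):

  /* 2x forward trick mode from the current position */
  gint64 pos;
  GstFormat fmt = GST_FORMAT_TIME;

  gst_element_query_position (play, &fmt, &pos);
  gst_element_seek (play, 2.0, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_SKIP,
      GST_SEEK_TYPE_SET, pos,
      GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);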
>  
>
>     * DRM can be implemented outside of GStreamer. Still, I don't
>     fully understand what the issue here might be.
>
>
> DRM and CA in general are a nightmare and cannot always be decoupled
> from the video pipeline. GStreamer is fairly friendly to DRM and CA,
> but some requirements will impose a dedicated video pipeline
> (e.g. Blu-ray).
>  
>
>     * Push/pull: GStreamer is a library; you can do lots of things
>     with it. If you want to use it to broadcast media, you can do
>     that very well. Some known examples: rygel (UPnP media server),
>     gst-rtsp-server. Just to clarify the terminology - media
>     processing within the graph also uses push and pull, but that
>     refers to whether one component pushes media downstream or one
>     component pulls data from upstream. E.g. in media playback of
>     local files GStreamer uses a hybrid setup.
>
>
> Currently GStreamer needs improvements to support push transports. A
> good example is broadcast live TV, where the clock needs to be
> synchronised to the satellite source if you do not want to skip a
> frame once in a while. Nothing impossible, but not something which
> works out of the box today.
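
Right. For completeness, the hook for this exists: a pipeline can be
forced onto an external clock, so the missing piece is the clock
implementation rather than the architecture. A minimal sketch, where
my_pcr_clock_new() is a hypothetical GstClock subclass that would
track the PCR of the broadcast transport stream:

  /* slave the whole pipeline to the broadcast clock */
  GstClock *clock = my_pcr_clock_new ();

  gst_pipeline_use_clock (GST_PIPELINE (pipeline), clock);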
>  
>
>     * Licenses and Patents: Seriously, this is hardly the fault of
>     GStreamer, and its plugin approach is the best solution for it.
>     In the end, every vendor shipping a MeeGo solution will need to
>     ensure that the royalties for codecs are paid and the shipped
>     code is fully licensed.
>
>
> Yes. But that is not easy in a fully open source environment.
>  
>
>     Besides, a system like MAFW already allowed one to e.g. implement
>     a local renderer using mplayer as a foundation if that is
>     preferred. Personally that's fine with me, but I believe the
>     target customer for a TV will expect that things work out of the
>     box :)
>
>
> Providing a full TV experience imposes quite a number of extra
> requirements which are definitely not covered by any open source
> system today. I hope that by defining something generic, other MeeGo
> verticals (in particular Tablet and IVI) will have access to better
> support for live TV in their domains. For them it is optional and
> nice to have; for TV there is no choice, it's a mandatory
> requirement.
>
>
>     Sorry, this became a somewhat long reply
>
>
> Very welcome.
I see that you have been putting a lot of work into this already. From
experience I know that it's difficult to get big changes through, so I
would strongly suggest you try to break it down. Would it be possible
to e.g. write a document talking about your previous experience (e.g.
the Cubovision system)? If possible, describe the architecture and the
targeted use cases, and then tell us what worked and what did not work
well. This would help non-TV people to better understand your POV.
Would that work?

Stefan

>
> -- 
> Dominig ar Foll
> MeeGo TV
> Intel Open Source Technology Centre
>
>
