Apologies for the dual posting. I would like, as much as possible, to
concentrate this type of discussion on the TV mailing list, but I did not
want to leave a question open.

> Regarding the already existing projects: are you aware of MAFW on Maemo5?
> http://www.grancanariadesktopsummit.org/node/219
> The implementation might not be perfect, but the concept behind is sane.
>
No, I did not know it, and I thank you for the link. So far I have only
written a requirements specification, and any ideas that move us toward a
good implementation specification are welcome. I will dig into their
documentation.

>
> The picture in "6 Position in MeeGo" looks quite arbitrary to me. Do the
> colors have special semantics (maybe add a small legend below)?
>
No, the colours were just imported from a slide and were only there to help
identify the blocks. The main idea of that diagram is to make very clear that
the proposed concept does not plan to create a universal audio/video
pipeline, but has the goal of integrating multiple video pipelines under a
unified umbrella. In particular, it aims at enabling non open source
pipelines to coexist with open ones.

>
> In "7 Transparency" you need to highlight what your proposal adds to the
> existing features.
>
Chapter 7, "Transparency", groups the need to provide certain types of
services in a transparent manner to the application. My goal is to enable
applications to play multimedia content without knowing much about that
content. For example, if you write an application which needs to access a
live TV service in the US, that same application will need a different
pipeline once it is run in Europe. The requirement of transparency applies to
the type of source and target, in a very similar manner to printing on Linux
today: your application knows very little about the printer but can still
print.

> * Transport protocol: handled e.g. by GStreamer already; standards like
> DLNA specify subsets for interoperability already
>

I am afraid that GStreamer cannot today do everything that I would love it
to do. It does pretty well on most Internet formats, but Playbin2 has a very
limited set of supported services when it comes to broadcast or IPTV.
Furthermore, by default it does not support any smooth streaming features or
content protection.
But I agree that GStreamer is a great tool, and I would certainly see it as
one of the strong candidates to implement the first open source audio/video
pipeline under a UMMS framework.


> * Transparent Encapsulation and Multiplexing: could you please elaborate
> why one would need the non-automatic mode. I think it does not make
> sense to let the application specify what format the stream is in, if
> the media-framework can figure it (in almost all of the cases). In some
> corner cases one can e.g. use custom pipelines and specify the format
> (e.g. a ringtone playback service might do that if it knows the format
> already).
>

Multimedia assets come in multiple modes of transport and multiplexing (from
HTTP to live DVB), in MPEG2-TS, MP4, QuickTime or Flash. Automatic detection
is sometimes possible and sometimes not. Furthermore, some video pipelines
handle many formats well, while other formats impose an alternative pipeline
(Blu-ray is a good example).
The idea presented here is that the UMMS can decide which pipeline to call
depending on the URL or the detected stream type, without requiring prior
knowledge from the application about pipeline configuration/selection.
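
To make the intent concrete, here is a rough sketch. Neither the pipeline
names nor select_pipeline() come from the requirements document; they are
invented only to show that UMMS, not the application, maps a URI (and later
the detected container) to one of several registered pipelines.

#include <string.h>

/* Invented names, illustrative dispatch only. */
typedef enum { PIPELINE_GSTREAMER, PIPELINE_DVB, PIPELINE_BLURAY } PipelineId;

static PipelineId select_pipeline(const char *uri)
{
    if (strncmp(uri, "dvb://", 6) == 0)
        return PIPELINE_DVB;        /* broadcast needs the tuner pipeline  */
    if (strncmp(uri, "bluray://", 9) == 0)
        return PIPELINE_BLURAY;     /* protected path, dedicated pipeline  */
    return PIPELINE_GSTREAMER;      /* http/file/rtsp: generic pipeline    */
}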


> * Transparent Target: Whats the role of the UMMS here? How does the URI
> make sense here. Are you suggesting to use something like
> opengl://localdisplay/0/0/854/480? MAFW was introducing renderers, where
> a local renderer would render well locally and one could e.g. have a
> UPnP DLNA renderer or a media recorder.
>

Once again, the goal here is to decouple the application from any prior
knowledge of the video pipeline. I am proposing to add to the traditional
target of playing video into an X window not only an OpenGL texture, but also
DLNA targets and video in a hardware overlay. The latter is a speciality of
SoCs, but is mandatory when it comes to running HD video on low-power systems
or respecting tight security requirements.
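
As an illustration, a target could be expressed as a URI such as the
opengl://localdisplay/0/0/854/480 example from your question. The scheme and
field order below are my assumption, not something already specified.

#include <stdio.h>

/* Hypothetical target URI parser, e.g. "opengl://localdisplay/0/0/854/480"
 * or "overlay://localdisplay/0/0/1920/1080"; the format is an assumption. */
typedef struct { char kind[16]; int x, y, w, h; } RenderTarget;

static int parse_target(const char *uri, RenderTarget *t)
{
    return sscanf(uri, "%15[^:]://localdisplay/%d/%d/%d/%d",
                  t->kind, &t->x, &t->y, &t->w, &t->h) == 5;
}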


> * Transparent Resource Management: That makes a lot of sense and so far
> was planned to be done on QT MultimediaKit
>

Yes, it makes sense, and on SoCs it is even more critical.


> * Attended and Non Attended execution: This sounds like having a media
> recording service in the platform.
>

Yes, that is exactly what it is.

>
> "8 Audio Video Control"
> This is a media player interface. Most of the things make sense. Below
> those that might need more thinking
> * Codec Selection: please don't. This is something that we need to solve
> below and not push to the application or even to the user.
>

In general I do agree, but sometimes you need to specify, in particular when
you have multiple streams in the same multiplex (e.g. Dolby 7.1 and simple
PCM audio).
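
For that case something like the sketch below would be enough. It assumes the
GStreamer 0.10 playbin2 API used in MeeGo today and leaves error handling
out.

#include <gst/gst.h>

/* List the audio streams found in the multiplex and force one of them,
 * e.g. to prefer a Dolby track over plain PCM (playbin2, GStreamer 0.10). */
static void pick_audio_stream(GstElement *playbin2, gint wanted)
{
    gint n_audio = 0;
    gint i;

    g_object_get(playbin2, "n-audio", &n_audio, NULL);  /* streams in the mux */
    for (i = 0; i < n_audio; i++) {
        GstTagList *tags = NULL;
        gchar *codec = NULL;

        g_signal_emit_by_name(playbin2, "get-audio-tags", i, &tags);
        if (tags && gst_tag_list_get_string(tags, GST_TAG_AUDIO_CODEC, &codec))
            g_print("audio stream %d: %s\n", i, codec);
        g_free(codec);
        if (tags)
            gst_tag_list_free(tags);
    }
    if (wanted >= 0 && wanted < n_audio)
        g_object_set(playbin2, "current-audio", wanted, NULL);
}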


> * Buffer Strategy: same as before. Buffering strategy depends on the
> use-case and media. The application needs to express whether its a
> media-player/media-editor/.. and from that we need to derive this.
>

I would have agreed with you before doing a real deployment of the
Cubovision system at Telecom Italia. When you do HD video over the Internet,
there are a few things that you have to live with, and buffering strategy is
one of them. But as you noticed, by default I proposed to expose only classes
of strategies.
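
Something along these lines is what I have in mind. The class names are
invented; the document only requires that applications pick a class rather
than tune low-level buffer sizes.

/* Illustrative only: the class names below are made up. */
typedef enum {
    UMMS_BUFFER_LOW_LATENCY,   /* live TV, video call: keep latency minimal   */
    UMMS_BUFFER_SMOOTH_VOD,    /* HD VoD over the Internet: deep pre-buffer   */
    UMMS_BUFFER_LOCAL_FILE     /* local playback: whatever the pipeline likes */
} UmmsBufferClass;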

>
> "9 Restricted Access Mode"
> Most of those are needed as platform wide services. E.g. Parental
> Control would also be needed for Internet access.
>

I don't disagree with you, but my job is TV :-) If the same concept can be
reused, that's nice. But I do not know of tight regulations imposed on
parental control on the Internet, while on TV devices it is a mandatory
requirement in many countries.

>
> "11 Developer and Linux friendly"
> * Backwards compatible ...: My suggestion is to take inspiration in
> existing components, but only do any emulation if someone really needs
> that. It is usually possible to some extent, but what's the point?
>

MeeGo people who are developing applications today with Qt should have their
effort protected. Because the UMMS is designed to support TV requirements, it
goes further than existing multimedia frameworks, so providing compatibility
to existing applications is a simple way to gain acceptance.


> * Device and Domain independence: Again, how does UMMS improve the
> situation here?
>

On TV our pixels are rectangular, while on other devices they are square. We
also have a zone (called the safe zone) where we cannot display anything; it
is very safe indeed. I do not want applications to need that knowledge of the
domain (TV or non-TV).
On the device side, embedded SoCs and PCs treat video and graphics very
differently; once again I want to hide that complexity from the application.
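
Just to show what applications would otherwise have to care about, here is a
toy calculation. The 5% safe-zone margin and the 64/45 PAL 16:9 pixel aspect
ratio are illustrative values I picked, not figures from the document.

#include <stdio.h>

/* Toy example: rectangular pixels and a safe zone, illustrative values only. */
int main(void)
{
    int coded_w = 720, coded_h = 576;             /* anamorphic PAL frame      */
    double par = 64.0 / 45.0;                     /* rectangular pixel aspect  */
    int display_w = (int)(coded_w * par + 0.5);   /* 720 -> 1024 square pixels */

    int margin_x = (int)(display_w * 0.05);       /* assumed 5% on each side   */
    int margin_y = (int)(coded_h  * 0.05);

    printf("display %dx%d, safe zone %dx%d at (%d,%d)\n",
           display_w, coded_h,
           display_w - 2 * margin_x, coded_h - 2 * margin_y,
           margin_x, margin_y);
    return 0;
}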

>
> "12 Typical use cases"
> I think it would be helpful to have before and after stories here to
> highlight the benefits of your concept.
>

A good hint for a future release.

>
> "13 D-Bus"
> Be careful with generic statements like "D-Bus can be a bit slow ...".
> Stick with facts and avoid myths.
>

Correct, only numbers count. The Cubovision system delivered to Telecom
Italia is using D-Bus and performance has never been an issue, but there is a
perception that it might be slow. Before deciding on a final technology, real
measurements will be required.
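
A measurement can be as simple as the probe below. It assumes GDBus and only
times one trivial round trip to the bus daemon, so it is a data point, not a
verdict on D-Bus performance.

#include <gio/gio.h>

/* Time one synchronous round trip to the session bus daemon (GDBus). */
int main(void)
{
    GError *error = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
    GVariant *reply;
    gint64 t0, t1;

    if (!bus) {
        g_printerr("no bus: %s\n", error->message);
        return 1;
    }

    t0 = g_get_monotonic_time();
    reply = g_dbus_connection_call_sync(bus,
            "org.freedesktop.DBus", "/org/freedesktop/DBus",
            "org.freedesktop.DBus", "GetId",
            NULL, NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
    t1 = g_get_monotonic_time();

    if (reply) {
        g_print("round trip: %" G_GINT64_FORMAT " us\n", t1 - t0);
        g_variant_unref(reply);
    } else {
        g_printerr("call failed: %s\n", error->message);
    }
    g_object_unref(bus);
    return 0;
}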

>
> "14 QT-Multimedia"
> Seriously, don't even consider to stack it on top of qt-multimedia.
> We're still embedded. You could propose to implement it as part of QT
> multimedia though (or having it at the same level).
>

I would do a lot to keep existing applications running, but if that is not
required, I will be happy to ditch it.

>
> "15 GStreamer"
> It is GStreamer (with an upper case 'S') :) In general please spell check
> the section.
> Regarding the three weak points:
> * smooth fast forward is a seek_event with a rate>1.0. There might be
> elements not properly implementing that, but I fail to understand how
> you can fix that on higher layers instead of in the elements. It might
> make sense to define extended compliance criteria for base adaptation
> vendors to ensure consistent behavior and features.
>

I do not plan to correct that at a higher level; I just want to point out
that GStreamer, which is used by default in MeeGo, has weaknesses.
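
For reference, the seek you describe looks like the snippet below. Whether it
then plays back smoothly depends on each element honouring the rate, which no
higher layer can fix.

#include <gst/gst.h>

/* Trick mode: a flushing seek with rate > 1.0 (e.g. 2.0 for 2x). */
static gboolean fast_forward(GstElement *pipeline, gint64 from_ns, gdouble rate)
{
    return gst_element_seek(pipeline, rate,
                            GST_FORMAT_TIME,
                            GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT,
                            GST_SEEK_TYPE_SET, from_ns,    /* resume position */
                            GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
}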


> * DRM can be implemented outside of GStreamer. still I don't fully
> understand what the issue here might be.
>

DRM, and CA in general, are a nightmare and cannot always be decoupled from
the video pipeline. GStreamer is fairly friendly to DRM and CA, but some
requirements will impose a dedicated video pipeline (e.g. Blu-ray).


> * Push/pull: gstreamer is a library. you can do lots of things with it.
> If you want to use it to broadcast media you can do that very well. Some
> known examples: rygel (upnp media server), gst-rtsp-server. Just to
> clarify on the terminology - media processing within the graph is also
> using push and pull, but that refers to whether one component pushes
> media to downstream or one component pulls data from upstream. E.g. in
> media playback of local files GStreamer uses a hybrid setup.
>

Currently GStreamer needs improvements to support pushed transports. A good
example is broadcast live TV, where the playback clock needs to be
synchronised with the satellite source if you do not want to skip or repeat a
frame once in a while. Nothing impossible, but not something which works out
of the box today.
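
In GStreamer terms the mechanism exists, as sketched below, but it presumes a
DVB source element able to expose the broadcast PCR as a GstClock, which the
stock elements do not provide today; that gap is exactly the weakness I am
pointing at.

#include <gst/gst.h>

/* Slave the pipeline to the clock provided by a (hypothetical) DVB source. */
static void slave_to_broadcast_clock(GstElement *pipeline, GstElement *dvb_src)
{
    GstClock *clock = gst_element_provide_clock(dvb_src);

    if (clock != NULL) {
        /* follow the transmitter's clock instead of the local system or
         * audio clock, so frames are neither dropped nor repeated over a
         * long viewing session */
        gst_pipeline_use_clock(GST_PIPELINE(pipeline), clock);
        gst_object_unref(clock);
    }
}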


> * Licenses and Patents: Seriously, this is hardly the fault of GStreamer
> and its plugin approach is the  best solution for it. In the end every
> vendor shipping a meego solution will need to ensure that the royalties
> for codecs are payed and the shipped code is fully licensed.
>

Yes, but that is not easy in a fully open source environment.


> Besides a system like MAFW already allowed to e.g. implement a local
> renderer using mplayer as a foundation if that is preferred. Personally
> thats fine to me, but I believe the target customer for a TV will expect
> that things work out of the box :)
>

Providing a full TV experience imposes quite a number of extra requirements
which are definitely not covered by any open source system today. I hope that
by defining something generic, other MeeGo verticals (in particular Tablet
and IVI) will get access to better support for live TV in their domain. For
them it is optional and nice to have; for TV there is no choice, it is a
mandatory requirement.

>
> Sorry, this became a somewhat long reply
>

Very welcome.

>
> --
Dominig ar Foll
MeeGo TV
Intel Open Source Technology Centre