On Wednesday, February 09, 2011 12:50:27 Sachin Gupta wrote:
> Hi Hans,
> 
>    Thanks for your inputs. We are part of the Linaro organisation; for
> more details on Linaro please refer to http://www.linaro.org. As part of
> our activities in Linaro we have been debating what the right solution is
> for exposing camera support/features on a platform: OpenMAX or V4L2.

It's simple, really. If you want support for the video hardware merged
into the kernel, then V4L is the way to go. OpenMAX drivers will never
enter the kernel.

> Also, can you share some details/docs on how the user-side library/V4L2
> partitioning is supposed to work?

The details are really not yet known. The idea is as follows: the V4L
driver gives access to the hardware, and our goal is that the full
functionality of the hardware can be accessed through this driver.

There is also the library libv4l (documented in the V4L2 spec:
http://linuxtv.org/downloads/v4l-dvb-apis/, section 6). The main goal of
this library is to do format conversion from custom formats to more common
formats, software white balancing, image mirroring and similar tasks. Its
main use is in generic webcam applications; you would not expect to see it
used on a SoC with camera software written for a specific board. However,
work is in progress to extend this library to allow plug-ins as well.

Such plug-ins would make it possible to intercept V4L2 ioctls and call
proprietary code to, for example, do white balancing or set up scaling
factors. Some patches have been posted already, but I don't know the
current status.

In the case of complex video hardware I suspect that userspace libraries
for specific video hardware might be needed. What I expect to be in there
is code that sets up pipelines or specific use-cases in a simplified way.
The library would know how to configure the various blocks optimally. A
typical example would be scaling: there may be multiple scalers in the
pipeline, and while the V4L2 API gives you access to those scalers, it
does not contain the knowledge of how to achieve the best end result.

Now, it is too early to know whether or not this will actually be needed, and
if it is needed, what form it will take. Perhaps it will be possible to write
such libraries as a libv4l plugin, perhaps developers will just make their
own solution.

Right now I would just forget about such libraries. It's way too early.

Regards,

        Hans

> 
> 
> Thanks
> Sachin
> 
> On Wed, Feb 9, 2011 at 1:13 PM, Hans Verkuil <hverk...@xs4all.nl> wrote:
> 
> > On Wednesday, February 09, 2011 07:34:09 Sachin Gupta wrote:
> > > Looking at the PPT from Robert, it seems V4L2 subdevices are the way
> > > to support the different devices that may be involved in an imaging
> > > processing chain. Also from the PPT it seems a user-side Media
> > > Controller library is needed for each platform to control these
> > > subdevices. I have not been able to find detailed documentation on
> > > this, but it seems we are talking about a custom solution for every
> > > platform, based on the platform topology for the image processing
> > > chain.
> >
> > It is not clear yet whether custom libraries will be needed or not.
> > For omap3 (the first driver to use the media controller) it doesn't
> > seem to be needed (yet?).
> >
> > However, the complexity of some of these video systems is such that I
> > can't help thinking that some library will be required to simplify the
> > use of such hardware.
> >
> > In general it will not be possible to make a completely generic
> > solution for video subsystems that will work everywhere. The various
> > architectures are simply too varied for that. The media controller
> > will go some way towards solving this, but a 100% solution is in
> > practice impossible.
> >
> > If you go only for a subset (for example, setting up a standard simple
> > pipeline for a camera-type system), you are probably able to make
> > something generic, but if you want to get full control over such
> > systems in order to get the best possible quality, then you will have
> > to customize your code for that particular hardware.
> >
> > It might help me if I could get a better idea of what you are working
> > on and what the goal is. I came in in the middle of the discussion and
> > I think I'm missing some of the pieces :-)
> >
> > Regards,
> >
> >        Hans
> >
> > >
> > > On Wed, Feb 9, 2011 at 11:53 AM, Subash Patel <subash...@samsung.com>
> > > wrote:
> > >
> > > > In the reference architecture in the PPT, we can directly wait for
> > > > the RSZ interrupt if we configure the hardware pipe. It was my
> > > > misunderstanding, as each of those hardware blocks can deliver
> > > > interrupts too. In that way the ARM needs to work only on the
> > > > finished frame, e.g. forward it to the display or codec engine.
> > > > V4L2 can easily be used for such a hardware architecture.
> > > >
> > > > But if an ISP chooses to do the above work in a separate (DSP)
> > > > processor, can we still use V4L2? OMX seems better in such an
> > > > environment. Let me know if there is any other alternative.
> > > >
> > > > Regards,
> > > > Subash
> > > >  _______________________________________________
> > > > linaro-dev mailing list
> > > > linaro-dev@lists.linaro.org
> > > > http://lists.linaro.org/mailman/listinfo/linaro-dev
> > > >
> > >
> >
> >  --
> > Hans Verkuil - video4linux developer - sponsored by Cisco
> >
> 

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco

