> Hi,
>
> Thanks for your ideas.
>
> If I am not mistaken, all subdevices in the ISP media pipe can be
> interconnected without the need for ARM intervention. But I could be
> wrong.
You are completely right. V4L2 sets up the internal hardware pipeline; once
you start streaming, it is all done in hardware. It would be sad indeed if
you had to pass the frames around manually.

OMX tends to be geared towards DSPs (encoders/decoders). In theory it would
be possible to use V4L2 for this as well, but nobody has done so. Pure
hardware encoders/decoders, on the other hand, generally *are* handled by
V4L2 (there are a few drivers that do that).

Linux implementations of OMX should really use V4L2 to control the
underlying video hardware. But due to the often closed-source nature of
codecs, and due to the limited functionality of the V4L2 API (at least up
to a few years ago), this was probably never a consideration. However, the
V4L2 API and internal framework are improving rapidly, so V4L2 is now much
more attractive for powerful video hardware.

BTW, the main mailing list for all things V4L is linux-me...@vger.kernel.org.

Regards,

	Hans

> Why not ask Hans Verkuil and Laurent Pinchart? Sorry for bringing you in
> like this, but you are the true experts in V4L2.
> Do you guys have any comments on the discussion in this mail?
>
> /BR
> /Robert Fekete
>
> On 8 February 2011 13:42, SUBASH PATEL <subash...@samsung.com> wrote:
>
>> Hi Robert,
>>
>> Thanks for sharing the slides. They were informative on OMAP3; my
>> reference was to similar hardware.
>>
>> In slide 20, the green blobs are described as drivers. Let's consider
>> an example where the camera sensor is connected on a CSI2 interface.
>> On every CSI2 interrupt (assuming frame-based), we would have to take
>> the frame and pass it on to each ISP block in the diagram, so the ARM
>> would be involved.
>>
>> If we are speaking of performance like HD/Full-HD @ 30fps, imagine the
>> processing this requires from the ARM: every time it has to take a
>> frame from one component and pass it on to another until the resizer
>> produces the final frame. From there it has to go to an encoder as
>> well. But if we use a dedicated imaging processor, which runs on its
>> own and provides the desired frames (the last yellow box - the ISP
>> resizer output), the ARM can concentrate on something else in the
>> meantime.
>>
>> We cannot do such a thing with V4L2. As far as I know, since this is a
>> two-processor environment, we require some client/server architecture.
>> OMX comes in handy in these cases, as it has all of the client, core
>> and component parts. That is why gstreamer is seen as a broker by many
>> media applications: gstreamer will forward controls to V4L2 or OMX as
>> appropriate, depending on how the hardware delivers the frames.
>>
>> Regards,
>>
>> Subash
>>
>> -------Original Message--------
>> Sender : Robert Fekete <robert.fek...@linaro.org>
>> Date : Feb 08, 2011 17:40 (GMT+05:30)
>> Title : Re: v4l2 vs omx for camera
>>
>> Hi,
>>
>> This presentation by Hans Verkuil from the last V4L2 summit describes
>> the media controller, which is a perfect fit for an ISP. Pay special
>> attention to slide 20: the yellow boxes are input/output devices, the
>> green blobs are subdevices/drivers.
>>
>> Thus V4L2 will fit any camera sensor, whether it is a YUV or a raw
>> camera, while also providing a neat kernel interface with all source
>> available for customers and happy hackers to use.
>>
>> BR
>> /Robert F
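To make the pipeline-wiring point concrete, here is roughly what connecting
two subdevices looks like through the media controller API from that
presentation (note the API was still on its way into mainline at the time
of this thread, so take this as a sketch). The /dev/media0 node and the
entity/pad numbers are made up; a real application discovers them with
MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS first:

    /* Sketch: enable the link from a sensor's source pad to a CSI2
     * receiver's sink pad. Entity IDs and pad indexes are made up. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/media.h>

    int main(void)
    {
        struct media_link_desc link;
        int fd = open("/dev/media0", O_RDWR);   /* hypothetical node */

        if (fd < 0)
            return 1;

        memset(&link, 0, sizeof(link));
        link.source.entity = 1;                 /* e.g. the sensor subdev */
        link.source.index = 0;                  /* its source pad */
        link.sink.entity = 2;                   /* e.g. the CSI2 receiver */
        link.sink.index = 0;                    /* its sink pad */
        link.flags = MEDIA_LNK_FL_ENABLED;

        return ioctl(fd, MEDIA_IOC_SETUP_LINK, &link) ? 1 : 0;
    }

The point is that this runs once at setup time; after streaming starts the
frames flow between the subdevices entirely in hardware, with no per-frame
ARM involvement.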
>> On 8 February 2011 11:42, SUBASH PATEL <subash...@samsung.com> wrote:
>>
>>> Hi Sachin,
>>>
>>> I think when we speak of OMX, we are referring to the OMX IL layer,
>>> which is supported as a middleware component.
>>>
>>> I am putting down my experiences below:
>>>
>>> - Generally a camera gives two streams: a preview stream, which can
>>> be YUV/RGB, and a capture stream (YUV/RGB/JPEG). The preview frame
>>> format must be one of the formats the display system supports
>>> (YUV/RGB); otherwise a color conversion is required in the path,
>>> which adds overhead and latency.
>>>
>>> - If we have a smart camera sensor, i.e. one capable of providing a
>>> processed image (RGB, YUV) frame rather than a RAW pixel dump, and
>>> the ARM is able to control the sensor interface, then a V4L2
>>> framework camera driver will work. The userspace wrapper/app invokes
>>> the V4L2 ioctls to control the camera.
>>>
>>> - If we have a RAW sensor that produces, say, the Bayer pixel format,
>>> we will need an image pipe to process it before converting it to one
>>> of the RGB/YUV formats. Image pipes involve conversions, resizing,
>>> etc. It would be an overhead to do these stages on the ARM, so some
>>> vendors have proprietary imaging processors for it. These processors
>>> may run a custom RTOS, with a private IPC layer built between the
>>> Linux kernel and the proprietary OS. The OMX layer works in such
>>> scenarios: a concept called distributed OMX works over an RPC
>>> mechanism, so OMX client calls made on the ARM land on the image
>>> processor, where proprietary drivers invoked by the remote OMX
>>> component do the image processing mentioned above.
>>>
>>> The userspace wrapper/app, a.k.a. the OMX client, is a set of OMX
>>> calls that get routed to the proper OMX component through the OMX
>>> core. This is similar to a V4L2 client, but instead of controlling
>>> the camera through ioctls, we use the OMX-specific Get/Set methods.
>>>
>>> From my view, the choice between V4L2 and OMX basically depends on
>>> the type of sensor and the presence of dedicated hardware. If we
>>> already have a dedicated imaging processor, V4L2 can be absent and we
>>> will have to leverage OMX because of its capability. But if we are
>>> integrating a new sensor with a built-in accelerator, it makes sense
>>> to reduce the silicon area on the SoC and use V4L2 instead.
>>>
>>> Regards,
>>> Subash
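To put that ioctl-vs-Get/Set comparison side by side, here is a minimal
sketch of the V4L2 half: negotiate a format and start streaming with plain
ioctls. The device node and resolution are made up, and the
VIDIOC_REQBUFS/VIDIOC_QBUF buffer setup is elided for brevity:

    /* Sketch: configure a capture format and start streaming. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_format fmt;
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        int fd = open("/dev/video0", O_RDWR);   /* hypothetical node */

        if (fd < 0)
            return 1;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1280;               /* made-up resolution */
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt))      /* the "Set" of V4L2 */
            return 1;

        /* ...VIDIOC_REQBUFS/VIDIOC_QBUF buffer setup elided... */

        return ioctl(fd, VIDIOC_STREAMON, &type) ? 1 : 0;
    }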
>>> -------Original Message--------
>>> Sent: Sachin Gupta <sachin.gu...@linaro.org>
>>> Date: Tue, 8 Feb 2011 14:25:21 +0530
>>> Subject: Re: v4l2 vs omx for camera
>>>
>>> Arnd,
>>>
>>> You are correct that OMX and V4L2 sit at different levels, one being
>>> a userspace API and the other a kernel API. But from the point of
>>> view of integrating these APIs into OS frameworks like gstreamer or
>>> the Android camera service, they are at the same level: one will have
>>> to implement a gstreamer source plugin based on either V4L2 or OMX.
>>>
>>> Also, the way the vendors (STE and TI) have gone about implementing
>>> OMX, they completely bypass V4L2, the major reason being code sharing
>>> among different OS environments. The kernel side of the OMX
>>> implementation just facilitates RPC between the imaging coprocessor
>>> and the ARM side.
>>>
>>> Sachin
>>>
>>> On Tue, Feb 8, 2011 at 2:00 PM, Lee Jones <lee.jo...@linaro.org> wrote:
>>>
>>> Bringing in my boys.
>>>
>>> Robert, Linus, what say you?
>>>
>>> On 07/02/11 12:33, Arnd Bergmann wrote:
>>> > On Monday 07 February 2011, Sachin Gupta wrote:
>>> >> In the Multimedia WG we have been posed a question regarding the
>>> >> best way to expose a low-level API for the camera, so this is a
>>> >> question mainly about the pros and cons of V4L2 and OMX over each
>>> >> other. To involve a wider community in this discussion I am
>>> >> floating this mail on linaro-dev. Please share your
>>> >> views/experiences, and please involve anybody else in this mail
>>> >> who can provide valuable input.
>>> >
>>> > I've had to look up what "omx" actually stands for [1][2], but from
>>> > an outsider's view, they don't seem to be mutually exclusive or
>>> > even competing interfaces. V4L2 is the interface you use to get at
>>> > camera data, in whatever format the camera gives you. There are no
>>> > alternatives to that. OpenMAX gives you a way to accelerate video
>>> > codecs, which is good, but this sits a layer higher up in the
>>> > stack. Supporting OMX is probably a good idea, but would be totally
>>> > optional.
>>> >
>>> > 	Arnd
>>> >
>>> > [1] http://www.khronos.org/openmax/
>>> > [2] http://www.freedesktop.org/wiki/GstOpenMAX
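And for contrast, a minimal sketch of the OMX IL half of the same
comparison. The component name "OMX.vendor.camera" is made up (real names
are vendor-specific and enumerated via OMX_ComponentNameEnum()), and the
callbacks and buffer handling a real IL client needs are elided:

    /* Sketch: the OMX IL "Get/Set" control flow. */
    #include <string.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>

    int main(void)
    {
        OMX_HANDLETYPE camera;
        OMX_CALLBACKTYPE callbacks;             /* real clients fill these in */
        OMX_PARAM_PORTDEFINITIONTYPE port;

        memset(&callbacks, 0, sizeof(callbacks));
        if (OMX_Init() != OMX_ErrorNone)
            return 1;
        if (OMX_GetHandle(&camera, "OMX.vendor.camera", NULL,
                          &callbacks) != OMX_ErrorNone)
            return 1;

        /* Roughly the OMX counterpart of VIDIOC_G_FMT/VIDIOC_S_FMT: */
        memset(&port, 0, sizeof(port));
        port.nSize = sizeof(port);
        port.nVersion.s.nVersionMajor = 1;
        port.nVersion.s.nVersionMinor = 1;
        port.nPortIndex = 0;                    /* made-up port index */
        OMX_GetParameter(camera, OMX_IndexParamPortDefinition, &port);
        port.format.video.nFrameWidth = 1280;   /* made-up resolution */
        port.format.video.nFrameHeight = 720;
        OMX_SetParameter(camera, OMX_IndexParamPortDefinition, &port);

        /* State transitions then start the component. */
        OMX_SendCommand(camera, OMX_CommandStateSet, OMX_StateIdle, NULL);
        return 0;
    }

With a distributed OMX implementation of the kind Sachin describes, these
same calls are marshalled over RPC to the imaging coprocessor.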
>>> -------Original Message--------
>>> Sent: Loïc Minier <loic.min...@linaro.org>
>>> Date: Tue, 8 Feb 2011 10:35:38 +0100
>>> Subject: Re: Efikamx bootloader help
>>>
>>> Hey, adding my bits where I can.
>>>
>>> On Tue, Feb 08, 2011, Eric Miao wrote:
>>> > 2. The three possible boot up methods: a) internal SPI NOR flash,
>>> > b) the MicroSD card behind the battery and c) the normal SD card at
>>> > the left side; not really sure about the situation on the Efika MX
>>> > (smart top) as I don't have one in my hands
>>>
>>> The efikamx only has SD and internal flash; there might be other boot
>>> methods like serial or USB, but I don't think we need to care too
>>> much about these.
>>>
>>> > 4. The boot sequence of the internal SPI NOR flash, as it looks to
>>> > me that it's trying to find boot.scr from either the MicroSD or SD
>>> > card on the first partition? Also it would be helpful to document
>>> > the environment variables of this specific u-boot.
>>>
>>> On my efikamx, this is the bootcmd it had when I received it, and the
>>> corresponding variables:
>>>
>>>   bootcmd=run pata_boot
>>>   pata_boot=run bootargs_base bootargs_pata
>>>   bootargs_base=setenv bootargs noinitrd console=ttymxc0,115200 console=tty1
>>>   bootargs_pata=setenv bootargs ${bootargs} root=/dev/sda2 ${bootinfo};run boot_pata
>>>   bootinfo=rw
>>>   boot_pata=run base_cmds;ide reset;fatload ide 0:1 ${loadaddr} ${kernel}; bootm ${loadaddr}
>>>   base_cmds=run base_cmd1;run base_cmd2;run base_cmd3;run base_cmd4;run base_cmd5;run base_cmd99
>>>   loadaddr=0x90007FC0
>>>   kernel=uImage
>>>   base_cmd1=pmic 15 0x00400022;mw.l 0x73fa84b8 0xe7 1;mw.l 0x73fd4014 0x59239100 1
>>>   base_cmd2=mw.l 0x83fd9010 0xcaaaf6d0 1;mw.l 0x73f88000 0x01025200 1;mw.l 0x73f84000 0x20 1
>>>   base_cmd3=mw.l 0x83fd9004 0x333574aa 1;mw.l 0x83fd900c 0x333574aa 1;
>>>   base_cmd4=mw.l 0x83fd9020 0x00f48b00 1;mw.l 0x83fd9024 0x00f49700 1;mw.l 0x83fd9028 0x00f48700 1
>>>   base_cmd5=mw.l 0x83fd902c 0x00f48400 1;mw.l 0x83fd9030 0x00f44e00 1;
>>>
>>> > 5. If Linaro is going to support u-boot for Efika MX/SB, it's
>>> > better to follow what is in upstream. However, there could be some
>>> > differences between the u-boot in the recovery/installing image
>>> > downloadable from powerdeveloper.org and the one in upstream. We
>>> > might need to figure out those differences and see how to handle
>>> > them.
>>>
>>> That was indeed the case; Marex, who developed the upstream u-boot
>>> bits, told me the same u-boot would work on SD and on flash. The one
>>> I've built from mainline uses the default imximage.cfg settings,
>>> which is "BOOT_FROM spi", yet works fine from an SD card. So it seems
>>> the same u-boot can be used for both SD and flash -- just with a
>>> different bootcmd.
>>>
>>> Cheers,
>>> --
>>> Loïc Minier

--
Hans Verkuil - video4linux developer - sponsored by Cisco

_______________________________________________
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev