2011/4/24 Paulo Costa <p...@fe.up.pt>:

> Probably you are right. Let me know how it works for you. Historically,
> we had a V4L unit for Kylix that was used with a Bt878 Frame Grabber, and we
> had to build the new component when we wanted to use the USB cameras. Back
> then, some drivers were not available as V4L2.

I had some more time to look into this now, read some more
documentation and looked into the 5dpo sources.

Actually, it really does seem to be a full-featured v4l2 implementation.
Unfortunately I have only one v4l2 device to test it with, and the
driver for this device itself seems broken/incomplete: it will accept a
lot of different format settings without complaining but always sends
its data in the same format (probably JPEG), which I was not yet able
to decode. Other applications have problems with this driver too.

The problem with v4l, as I see it now, is that it does not really
define a nice standard that an application could use easily. Only the
handshaking and the camera settings are standardized; the actual image
data may be transmitted in any of a myriad of possible formats. The
kernel developers don't like (don't allow) format conversion in kernel
space, so the driver just pumps the raw data from the camera to the
application, and the application itself must be able to handle a huge
number of different video formats and encodings. Therefore it seems
(from what I understand so far) that there exists a set of libraries
that can (or should) be used to wrap the raw v4l device into something
more application-friendly and attempt to solve this problem.

This is what I will investigate next.

Currently I have an old v4l1 device with no existing v4l2 driver, so I
needed some way to access a v4l1 device. The units from 5dpo do not
contain the needed definitions to use v4l1, only v4l2.

So I wrote a bare-bones minimal v4l1 unit from scratch, manually
translating the headers from the FreeBSD videodev.h file (which has
the better documentation). It contains only the absolute minimum that
is needed to open a v4l1 camera, set the image settings (brightness,
etc.) and grab frames. Fortunately my camera outputs uncompressed
RGB24 data.
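
For what it is worth: with uncompressed RGB24 there are simply 3 bytes
per pixel and (at least with my camera) no padding at the end of the
lines, so addressing a pixel in a grabbed frame is plain pointer
arithmetic. A small sketch (the function name is made up; the order of
the three colour bytes inside a pixel may depend on the driver):

  // returns colour component c (0..2) of pixel (x,y) from an RGB24 frame,
  // assuming 3 bytes per pixel and no line padding
  function PixelComponent(Frame: PByte; FrameWidth, x, y, c: Integer): Byte;
  begin
    Result := (Frame + (y * FrameWidth + x) * 3 + c)^;
  end;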

I am attaching the unit for documentation purposes (it is really
legacy stuff, no new devices use v4l1 nowadays, but there are still
some old cameras around that are needed for special purposes); maybe
somebody else will stumble over this while searching for the same
problem. It also contains a class with some methods that demonstrate
how to open the cam and grab images. I have used it in this current
form to successfully grab streaming images from an old QuickCam
Express that uses the qc-usb driver on kernel 2.6.24.

Bernd
{ Implements the bare minimum of v4l (NOT v4l2), just enough
  to open the webcam, set the picture settings and grab images.
  It does not contain anything regarding Tuners, Audio, etc.

  Copyright 2011 Bernd Kreuss <prof7...@googlemail.com>

  License: Not sure, probably the same as FreeBSD, because it
  is entirely derived from their re-implementation of videodev.h;
  the original file is attached at the end.

  If you find some missing "r" chracters in my source code
  comments then this is because my keyboard is broken. }

unit v4l1;

{$mode objfpc}{$H+}

interface

const
  // these are used in the TVideo_Capability record
  VID_TYPE_CAPTURE = 1;          { Can capture }
  VID_TYPE_TUNER = 2;            { Can tune }
  VID_TYPE_TELETEXT = 4;         { Does teletext }
  VID_TYPE_OVERLAY = 8;          { Overlay onto frame buffer }
  VID_TYPE_CHROMAKEY = 16;       { Overlay by chromakey }
  VID_TYPE_CLIPPING = 32;        { Can clip }
  VID_TYPE_FRAMERAM = 64;        { Uses the frame buffer memory }
  VID_TYPE_SCALES = 128;         { Scalable }
  VID_TYPE_MONOCHROME = 256;     { Monochrome only }
  VID_TYPE_SUBCAPTURE = 512;     { Can capture subareas of the image }
  VID_TYPE_MPEG_DECODER = 1024;  { Can decode MPEG streams }
  VID_TYPE_MPEG_ENCODER = 2048;  { Can encode MPEG streams }
  VID_TYPE_MJPEG_DECODER = 4096; { Can decode MJPEG streams }
  VID_TYPE_MJPEG_ENCODER = 8192; { Can encode MJPEG streams }

  // these are used in the TVideo_Picture record
  VIDEO_PALETTE_GREY = 1; { Linear greyscale }
  VIDEO_PALETTE_HI240 = 2; { High 240 cube (BT848) }
  VIDEO_PALETTE_RGB565 = 3; { 565 16 bit RGB }
  VIDEO_PALETTE_RGB24 = 4; { 24bit RGB }
  VIDEO_PALETTE_RGB32 = 5; { 32bit RGB }
  VIDEO_PALETTE_RGB555 = 6; { 555 15bit RGB }
  VIDEO_PALETTE_YUV422 = 7; { YUV422 capture }
  VIDEO_PALETTE_YUYV = 8;
  VIDEO_PALETTE_UYVY = 9; { The great thing about standards is ... }
  VIDEO_PALETTE_YUV420 = 10;
  VIDEO_PALETTE_YUV411 = 11; { YUV411 capture }
  VIDEO_PALETTE_RAW = 12; { RAW capture (BT848) }
  VIDEO_PALETTE_YUV422P = 13; { YUV 4:2:2 Planar }
  VIDEO_PALETTE_YUV411P = 14; { YUV 4:1:1 Planar }
  VIDEO_PALETTE_YUV420P = 15; { YUV 4:2:0 Planar }
  VIDEO_PALETTE_YUV410P = 16; { YUV 4:1:0 Planar }
  VIDEO_PALETTE_PLANAR = 13; { start of planar entries }
  VIDEO_PALETTE_COMPONENT = 7; { start of component entries }

  // used in TVideo_Mbuf
  VIDEO_MAX_FRAME = 32;

type
  { will be filled by issuing a VIDIOCGCAP ioctl to query some
    capabilities of the device. This is the first thing
    one should do after opening the device }
  TVideo_Capability = record
    name: array[0..31] of char;
    typ: Integer;
    channels: Integer;
    audios: Integer;
    maxwidth: Integer;
    maxheight: Integer;
    minwidth: Integer;
    minheight: Integer;
  end;

  { This is used to get or set the picture settings with
    VIDIOCGPICT and VIDIOCSPICT. This is the next thing
    to do after querying the capabilities.}
  Tvideo_Picture = record
    brightness: Word;
    hue: Word;
    colour: Word;
    contrast: Word;
    whiteness: Word;
    depth: Word;
    palette: Word;
  end;
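
  { The usual pattern for VIDIOCGPICT/VIDIOCSPICT is read-modify-write,
    roughly like this (only a sketch; TSimpleV4l1Device.Open below does
    the real thing, including error checking):

      FpIOCtl(Handle, VIDIOCGPICT, @Pict);
      Pict.palette := VIDEO_PALETTE_RGB24;
      FpIOCtl(Handle, VIDIOCSPICT, @Pict);
  }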

  { clipping regions, referenced in TVideo_Window. This is not
    something one needs for everyday use. It is only defined here
    for the sake of completeness because TVideo_Window mentions them. }
  Pclips = ^TClips;
  TClips = record
    x,y: Integer;
    width, height: Integer;
    next: Pclips;
  end;

  { This must be sent with a VIDIOCSWIN or queried with a VIDIOCGWIN
    to set (or get) the video size before starting to capture. Usually
    you set it to the maximum width and height that is found in
    TVideo_Capability }
  TVideo_Window = record
    x,y : DWord;
    width, height: DWord;
    chromakey: DWord;
    flags: DWord;
    clips: Pclips;
    clipcount: Integer;
  end;

  { this is used to ask the driver for the size of the memory
    that should be mapped. Use it with VIDIOCGMBUF, then
    use the returned size value to Fpmmap() the device, and
    after capturing you can use the offsets array to calculate
    the pointers to individual frames in the mapped memory }
  TVideo_Mbuf = record
    size: Integer;
    frames: Integer;
    offsets: array[0..VIDEO_MAX_FRAME-1] of Integer;
  end;
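
  { For illustration, the whole mmap sequence looks roughly like this
    (only a sketch; TSimpleV4l1Device.Open and GetImage below do the
    real thing, including error checking):

      FpIOCtl(Handle, VIDIOCGMBUF, @Mbuf);
      Buf := Fpmmap(nil, Mbuf.size, PROT_READ, MAP_SHARED, Handle, 0);
      // ... VIDIOCMCAPTURE and VIDIOCSYNC frame i, then:
      FrameData := Buf + Mbuf.offsets[i];
  }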

  { use this to request the capturing of exactly one frame with
    VIDIOCMCAPTURE. It will return immediately. You must use
    VIDIOCSYNC to wait for this frame. VIDIOCMCAPTURE must be
    called for each frame. Usually you do it like the following,
    alternating between the two frame numbers 0 and 1.

    VIDIOCMCAPTURE(0)
    while ... do begin
      VIDIOCMCAPTURE(1)       // start 1
      VIDIOCSYNC(0)           // wait for 0
      ... process the frame ...
      VIDIOCMCAPTURE(0)       // start 0
      VIDIOCSYNC(1)           // wait for 1
      ... process the frame ...
    end;
    }
  TVideo_Mmap = record
    frame: DWord;   // Frame (0..n-1)
    height: Integer;
    width: Integer;
    format: DWord;  // should be VIDEO_PALETTE_*
  end;


// the following were actually constants in the
// original headers, but their values are computed
// by the C preprocessor at compile time with the
// help of a lot of nested macros; it is actually
// easier to make them functions here.

function VIDIOCGCAP: Cardinal;
function VIDIOCGPICT: Cardinal;
function VIDIOCSPICT: Cardinal;
function VIDIOCGWIN: Cardinal;
function VIDIOCSWIN: Cardinal;
function VIDIOCGMBUF: Cardinal;
function VIDIOCMCAPTURE: Cardinal;
function VIDIOCSYNC: Cardinal;
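
// For reference: with the common asm-generic ioctl encoding (x86 and most
// other Linux targets) these macros pack their arguments as
// (direction shl 30) or (size shl 16) or (type shl 8) or number, where
// direction = 2 for _IOR and 1 for _IOW. For example (only a sketch;
// some architectures use a different layout):
//
//   VIDIOCGCAP = (2 shl 30) or (SizeOf(TVideo_Capability) shl 16)
//                or (ord('v') shl 8) or 1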


///////////////////////
// End of V4l Header //
///////////////////////


type
  { TSimpleV4l1Device implements something that will actually make use of
    all the above. It represents a webcam; it will set the device to the
    largest supported resolution and start capturing images. It is only
    a quick and dirty hack that works for my QuickCam Express, but it
    illustrates the principle.
    Usage:

      Cam := TSimpleV4l1Device.Create('/dev/video0', VIDEO_PALETTE_RGB24);
      Cam.Open;
      while (WhatEver) do begin
        Cam.Capture;  // already start capturing the next frame
        Cam.Sync;     // wait for the current frame
        Data := Cam.GetImage;
        // do something with Data
      end;

    The methods can throw exceptions when the camera is not supported,
    so you should enclose it all in try/except, and don't forget to
    call Cam.Close afterwards. }
  TSimpleV4l1Device = class(TObject)
    FDevice: String;
    FHandle: Integer;
    FPalette: Word;
    FVideo_Capability: TVideo_Capability;
    FVideo_Picture: Tvideo_Picture;
    FVideo_Window: TVideo_Window;
    FVideo_Mbuf : TVideo_Mbuf;
    FVideo_Mmap : TVideo_Mmap;
    FBuf: PByte;
    FFrameNum: Integer;
    constructor Create(ADevice: String; APalette: Word);

    { Open the device and set all parameters so that capturing
    can begin. This will also trigger the very first Capture call
    with frame 0. The next Capture/Sync will then be 1/0, then
    0/1, etc. }
    procedure Open;

    {free the buffers and close the device}
    procedure Close;

    { Tell the driver to start capturing a frame. This will
    also toggle the frame number to the other of the two
    frames. A subsequent Sync will then wait for the frame
    that was captured before. }
    procedure Capture;

    { wait for the next frame.}
    procedure Sync;

    { get the pointer to the last Sync'ed frame }
    function GetImage: PByte;
  end;
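
  { A slightly more complete usage sketch for grabbing a single frame
    (only a sketch, assuming an RGB24-capable camera on /dev/video0):

      Cam := TSimpleV4l1Device.Create('/dev/video0', VIDEO_PALETTE_RGB24);
      try
        Cam.Open;      // Open already starts capturing frame 0
        Cam.Capture;   // start frame 1
        Cam.Sync;      // wait for frame 0
        Data := Cam.GetImage;
        // Data now points to width * height * 3 bytes of RGB24 data
      finally
        Cam.Close;
        Cam.Free;
      end;
  }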


// the functions below are only for debugging purposes,
// to print the contents of some records to the console.

procedure DebugPrintCapabilities(cap: TVideo_Capability);
procedure DebugPrintPicture(pict: Tvideo_Picture);
procedure DebugPrintWindow(win: TVideo_Window);

implementation
uses
  Classes, SysUtils, BaseUnix, kernelioctl;

{ the ioctl "constants" implemented as functions }

function VIDIOCGCAP: Cardinal;
begin
  Result := _IOR(ord('v'), 1, SizeOf(TVideo_Capability));
end;

function VIDIOCGPICT: Cardinal;
begin
  Result := _IOR(ord('v'), 6, SizeOf(Tvideo_Picture));
end;

function VIDIOCSPICT: Cardinal;
begin
  Result := _IOW(ord('v'), 7, SizeOf(Tvideo_Picture));
end;

function VIDIOCGWIN: Cardinal;
begin
  Result := _IOR(ord('v'), 9, SizeOf(TVideo_Window));
end;

function VIDIOCSWIN: Cardinal;
begin
  Result := _IOW(ord('v'),10, SizeOf(TVideo_Window));
end;

function VIDIOCGMBUF: Cardinal;
begin
  Result := _IOR(ord('v'), 20, SizeOf(TVideo_Mbuf));
end;

function VIDIOCMCAPTURE: Cardinal;
begin
  Result :=	_IOW(ord('v'), 19, SizeOf(TVideo_Mmap));
end;

function VIDIOCSYNC: Cardinal;
begin
  Result := _IOW(ord('v'), 18, SizeOf(Integer));
end;

{ TSimpleV4l1Device }

constructor TSimpleV4l1Device.Create(ADevice: String; APalette: Word);
begin
  FDevice := ADevice;
  FPalette := APalette;
end;

procedure TSimpleV4l1Device.Open;
begin
  // open the device
  FHandle := FpOpen(pchar(FDevice), O_RDWR);
  if FHandle = -1 then
    raise Exception.Create('could not open video device ' + FDevice);

  // get capability
  if FpIOCtl(FHandle, VIDIOCGCAP, @FVideo_Capability) < 0 then
    raise Exception.Create('could not query capabilities');
  DebugPrintCapabilities(FVideo_Capability);

  // get picture settings (and set palette)
  if FpIOCtl(FHandle, VIDIOCGPICT, @FVideo_Picture) < 0 then
    raise Exception.Create('could not query picture settings');
  DebugPrintPicture(FVideo_Picture);
  if FVideo_Picture.palette <> FPalette then begin
    writeln('setting desired palette');
    FVideo_Picture.palette := FPalette;
    if FpIOCtl(FHandle, VIDIOCSPICT, @FVideo_Picture) < 0 then
      raise Exception.Create('could not set palette');
    DebugPrintPicture(FVideo_Picture);
  end;

  // set video window
  with FVideo_Window do begin
    x := 0;
    y := 0;
    width := FVideo_Capability.maxwidth;
    height := FVideo_Capability.maxheight;
    chromakey := 0;
    flags := 0;
    clips := Nil;
    clipcount := 0;
  end;
  if FpIOCtl(FHandle, VIDIOCSWIN, @FVideo_Window) < 0 then
    raise Exception.Create('could not set video window');

  // get video window
  if FpIOCtl(FHandle, VIDIOCGWIN, @FVideo_Window) < 0 then
    raise Exception.Create('could not query video window');
  DebugPrintWindow(FVideo_Window);

  // ask the driver how much memory to mmap and do it
  if FpIOCtl(FHandle, VIDIOCGMBUF, @FVideo_Mbuf) < 0 then
    raise Exception.Create('could not query VIDIOCGMBUF');
  FBuf := Fpmmap(nil, FVideo_Mbuf.size, PROT_READ, MAP_SHARED, FHandle, 0);
  if (FBuf = nil) or (FBuf = Pointer(-1)) then // Fpmmap returns Pointer(-1) (MAP_FAILED) on error
    raise Exception.Create('could not mmap the capture buffers');
  FFrameNum := 0;

  Capture; // start capturing the first frame already
end;

procedure TSimpleV4l1Device.Close;
begin
  Fpmunmap(FBuf, FVideo_Mbuf.size);
  FpClose(FHandle);
end;

procedure TSimpleV4l1Device.Capture;
begin
  writeln(Format('capture frame %d',[FFrameNum]));
  FVideo_Mmap.format := FVideo_Picture.palette;
  FVideo_Mmap.height := FVideo_Window.height;
  FVideo_Mmap.width := FVideo_Window.width;
  FVideo_Mmap.frame := FFrameNum;
  if FpIOCtl(FHandle, VIDIOCMCAPTURE, @FVideo_Mmap) < 0 then
    raise Exception.Create('could not send VIDIOCMCAPTURE');

  // now we switch to the other of the two frame numbers. The
  // application will now call Sync on the other frame which
  // has already been capturing for a bit longer.
  FFrameNum := 1 - FFrameNum;
end;

procedure TSimpleV4l1Device.Sync;
begin
  writeln(Format('wait for frame %d',[FFrameNum]));
  if FpIOCtl(FHandle, VIDIOCSYNC, @FFrameNum) < 0 then
    raise Exception.Create('could not do VIDIOCSYNC');
end;

function TSimpleV4l1Device.GetImage: PByte;
begin
  Result := FBuf + FVideo_Mbuf.offsets[FFrameNum];
end;


procedure DebugPrintCapabilities(cap: TVideo_Capability);
var
  captyp : String = '';
begin
  if (cap.typ and VID_TYPE_CAPTURE) <> 0 then captyp += 'VID_TYPE_CAPTURE ';
  if (cap.typ and VID_TYPE_TUNER) <> 0 then captyp += 'VID_TYPE_TUNER ';
  if (cap.typ and VID_TYPE_TELETEXT) <> 0 then captyp += 'VID_TYPE_TELETEXT ';
  if (cap.typ and VID_TYPE_OVERLAY) <> 0 then captyp += 'VID_TYPE_OVERLAY ';
  if (cap.typ and VID_TYPE_CHROMAKEY) <> 0 then captyp += 'VID_TYPE_CHROMAKEY ';
  if (cap.typ and VID_TYPE_CLIPPING) <> 0 then captyp += 'VID_TYPE_CLIPPING ';
  if (cap.typ and VID_TYPE_FRAMERAM) <> 0 then captyp += 'VID_TYPE_FRAMERAM ';
  if (cap.typ and VID_TYPE_SCALES) <> 0 then captyp += 'VID_TYPE_SCALES ';
  if (cap.typ and VID_TYPE_MONOCHROME) <> 0 then captyp += 'VID_TYPE_MONOCHROME ';
  if (cap.typ and VID_TYPE_SUBCAPTURE) <> 0 then captyp += 'VID_TYPE_SUBCAPTURE ';
  if (cap.typ and VID_TYPE_MPEG_DECODER) <> 0 then captyp += 'VID_TYPE_MPEG_DECODER ';
  if (cap.typ and VID_TYPE_MPEG_ENCODER) <> 0 then captyp += 'VID_TYPE_MPEG_ENCODER ';
  if (cap.typ and VID_TYPE_MJPEG_DECODER) <> 0 then captyp += 'VID_TYPE_MJPEG_DECODER ';
  if (cap.typ and VID_TYPE_MJPEG_ENCODER) <> 0 then captyp += 'VID_TYPE_MJPEG_ENCODER ';

  writeln('video_capability');
  writeln('       Name: ' + cap.name);
  writeln('        typ: ' + captyp);
  writeln('   channels: ' + IntToStr(cap.channels));
  writeln('     audios: ' + IntToStr(cap.audios));
  writeln('   maxwidth: ' + IntToStr(cap.maxwidth));
  writeln('  maxheight: ' + IntToStr(cap.maxheight));
  writeln('   minwidth: ' + IntToStr(cap.minwidth));
  writeln('  minheight: ' + IntToStr(cap.minheight));
  writeln;
end;

procedure DebugPrintPicture(pict: Tvideo_Picture);
var
  palette: String;
begin
  if pict.palette = VIDEO_PALETTE_GREY then palette := 'VIDEO_PALETTE_GREY';
  if pict.palette = VIDEO_PALETTE_HI240 then palette := 'VIDEO_PALETTE_HI240';
  if pict.palette = VIDEO_PALETTE_RGB565 then palette := 'VIDEO_PALETTE_RGB565';
  if pict.palette = VIDEO_PALETTE_RGB24 then palette := 'VIDEO_PALETTE_RGB24';
  if pict.palette = VIDEO_PALETTE_RGB32 then palette := 'VIDEO_PALETTE_RGB32';
  if pict.palette = VIDEO_PALETTE_RGB555 then palette := 'VIDEO_PALETTE_RGB555';
  if pict.palette = VIDEO_PALETTE_YUV422 then palette := 'VIDEO_PALETTE_YUV422';
  if pict.palette = VIDEO_PALETTE_YUYV then palette := 'VIDEO_PALETTE_YUYV';
  if pict.palette = VIDEO_PALETTE_UYVY then palette := 'VIDEO_PALETTE_UYVY';
  if pict.palette = VIDEO_PALETTE_YUV420 then palette := 'VIDEO_PALETTE_YUV420';
  if pict.palette = VIDEO_PALETTE_YUV411 then palette := 'VIDEO_PALETTE_YUV411';
  if pict.palette = VIDEO_PALETTE_RAW then palette := 'VIDEO_PALETTE_RAW';
  if pict.palette = VIDEO_PALETTE_YUV422P then palette := 'VIDEO_PALETTE_YUV422P';
  if pict.palette = VIDEO_PALETTE_YUV411P then palette := 'VIDEO_PALETTE_YUV411P';
  if pict.palette = VIDEO_PALETTE_YUV420P then palette := 'VIDEO_PALETTE_YUV420P';
  if pict.palette = VIDEO_PALETTE_YUV410P then palette := 'VIDEO_PALETTE_YUV410P';
  if pict.palette = VIDEO_PALETTE_PLANAR then palette := 'VIDEO_PALETTE_PLANAR';
  if pict.palette = VIDEO_PALETTE_COMPONENT then palette := 'VIDEO_PALETTE_COMPONENT';

  writeln('video_picture');
  writeln(' brightness: ' + IntToStr(pict.brightness));
  writeln('        hue: ' + IntToStr(pict.hue));
  writeln('     colour: ' + IntToStr(pict.colour));
  writeln('   contrast: ' + IntToStr(pict.contrast));
  writeln('  whiteness: ' + IntToStr(pict.whiteness));
  writeln('      depth: ' + IntToStr(pict.depth));
  writeln('    palette: ' + palette);
  writeln;
end;

procedure DebugPrintWindow(win: TVideo_Window);
begin
  writeln('video_window:');
  writeln('               x: ' + IntToStr(win.x));
  writeln('               y: ' + IntToStr(win.y));
  writeln('           width: ' + IntToStr(win.width));
  writeln('          height: ' + IntToStr(win.height));
  writeln('       chromakey: ' + IntToStr(win.chromakey));
  writeln('           flags: ' + IntToStr(win.flags));
  writeln(Format(' clips (pointer): %p', [win.clips]));
  writeln('       clipcount: ' + IntToStr(win.clipcount));
  writeln;
end;

end.

{ Below is the header file from FreeBSD. It is not the original
  one from Linux. I used this one because the BSD guys spent
  some time inserting comments and explanations, while the
  original from Linux does not have many helpful comments in it. }

(*
/*
 * This is a reimplementation of the videodev.h file containing
 * the Video for Linux (1) API specification.
 * The API is documented at http://linux.bytesex.org/v4l2/API.html
 * as part of the v4l2 description, and we try to follow that
 * one here.
 * See also http://v4l2spec.bytesex.org/spec/c12160.htm
 *
 * Part of the information contained here (especially, the numeric
 * values of the constants) also comes from the * videodev.h file
 * in the linux distribution, whose origin seems to be uncertain.
 *
 * Please realise that this is an API so there are very few degrees
 * of freedom in writing this header.
 * Names (constants, types and field names, API functions) must
 * necessarily be the same or very similar, what does change to some
 * limited degree is the set of headers that this file may depend
 * on, or the internal structure of the file. Hence, it makes little
 * if any sense to claim for copyrights on these elements.
 *
 * On the other hand, the documentation of this file is completely
 * new and compiled from external sources.
 */

#ifndef __LINUX_VIDEODEV_H	/* protect against nested include */
#define __LINUX_VIDEODEV_H

#include <sys/types.h>		/* make sure we have the basic types */

/*
 * On FreeBSD we don't have many of the linux types available,
 * and depending on where this file is included, we may need to
 * redefine them here.
 * XXX todo: check if we can replace with some linux header.
 */
#if 0 // ndef _LINUX_TYPES_H
#define _LINUX_TYPES_H
typedef int32_t __s32;
typedef int64_t __s64;
typedef uint16_t __u16;
typedef uint32_t __u32;
typedef uint64_t __u64;
typedef uint8_t __u8;
#endif /* _LINUX_TYPES_H */

/*
 * Bring in the common definition for v4l1 and v4l2.
 */
#include <linux/videodev2.h>

#if 1 // defined(CONFIG_VIDEO_V4L1_COMPAT) || !defined (__KERNEL__)

/*
 * Here we have a number of descriptors used to interact with
 * the driver. Most of them are arguments for ioctl() calls,
 * so we include them next to the definition of the structure.
 *
 * The basic way of interacting with a device is open() it and
 * issue a VIDIOCGCAP ioctl to figure out the capabilities
 * of the device. At this point we have the following tasks:
 *  - set the video format and controls, VIDIOCSPICT
 *  - set the capture size, VIDIOCSWIN and possibly also
 *	VIDIOCSCAPTURE and (if capturing to screen) VIDIOCSFBUF
 *  - if needed, set the tuner (VIDIOCSCHAN and more)
 *  - if using the read() interface, just call it to grab data;
 *  - if using the mmap interface, issue VIDIOCGMBUF to know
 *	how memory is organized; then call mmap() as needed,
 *	then issue VIDIOCMCAPTURE(0..n-1) for each of the
 *	frames that you want, and possibly use VIDIOCSYNC(i)
 *	to wait until frame i has been captured.
 * While capture is active you can use further ioctls to change
 * the video/audio controls including tuner and so on.
 */

/*
 * Description of a video device, as set by the driver.
 *
 * Name is a device-specific string eg. camera or card name;
 * Type is a bitfield with the set of capabilities (VID_TYPE_*
 * constants, defined in videodev2.h) supported by the device.
 * V4L1 has a small set e.g. CAPTURE, TUNER, TELETEXT, CLIPPING...
 * Channels is the number of radio/tv channels available.
 * Audios is the number of audio devices.
 * The last four fields are the range of supported image sizes.
 * Note that in many cases only a discrete set of sizes within the
 * range is available. Some drivers may do the conversion (e.g.
 * cropping or padding) on their own.
 *
 * There is only one related ioctl(), to fetch the capabilities
 * of the device.
 */
struct video_capability {
	char name[32];
	int type;	/* capabilities, see videodev2.h */
	int channels;
	int audios;
	int maxwidth;
	int maxheight;
	int minwidth;
	int minheight;
};
#define VIDIOCGCAP	_IOR('v', 1, struct video_capability)

/*
 * OVERLAY CAPTURE SUPPORT
 *
 * Some cards can write directly into the frame buffer, in which case
 * we use the VIDIOCSFBUF to tell the driver the base address, size,
 * depth and bytes per line for the buffer.
 */
struct video_buffer {
	void	*base;	/* use NULL to indicate unset */
	int	height;
	int	width;
	int	depth;	/* XXX per pixel ? */
	int	bytesperline;	/* offset between start of two lines */
};
#define VIDIOCGFBUF	_IOR('v',11, struct video_buffer)
#define VIDIOCSFBUF	_IOW('v',12, struct video_buffer)

/*
 * CAPTURE AREA DESCRIPTION
 *
 * struct video_window describes the capture area, plus optional
 * clipping information if relevant (e.g. VID_TYPE_CLIPPING is
 * set in the capabilities and we want it.).
 * A windows is specified as position and size, clipping rectangles
 * must be specified as a list of 'struct video_clip' and their
 * count in clipcount. 'clips' is not used in the VIDIOCGWIN call.
 * A -1 in clipcount means that clips is a
 * poonter to a 1024x625 bitmap, where a '1' represent a clipped pixel.
 * Flags supports interlace and chromakey.
 *
 * NOTE: setting the window does not start/stop capture, you need
 * do a VIDIOCCAPTURE with an argument of 1 (start) or 0 (stop).
 */
struct video_clip {
	__s32	x;
	__s32	y;
	__s32	width;
	__s32	height;
	struct	video_clip *next;
};

struct video_window {
	__u32	x;
	__u32	y;
	__u32	width;
	__u32	height;
	__u32	chromakey;	/* host order, RGB32 chromakey */
	__u32	flags;		/* more capture flags */
#define VIDEO_WINDOW_INTERLACE	1
#define VIDEO_WINDOW_CHROMAKEY	0x10	/* Overlay by chromakey */
	struct	video_clip *clips;	/* set only, see note above */
	int	clipcount;
#define VIDEO_CLIP_BITMAP	-1	/* see note above */
#define VIDEO_CLIPMAP_SIZE	(128 * 625)
};
#define VIDIOCCAPTURE	_IOW('v', 8, int)	/* Start/end capture */
/* Get/set the video overlay window with clip list. */
#define VIDIOCGWIN	_IOR('v', 9, struct video_window)
#define VIDIOCSWIN	_IOW('v',10, struct video_window)

/*
 * Some devices can capture a subfield of the image, in which case
 * we can specify here its position and size, possibly a decimation
 * factor, and whether we want even or odd frames.
 * XXX how can we ask for both ?
 */
struct video_capture {
	__u32 	x;
	__u32	y;
	__u32	width;
	__u32	height;
	__u16	decimation;		/* Decimation divider */
	__u16	flags;			/* odd/even */
#define VIDEO_CAPTURE_ODD	0
#define VIDEO_CAPTURE_EVEN	1
};
#define VIDIOCGCAPTURE	_IOR('v', 22, struct video_capture)
#define VIDIOCSCAPTURE	_IOW('v', 23, struct video_capture)

/*
 * Video sources definition.
 *
 * Each device has one or more channels, described
 * by the following structures
 * Each video channel has a numeric identifier, a name,
 * possibly a tuner associated, a type, and a 'norm' field.
 * We can read or set the channel info with VIDIOC[GS]CHAN
 */
struct video_channel {
	int channel;
	char name[32];
	int tuners;
	__u32  flags;
#define VIDEO_VC_TUNER		1	/* Channel has a tuner */
#define VIDEO_VC_AUDIO		2	/* Channel has audio */
/* #define VIDEO_VC_NORM	??? mentioned but not defined */
	__u16  type;
#define VIDEO_TYPE_TV		1
#define VIDEO_TYPE_CAMERA	2
	__u16 norm;			/* Norm set by channel */
};
#define VIDIOCGCHAN	_IOWR('v', 2, struct video_channel)
#define VIDIOCSCHAN	_IOW('v', 3, struct video_channel)

/*
 * This is the main structure to get/set picture features,
 * in particular the video controls (brightness etc),
 * the data format (depth, palette).
 * All values except palette (XXX and depth ?) are scaled to
 * the full 16 bit range irrespective of the native range.
 *
 * We can get or set the picture info with VIDIOC[GS]PICT.
 * Consider that changes to the video controls are often
 * inexpensive operations, whereas changing to the video
 * format (depth, palette) might require stopping and
 * restarting the video device.
 */
struct video_picture {
	__u16	brightness;
	__u16	hue;
	__u16	colour;
	__u16	contrast;
	__u16	whiteness;	/* Black and white only */
	__u16	depth;		/* Capture depth */
	__u16   palette;	/* Palette in use */

/* Available palettes i.e. video formats */
#define VIDEO_PALETTE_GREY	1	/* Linear greyscale */
#define VIDEO_PALETTE_HI240	2	/* BT848 high 240 color cube */
#define VIDEO_PALETTE_RGB565	3	/* 565 16 bit RGB */
#define VIDEO_PALETTE_RGB24	4	/* 24bit RGB */
#define VIDEO_PALETTE_RGB32	5	/* 32bit RGB */
#define VIDEO_PALETTE_RGB555	6	/* 555 15bit RGB */

/*
 * These formats are 'component' type, i.e. the components for
 * each pixel are contiguous (technically, the ones above are
 * the same; perhaps component refers to the 'YUV' vs 'RGB'
 * format).
 */
#define VIDEO_PALETTE_COMPONENT 7	/* start of component entries */
#define VIDEO_PALETTE_YUV422	7	/* 4bit Y, 2bit U, 2bit V */
#define VIDEO_PALETTE_YUYV	8
#define VIDEO_PALETTE_UYVY	9
#define VIDEO_PALETTE_YUV420	10
#define VIDEO_PALETTE_YUV411	11	/* YUV411 capture */
#define VIDEO_PALETTE_RAW	12	/* RAW capture (BT848) */

/*
 * These formats are 'planar' i.e. each 'plane' is stored contiguously.
 * This is more useful when we want to do compression.
 */
#define VIDEO_PALETTE_PLANAR	13	/* start of planar entries */
#define VIDEO_PALETTE_YUV422P	13	/* YUV 4:2:2 Planar */
#define VIDEO_PALETTE_YUV411P	14	/* YUV 4:1:1 Planar */
#define VIDEO_PALETTE_YUV420P	15	/* YUV 4:2:0 Planar */
#define VIDEO_PALETTE_YUV410P	16	/* YUV 4:1:0 Planar */
};

#define VIDIOCGPICT	_IOR('v', 6, struct video_picture)
#define VIDIOCSPICT	_IOW('v', 7, struct video_picture)

/*
 * READING IMAGES -- read() and mmap()
 *
 * The read() system call will return the next available image.
 * Before this, the app should call VIDIOCSPICT and VIDIOCSWIN
 * to set the format and size of the input data.
 *
 * An alternative method is to use the mmap interface.
 *
 * First set the image size and depth (with XXX ?).
 * Then issue VIDIOCGMBUF to _ask_ the driver how much memory to mmap,
 * the number n of frames, and their offsets in the buffer.
 * VIDEO_MAX_FRAME is defined in videodev2.h.
 * Then you should
 *
 * Finally, issue VIDIOCMCAPTURE to tell the driver to start
 * capturing the specified frame (XXX redundant size info ?).
 * VIDIOCMCAPTURE does not wait until capture completes,
 * _and_ you need to issue one VIDIOCMCAPTURE for each of the
 * frames (0..n-1) that you want to capture.
 * VIDIOCSYNC(x) blocks until x frames are captured.
 * Note you can have many pending VIDIOCMCAPTURE calls, which
 * you would use if you want to do 'double buffering'.
 */
struct video_mbuf {
	int	size;		/* Total memory to map */
	int	frames;		/* Frames */
	int	offsets[VIDEO_MAX_FRAME];
};
#define VIDIOCGMBUF	_IOR('v',20, struct video_mbuf)

struct video_mmap {
	unsigned	int frame;	/* Frame (0..n-1) */
	int		height;
	int		width;
	unsigned	int format;	/* should be VIDEO_PALETTE_* */
};
#define VIDIOCMCAPTURE	_IOW('v', 19, struct video_mmap)
#define VIDIOCSYNC	_IOW('v', 18, int)

/*
 * OTHER CONTROLS: tuners, audio controls, vbi...
 */
/*
 * Same as for channels, a tuner has a numeric identifier, a name,
 * and a mixture of flags indicating capabilities.
 * Of importance are the frequency range (in 1/16 MHZ or 1/16KHz
 * units depending on VIDEO_TUNER_LOW status),
 * and the signal strenght if known (range 0..65535).
 *
 * We can read and set the info with VIDIOC[GS]TUNER,
 * and get and set the frequency with VIDIOC[GS]FREQ
 */
struct video_tuner {
	int tuner;
	char name[32];
	unsigned long rangelow, rangehigh;	/* Tuner range */
	__u32 flags;
#define VIDEO_TUNER_PAL		0x0001
#define VIDEO_TUNER_NTSC	0x0002
#define VIDEO_TUNER_SECAM	0x0004
#define VIDEO_TUNER_LOW		0x0008	/* Uses KHz not MHz */
#define VIDEO_TUNER_NORM	0x0010	/* Tuner can set norm */
#define VIDEO_TUNER_STEREO_ON	0x0080	/* Tuner is seeing stereo */
#define VIDEO_TUNER_RDS_ON      0x0100	/* Tuner is seeing RDS stream */
#define VIDEO_TUNER_MBS_ON      0x0200	/* Tuner is seeing MBS stream */
	__u16 mode;			/* PAL/NTSC/SECAM/OTHER */
#define VIDEO_MODE_PAL		0
#define VIDEO_MODE_NTSC		1
#define VIDEO_MODE_SECAM	2
#define VIDEO_MODE_AUTO		3
	__u16 signal;
};
#define VIDIOCGTUNER	_IOWR('v', 4, struct video_tuner)
#define VIDIOCSTUNER	_IOW('v',  5, struct video_tuner)
#define VIDIOCGFREQ	_IOR('v', 14, unsigned long)
#define VIDIOCSFREQ	_IOW('v', 15, unsigned long)

/*
 * Read/set the controls for each audio channel.
 * Note how the structure differs from others e.g. the
 * name[] field is in the middle.
 */
struct video_audio {
	int	audio;		/* Audio channel */
	__u16	volume;		/* If settable */
	__u16	bass;
	__u16	treble;
	__u32	flags;		/* which controls do exist */
#define VIDEO_AUDIO_MUTE	0x0001
#define VIDEO_AUDIO_MUTABLE	0x0002
#define VIDEO_AUDIO_VOLUME	0x0004
#define VIDEO_AUDIO_BASS	0x0008
#define VIDEO_AUDIO_TREBLE	0x0010
#define VIDEO_AUDIO_BALANCE	0x0020
	char    name[16];
	__u16   mode;		/* which modes are supported */
#define VIDEO_SOUND_MONO	1
#define VIDEO_SOUND_STEREO	2
#define VIDEO_SOUND_LANG1	4
#define VIDEO_SOUND_LANG2	8
	__u16	balance;	/* Stereo balance */
	__u16	step;		/* Step actual volume uses */
};
#define VIDIOCGAUDIO	_IOR('v', 16, struct video_audio)
#define VIDIOCSAUDIO	_IOW('v', 17, struct video_audio)

/*
 * XXX to be documented
 */
struct video_key {
	__u8	key[8];
	__u32	flags;
};
/* Video key event - to dev 255 is to all -
 * cuts capture on all DMA windows with this key (0xFFFFFFFF == all)
 */
#define VIDIOCKEY	_IOR('v',13, struct video_key)


/*
 * This is mostly a linux-specific ioctl.
 * VIDIOCGUNIT returns the minor device numbers for all devices
 * associated with the current one, e.g. if a video device
 * also has vbi, teletext, etc. associated.
 * This is useful if you need to open them independently.
 * Probably returns VIDEO_NO_UNIT if a function is not available.
 * XXX On FreeBSD, it is unclear how to map these.
 */
struct video_unit {		/* fields are all minor numbers */
	int 	video;
	int	vbi;
	int	radio;
	int	audio;
	int	teletext;
};
#define 	VIDEO_NO_UNIT	(-1)
#define VIDIOCGUNIT	_IOR('v', 21, struct video_unit)

/*
 * VBI interface
 */
struct vbi_format {
	__u32	sampling_rate;	/* in Hz */
	__u32	samples_per_line;
	__u32	sample_format;	/* VIDEO_PALETTE_RAW only (1 byte) */
	__s32	start[2];	/* starting line for each frame */
	__u32	count[2];	/* count of lines for each frame */
	__u32	flags;
#define	VBI_UNSYNC	1	/* can distinguish between top/bottom field */
#define	VBI_INTERLACED	2	/* lines are interlaced */
};
/* get/set vbi information */
#define	VIDIOCGVBIFMT	_IOR('v',28, struct vbi_format)
#define	VIDIOCSVBIFMT	_IOW('v',29, struct vbi_format)

/*
 * video_info is biased towards hardware mpeg encode/decode
 * but it could apply generically to any hardware
 * compressor/decompressor
*/
struct video_info {
	__u32	frame_count;	/* frames output since decode/encode began */
	__u32	h_size;		/* current unscaled horizontal size */
	__u32	v_size;		/* current unscaled veritcal size */
	__u32	smpte_timecode;	/* current SMPTE timecode (for current GOP) */
	__u32	picture_type;	/* current picture type */
	__u32	temporal_reference;	/* current temporal reference */
	__u8	user_data[256];	/* user data last found in compressed stream */
	/* user_data[0] contains user data flags, user_data[1] has count */
};
#define VIDIOCGPLAYINFO		_IOR('v',26, struct video_info)		/* Get current playback info from hardware */

/* generic structure for setting playback modes */
struct video_play_mode {
	int	mode;
	int	p1;
	int	p2;
};
#define VIDIOCSPLAYMODE		_IOW('v',24, struct video_play_mode)	/* Set output video mode/feature */

/* for loading microcode / fpga programming */
struct video_code {
	char	loadwhat[16];	/* name or tag of file being passed */
	int	datasize;
	__u8	*data;
};
#define VIDIOCSMICROCODE	_IOW('v',27, struct video_code)		/* Load microcode into hardware */

/*
 * The remaining ioctl supported by the video4linux v1 interface:
 * VIDIOCSWRITEMODE passes one of the VID_WRITE_ parameters
 * (unknown meaning).
 */

#define VIDIOCSWRITEMODE	_IOW('v', 25, int)	/* Set write mode */

/* VIDIOCSWRITEMODE */
#define VID_WRITE_MPEG_AUD		0
#define VID_WRITE_MPEG_VID		1
#define VID_WRITE_OSD			2
#define VID_WRITE_TTX			3
#define VID_WRITE_CC			4
#define VID_WRITE_MJPEG			5

#define BASE_VIDIOCPRIVATE	192		/* 192-255 are private */

/*
 * mode values for VIDIOCSPLAYMODE
*/
#define VID_PLAY_VID_OUT_MODE		0
	/* p1: = VIDEO_MODE_PAL, VIDEO_MODE_NTSC, etc ... */
#define VID_PLAY_GENLOCK		1
	/* p1: 0 = OFF, 1 = ON */
	/* p2: GENLOCK FINE DELAY value */
#define VID_PLAY_NORMAL			2
#define VID_PLAY_PAUSE			3
#define VID_PLAY_SINGLE_FRAME		4
#define VID_PLAY_FAST_FORWARD		5
#define VID_PLAY_SLOW_MOTION		6
#define VID_PLAY_IMMEDIATE_NORMAL	7
#define VID_PLAY_SWITCH_CHANNELS	8
#define VID_PLAY_FREEZE_FRAME		9
#define VID_PLAY_STILL_MODE		10
#define VID_PLAY_MASTER_MODE		11
	/* p1: see below */
#define		VID_PLAY_MASTER_NONE	1
#define		VID_PLAY_MASTER_VIDEO	2
#define		VID_PLAY_MASTER_AUDIO	3
#define VID_PLAY_ACTIVE_SCANLINES	12
	/* p1 = first active; p2 = last active */
#define VID_PLAY_RESET			13
#define VID_PLAY_END_MARK		14


/*
 * Various hardware types.
 */
#define VID_HARDWARE_BT848	1
#define VID_HARDWARE_QCAM_BW	2
#define VID_HARDWARE_PMS	3
#define VID_HARDWARE_QCAM_C	4
#define VID_HARDWARE_PSEUDO	5
#define VID_HARDWARE_SAA5249	6
#define VID_HARDWARE_AZTECH	7
#define VID_HARDWARE_SF16MI	8
#define VID_HARDWARE_RTRACK	9
#define VID_HARDWARE_ZOLTRIX	10
#define VID_HARDWARE_SAA7146    11
#define VID_HARDWARE_VIDEUM	12	/* Reserved for Winnov videum */
#define VID_HARDWARE_RTRACK2	13
#define VID_HARDWARE_PERMEDIA2	14	/* Reserved for Permedia2 */
#define VID_HARDWARE_RIVA128	15	/* Reserved for RIVA 128 */
#define VID_HARDWARE_PLANB	16	/* PowerMac motherboard video-in */
#define VID_HARDWARE_BROADWAY	17	/* Broadway project */
#define VID_HARDWARE_GEMTEK	18
#define VID_HARDWARE_TYPHOON	19
#define VID_HARDWARE_VINO	20	/* SGI Indy Vino */
#define VID_HARDWARE_CADET	21	/* Cadet radio */
#define VID_HARDWARE_TRUST	22	/* Trust FM Radio */
#define VID_HARDWARE_TERRATEC	23	/* TerraTec ActiveRadio */
#define VID_HARDWARE_CPIA	24
#define VID_HARDWARE_ZR36120	25	/* Zoran ZR36120/ZR36125 */
#define VID_HARDWARE_ZR36067	26	/* Zoran ZR36067/36060 */
#define VID_HARDWARE_OV511	27
#define VID_HARDWARE_ZR356700	28	/* Zoran 36700 series */
#define VID_HARDWARE_W9966	29
#define VID_HARDWARE_SE401	30	/* SE401 USB webcams */
#define VID_HARDWARE_PWC	31	/* Philips webcams */
#define VID_HARDWARE_MEYE	32	/* Sony Vaio MotionEye cameras */
#define VID_HARDWARE_CPIA2	33
#define VID_HARDWARE_VICAM      34
#define VID_HARDWARE_SF16FMR2	35
#define VID_HARDWARE_W9968CF	36
#define VID_HARDWARE_SAA7114H   37
#define VID_HARDWARE_SN9C102	38
#define VID_HARDWARE_ARV	39

#endif /* CONFIG_VIDEO_V4L1_COMPAT */

#endif /* __LINUX_VIDEODEV_H */

*)