Hi Laurent and Maxime,

Laurent, thank you very much for the clear and comprehensive description
of the "live video input" feature.
Maxime, sure, I will elaborate more in the next version of the cover letter.

> Date: Wed, 17 Jan 2024 16:23:43 +0200
> From: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
> To: Maxime Ripard <mrip...@kernel.org>
> Cc: Anatoliy Klymenko <anatoliy.klyme...@amd.com>,
>     maarten.lankho...@linux.intel.com, tzimmerm...@suse.de,
>     airl...@gmail.com, dan...@ffwll.ch, michal.si...@amd.com,
>     dri-devel@lists.freedesktop.org, linux-arm-ker...@lists.infradead.org,
>     linux-ker...@vger.kernel.org
> Subject: Re: [PATCH 0/4] Fixing live video input in ZynqMP DPSUB
> Message-ID: <20240117142343.gd17...@pendragon.ideasonboard.com>
> Content-Type: text/plain; charset=utf-8
>
> On Mon, Jan 15, 2024 at 09:28:39AM +0100, Maxime Ripard wrote:
> > On Fri, Jan 12, 2024 at 03:42:18PM -0800, Anatoliy Klymenko wrote:
> > > Patches 1/4, 2/4, and 3/4 are minor fixes.
> > >
> > > Patch 4/4: The DP Subsystem requires the input live video format to
> > > be configured. In this patch we are assuming that the CRTC's bus
> > > format is fixed and comes from the device tree. This is a proposed
> > > solution, as there is no API to query the CRTC output bus format.
> > >
> > > Is this a good approach to go with?
> >
> > I guess you would need to expand a bit on what "live video input" is?
> > Is it some kind of mechanism to bypass memory and take your pixels
> > straight from a FIFO from another device, or something else?
>
> Yes and no.
>
> The DPSUB integrates DMA engines, a blending engine (two planes), and a
> DP encoder. The dpsub driver supports all of this, and creates a DRM
> device. The DP encoder hardware always takes its input data from the
> output of the blending engine.
>
> The blending engine can optionally take input data from a bus connected
> to the FPGA fabric, instead of taking it from the DPSUB internal DMA
> engines. When operating in that mode, the dpsub driver exposes the DP
> encoder as a bridge, and internally programs the blending engine to
> disable blending. Typically, the FPGA fabric will then contain a CRTC of
> some sort, with a driver that will acquire the DP encoder bridge as is
> usually done.
>
> In this mode of operation, it is typical for the IP cores in the FPGA
> fabric to be synthesized with a fixed format (as that saves resources),
> while the DPSUB supports multiple input formats. Bridge drivers in the
> upstream kernel work the other way around, with the bridge hardware
> supporting a limited set of formats, and the CRTC then being programmed
> with whatever the bridge chain needs. Here, the negotiation needs to go
> the other way around, as the CRTC is the limiting factor, not the
> bridge.
>
> Is this explanation clear?
>
> --
> Regards,
>
> Laurent Pinchart

Thank you,
Anatoliy
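
P.S. To make the discussion a bit more concrete for the list, here is a
minimal sketch of how a fabric CRTC driver would typically acquire the DP
encoder bridge that Laurent describes. The DRM helpers are the standard
ones (devm_drm_of_get_bridge(), drm_bridge_attach()); the fabric_crtc
structure and the function names are invented for illustration only, not
taken from any actual driver.

#include <linux/device.h>
#include <linux/err.h>
#include <drm/drm_bridge.h>
#include <drm/drm_encoder.h>

/* Hypothetical fabric CRTC device; real drivers carry much more state. */
struct fabric_crtc {
        struct device *dev;
        struct drm_encoder encoder;
};

static int fabric_crtc_attach_bridge(struct fabric_crtc *fcrtc)
{
        struct drm_bridge *bridge;

        /* Port 0, endpoint 0 of our OF node points at the DPSUB DP encoder. */
        bridge = devm_drm_of_get_bridge(fcrtc->dev, fcrtc->dev->of_node, 0, 0);
        if (IS_ERR(bridge))
                return PTR_ERR(bridge);

        /* Attach the downstream bridge chain to our encoder. */
        return drm_bridge_attach(&fcrtc->encoder, bridge, NULL, 0);
}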
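
And the format negotiation point in code: a bridge normally reports the
input bus formats it can accept for a given output format through the
atomic_get_input_bus_fmts() callback of drm_bridge_funcs (a real DRM
callback; the format list below is an invented example), and the CRTC end
of the chain then picks one of them. With a fixed-format CRTC in the
fabric that choice is forced, which is why patch 4/4 proposes taking the
CRTC bus format from the device tree instead.

#include <linux/kernel.h>
#include <linux/media-bus-format.h>
#include <linux/slab.h>
#include <drm/drm_bridge.h>

static u32 *
dp_bridge_get_input_bus_fmts(struct drm_bridge *bridge,
                             struct drm_bridge_state *bridge_state,
                             struct drm_crtc_state *crtc_state,
                             struct drm_connector_state *conn_state,
                             u32 output_fmt, unsigned int *num_input_fmts)
{
        /* Formats the encoder could take on its live input; example values. */
        static const u32 fmts[] = {
                MEDIA_BUS_FMT_RGB888_1X24,
                MEDIA_BUS_FMT_UYVY8_1X16,
        };
        u32 *input_fmts;

        /* The caller owns the returned array and frees it. */
        input_fmts = kmemdup(fmts, sizeof(fmts), GFP_KERNEL);
        if (!input_fmts)
                return NULL;

        *num_input_fmts = ARRAY_SIZE(fmts);
        return input_fmts;
}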