> > +//
> > +
> > +// Reduced AMF API
> > +//
> > +// Full version of AMF SDK and the latest version of this file
> > +// can be found at https://github.com/GPUOpen-LibrariesAndSDKs/AMF
> 
> On further consideration I am against including this header.  Just ask the 
> user
> to get it from this link you are including anyway.  (Also maybe make that
> packagable by providing some means to install it, preferably including pkg-
> config support.)
> 

You gave a very accurate description of the situation in a different thread.
A few opinions were expressed, but then the discussion paused.
How does this group resolve such issues?


> > +
> > +#define PTS_PROP L"PtsProp"
> 
> Who is defining this key?  (Does the AMF code look inside it?)
> 

This key is private to the amfenc implementation in ffmpeg. The AMF runtime
will pass it through to the output object.

> > +
> > +const enum AVPixelFormat ff_amf_pix_fmts[] = {
> > +    AV_PIX_FMT_NV12,
> > +    AV_PIX_FMT_RGB0,
> > +    AV_PIX_FMT_BGR0,
> 
> I still think including RGB formats here is a bad idea without any colourspace
> support.

There are pros and cons here:
Pros: it provides HW acceleration in simple cases.
Cons: no control for the user.
I will remove these formats until I expose the encoder's color-converter
property in the runtime implementation.
Currently supported are 601, 709 and JPEG. Is that good enough to reinstate
these formats once such an encoder option is added?

> > +    buffer->pVtbl->GetProperty(buffer, PTS_PROP, &var);
> > +
> > +    pkt->pts = var.int64Value; // original pts
> > +    pkt->dts = buffer->pVtbl->GetPts(buffer); // in monotonic order
> 
> Does the GetPts function really return the DTS?

Yes, though DTS is not exactly the right term. The component reorders the PTS
values set via SetPts() (in the case of B-frames) and returns them from
GetPts() - so these are timestamps in monotonic order. BTW: NV has the same
behavior, only the PTS reordering is done in ffmpeg code.


> > +            res = ctx->context->pVtbl->CreateSurfaceFromDX11Native(ctx-
> >context, texture, &surface, NULL); // wrap to AMF surface
> > +            AMF_RETURN_IF_FALSE(ctx, res == AMF_OK, AVERROR(ENOMEM),
> "CreateSurfaceFromDX11Native() failed  with error %d\n", res);
> 
> I think you will need to hold a reference to the input frame while it's in the
> encoder here, assuming it doesn't get copied away immediately.  If not, it 
> will
> be returned to the frame pool and might be reused by someone else - it can't
> be freed because you are holding a reference to its frames context, but I
> think it can be written to.
> 
> (I could believe that testing with ffmpeg (something like "./ffmpeg_g -hwaccel
> d3d11va -hwaccel_output_format d3d11 input.mp4 -c:v h264_amf ...
> output.mp4", presumably?) is not sufficient to show any problems here.  Not
> sure what would be.)

I will add a ref to the frame if it is D3D11. In the case of system memory it
has already been copied.

> 
> Just to confirm, the encoder will /always/ be able to consume the new
> surface after returning a packet?

Yes, at this point.

> > +    { "baseline",       "",                     0,              
> > AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_PROFILE_BASELINE }, 0, 0, VE, "profile" },
> 
> I still very much doubt you support baseline profile.  Please remove it to
> avoid confusion.

According to the codec it is supported, but I will remove it - there is not
much use for it.

> > +    { "quality",        "", 0, AV_OPT_TYPE_CONST, { .i64 =
> AMF_VIDEO_ENCODER_HEVC_QUALITY_PRESET_QUALITY  }, 0, 0, VE,
> "quality" },
> 
> These are 0, 5, 10.  Do the intermediate values work?  Should they be
> exposed?
> 
Only the enum values are supported. I guess they left space for future
extensions.

> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

Thanks,
Mikhail