Re: [FFmpeg-devel] [PATCH v2] movenc: Add an option for hiding fragments at the end

2024-06-17 Thread Gyan Doshi via ffmpeg-devel



On 2024-06-17 04:08 pm, Martin Storsjö wrote:

On Sat, 15 Jun 2024, Gyan Doshi wrote:


On 2024-06-15 03:54 am, Dennis Sädtler via ffmpeg-devel wrote:

On 2024-06-14 13:23, Gyan Doshi wrote:


On 2024-06-14 04:35 pm, Timo Rothenpieler wrote:

On 14/06/2024 12:44, Martin Storsjö wrote:

On Fri, 14 Jun 2024, Gyan Doshi wrote:


On 2024-06-14 02:18 am, Martin Storsjö wrote:

On Thu, 13 Jun 2024, Gyan Doshi wrote:


On 2024-06-13 06:20 pm, Martin Storsjö wrote:


I'd otherwise want to push this, but I'm not entirely 
satisfied with the option name quite yet. I'm pondering if we 
should call it "hybrid_fragmented" - any opinions, Dennis or 
Timo?


How about `resilient_mode` or `recoverable`?
I agree that the how is secondary.


Those are good suggestions as well - but I think I prefer 
"hybrid_fragmented" still.


In theory, I guess one could implement resilient writing in a 
number of different ways, whereas the hybrid 
fragmented/non-fragmented only is one.


So with a couple other voices agreeing with the name 
"hybrid_fragmented", I'll post a new patch with the option in 
that form - hopefully you don't object to it.


The term hybrid is not applicable here. The fragmented state is transient
during writing, and whether it persists in the finished artifact depends
on how the writing process concluded.
Hybrid implies both modes are available, e.g., a hybrid vehicle can use
both types of energy source. The artifact here will be one _or_ the other.


Sure, the file itself is either or, but the process of writing 
will have utilized both. TBH, I don't see it as such a 
black-or-white thing.


What do the others who have chimed in on the thread think, 
compared to calling it "recoverable" or "resilient_mode"?


I don't have a super strong opinion on it, but out of the options
provided, I'd prefer the hybrid_ one, since there's a good chance
it'll become an established term now that OBS presents it quite
visibly.


The OBS dev intends to change the term:

"Come up with a better name than "Hybrid MP4" that hopefully won't 
confuse users"


https://github.com/obsproject/obs-studio/pull/10608#issuecomment-2095222024 




Regards,
Gyan


Now that it's merged and in the hands of users I don't have any 
intention of changing the name any more.
We had some chats about it, but nobody suggested anything that
people agreed was better, so it stuck.


While "resilient" certainly fits, it could equally apply to regular 
fragmented MP4 (e.g. vMix uses that terminology for fMP4 if I'm not 
mistaken).
The important attribute with this approach is that it's resilient 
*and* compatible, and I'm still not sure how to get that across in 
name alone.


How about `failsafe`?


I don't see how that differs from "resilient"; a regular fragmented
file is also failsafe (or resilient) in the same way, while the
special thing here is that it's both fragmented and not.


The expert user already knows to save a fragmented file if they want
resilience. This option saves them a remux step if the original writing
ends gracefully.
For all other users, the value proposition _is_ the resilience. If the
muxing ends normally, they just have a normal file. If it ends
prematurely, they just want to be able to convert it to a regular seekable
MP4. The fact that it is saved in fragmented or any other mode is
irrelevant - an academic detail at best.
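The either-or nature of the finished artifact is easy to check
programmatically. As a minimal sketch (box layout per ISO BMFF; the
function name and scanning approach are mine, not part of the patch), a
caller could scan the top-level boxes of the finished file for `moof` to
tell whether it ended up fragmented and still needs a remux to a regular
seekable MP4:

```python
import struct

def is_fragmented_mp4(path: str) -> bool:
    """Scan top-level MP4 boxes; any 'moof' box means the file is fragmented."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False  # reached end of file without seeing 'moof'
            size, box = struct.unpack(">I4s", header)
            if box == b"moof":
                return True
            if size == 1:
                # 64-bit extended size follows the 8-byte header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:
                # box extends to end of file
                return False
            else:
                f.seek(size - 8, 1)
```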


Ultimately, as long as the doc is clear about what this option is for,
and what to do next if the muxing does abort, it should not matter too
much what the option is called. But just as the faststart flag name
identifies the purpose instead of being called something like
moov_in_front, hopefully the name here will do the same.


Regards,
Gyan

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding

2024-01-28 Thread Gyan Doshi via ffmpeg-devel



On 2024-01-28 04:24 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2024-01-26 05:23:50)


On 2024-01-25 06:47 pm, Andreas Rheinhardt wrote:

Gyan Doshi:

On 2024-01-25 10:29 am, Andreas Rheinhardt wrote:

Gyan Doshi:

Set up framework for non-PCM decoding in-place and
add support for Dolby-E decoding.

Useful for direct transcoding of non-PCM audio in live inputs.
---
    configure  |   1 +
    doc/decoders.texi  |  40 +++
    libavcodec/s302m.c | 609 +
    3 files changed, 543 insertions(+), 107 deletions(-)

diff --git a/configure b/configure
index c8ae0a061d..8db3fa3f4b 100755
--- a/configure
+++ b/configure
@@ -2979,6 +2979,7 @@ rv20_decoder_select="h263_decoder"
    rv20_encoder_select="h263_encoder"
    rv30_decoder_select="golomb h264pred h264qpel mpegvideodec rv34dsp"
    rv40_decoder_select="golomb h264pred h264qpel mpegvideodec rv34dsp"
+s302m_decoder_select="dolby_e_decoder"
    screenpresso_decoder_deps="zlib"
    shorten_decoder_select="bswapdsp"
    sipr_decoder_select="lsp"
diff --git a/doc/decoders.texi b/doc/decoders.texi
index 293c82c2ba..9f85c876bf 100644
--- a/doc/decoders.texi
+++ b/doc/decoders.texi
@@ -347,6 +347,46 @@ configuration. You need to explicitly configure
the build with
    An FFmpeg native decoder for Opus exists, so users can decode Opus
    without this library.
    +@section s302m
+
+SMPTE ST 302 decoder.
+
+SMPTE ST 302 is a method for storing AES3 data format within an MPEG
Transport
+Stream. AES3 streams can contain LPCM streams of 2, 4, 6 or 8
channels with a
+bit depth of 16, 20 or 24-bits at a sample rate of 48 kHz.
+They can also contain non-PCM codec streams such as AC-3 or Dolby-E.
+

This sounds like we should add bitstream filters to extract the proper
underlying streams instead.
(I see only two problems with this approach: The BSF API needs to set
the CodecID of the output during init, but at this point no packet has
reached the BSF to determine it. And changing codec IDs mid-stream is
also not supported.)

In theory, this decoder shouldn't exist, as it is just a carrier,
whether of LPCM or non-PCM.
FFmpeg's architecture also imposes a fundamental limitation in that one
s302m stream may carry multiple payload streams, and we support only one
decoding context per input stream.

Then why does the demuxer not separate the data into multiple streams?

I didn't add demuxing support for this codec in MPEGTS, but I can venture

a) it would mean essentially inlining this decoder in the demuxer.

Why is that a problem? This decoder seems like it shouldn't be a
decoder.

I agree with Andreas that this seems like it's a demuxer pretending to
be a decoder.


This module transforms the entire raw payload data to generate its
output, even if the syntax is simple, which essentially makes it a
de-coder. The de-multiplexer aspect of multiple payload streams is an
academic possibility allowed by the standard but not seen in any sample,
which makes me suspect it's used for carriage between broadcast
facilities rather than something ever sent to an OTT provider, let alone
an end user.
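For reference, the carrier syntax under discussion is indeed thin: each
ST 302 packet starts with a 4-byte header before the AES3 payload. A
minimal sketch of parsing it (field layout as in FFmpeg's
libavcodec/s302m.c; the function name is mine):

```python
def parse_s302m_header(buf: bytes):
    """Parse the 4-byte header preceding the AES3 payload in an
    SMPTE ST 302 packet. Returns a tuple of
    (audio_packet_size, channels, channel_id, bits_per_sample)."""
    if len(buf) < 4:
        raise ValueError("need at least 4 bytes of header")
    h = int.from_bytes(buf[:4], "big")
    audio_packet_size = (h >> 16) & 0xFFFF  # payload size in bytes
    channels = ((h >> 14) & 0x3) * 2 + 2    # 2, 4, 6 or 8 channels
    channel_id = (h >> 6) & 0xFF            # channel identification
    bits = ((h >> 4) & 0x3) * 4 + 16        # 16, 20 or 24 (value 3 is reserved)
    return audio_packet_size, channels, channel_id, bits
```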


Regards,
Gyan
