OK, I will. What about adding validation of the format instead of adding something like `"%0*"PRId64`?

On 28 August 2017 at 03:30, Rodger Combs <rodger.co...@gmail.com> wrote:

If you know of such a vulnerability, report it to ffmpeg-secur...@ffmpeg.org. New code with known vulnerabilities will not be accepted.

Sent from my iPhone

On Aug 27, 2017, at 14:04, samsamsam <samsam...@o2.pl> wrote:
> Rather than having a format copied from the source in get_repl_pattern_and_format, you should have a fixed value of something like `"%0*"PRId64`

If you are worried about safety, then the only thing that needs to be added to get_repl_pattern_and_format is validation of the format. A simple loop to validate the format will be enough. Do you agree?
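A minimal sketch of the kind of validation loop being proposed here (the helper name and the accepted format shape are assumptions for illustration, not code from the patch):

#include <ctype.h>
#include <inttypes.h>
#include <string.h>

/* Hypothetical validation helper: accept a format string only if it looks
 * like "%" ["0"] digits PRId64, i.e. the only shape get_repl_pattern_and_format
 * is supposed to build, so nothing else can ever reach a printf-family call. */
static int template_format_is_valid(const char *fmt)
{
    const char *p = fmt;

    if (!p || *p++ != '%')
        return 0;
    if (*p == '0')                         /* optional zero-padding flag */
        p++;
    while (isdigit((unsigned char)*p))     /* optional field width */
        p++;
    return !strcmp(p, PRId64);             /* must end in the PRId64 conversion */
}

Something along these lines would reject any conversion other than a zero-padded decimal before it reaches snprintf(); whether that is enough is exactly what the rest of the thread argues about.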
Anyway, we are talking about safety, but the parser for MP4 atoms is missing checks, and it is quite easy to make libavformat segfault with a specially prepared MP4 file. I understand that you want maximum safety in new code, but I hope you know that FFmpeg as a whole is not safe.

Regards,
SSS
On 27 August 2017 at 16:34, Rodger Combs <rodger.co...@gmail.com> wrote:

You're still calling snprintf with a string derived from the XML, which is still not safe. Rather than having a format copied from the source in get_repl_pattern_and_format, you should have a fixed value of something like `"%0*"PRId64`, and specify an additional "precision" argument you parse from the XML yourself. I can't reiterate this enough: _never pass data from the XML into the format-string arg of a printf-family function_.

Also, rather than calling snprintf() twice with an av_malloc() in between, you can just call av_asprintf(). That's what it does internally anyway.
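A sketch of the suggested approach (function and parameter names are illustrative, not the patch's): the format string is a compile-time constant, only the zero-pad width parsed from the "$Number%05d$" template by our own code and the segment number are passed as arguments, and av_asprintf() replaces the snprintf() + av_malloc() + snprintf() dance.

#include <inttypes.h>
#include "libavutil/avstring.h"

/* Nothing taken from the XML is ever used as a printf format here; the
 * manifest only controls the integer width argument. */
static char *build_segment_name(const char *prefix, int width, int64_t number)
{
    /* e.g. prefix = "segment-", width = 5, number = 42 -> "segment-00042" */
    return av_asprintf("%s%0*"PRId64, prefix, width, number);
}

Clamping width to a small range (say 1-10 digits) would additionally keep a hostile manifest from requesting absurd padding.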
On Aug 27, 2017, at 09:19, Steven Liu <l...@chinaffmpeg.org> wrote:

FFmpeg needs a DASH demuxer to demux DASH formats, based on the code at github.com

TODO:
1. support multi-bitrate DASH

v2 fixed:
1. from autodetect to disabled
2. from camelCase code style to FFmpeg code style
3. from RepType to AVMediaType
4. fix variable typo
5. change time values from uint32_t to uint64_t
6. remove API that is used only once
7. replace time(NULL), which is not 2038-safe, with av_gettime and av_timegm
8. merge complex free operations into free_fragment
9. move from snprintf to av_asprintf

v3 fixed:
1. fix typo from --enabled-xml2 to --enable-xml2

v4 fixed:
1. from --enable-xml2 to --enable-libxml2
2. move system includes to the top
3. remove unused includes
4. rename enum
5. add a trailing comma to the last enum entry
6. fix comment typo
7. add const in front of the DASHContext class
8. check the sscanf return value and print a warning message on error
9. check validity before freeing seg->url and seg
10. check whether val is NULL before using atoll

v5 fixed:
1. fix typo from mainifest to manifest

v6 fixed:
1. from realloc to av_realloc
2. from free to av_free

v7 fixed:
1. remove -lxml2 from configure when using require_pkg_config

v8 fixed:
1. fix security problem when replacing the filename template, using av_asprintf

v9 modified:
1. make the manifest parser clearer

v10 fixed:
1. fix function API name code style
2. remove redundant strreplace call
3. remove redundant memory operations and check the return value of get_content_url()
4. add a space between ) and {
5. remove unneeded logging of values

v11 fixed:
1. from atoll to strtoll

v12 fixed:
1. remove strreplace and use av_strreplace instead

v13 fixed:
1. fix bug: cannot play: dash.edgesuite.net

v14 fixed:
1. fix bug: TLS connection was not properly terminated
2. fix bug: no trailing CRLF found in HTTP header

v15 fixed:
1. play YouTube links: ffmpeg -i $(youtube-dl -J "www.youtube.com/..." | jq -r ".requested_formats[0].manifest_url")
2. refine code for live streams that use a SegmentTimeline
Reviewed-by: Clément Bœsch <u...@pkh.me>
Reviewed-by: Michael Niedermayer <mich...@niedermayer.cc>
Reviewed-by: Carl Eugen Hoyos <ceho...@ag.or.at>
Reviewed-by: Rodger Combs <rodger.co...@gmail.com>
Reviewed-by: Moritz Barsnick <barsn...@gmx.net>
Reviewed-by: Nicolas George <geo...@nsup.org>
Reviewed-by: Ricardo Constantino <wiia...@gmail.com>
Reviewed-by: wm4 <nfx...@googlemail.com>
Tested-by: Andy Furniss <adf.li...@gmail.com>
Reported-by: Andy Furniss <adf.li...@gmail.com>
Signed-off-by: Steven Liu <l...@chinaffmpeg.org>
Signed-off-by: samsamsam <samsam...@o2.pl>
---
 configure                |    4 +
 libavformat/Makefile     |    1 +
 libavformat/allformats.c |    2 +-
 libavformat/dashdec.c    | 1981 ++++++++++++++++++++++++++++++
 4 files changed, 1987 insertions(+), 1 deletion(-)
 create mode 100644 libavformat/dashdec.c

diff --git a/configure b/configure
index 05f6dcc99a..7a7d61fa13 100755
--- a/configure
+++ b/configure
@@ -272,6 +272,7 @@ External library support:
   --enable-libxcb-shape    enable X11 grabbing shape rendering [autodetect]
   --enable-libxvid         enable Xvid encoding via xvidcore, native MPEG-4/Xvid encoder exists [no]
+  --enable-libxml2         enable XML parsing using the C library libxml2 [no]
   --enable-libzimg         enable z.lib, needed for zscale filter [no]
   --enable-libzmq          enable message passing via libzmq [no]
   --enable-libzvbi         enable teletext support via libzvbi [no]
@@ -1576,6 +1577,7 @@ EXTERNAL_LIBRARY_LIST="
     libvpx
     libwavpack
     libwebp
+    libxml2
     libzimg
     libzmq
     libzvbi
@@ -2937,6 +2939,7 @@ avi_muxer_select="riffenc"
 caf_demuxer_select="iso_media riffdec"
 caf_muxer_select="iso_media"
 dash_muxer_select="mp4_muxer"
+dash_demuxer_deps="libxml2"
 dirac_demuxer_select="dirac_parser"
 dts_demuxer_select="dca_parser"
 dtshd_demuxer_select="dca_parser"
@@ -5996,6 +5999,7 @@ enabled openssl           && { use_pkg_config openssl openssl/ssl.h OPENSSL_init_ssl ||
                                check_lib openssl openssl/ssl.h SSL_library_init -lssl -lcrypto -lws2_32 -lgdi32 ||
                                die "ERROR: openssl not found"; }
 enabled qtkit_indev       && { check_header_objcc QTKit/QTKit.h || disable qtkit_indev; }
+enabled libxml2           && require_pkg_config libxml-2.0 libxml2/libxml/xmlversion.h xmlCheckVersion
 if enabled gcrypt; then
     GCRYPT_CONFIG="${cross_prefix}libgcrypt-config"

diff --git a/libavformat/Makefile b/libavformat/Makefile
index f2b465cfa2..3d478749d0 100644
--- a/libavformat/Makefile
+++ b/libavformat/Makefile
@@ -133,6 +133,7 @@ OBJS-$(CONFIG_CRC_MUXER)                 += crcenc.o
 OBJS-$(CONFIG_DATA_DEMUXER)              += rawdec.o
 OBJS-$(CONFIG_DATA_MUXER)                += rawenc.o
 OBJS-$(CONFIG_DASH_MUXER)                += dashenc.o
+OBJS-$(CONFIG_DASH_DEMUXER)              += dashdec.o
 OBJS-$(CONFIG_DAUD_DEMUXER)              += dauddec.o
 OBJS-$(CONFIG_DAUD_MUXER)                += daudenc.o
 OBJS-$(CONFIG_DCSTR_DEMUXER)             += dcstr.o

diff --git a/libavformat/allformats.c b/libavformat/allformats.c
index cd8200ea1c..aeb9b710fe 100644
--- a/libavformat/allformats.c
+++ b/libavformat/allformats.c
@@ -96,7 +96,7 @@ static void register_all(void)
     REGISTER_DEMUXER (CINE,             cine);
     REGISTER_DEMUXER (CONCAT,           concat);
     REGISTER_MUXER   (CRC,              crc);
-    REGISTER_MUXER   (DASH,             dash);
+    REGISTER_MUXDEMUX(DASH,             dash);
     REGISTER_MUXDEMUX(DATA,             data);
     REGISTER_MUXDEMUX(DAUD,             daud);
     REGISTER_DEMUXER (DCSTR,            dcstr);

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
new file mode 100644
index 0000000000..4718ce24ab
--- /dev/null
+++ b/libavformat/dashdec.c
@@ -0,0 +1,1981 @@
+/*
+ * Dynamic Adaptive Streaming over HTTP demux
+ * Copyright
(c) 2017 samsam...@o2.pl based on HLS demux + * Copyright (c) 2017 Steven
Liu + * + * This file is part of FFmpeg. + * + * FFmpeg is free software;
you can redistribute it and/or + * modify it under the terms of the GNU Lesser
General Public + * License as published by the Free Software Foundation;
either + * version 2.1 of the License, or (at your option) any later version.
+ * + * FFmpeg is distributed in the hope that it will be useful, + * but
WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public
License for more details. + * + * You should have received a copy of the GNU
Lesser General Public + * License along with FFmpeg; if not, write to the Free
Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301 USA + */ +#include <libxml/parser.h> +#include
"libavutil/intreadwrite.h" +#include "libavutil/opt.h"
+#include "libavutil/time.h" +#include
"libavutil/parseutils.h" +#include "internal.h" +#include
"avio_internal.h" + +#define INITIAL_BUFFER_SIZE 32768 + +struct
fragment { + int64_t url_offset; + int64_t size; + char *url; +};
+ +/* + * reference to : ISO_IEC_23009-1-DASH-2012 + * Section: 5.3.9.6.2 +
* Table: Table 17 — Semantics of SegmentTimeline element + * */ +struct
timeline { + /* t: Element or Attribute Name + * specifies the MPD
start time, in @timescale units, + * the first Segment in the series
starts relative to the beginning of the Period. + * The value of this
attribute must be equal to or greater than the sum of the previous S + *
element earliest presentation time and the sum of the contiguous Segment
durations. + * If the value of the attribute is greater than what is
expressed by the previous S element, + * it expresses discontinuities in
the timeline. + * If not present then the value shall be assumed to be
zero for the first S element + * and for the subsequent S elements, the
value shall be assumed to be the sum of + * the previous S element's
earliest presentation time and contiguous duration + * (i.e. previous S@t
+ @d * (@r + 1)). + * */ + int64_t t; + /* r: Element or Attribute
Name + * specifies the repeat count of the number of following contiguous
Segments with + * the same duration expressed by the value of @d. This
value is zero-based + * (e.g. a value of three means four Segments in the
contiguous series). + * */ + int64_t r; + /* d: Element or
Attribute Name + * specifies the Segment duration, in units of the value
of the @timescale. + * */ + int64_t d; +}; + +enum DASHTmplUrlType
{ + TMP_URL_TYPE_UNSPECIFIED, + TMP_URL_TYPE_NUMBER, +
TMP_URL_TYPE_TIME, +}; + +/* + * Each playlist has its own demuxer. If it
is currently active, + * it has an opened AVIOContext too, and potentially an
AVPacket + * containing the next packet from this stream. + */ +struct
representation { + char *url_template; + char *url_template_pattern; +
char *url_template_format; + enum DASHTmplUrlType tmp_url_type; +
AVIOContext pb; + AVIOContext *input; + AVFormatContext *parent; +
AVFormatContext *ctx; + AVPacket pkt; + int rep_idx; + int
rep_count; + int stream_index; + + enum AVMediaType type; +
int64_t target_duration; + + int n_fragments; + struct fragment
**fragments; /* VOD list of fragment for profile */ + + int n_timelines;
+ struct timeline **timelines; + + int64_t first_seq_no; + int64_t
last_seq_no; + int64_t start_number; /* used in case when we have dynamic
list of segment to know which segments are new one*/ + + int64_t
fragment_duration; + int64_t fragment_timescale; + + int64_t
presentation_timeoffset; + + int64_t cur_seq_no; + int64_t
cur_seg_offset; + int64_t cur_seg_size; + struct fragment *cur_seg; +
+ /* Currently active Media Initialization Section */ + struct fragment
*init_section; + uint8_t *init_sec_buf; + uint32_t init_sec_buf_size;
+ uint32_t init_sec_data_len; + uint32_t init_sec_buf_read_offset; +
int fix_multiple_stsd_order; + int64_t cur_timestamp; + int
is_restart_needed; +}; + +typedef struct DASHContext { + const AVClass
*class; + char *base_url; + struct representation *cur_video; +
struct representation *cur_audio; + + /* MediaPresentationDescription
Attribute */ + uint64_t media_presentation_duration; + uint64_t
suggested_presentation_delay; + uint64_t availability_start_time; +
uint64_t publish_time; + uint64_t minimum_update_period; + uint64_t
time_shift_buffer_depth; + uint64_t min_buffer_time; + + /* Period
Attribute */ + uint64_t period_duration; + uint64_t period_start; + +
int is_live; + AVIOInterruptCB *interrupt_callback; + char *user_agent; ///< holds HTTP user agent set as an AVOption to the HTTP protocol context + char *cookies; ///< holds HTTP cookie values set in either the initial response or as an AVOption to the HTTP protocol context + char *headers; ///< holds HTTP headers set as an AVOption to the HTTP protocol context + AVDictionary *avio_opts; +}
DASHContext; + +static uint64_t get_current_time_in_sec(void) +{ +
return av_gettime() / 1000000; +} + +static uint64_t
get_utc_date_time_insec(AVFormatContext *s, const char *datetime) +{ + struct tm
timeinfo; + int year = 0; + int month = 0; + int day = 0; + int
hour = 0; + int minute = 0; + int ret = 0; + float second = 0.0; +
+ /* ISO-8601 date parser */ + if (!datetime) + return 0; + +
ret = sscanf(datetime, "%d-%d-%dT%d:%d:%fZ", &year, &month,
&day, &hour, &minute, &second); + /* year, month, day,
hour, minute, second 6 arguments */ + if (ret != 6) { + av_log(s,
AV_LOG_WARNING, "get_utc_date_time_insec get a wrong time format\n");
+ } + timeinfo.tm_year = year - 1900; + timeinfo.tm_mon = month -
1; + timeinfo.tm_mday = day; + timeinfo.tm_hour = hour; +
timeinfo.tm_min = minute; + timeinfo.tm_sec = (int)second; + +
return av_timegm(&timeinfo); +} + +static uint32_t
get_duration_insec(AVFormatContext *s, const char *duration) +{ + /* ISO-8601
duration parser */ + uint32_t days = 0; + uint32_t hours = 0; +
uint32_t mins = 0; + uint32_t secs = 0; + uint32_t size = 0; +
float value = 0; + uint8_t type = 0; + const char *ptr = duration; +
+    while (*ptr) {
+        if (*ptr == 'P' || *ptr == 'T') {
+            ptr++;
+            continue;
+        }
+
+        if (sscanf(ptr, "%f%c%n", &value, &type, &size) != 2) {
+            av_log(s, AV_LOG_WARNING, "get_duration_insec get a wrong time format\n");
+            return 0; /* parser error */
+        }
+        switch (type) {
+            case 'D':
+                days = (uint32_t)value;
+                break;
+            case 'H':
+                hours = (uint32_t)value;
+                break;
+            case 'M':
+                mins = (uint32_t)value;
+                break;
+            case 'S':
+                secs = (uint32_t)value;
+                break;
+            default:
+                // handle invalid type
+                break;
+        }
+        ptr += size;
+    }
+    return ((days * 24 + hours) * 60 + mins) * 60 + secs;
+}
+
+static int64_t get_segment_start_time_based_on_timeline(struct representation *pls, int64_t cur_seq_no)
+{
+    int64_t start_time = 0;
+    int64_t i = 0;
+    int64_t j = 0;
+    int64_t num = 0;
+
+    if (pls->n_timelines) {
+        for (i = 0; i < pls->n_timelines; i++) {
+            if (pls->timelines[i]->t > 0) {
+                start_time = pls->timelines[i]->t;
+            }
+            if (num == cur_seq_no)
+                goto finish;
+
+            start_time += pls->timelines[i]->d;
+            for (j = 0; j < pls->timelines[i]->r; j++) {
+                num++;
+                if (num == cur_seq_no)
+                    goto finish;
+                start_time += pls->timelines[i]->d;
+            }
+            num++;
+        }
+    }
+finish:
+    return start_time;
+}
+
+static int64_t calc_next_seg_no_from_timeline(struct representation *pls, int64_t cur_time)
+{
+    int64_t i = 0;
+    int64_t j = 0;
+    int64_t num = 0;
+    int64_t start_time = 0;
+
+    for (i = 0; i < pls->n_timelines; i++) {
+        if (pls->timelines[i]->t > 0) {
+            start_time = pls->timelines[i]->t;
+        }
+        if (start_time > cur_time)
+            goto finish;
+
+        start_time += pls->timelines[i]->d;
+        for (j = 0; j < pls->timelines[i]->r; j++) {
+            num++;
+            if (start_time > cur_time)
+                goto finish;
+            start_time += pls->timelines[i]->d;
+        }
+        num++;
+    }
+
+    return -1;
+
+finish:
+    return num;
+}
+
+static void
free_fragment(struct fragment **seg) +{ + if (!(*seg)) { + return;
+ } + av_freep(&(*seg)->url); + av_freep(seg); +} +
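As an aside, the S@t/@d/@r bookkeeping that get_segment_start_time_based_on_timeline() and calc_next_seg_no_from_timeline() perform above can be illustrated with a tiny standalone program (not part of the patch; the struct name and the -1 convention for a missing @t are inventions of this example):

#include <inttypes.h>
#include <stdio.h>

/* One <S> element of a SegmentTimeline: start t, duration d, repeat count r
 * (r repeats mean r+1 segments of duration d). A negative t stands in for
 * "@t absent" in this toy example. */
struct s_elem { int64_t t, d, r; };

int main(void)
{
    /* <S t="0" d="5000" r="2"/><S d="4000"/> with timescale 1000:
     * four segments starting at 0, 5000, 10000 and 15000 timescale units. */
    const struct s_elem tl[] = { { 0, 5000, 2 }, { -1, 4000, 0 } };
    int64_t start = 0, num = 0;
    int i;

    for (i = 0; i < (int)(sizeof(tl) / sizeof(tl[0])); i++) {
        if (tl[i].t > 0)
            start = tl[i].t;   /* an explicit @t resets the clock */
        for (int64_t j = 0; j <= tl[i].r; j++) {
            printf("segment %"PRId64" starts at %"PRId64"\n", num++, start);
            start += tl[i].d;
        }
    }
    return 0;
}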
+static void free_fragment_list(struct representation *pls)
+{
+    int i;
+
+    for (i = 0; i < pls->n_fragments; i++) {
+        free_fragment(&pls->fragments[i]);
+    }
+    av_freep(&pls->fragments);
+    pls->n_fragments = 0;
+}
+
+static void free_timelines_list(struct representation *pls)
+{
+    int i;
+
+    for (i = 0; i < pls->n_timelines; i++) {
+        av_freep(&pls->timelines[i]);
+    }
+    av_freep(&pls->timelines);
+    pls->n_timelines = 0;
+}
+
+static void free_representation(struct representation *pls)
+{
+    free_fragment_list(pls);
+    free_timelines_list(pls);
+    free_fragment(&pls->cur_seg);
+    free_fragment(&pls->init_section);
+    av_freep(&pls->init_sec_buf);
+    av_freep(&pls->pb.buffer);
+    if (pls->input)
+        ff_format_io_close(pls->parent, &pls->input);
+    if (pls->ctx) {
+        pls->ctx->pb = NULL;
+        avformat_close_input(&pls->ctx);
+    }
+
+    av_free(pls->url_template);
+    av_free(pls->url_template_pattern);
+    av_free(pls->url_template_format);
+    av_free(pls);
+}
+
+static void
update_options(char **dest, const char *name, void *src) +{ +
av_freep(dest); + av_opt_get(src, name, AV_OPT_SEARCH_CHILDREN,
(uint8_t**)dest); + if (*dest && !strlen(*dest)) + av_freep(dest); +} +static
int open_url(AVFormatContext *s, AVIOContext **pb, const char *url, +
*opts, AVDictionary *opts2, int *is_http) +{ + DASHContext *c =
s->priv_data; + AVDictionary *tmp = NULL; + const char *proto_name =
NULL; + int ret; + void *p = NULL; + + av_dict_copy(&tmp,
opts, 0); + av_dict_copy(&tmp, opts2, 0); + + if (av_strstart(url,
"crypto", NULL)) { + if (url[6] == '+' || url[6] ==
':') + proto_na = avio_find_protocol_name(url + 7); + }
+ + if (!proto_name) + proto_name = avio_find_protocol_name(url);
+ + if (!proto_name) + return AVERROR_INVALIDDATA; + + // only
http(s) & file are allowed + if (!av_strstart(proto_name,
"http", NULL) && !av_strstart(proto_name, "file",
NULL)) { + return AVERROR_INVALIDDATA; + } + if
(!strncmp(proto_name, url, strlen(proto_name)) &&
url[strlen(proto_name)] == ':') { + ; + } else if
(av_strstart(url, "crypto", NULL) && !strncmp(proto_name, url +
7, strlen(proto_name)) && url[7 + strlen(proto_name)] == ':') {
+ ; + } else if (strcmp(proto_name, "file") ||
!strncmp(url, "file,", 5)) { + return AVERROR_INVALIDDATA; +
} + ret = s->io_open(s, pb, url, AVIO_FLAG_READ, &tmp); + if
(ret >= 0) { + // update cookies on http response with setcookies.
+ p = (s->flags & AVFMT_FLAG_CUSTOM_IO) ? NULL : s->pb; +
update_options(& "cookies", p); + av_dict_set(&opt
"cookies", c->cookies, 0); + } + + av_dict_free(&tmp);
+ + if (is_http) + *is_http = av_strstart(proto_name,
"http", NULL); + + return ret; + +} + +static char
*get_content_url(xmlNodePtr *baseurl_nodes, + n_baseurl_nodes,
+ *rep_id_val, + *rep_bandwidth_val, +
*val) +{ + int i; + xmlChar *text; + char *url = NULL;
+ char *tmp_str = av_mallocz(MAX_URL_SIZE); + char *tmp_str_2 = NULL; +
+ if (!tmp_str) { + return NULL; + } + for (i = 0; i <
n_baseurl_nodes; ++i) { + if (baseurl_nodes[i] && +
baseurl_ && + baseurl_ == XML_TEXT_NODE) { +
text = xmlNodeGetContent(baseurl_node + if (text) { +
= av_mallocz(MAX_URL_SIZE); + (!tmp_str_2) { +
+ NULL; + +
MAX_URL_SIZE, tmp_str, text); + + =
tmp_str_2; + + } + } + } + if
(val) + av_strlcat(tmp_s (const char*)val, MAX_URL_SIZE); + + if
(rep_id_val) { + url = av_strireplace(tmp_str,
"$RepresentationID$", (const char*)rep_id_val); +
av_free(tmp_str) + tmp_str = url; + } + if (rep_bandwidth_val
&& tmp_str) + url = av_strireplace(tmp_str,
"$Bandwidth$", (const char*)rep_bandwidth_val); + if (tmp_str !=
url) + av_free(tmp_str) + return url; +} + +static xmlChar
*get_val_from_nodes_tab(xmlNod *nodes, const int n_nodes, const xmlChar
*attrname) +{ + int i; + xmlChar *val; + + for (i = 0; i <
n_nodes; ++i) { + if (nodes[i]) { + val =
xmlGetProp(nodes[i], attrname); + if (val) + val;
+ } + } + + return NULL; +} + +static xmlNodePtr
find_child_node_by_name(xmlNod rootnode, const xmlChar *nodename) +{ +
xmlNodePtr node = rootnode; + if (!node) { + return NULL; + }
+ + node = xmlFirstElementChild(node); + while (node) { + if
(!xmlStrcmp(node->name, nodename)) { + return node; + }
+ node = xmlNextElementSibling(node); + } + return NULL; +} +
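To make the intent of the next function easier to follow: given a SegmentTemplate media attribute such as "seg-$Number%05d$.m4s", it is expected to produce the pattern "$Number%05d$" and the printf format "%05" PRId64, which the caller later uses to substitute the current segment number. A hypothetical, standalone illustration of that contract (not code from the patch):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *media   = "seg-$Number%05d$.m4s";  /* from the MPD */
    const char *pattern = "$Number%05d$";          /* what *o_pattern should hold */
    const char *fmt     = "%05" PRId64;            /* what *o_format should hold */
    char value[32], out[128];
    const char *pos = strstr(media, pattern);

    if (!pos)
        return 1;
    /* fmt is a constant we built ourselves, which is the whole point of the
     * format-string discussion above */
    snprintf(value, sizeof(value), fmt, (int64_t)42);
    snprintf(out, sizeof(out), "%.*s%s%s",
             (int)(pos - media), media, value, pos + strlen(pattern));
    printf("%s\n", out);                           /* prints "seg-00042.m4s" */
    return 0;
}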
+static int get_repl_pattern_and_format(const char *i_url, const char *i_marker, char **o_pattern, char **o_format)
+{
+    int ret = -1;
+    int marker_len = 0;
+    int format_len = 0;
+    char *prefix = NULL;
+    char *start = NULL;
+    char *end = NULL;
+
+    if (av_stristr(i_url, i_marker)) {
+        *o_pattern = av_strdup(i_marker);
+        *o_format = av_strdup("%"PRId64);
+        ret = 0;
+    } else {
+        prefix = av_strdup(i_marker);
+        marker_len = strlen(prefix) - 1;
+        prefix[marker_len] = '\0';
+        start = av_stristr(i_url, prefix);
+        if (!start)
+            goto finish;
+
+        end = strchr(start + 1, '$');
+        if (!end)
+            goto finish;
+
+        if (start[marker_len] != '%')
+            goto finish;
+
+        if (end[-1] != 'd')
+            goto finish;
+
+        format_len = end - start - marker_len - 1 + strlen(PRId64);
+        *o_format = av_mallocz(format_len + 1);
+        av_strlcpy(*o_format, start + marker_len, end - start - marker_len - 1);
+        av_strlcat(*o_format, PRId64, strlen(*o_format) + strlen(PRId64));
+        *o_pattern = av_mallocz(end - start + 2);
+        if (!*o_pattern) {
+            ret = AVERROR(EINVAL);
+            goto finish;
+        }
+        av_strlcpy(*o_pattern, start, end - start + 1);
+        ret = 0;
+
+finish:
+        av_free(prefix);
+    }
+
+    return ret;
+} + +static enum AVMediaType get_content_type(xmlNodePtr node) +{ +
enum AVMediaType type = AVMEDIA_TYPE_UNKNOWN; + int i = 0; + const char
*attr; + xmlChar *val = NULL; + + if (node) { + while (type ==
AVMEDIA_TYPE_UNKNOWN && i < 2) { + attr = (i) ?
"mimeType" : "contentType"; + val =
xmlGetProp(node, attr); + if (val) { +
(av_stristr((const char *)val, "video")) { + =
AVMEDIA_TYPE_VIDEO; + else if (av_stristr((const char *)val,
"audio")) { + = AVMEDIA_TYPE_AUDIO; +
+ + } + i++; + } + } +
return type; +} + +static int parse_manifest_segmenturlnode( *s, struct
representation *rep, + fragmenturl_node, +
*baseurl_nodes, + *rep_id_val, +
*rep_bandwidth_val) +{ + xmlChar *initialization_val = NULL; + xmlChar
*media_val = NULL; + + if (!xmlStrcmp(fragmenturl_node-> (const xmlChar
*)"Initialization")) { + initialization_v =
xmlGetProp(fragmenturl_node, "sourceURL"); + if
(initialization_val) { + rep->ini = av_mallocz(sizeof(struct
fragment)); + if (!rep->init_section) { + +
AVERROR(ENOMEM); + } + rep->ini =
get_content_url(baseurl_nodes, 4, + + +
+ if (!rep->init_section->url) { +
+ + AVERROR(ENOMEM); + } +
rep->ini = -1; + xmlFree( + } + } else if
(!xmlStrcmp(fragmenturl_node-> (const xmlChar *)"SegmentURL")) { +
media_val = xmlGetProp(fragmenturl_node, "media"); + if
(media_val) { + struct fragment *seg = av_mallocz(sizeof(struct
fragment)); + if (!seg) { + +
AVERROR(ENOMEM); + } + seg->url =
get_content_url(baseurl_nodes, 4, + + +
+ if (!seg->url) { + +
+ AVERROR(ENOMEM); + } +
seg->siz = -1; + dynarray &rep->n_fragments, seg); +
xmlFree( + } + } + + return 0; +} + +static int
parse_manifest_segmenttimeline *s, struct representation *rep, +
fragment_timeline_node) +{ + xmlAttrPtr attr = NULL; + xmlChar *val
= NULL; + + if (!xmlStrcmp(fragment_timeline_ (const xmlChar
*)"S")) { + struct timeline *tml = av_mallocz(sizeof(struct
timeline)); + if (!tml) { + return AVERROR(ENOMEM); +
} + attr = fragment_timeline_node->proper + while (attr) {
+ val = xmlGetProp(fragment_timeline_n attr->name); + +
if (!val) { + AV_LOG_WARNING,
"parse_manifest_segmenttimelin attr->name = %s val is NULL\n",
attr->name); + + } + + if
(!xmlStrcmp(attr->name, (const xmlChar *)"t")) { +
= (int64_t)strtoll(val, NULL, 10); + } else if
(!xmlStrcmp(attr->name, (const xmlChar *)"r")) { +
=(int64_t) strtoll(val, NULL, 10); + } else if
(!xmlStrcmp(attr->name, (const xmlChar *)"d")) { +
= (int64_t)strtoll(val, NULL, 10); +// = (int64_t)
strtoll(val, NULL, 10); + } + attr = attr->next; +
xmlFree( + } + dynarray_add(&re
&rep->n_timelines, tml); + } + + return 0; +} + +static int
parse_manifest_representation( *s, const char *url, + node, +
adaptionset_node, + mpd_baseurl_node, +
period_baseurl_node, + fragment_template_node, +
content_component_node, +
adaptionset_baseurl_node) +{ + int32_t ret = 0; + int32_t
audio_rep_idx = 0; + int32_t video_rep_idx = 0; + char *temp_string =
NULL; + DASHContext *c = s->priv_data; + struct representation *rep
= NULL; + struct fragment *seg = NULL; + xmlNodePtr
representation_segmenttemplate = NULL; + xmlNodePtr
representation_baseurl_node = NULL; + xmlNodePtr
representation_segmentlist_nod = NULL; + xmlNodePtr fragment_timeline_node
= NULL; + xmlNodePtr fragment_templates_tab[2]; + xmlChar *duration_val
= NULL; + xmlChar *presentation_timeoffset_val = NULL; + xmlChar
*startnumber_val = NULL; + xmlChar *timescale_val = NULL; + xmlChar
*initialization_val = NULL; + xmlChar *media_val = NULL; + xmlNodePtr
baseurl_nodes[4]; + xmlNodePtr representation_node = node; + xmlChar
*rep_id_val = xmlGetProp(representation_node "id"); + xmlChar
*rep_bandwidth_val = xmlGetProp(representation_node "bandwidth"); +
enum AVMediaType type = AVMEDIA_TYPE_UNKNOWN; + + // try get information
from representation + if (type == AVMEDIA_TYPE_UNKNOWN) + type =
get_content_type(representatio + // try get information from
contentComponen + if (type == AVMEDIA_TYPE_UNKNOWN) + type =
get_content_type(content_compo + // try get information from adaption set
+ if (type == AVMEDIA_TYPE_UNKNOWN) + type =
get_content_type(adaptionset_n + if (type == AVMEDIA_TYPE_UNKNOWN) { +
av_log(s, AV_LOG_VERBOSE, "Parsing '%s' - skipp not supported
representation type\n", url); + } else if ((type == AVMEDIA_TYPE_VIDEO
&& !c->cur_video) || (type == AVMEDIA_TYPE_AUDIO &&
!c->cur_audio)) { + // convert selected representation to our
internal struct + rep = av_mallocz(sizeof(struct representation)); +
if (!rep) { + ret = AVERROR(ENOMEM); + goto end;
+ } + representation_s = find_child_node_by_name(repres
"SegmentTemplate"); + representation_b =
find_child_node_by_name(repres "BaseURL"); + representation_s =
find_child_node_by_name(repres "SegmentList"); + +
baseurl_nodes[0] = mpd_baseurl_node; + baseurl_nodes[1] =
period_baseurl_node; + baseurl_nodes[2] = adaptionset_baseurl_node; +
baseurl_nodes[3] = representation_baseurl_node; + + if
(representation_segmenttemplat || fragment_template_node) { +
fragment = NULL; + fragment = representation_segmenttemplate +
fragment = fragment_template_node; + + presenta =
get_val_from_nodes_tab(fragmen 2, "presentationTimeOffset"); +
duration = get_val_from_nodes_tab(fragmen 2, "duration"); +
startnum = get_val_from_nodes_tab(fragmen 2, "startNumber"); +
timescal = get_val_from_nodes_tab(fragmen 2, "timescale"); +
initiali = get_val_from_nodes_tab(fragmen 2, "initialization");
+ media_va = get_val_from_nodes_tab(fragmen 2, "media"); +
+ if (initialization_val) { + =
av_mallocz(sizeof(struct fragment)); + (!rep->init_section)
{ + + = AVERROR(ENOMEM); +
end; + + = get_content_url(baseurl_nodes, 4,
rep_id_val, rep_bandwidth_val, initialization_val); +
(!rep->init_section->url) { + + +
= AVERROR(ENOMEM); + end; + +
= -1; + + } + + if
(media_val) { + = get_content_url(baseurl_nodes, 4,
rep_id_val, rep_bandwidth_val, media_val); + =
rep->url_template; + (temp_string) { +
(av_stristr(temp_string, "$Number")) { +
"$Number$", &(rep->url_template_pattern),
&(rep->url_template_format)); + = TMP_URL_TYPE_NUMBER;
/* Number-Based. */ + else if (av_stristr(temp_string,
"$Time")) { + "$Time$",
&(rep->url_template_pattern), &(rep->url_template_format)); +
= TMP_URL_TYPE_TIME; /* Time-Based. */ + else {
+ = NULL; + + +
+ } + + if (presentation_timeoffset_val) { +
= (int64_t) strtoll(presentation_timeoffse NULL, 10); +
+ } + if (duration_val) { + =
(int64_t) strtoll(duration_val, NULL, 10); + + }
+ if (timescale_val) { + = (int64_t)
strtoll(timescale_val, NULL, 10); + + } +
if (startnumber_val) { + = (int64_t)
strtoll(startnumber_val, NULL, 10); + + } + +
fragment = find_child_node_by_name(repres "SegmentTimeline");
+ + if (!fragment_timeline_node) + =
find_child_node_by_name(fragme "SegmentTimeline"); + if
(fragment_timeline_node) { + = xmlFirstElementChild(fragment_
+ (fragment_timeline_node) { + =
parse_manifest_segmenttimeline rep, fragment_timeline_node); +
(ret < 0) { + ret; + + =
xmlNextElementSibling(fragment + + } + }
else if (representation_baseurl_node && !representation_segmentlist_no
{ + seg = av_mallocz(sizeof(struct fragment)); + if
(!seg) { + = AVERROR(ENOMEM); + end; +
} + seg->url = get_content_url(baseurl_nodes, 4,
rep_id_val, rep_bandwidth_val, NULL); + if (!seg->url) { +
+ = AVERROR(ENOMEM); + end; +
} + seg->siz = -1; + dynarray
&rep->n_fragments, seg); + } else if
(representation_segmentlist_no { + // TODO: www.brendanlong.com
www.brendanlong.com + // www-itec.uni-klu.ac.at
www-itec.uni-klu.ac.at + xmlNodeP fragmenturl_node = NULL; +
duration = xmlGetProp(representation_segm "duration"); +
timescal = xmlGetProp(representation_segm "timescale"); +
if (duration_val) { + = (int64_t) strtoll(duration_val, NULL,
10); + + } + if (timescale_val) { +
= (int64_t) strtoll(timescale_val, NULL, 10); +
+ } + fragment = xmlFirstElementChild(represent +
while (fragmenturl_node) { + =
parse_manifest_segmenturlnode( rep, fragmenturl_node, + +
+ + (ret < 0) { +
ret; + + = xmlNextElementSibling(fragment
+ } + + fragment = find_child_node_by_name(repres
"SegmentTimeline"); + + if (!fragment_timeline_node) +
= find_child_node_by_name(fragme "SegmentTimeline"); +
if (fragment_timeline_node) { + =
xmlFirstElementChild(fragment_ + (fragment_timeline_node) { +
= parse_manifest_segmenttimeline rep, fragment_timeline_node);
+ (ret < 0) { + ret; + +
= xmlNextElementSibling(fragment + +
} + } else { + free_rep + rep = NULL; +
av_log(s AV_LOG_ERROR, "Unknown format of Representation node id[%s]
\n", (const char *)rep_id_val); + } + + if (rep) { +
if (rep->fragment_duration > 0 &&
!rep->fragment_timescale) + = 1; + if (type ==
AVMEDIA_TYPE_VIDEO) { + = video_rep_idx; + =
rep; + } else { + = audio_rep_idx; +
= rep; + } + } + } + + video_rep_idx += type ==
AVMEDIA_TYPE_VIDEO; + audio_rep_idx += type == AVMEDIA_TYPE_AUDIO; +
+end: + if (rep_id_val) + xmlFree(rep_id_v + if
(rep_bandwidth_val) + xmlFree(rep_band + + return ret; +} +
+static int parse_manifest_adaptationset(A *s, const char *url, +
adaptionset_node, + mpd_baseurl_node, +
period_baseurl_node) +{ + int ret = 0; + xmlNodePtr
fragment_template_node = NULL; + xmlNodePtr content_component_node = NULL;
+ xmlNodePtr adaptionset_baseurl_node = NULL; + xmlNodePtr node = NULL;
+ + node = xmlFirstElementChild(adaptions + while (node) { + if
(!xmlStrcmp(node->name, (const xmlChar *)"SegmentTemplate")) { +
fragment = node; + } else if (!xmlStrcmp(node->name, (const
xmlChar *)"ContentComponent")) { + content_ = node; +
} else if (!xmlStrcmp(node->name, (const xmlChar *)"BaseURL")) {
+ adaption = node; + } else if (!xmlStrcmp(node->name,
(const xmlChar *)"Representation")) { + ret =
parse_manifest_representation( url, node, + +
+ + + + +
if (ret < 0) { + ret; + } + }
+ node = xmlNextElementSibling(node); + } + return 0; +} + +
+static int parse_manifest(AVFormatContext *s, const char *url, AVIOContext
*in) +{ + DASHContext *c = s->priv_data; + int ret = 0; + int
close_in = 0; + uint8_t *new_url = NULL; + int64_t filesize = 0; +
char *buffer = NULL; + AVDictionary *opts = NULL; + xmlDoc *doc = NULL;
+ xmlNodePtr root_element = NULL; + xmlNodePtr node = NULL; +
xmlNodePtr period_node = NULL; + xmlNodePtr mpd_baseurl_node = NULL; +
xmlNodePtr period_baseurl_node = NULL; + xmlNodePtr adaptionset_node =
NULL; + xmlAttrPtr attr = NULL; + xmlChar *val = NULL; + uint32_t
perdiod_duration_sec = 0; + uint32_t perdiod_start_sec = 0; + int32_t
audio_rep_idx = 0; + int32_t video_rep_idx = 0; + + if (!in) { +
close_in = 1; + /* This is XML manifest there is no need to set range
header */ + av_dict_set(&opt "seekable", "0", 0);
+ // broker prior HTTP options that should be consistent across requests
+ av_dict_set(&opt "user-agent", c->user_agent, 0); +
av_dict_set(&opt "cookies", c->cookies, 0); +
av_dict_set(&opt "headers", c->headers, 0); + + ret =
avio_open2(&in, url, AVIO_FLAG_READ, c->interrupt_callback, &opts);
+ av_dict_free(&op + if (ret < 0) + return
ret; + } + + if (av_opt_get(in, "location",
AV_OPT_SEARCH_CHILDREN, &new_url) >= 0) { + c->base_url =
av_strdup(new_url); + } else { + c->base_url = av_strdup(url);
+ } + + filesize = avio_size(in); + if (filesize <= 0) { +
filesize = 8 * 1024; + } + + buffer = av_mallocz(filesize); + if
(!buffer) { + return AVERROR(ENOMEM); + } + + filesize =
avio_read(in, buffer, filesize); + if (filesize > 0) { +
LIBXML_TEST_VERS + + doc = xmlReadMemory(buffer, filesize,
c->base_url, NULL, 0); + root_element = xmlDocGetRootElement(doc);
+ node = root_element; + + if (!node) { + ret =
AVERROR_INVALIDDATA; + av_log(s AV_LOG_ERROR, "Unable to parse
'%s' - missing root node\n", url); + goto cleanup; +
} + + if (node->type != XML_ELEMENT_NODE || +
xmlStrcm (const xmlChar *)"MPD")) { + ret =
AVERROR_INVALIDDATA; + av_log(s AV_LOG_ERROR, "Unable to parse
'%s' - wrong root node name[%s] type[%d]\n", url, node->name,
(int)node->type); + goto cleanup; + } + + val =
xmlGetProp(node, "type"); + if (!val) { + av_log(s
AV_LOG_ERROR, "Unable to parse '%s' - missing type attrib\n",
url); + ret = AVERROR_INVALIDDATA; + goto cleanup; +
} + if (!xmlStrcmp(val, (const xmlChar *)"dynamic")) +
c->is_li = 1; + xmlFree(val); + + attr =
node->properties; + while (attr) { + val =
xmlGetProp(node, attr->name); + + if (!xmlStrcmp(attr->name,
(const xmlChar *)"availabilityStartTime")) { + =
get_utc_date_time_insec(s, (const char *)val); + } else if
(!xmlStrcmp(attr->name, (const xmlChar *)"publishTime")) { +
= get_utc_date_time_insec(s, (const char *)val); + } else
if (!xmlStrcmp(attr->name, (const xmlChar *)"minimumUpdatePeriod"))
{ + = get_duration_insec(s, (const char *)val); +
} else if (!xmlStrcmp(attr->name, (const xmlChar
*)"timeShiftBufferDepth")) { + = get_duration_insec(s,
(const char *)val); + } else if (!xmlStrcmp(attr->name, (const
xmlChar *)"minBufferTime")) { + =
get_duration_insec(s, (const char *)val); + } else if
(!xmlStrcmp(attr->name, (const xmlChar
*)"suggestedPresentationDelay" { + =
get_duration_insec(s, (const char *)val); + } else if
(!xmlStrcmp(attr->name, (const xmlChar
*)"mediaPresentationDuration") { + =
get_duration_insec(s, (const char *)val); + } + attr =
attr->next; + xmlFree( + } + + mpd_baseurl_node
= find_child_node_by_name(node, "BaseURL"); + + // at now we
can handle only one period, with the longest duration + node =
xmlFirstElementChild(node); + while (node) { + if
(!xmlStrcmp(node->name, (const xmlChar *)"Period")) { +
= 0; + = 0; + = node->properties; +
(attr) { + = xmlGetProp(node, attr->name); +
(!xmlStrcmp(attr->name, (const xmlChar
*)"duration")) { + = get_duration_insec(s, (const char
*)val); + else if (!xmlStrcmp(attr->name, (const xmlChar
*)"start")) { + = get_duration_insec(s, (const char
*)val); + + = attr->next; +
+ + ((perdiod_duration_sec) >=
(c->period_duration)) { + = node; + =
perdiod_duration_sec; + = perdiod_start_sec; +
(c->period_start > 0) + = c->period_duration; +
+ } + node = xmlNextElementSibling(node);
+ } + if (!period_node) { + av_log(s AV_LOG_ERROR,
"Unable to parse '%s' - missing Period node\n", url); +
ret = AVERROR_INVALIDDATA; + goto cleanup; + } + +
adaptionset_node = xmlFirstElementChild(period_no + while
(adaptionset_node) { + if (!xmlStrcmp(adaptionset_node-> (const
xmlChar *)"BaseURL")) { + = adaptionset_node; +
} else if (!xmlStrcmp(adaptionset_node-> (const xmlChar
*)"AdaptationSet")) { + url, adaptionset_node,
mpd_baseurl_node, period_baseurl_node); + } + adaption
= xmlNextElementSibling(adaption + } + if (c->cur_video) {
+ c->cur_v = video_rep_idx; + c->cur_v = 1; +
av_log(s AV_LOG_VERBOSE, "rep_idx[%d]\n",
(int)c->cur_video->rep_idx); + av_log(s AV_LOG_VERBOSE,
"rep_count[%d]\n", (int)video_rep_idx); + } + if
(c->cur_audio) { + c->cur_a = audio_rep_idx; + }
+cleanup: + /*free the document */ + xmlFreeDoc(doc); +
xmlCleanupParser + } else { + av_log(s, AV_LOG_ERROR, "Unable
to read to offset '%s'\n", url); + ret =
AVERROR_INVALIDDATA; + } + + av_free(new_url); + av_free(buffer);
+ if (close_in) { + avio_close(in); + } + return ret; +} +
+static int64_t calc_cur_seg_no(AVFormatContex *s, struct representation *pls)
+{ + DASHContext *c = s->priv_data; + int64_t num = 0; +
int64_t start_time_offset = 0; + + if (c->is_live) { + if
(pls->n_fragments) { + num = pls->first_seq_no; + }
else if (pls->n_timelines) { + start_ti =
get_segment_start_time_based_o 0xFFFFFFFF) -
pls->timelines[pls->first_seq_ // total duration of playlist +
if (start_time_offset < 60 * pls->fragment_timescale) +
= 0; + else + = start_time_offset - 60 *
pls->fragment_timescale; + + num =
calc_next_seg_no_from_timeline pls->timelines[pls->first_seq_ +
start_time_offset); + if (num == -1) + =
pls->first_seq_no; + } else { + if
(pls->presentation_timeoffset) { + =
pls->presentation_timeoffset * pls->fragment_timescale /
pls->fragment_duration; + } else if (c->publish_time > 0)
{ + = pls->first_seq_no + (((c->publish_time -
c->availability_start_time) - c->suggested_presentation_dela *
pls->fragment_timescale) / pls->fragment_duration; + } else {
+ = pls->first_seq_no + (((get_current_time_in_sec() -
c->availability_start_time) - c->suggested_presentation_dela *
pls->fragment_timescale) / pls->fragment_duration; + } +
} + } else { + num = pls->first_seq_no; + } + return
num; +} + +static int64_t calc_min_seg_no(AVFormatContex *s, struct
representation *pls) +{ + DASHContext *c = s->priv_data; + int64_t
num = 0; + + if (c->is_live && pls->fragment_duration) { +
num = pls->first_seq_no + (((get_current_time_in_sec() -
c->availability_start_time) - c->time_shift_buffer_depth) *
pls->fragment_timescale) / pls->fragment_duration; + } else { +
num = pls->first_seq_no; + } + return num; +} + +static int64_t
calc_max_seg_no(struct representation *pls) +{ + DASHContext *c =
pls->parent->priv_data; + int64_t num = 0; + + if
(pls->n_fragments) { + num = pls->first_seq_no +
pls->n_fragments - 1; + } else if (pls->n_timelines) { + int
i = 0; + num = pls->first_seq_no + pls->n_timelines - 1; +
for (i = 0; i < pls->n_timelines; i++) { + num +=
pls->timelines[i]->r; + } + } else if (c->is_live) { +
num = pls->first_seq_no + (((get_current_time_in_sec() -
c->availability_start_time)) * pls->fragment_timescale) /
pls->fragment_duration; + } else { + num = pls->first_seq_no
+ (c->media_presentation_duratio * pls->fragment_timescale) /
pls->fragment_duration; + } + + return num; +} + +static void
move_timelines(struct representation *rep_src, struct representation *rep_dest)
+{ + if (rep_dest && rep_src ) { + free_timelines_l +
rep_dest->timeli = rep_src->timelines; +
rep_dest->n_time = rep_src->n_timelines; + rep_dest->first_ =
rep_src->first_seq_no; + rep_dest->last_s =
calc_max_seg_no(rep_dest); + rep_src->timelin = NULL; +
rep_src->n_timel = 0; + rep_dest->cur_se =
rep_src->cur_seq_no; + } +} + +static void move_segments(struct
representation *rep_src, struct representation *rep_dest) +{ + if
(rep_dest && rep_src ) { + free_fragment_li + if
(rep_src->start_number > (rep_dest->start_number +
rep_dest->n_fragments)) + rep_dest = 0; + else +
rep_dest += rep_src->start_number - rep_dest->start_number; +
rep_dest->fragme = rep_src->fragments; + rep_dest->n_frag
= rep_src->n_fragments; + rep_dest->parent = rep_src->parent;
+ rep_dest->last_s = calc_max_seg_no(rep_dest); +
rep_src->fragmen = NULL; + rep_src->n_fragm = 0; + } +} +
+ +static int refresh_manifest(AVFormatConte *s) +{ + + int ret = 0; +
DASHContext *c = s->priv_data; + + // save current context +
struct representation *cur_video = c->cur_video; + struct
representation *cur_audio = c->cur_audio; + char *base_url =
c->base_url; + + c->base_url = NULL; + c->cur_video = NULL;
+ c->cur_audio = NULL; + ret = parse_manifest(s, s->filename,
NULL); + if (ret) + goto finish; + + if (cur_video &&
cur_video->timelines || cur_audio && cur_audio->timelines) { +
// calc current time + int64_t currentVideoTime = 0; +
int64_t currentAudioTime = 0; + if (cur_video &&
cur_video->timelines) + currentV =
get_segment_start_time_based_o cur_video->cur_seq_no) /
cur_video->fragment_timescale; + if (cur_audio &&
cur_audio->timelines) + currentA =
get_segment_start_time_based_o cur_audio->cur_seq_no) /
cur_audio->fragment_timescale; + // update segments + if
(cur_video && cur_video->timelines) { + c->cur_v =
calc_next_seg_no_from_timeline currentVideoTime *
cur_video->fragment_timescale - 1); + if
(c->cur_video->cur_seq_no >= 0) { + cur_video); +
} + } + if (cur_audio &&
cur_audio->timelines) { + c->cur_a =
calc_next_seg_no_from_timeline currentAudioTime *
cur_audio->fragment_timescale - 1); + if
(c->cur_audio->cur_seq_no >= 0) { + mo cur_audio); +
} + } + } + if (cur_video &&
cur_video->fragments) { + move_segments(c- cur_video); + } +
if (cur_audio && cur_audio->fragments) { + move_segments(c-
cur_audio); + } + +finish: + // restore context + if
(c->base_url) + av_free(base_url + else + c->base_url
= base_url; + if (c->cur_audio) + free_representat + if
(c->cur_video) + free_representat + c->cur_audio = cur_audio;
+ c->cur_video = cur_video; + return ret; +} + +static struct
fragment *get_current_fragment(struct representation *pls) +{ + int64_t
min_seq_no = 0; + int64_t max_seq_no = 0; + struct fragment *seg =
NULL; + struct fragment *seg_ptr = NULL; + DASHContext *c =
pls->parent->priv_data; + + while ((
!ff_check_interrupt(c->interru pls->n_fragments > 0)) { + if
(pls->cur_seq_no < pls->n_fragments) { + seg_ptr =
pls->fragments[pls->cur_seq_no + seg =
av_mallocz(sizeof(struct fragment)); + if (!seg) { +
NULL; + } + seg->url = av_strdup(seg_ptr->url);
+ if (!seg->url) { + + NULL;
+ } + seg->siz = seg_ptr->size; +
seg->url = seg_ptr->url_offset; + return seg; + }
else if (c->is_live) { + av_uslee + refresh_ +
} else { + break; + } + } + if (c->is_live) {
+ while (!ff_check_interrupt(c->interr { + min_seq_ =
calc_min_seg_no(pls->parent, pls); + max_seq_ =
calc_max_seg_no(pls); + + if (pls->cur_seq_no <= min_seq_no)
{ + AV_LOG_VERBOSE, "old fragment: cur[%"PRId64"]
min[%"PRId64"] max[%"PRId64"], playlist %d\n",
(int64_t)pls->cur_seq_no, min_seq_no, max_seq_no, (int)pls->rep_idx); +
(c->is_live && (pls->timelines ||
pls->fragments)) { + + +
= calc_cur_seg_no(pls->parent, pls); + } else if
(pls->cur_seq_no > max_seq_no) { + AV_LOG_VERBOSE,
"new fragment: min[%"PRId64"] max[%"PRId64"], playlist
%d\n", min_seq_no, max_seq_no, (int)pls->rep_idx); + +
(c->is_live && (pls->timelines ||
pls->fragments)) { + + +
+ } + break; + } + seg =
av_mallocz(sizeof(struct fragment)); + if (!seg) { + return
NULL; + } + } else if (pls->cur_seq_no <=
pls->last_seq_no) { + seg = av_mallocz(sizeof(struct fragment)); +
if (!seg) { + return NULL; + } + } + if (seg)
{ + if (pls->tmp_url_type != TMP_URL_TYPE_UNSPECIFIED) { +
int64_t val = pls->tmp_url_type == TMP_URL_TYPE_NUMBER ?
pls->cur_seq_no : get_segment_start_time_based_o pls->cur_seq_no); +
int size = snprintf(NULL, 0, pls->url_template_format, val); // calc
needed buffer size + + if (size > 0) { +
*tmp_val = av_mallocz(size + 1); + size+1,
pls->url_template_format, val); + =
av_strireplace(pls->url_templa pls->url_template_pattern, tmp_val); +
+ } + } + + if (!seg->url) { +
av_log(p AV_LOG_ERROR, "Unable to resolve template url
'%s'\n", pls->url_template); + seg->url =
av_strdup(pls->url_template); + if (!seg->url) { +
NULL; + } + } + + seg->size = -1; + }
+ + return seg; +} + +enum ReadFromURLMode { + READ_NORMAL, +
READ_COMPLETE, +}; + +static int read_from_url(struct representation *pls,
struct fragment *seg, + *buf, int buf_size, +
ReadFromURLMode mode) +{ + int ret; + + /* limit read if the fragment
was only a part of a file */ + if (seg->size >= 0) + buf_size
= FFMIN(buf_size, pls->cur_seg_size - pls->cur_seg_offset); + + if
(mode == READ_COMPLETE) { + ret = avio_read(pls->input, buf,
buf_size); + if (ret < buf_size) { + av_log(p
AV_LOG_WARNING, "Could not read complete fragment.\n"); + } +
} else { + ret = avio_read(pls->input, buf, buf_size); + } +
if (ret > 0) + pls->cur_seg_off += ret; + + return ret;
+} + +static int open_input(DASHContext *c, struct representation *pls,
struct fragment *seg) +{ + AVDictionary *opts = NULL; + char
url[MAX_URL_SIZE]; + int ret; + + // broker prior HTTP options that
should be consistent across requests + av_dict_set(&opts,
"user-agent", c->user_agent, 0); + av_dict_set(&opts,
"cookies", c->cookies, 0); + av_dict_set(&opts,
"headers", c->headers, 0); + if (c->is_live) { +
av_dict_set(&opt "seekable", "0", 0); + } + + if
(seg->size >= 0) { + /* try to restrict the HTTP request to the
part we want + * (if this is in fact a HTTP request) */ +
av_dict_set_int( "offset", seg->url_offset, 0); +
av_dict_set_int( "end_offset", seg->url_offset + seg->size, 0);
+ } + + ff_make_absolute_url(url MAX_URL_SIZE, c->base_url,
seg->url); + av_log(pls->parent, AV_LOG_VERBOSE, "DASH request
for url '%s', offset %"PRId64", playlist %d\n", +
url, seg->url_offset, pls->rep_idx); + ret =
open_url(pls->parent, &pls->input, url, c->avio_opts, opts, NULL);
+ if (ret < 0) { + goto cleanup; + } + + /* Seek to the
requested position. If this was a HTTP request, the offset + * should
already be where want it to, but this allows e.g. local testing + *
without a HTTP server. */ + if (!ret && seg->url_offset) { +
int64_t seekret = avio_seek(pls->input, seg->url_offset, SEEK_SET);
+ if (seekret < 0) { + av_log(p AV_LOG_ERROR, "Unable
to seek to offset %"PRId64" of DASH fragment '%s'\n",
seg->url_offset, seg->url); + ret = (int) seekret; +
ff_forma &pls->input); + } + } + +cleanup: +
av_dict_free(&opts); + pls->cur_seg_offset = 0; +
pls->cur_seg_size = seg->size; + return ret; +} + +static int
update_init_section(struct representation *pls) +{ + static const int
max_init_section_size = 1024*1024; + DASHContext *c =
pls->parent->priv_data; + int64_t sec_size = 0; + int64_t urlsize
= 0; + int ret = 0; + + /* read init section only once per
representation */ + if (!pls->init_section || pls->init_sec_buf) { +
return 0; + } + + ret = open_input(c, pls,
pls->init_section); + if (ret < 0) { + av_log(pls->pare
AV_LOG_WARNING, "Failed to open an initialization section in playlist
%d\n", pls->rep_idx); + return ret; + } + + if
(pls->init_section->size >= 0) { + sec_size =
pls->init_section->size; + } else if ((urlsize =
avio_size(pls->input)) >= 0) { + sec_size = urlsize; + } else
{ + sec_size = max_init_section_size; + } +
av_log(pls->parent, AV_LOG_DEBUG, "Downloading an initialization section
of size %"PRId64"\n", sec_size); + sec_size = FFMIN(sec_size,
max_init_section_size); + av_fast_malloc(&pls->ini
&pls->init_sec_buf_size, sec_size); + ret = read_from_url(pls,
pls->init_section, pls->init_sec_buf, pls->init_sec_buf_size,
READ_COMPLETE); + ff_format_io_close(pls-> &pls->input); + if
(ret < 0) + return ret; + + if (pls->fix_multiple_stsd_order
&& pls->rep_idx > 0) { + uint8_t **stsd_entries = NULL;
+ int *stsd_entries_size = NULL; + int i = 4; + + while
(i <= (ret - 4)) { + // find start stsd atom + if
(!memcmp(pls->init_sec_buf + i, "stsd", 4)) { + 1B
version + 3B flags + 4B num of entries */ +
stsd_first_offset = i + 8; + stsd_offset = 0;
+ j = 0; + stsd_count =
AV_RB32(pls->init_sec_buf + stsd_first_offset); + += 4; +
(stsd_count != pls->rep_count) { + +
+ + find all stsd entries +
= av_mallocz_array(stsd_count, sizeof(*stsd_entries)); +
= av_mallocz_array(stsd_count, sizeof(*stsd_entries_size)); +
(j = 0; j < stsd_count; ++j) { + 4B - size +
4B - format */ + = AV_RB32(pls->init_sec_buf +
stsd_first_offset + stsd_offset); + =
av_malloc(stsd_entries_size[j] + pls->init_sec_buf +
stsd_first_offset + stsd_offset, stsd_entries_size[j]); + +=
stsd_entries_size[j]; + + reorder stsd
entries + as first put stsd entry for current representation
+ = pls->rep_idx; + = stsd_first_offset; +
+ stsd_offset, stsd_entries[j], stsd_entries_size[j]); +
+= stsd_entries_size[j]; + (j = 0; j <
stsd_count; ++j) { + (j != pls->rep_idx) { +
+ stsd_offset, stsd_entries[j], stsd_entries_size[j]); + +=
stsd_entries_size[j]; + + +
+ + + + } +
i++; + } + } + + av_log(pls->parent, AV_LOG_TRACE,
"pls[%p] init section size[%d]\n", pls, (int)ret); +
pls->init_sec_data_len = ret; + pls->init_sec_buf_read_o = 0; + +
return 0; +} + +static int64_t seek_data(void *opaque, int64_t offset, int
whence) +{ + struct representation *v = opaque; + if
(v->n_fragments && !v->init_sec_data_len) { + return
avio_seek(v->input, offset, whence); + } + + return
AVERROR(ENOSYS); +} + +static int read_data(void *opaque, uint8_t *buf, int
buf_size) +{ + int ret = 0; + struct representation *v = opaque; +
DASHContext *c = v->parent->priv_data; + +restart: + if
(!v->input) { + free_fragment(&v + v->cur_seg =
get_current_fragment(v); + if (!v->cur_seg) { + ret =
AVERROR_EOF; + goto end; + } + + /* load/update
Media Initialization Section, if any */ + ret = update_init_section(v);
+ if (ret) + goto end; + + ret = open_input(c, v,
v->cur_seg); + if (ret < 0) { + if
(ff_check_interrupt(c->interru { + end; +
= AVERROR_EXIT; + } + av_log(v AV_LOG_WARNING,
"Failed to open fragment of playlist %d\n", v->rep_idx); +
v->cur_s + goto restart; + } + } + + if
(v->init_sec_buf_read_offset < v->init_sec_data_len) { + /*
Push init section out first before first actual fragment */ + int
copy_size = FFMIN(v->init_sec_data_len - v->init_sec_buf_read_offset,
buf_size); + memcpy(buf, v->init_sec_buf, copy_size); +
v->init_sec_buf_ += copy_size; + ret = copy_size; + goto
end; + } + + /* check the v->cur_seg, if it is null, get current
and double check if the new v->cur_seg*/ + if (!v->cur_seg) { +
v->cur_seg = get_current_fragment(v); + } + if (!v->cur_seg) {
+ ret = AVERROR_EOF; + goto end; + } + ret =
read_from_url(v, v->cur_seg, buf, buf_size, READ_NORMAL); + if (ret >
0) + goto end; + + if (!v->is_restart_needed) +
v->cur_seq_no++; + v->is_restart_needed = 1; + +/* +
ff_format_io_close(v->pa &v->input); + v->cur_seq_no++; +
goto restart; +*/ +end: + return ret; +} + +static int
save_avio_options(AVFormatCont *s) +{ + DASHContext *c = s->priv_data;
+ const char *opts[] = { "headers", "user_agent",
"user-agent", "cookies", NULL }, **opt = opts; + uint8_t
*buf = NULL; + int ret = 0; + + while (*opt) { + if
(av_opt_get(s->pb, *opt, AV_OPT_SEARCH_CHILDREN, &buf) >= 0) { +
if (buf[0] != '\0') { + =
av_dict_set(&c->avio_opts, *opt, buf, AV_DICT_DONT_STRDUP_VAL); +
(ret < 0) + ret; + } + } +
opt++; + } + + return ret; +} + +static int
nested_io_open(AVFormatContext *s, AVIOContext **pb, const char *url, +
flags, AVDictionary **opts) +{ + av_log(s, AV_LOG_ERROR, +
"A DASH playlist item '%s' referred to an external file
'%s'. " + "Opening this file was forbidden for
security reasons\n", + s->filenam url); + return
AVERROR(EPERM); +} + +static int reopen_demux_for_component(AVF *s, struct
representation *pls) +{ + DASHContext *c = s->priv_data; +
AVInputFormat *in_fmt = NULL; + AVDictionary *in_fmt_opts = NULL; +
uint8_t *avio_ctx_buffer = NULL; + int ret = 0; + + if (pls->ctx)
{ + /* note: the internal buffer could have changed, and be !=
avio_ctx_buffer */ + av_freep(&pls->p +
memset(&pls->pb, 0x00, sizeof(AVIOContext)); +
pls->ctx->pb = NULL; + avformat_close_i + pls->ctx =
NULL; + } + if (!(pls->ctx = avformat_alloc_context())) { +
ret = AVERROR(ENOMEM); + goto fail; + } + + avio_ctx_buffer =
av_malloc(INITIAL_BUFFER_SIZE) + if (!avio_ctx_buffer ) { + ret =
AVERROR(ENOMEM); + avformat_free_co + pls->ctx = NULL; +
goto fail; + } + if (c->is_live) { + ffio_init_contex
avio_ctx_buffer , INITIAL_BUFFER_SIZE, 0, pls, read_data, NULL, NULL); + }
else { + ffio_init_contex avio_ctx_buffer , INITIAL_BUFFER_SIZE, 0,
pls, read_data, NULL, seek_data); + } + pls->pb.seekable = 0; + +
if ((ret = ff_copy_whiteblacklists(pls->c s)) < 0) + goto fail;
+ + pls->ctx->flags = AVFMT_FLAG_CUSTOM_IO; +
pls->ctx->probesize = 1024 * 4; + pls->ctx->max_analyze_du = 4
* AV_TIME_BASE; + ret = av_probe_input_buffer(&pls->pb &in_fmt,
"", NULL, 0, 0); + if (ret < 0) { + av_log(s,
AV_LOG_ERROR, "Error when loading first fragment, playlist %d\n",
(int)pls->rep_idx); + avformat_free_co + pls->ctx = NULL;
+ goto fail; + } + + pls->ctx->pb = &pls->pb; +
pls->ctx->io_open = nested_io_open; + + // provide additional
information from mpd if available + ret =
avformat_open_input(&pls->ctx, "", in_fmt, &in_fmt_opts);
//pls->init_section->url + av_dict_free(&in_fmt_opt + if (ret
< 0) + goto fail; + if (pls->n_fragments) { + ret =
avformat_find_stream_info(pls- NULL); + if (ret < 0) +
goto fail; + } + +fail: + return ret; +} + +static int
open_demux_for_component(AVFor *s, struct representation *pls) +{ + int
ret = 0; + int i; + + pls->parent = s; + pls->cur_seq_no =
calc_cur_seg_no(s, pls); + pls->last_seq_no = calc_max_seg_no(pls); +
+ ret = reopen_demux_for_component(s, pls); + if (ret < 0) { +
goto fail; + } + for (i = 0; i < pls->ctx->nb_streams; i++) {
+ AVStream *st = avformat_new_stream(s, NULL); + AVStream *ist
= pls->ctx->streams[i]; + if (!st) { + ret =
AVERROR(ENOMEM); + goto fail; + } + st->id = i;
+ avcodec_paramete pls->ctx->streams[i]->codecpar +
avpriv_set_pts_i ist->pts_wrap_bits, ist->time_base.num,
ist->time_base.den); + } + + return 0; +fail: + return ret;
+} + +static int dash_read_header(AVFormatConte *s) +{ + void *u =
(s->flags & AVFMT_FLAG_CUSTOM_IO) ? NULL : s->pb; + DASHContext
*c = s->priv_data; + int ret = 0; + int stream_index = 0; + +
c->interrupt_callback = &s->interrupt_callback; + // if the URL
context is good, read important options we must broker later + if (u) { +
update_options(& "user-agent", u); +
update_options(& "cookies", u); + update_options(&
"headers", u); + } + + if ((ret = parse_manifest(s,
s->filename, s->pb)) < 0) + goto fail; + + if ((ret =
save_avio_options(s)) < 0) + goto fail; + + /* If this
isn't a live stream, fill the total duration of the + * stream. */ +
if (!c->is_live) { + s->duration = (int64_t)
c->media_presentation_duration * AV_TIME_BASE; + } + + /* Open the
demuxer for curent video and current audio components if available */ + if
(!ret && c->cur_video) { + ret = open_demux_for_component(s,
c->cur_video); + if (!ret) { + c->cur_v =
stream_index; + ++stream + } else { + free_rep
+ c->cur_v = NULL; + } + } + + if (!ret
&& c->cur_audio) { + ret = open_demux_for_component(s,
c->cur_audio); + if (!ret) { + c->cur_a =
stream_index; + ++stream + } else { + free_rep
+ c->cur_a = NULL; + } + } + + if
(!stream_index) { + ret = AVERROR_INVALIDDATA; + goto fail; +
} + + /* Create a program */ + if (!ret) { + AVProgram
*program; + program = av_new_program(s, 0); + if (!program) {
+ goto fail; + } + + if (c->cur_video) { +
av_progr 0, c->cur_video->stream_index); + } + if
(c->cur_audio) { + av_progr 0,
c->cur_audio->stream_index); + } + } + + return 0;
+fail: + return ret; +} + +static int dash_read_packet(AVFormatConte *s,
AVPacket *pkt) +{ + DASHContext *c = s->priv_data; + int ret = 0;
+ struct representation *cur = NULL; + + if (!c->cur_audio
&& !c->cur_video ) { + return AVERROR_INVALIDDATA; + }
+ if (c->cur_audio && !c->cur_video) { + cur =
c->cur_audio; + } else if (!c->cur_audio && c->cur_video)
{ + cur = c->cur_video; + } else if
(c->cur_video->cur_timestamp < c->cur_audio->cur_timestamp) { +
cur = c->cur_video; + } else { + cur = c->cur_audio;
+ } + + if (cur->ctx) { + while
(!ff_check_interrupt(c->interr && !ret) { + ret =
av_read_frame(cur->ctx, pkt); + if (ret >= 0) { +
If we got a packet, return it */ + =
av_rescale(pkt->pts, (int64_t)cur->ctx->streams[0]- * 90000,
cur->ctx->streams[0]->time_bas + =
cur->stream_index; + 0; + } + if
(cur->is_restart_needed) { +
(!ff_check_interrupt(c->interr { + = 0; +
= 0; + (cur->input) + &cur->input);
+ = reopen_demux_for_component(s, cur); +
(c->is_live && ret) { + + +
+ + + = 0; +
} + + } + } + return AVERROR_EOF; +} + +static int
dash_close(AVFormatContext *s) +{ + DASHContext *c = s->priv_data; +
if (c->cur_audio) { + free_representat + } + if
(c->cur_video) { + free_representat + } + +
av_freep(&c->cookies); + av_freep(&c->user_agent) +
av_dict_free(&c->avio_op + av_freep(&c->base_url); +
return 0; +} + +static int dash_seek(AVFormatContext *s, struct
representation *pls, int64_t seek_pos_msec, int flags) +{ + int ret = 0;
+ int i = 0; + int j = 0; + int64_t duration = 0; + +
av_log(pls->parent, AV_LOG_VERBOSE, "DASH seek pos[%"PRId64"ms],
playlist %d\n", seek_pos_msec, pls->rep_idx); + + // single
fragment mode + if (pls->n_fragments == 1) { +
pls->cur_timesta = 0; + pls->cur_seg_off = 0; +
ff_read_frame_fl + return av_seek_frame(pls->ctx, -1, seek_pos_msec
* 1000, flags); + } + + if (pls->input) + ff_format_io_clo
&pls->input); + + // find the nearest fragment + if
(pls->n_timelines > 0 && pls->fragment_timescale > 0) { +
int64_t num = pls->first_seq_no; + av_log(pls->pare
AV_LOG_VERBOSE, "dash_seek with SegmentTimeline start n_timelines[%d] "
+ "l playlist %d.\n", + (i
(int64_t)pls->last_seq_no, (int)pls->rep_idx); + for (i = 0; i
< pls->n_timelines; i++) { + if (pls->timelines[i]->t
> 0) { + = pls->timelines[i]->t; + } +
duration += pls->timelines[i]->d; + if
(seek_pos_msec < ((duration * 1000) / pls->fragment_timescale)) { +
set_seq_num; + } + for (j = 0; j <
pls->timelines[i]->r; j++) { + +=
pls->timelines[i]->d; + +
(seek_pos_msec < ((duration * 1000) / pls->fragment_timescale)) { +
set_seq_num; + + } +
num++; + } + +set_seq_num: + pls->cur_seq_no = num >
pls->last_seq_no ? pls->last_seq_no : num; + av_log(pls->pare
AV_LOG_VERBOSE, "dash_seek with SegmentTimeline end
cur_seq_no[%"PRId64"], playlist %d.\n", + (i
(int)pls->rep_idx); + } else if (pls->fragment_duration > 0) { +
pls->cur_seq_no = pls->first_seq_no + ((seek_pos_msec *
pls->fragment_timescale) / pls->fragment_duration) / 1000; + } else {
+ av_log(pls->pare AV_LOG_ERROR, "dash_seek missing
fragment_duration\n"); + pls->cur_seq_no = pls->first_seq_no;
+ } + pls->cur_timestamp = 0; + pls->cur_seg_offset = 0; +
pls->init_sec_buf_read_o = 0; + ret = reopen_demux_for_component(s,
pls); + + return ret; +} + +static int dash_read_seek(AVFormatContext
*s, int stream_index, int64_t timestamp, int flags) +{ + int ret = 0; +
DASHContext *c = s->priv_data; + int64_t seek_pos_msec =
av_rescale_rnd(timestamp, 1000, + + &
AVSEEK_FLAG_BACKWARD ? + : AV_ROUND_UP); + if ((flags
& AVSEEK_FLAG_BYTE) || c->is_live) + return AVERROR(ENOSYS); +
if (c->cur_audio) { + ret = dash_seek(s, c->cur_audio,
seek_pos_msec, flags); + } + if (!ret && c->cur_video) { +
ret = dash_seek(s, c->cur_video, seek_pos_msec, flags); + } +
return ret; +} + +static int dash_probe(AVProbeData *p) +{ + if
(!av_stristr(p->buf, "<MPD")) + return 0; + + if
(av_stristr(p->buf, "dash:profile:isoff-on-demand: || +
av_stristr(p->bu "dash:profile:isoff-live:2011" || +
av_stristr(p->bu "dash:profile:isoff-live:2012" || +
av_stristr(p->bu "dash:profile:isoff-main:2011" { + return
AVPROBE_SCORE_MAX; + } + if (av_stristr(p->buf,
"dash:profile")) { + return AVPROBE_SCORE_MAX / 2; + } +
+ return 0; +} + +#define OFFSET(x) offsetof(DASHContext, x) +#define
FLAGS AV_OPT_FLAG_DECODING_PARAM +static const AVOption dash_options[] = { +
{NULL} +}; + +static const AVClass dash_class = { + .class_name =
"dash", + .item_name = av_default_item_name, + .option =
dash_options, + .version = LIBAVUTIL_VERSION_INT, +}; +
+AVInputFormat ff_dash_demuxer = { + .name = "dash", +
.long_name = NULL_IF_CONFIG_SMALL("Dynamic Adaptive Streaming over
HTTP"), + .priv_class = &dash_class, + .priv_data_size =
sizeof(DASHContext), + .read_probe = dash_probe, + .read_header
= dash_read_header, + .read_packet = dash_read_packet, + .read_close
= dash_close, + .read_seek = dash_read_seek, + .flags
= AVFMT_NO_BYTE_SEEK, +}; -- 2.11.0 (Apple Git-81)
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel