On Tue, Sep 13, 2016 at 10:37 AM, Alexandr Porunov <alexandr.poru...@gmail.com> wrote:
> Correct me if I am wrong. The algorithm is as follows:
> 1. Upload 1 MB sub-segments (up to 500 sub-segments per segment). After
> they are uploaded, use a COPY request to combine them into one 500 MB
> segment, then delete the now-unneeded 1 MB segments.
> 2. Upload up to 240 segments of 500 MB each and create a manifest for them.
>
> Is that correct? Will this algorithm be suitable for this situation?

That sounds correct - I think the COPY/bulk-delete is optional - you could also just have the top-level SLO point to the sub-segments' SLO manifests directly and leave all of the 1 MB chunks stored in Swift as-is - that could potentially help smooth out disk fullness across the cluster. I think 1 MB is big enough to amortize any connection overhead - but you could test with COPY consolidation of the backend sub-segments into larger objects too. Whatever works. Regardless, it'll still take a while to download a 120 GB object.

Do you stream the uncompressed videos directly into the encoder as you upload the compressed format - or stage to disk and re-upload? What about playback of the compressed videos - MP4 should HTTP-pseudo-stream directly to an HTML5 browser just fine!?

Cool use case! Good luck!

-Clay
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack