As a follow-up, verifying that a file in S3 is actually not corrupted is
best done from an EC2 instance running in the same region as your S3
bucket, so you don't incur download bandwidth charges or the slow pipe
out of AWS.
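
For illustration, a verification run from such an instance could look
like the rough sketch below. This isn't from the thread: it assumes the
boto3 library is installed with credentials configured, that the object
was uploaded in a single (non-multipart) PUT so its ETag is the plain
MD5 of the content, and the bucket, key, and local path names are made
up.

    import hashlib

    import boto3  # assumption: boto3 installed, credentials configured

    # Hypothetical names -- substitute your own bucket, key and local path.
    BUCKET = "my-bucket"
    KEY = "backups/archive.tar.gz"
    LOCAL_PATH = "/data/backups/archive.tar.gz"

    def local_md5(path, chunk_size=8 * 1024 * 1024):
        """MD5 of a local file, read in chunks to keep memory use flat."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    s3 = boto3.client("s3")

    # HEAD request only: fetches the object's ETag and metadata without
    # downloading any data.
    head = s3.head_object(Bucket=BUCKET, Key=KEY)
    etag = head["ETag"].strip('"')

    if local_md5(LOCAL_PATH) == etag:
        print("local MD5 matches the S3 ETag")
    else:
        print("mismatch -- note the ETag is not a plain MD5 for multipart uploads")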


On Wed, Feb 19, 2014 at 7:47 PM, Matt Domsch <m...@domsch.com> wrote:

> On Wed, Feb 19, 2014 at 5:07 PM, Forrest Aldrich <for...@gmail.com> wrote:
>
>>  I had an interesting exchange with Amazon today, where I put in support
>> tickets asking:
>>
>>   * How can we determine how much space our buckets (or folders) are
>> using?
>>
>
> S3 will gladly tell you through their usage reports, which are calculated
> daily at least, maybe hourly.  If it's good enough for AWS to use as the
> measure to charge you each month, that's probably sufficient.
>
>
>
>>   * Is there a way to do a checksum comparison with local-file vs s3-file
>> to determine integrity?
>>
>
> If you use a recent version of s3cmd, particularly with
> --cache-file=<filename>, you _could_ do so.  At upload time it stores the
> md5sum of the file in the cache file and also records the md5sum in the
> object's metadata on S3.  There isn't a trivial s3cmd command to do the
> checking automatically, though.
>
>
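
Picking up the quoted point about the md5sum being stored in the
object's metadata: a rough sketch of what that missing automatic check
could look like is below. It is not an s3cmd feature; it assumes boto3
for the HEAD request, and it assumes s3cmd recorded its attributes under
an "s3cmd-attrs" metadata key in a name:value/name:value layout. Both of
those details can differ between s3cmd versions, so check what
`s3cmd info s3://bucket/key` actually shows before relying on this.

    import hashlib

    import boto3  # assumption: boto3 installed, credentials configured

    # Hypothetical names -- adjust to your own bucket, key and local file.
    BUCKET = "my-bucket"
    KEY = "backups/archive.tar.gz"
    LOCAL_PATH = "/data/backups/archive.tar.gz"

    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=BUCKET, Key=KEY)

    # Assumed layout: s3cmd records file attributes, including the md5, as
    # "name:value/name:value/..." under the "s3cmd-attrs" user-metadata key.
    attrs = head.get("Metadata", {}).get("s3cmd-attrs", "")
    stored = dict(item.split(":", 1) for item in attrs.split("/") if ":" in item)
    remote_md5 = stored.get("md5")

    # Whole-file read for brevity; chunk it for large files as in the
    # earlier sketch.
    with open(LOCAL_PATH, "rb") as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()

    if remote_md5 is None:
        print("no md5 recorded in this object's metadata")
    elif remote_md5 == local_md5:
        print("local file matches the md5 s3cmd recorded at upload time")
    else:
        print("md5 mismatch -- one of the copies may be corrupted")

Compared with checking the ETag, comparing against the md5 that s3cmd
stored at upload time also works for multipart uploads, where the ETag
is not a plain MD5.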