This is an apology for forgetting to give my previous post a title. That post
was about my plan to develop a backup to S3 in which the filepath/name is
replaced by just the 32-character hex MD5. In case you deleted the post
thinking it was spam, I have attached the body of the post as a text file.
All comments are welcome.
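
A rough sketch of that naming scheme, assuming Python with boto3 (the bucket
and file names below are made up, and it hashes the file contents rather than
the path, which the plan leaves open):

import hashlib
import boto3

def md5_hex(path, chunk_size=1024 * 1024):
    """Return the 32-character hex MD5 of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

s3 = boto3.client("s3")
bucket = "xyztestbucket"                   # hypothetical bucket
local_path = "photos/2011/img_0001.jpg"    # hypothetical file

key = md5_hex(local_path)                  # e.g. "9e107d9d372bb6826bd81d3542a419d6"
s3.upload_file(local_path, bucket, key)    # object key is just the MD5
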
I'm wondering if someone could help explain:
1. Can you tell me if --files-from is an available option for the ls command?
I've experimented to find out, but without success. (Example: s3cmd
-r --files-from=testlist.txt ls s3://xyztestbucket). Probably not, but I
just wanted to check. It's not clear from the documentation.
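
One possible workaround, sketched here in Python with boto3 rather than
s3cmd (the bucket and list file names are taken from the example above), is
to check each key from the list individually with a HEAD request; note this
costs one request per key:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "xyztestbucket"                   # hypothetical bucket

# testlist.txt: one key per line, as in the --files-from example
with open("testlist.txt") as f:
    keys = [line.strip() for line in f if line.strip()]

for key in keys:
    try:
        head = s3.head_object(Bucket=bucket, Key=key)   # one request per key
        print(key, head["ContentLength"])
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            print(key, "not found")
        else:
            raise
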
Requests are the latter, sort of. An 'ls' command is one request regardless
of how many results it returns (up to 1000). A 'get' request is a download of
a single file, although a large file may be downloaded in several parts, in
which case each part is a request. To get a better idea, you can read the
details on Amazon's S3 pricing page.
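
As a sketch of the multi-part case, assuming Python with boto3 rather than
s3cmd (bucket, key, and path are made up), the transfer configuration
controls when a single download is split into several ranged GETs, each of
which counts as its own request:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above multipart_threshold are fetched in multipart_chunksize pieces;
# each piece is a separate ranged GET request.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # 8 MB
    multipart_chunksize=8 * 1024 * 1024,
)

s3.download_file(
    "xyztestbucket",                        # hypothetical bucket
    "9e107d9d372bb6826bd81d3542a419d6",     # hypothetical MD5-named key
    "restore/img_0001.jpg",                 # hypothetical local path
    Config=config,
)
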
[ls] doesn't honor the --files-from option. [ls] simply asks S3 for all
the files in a bucket, possibly recursively, starting from a given prefix.
Jeremy is correct that it doesn't matter whether a request returns 0 bytes or
a list of 1000 objects; it's counted as one request. Most operations have a
flat per-request cost.
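
A sketch of what that prefix-based listing looks like at the API level,
assuming Python with boto3 (the bucket and prefix are made up); S3 has no
real directories, only keys that share a prefix:

import boto3

s3 = boto3.client("s3")

# One LIST request; returns up to 1000 keys under the given prefix.
resp = s3.list_objects_v2(
    Bucket="xyztestbucket",    # hypothetical bucket
    Prefix="backups/",         # hypothetical prefix; "" lists the whole bucket
)

for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
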
Thanks Matt. I suspected that --files-from wouldn't work with ls.
But your mention of a limit of 1000 on list operations worries me, as my
plan was to put about 25000 files into one folder, each named with just the
32 hex character MD5 and no subfolder hierarchy. It would seem that I
could then not get a listing of all of them.

The 1000-per-request limit doesn't limit the number of objects in a bucket
or their names. It's solely to make sure the REST API doesn't get bogged
down trying to return 1M objects in a list at once. When a bucket has more
than 1000 files, it returns the first 1000, with an <IsTruncated> tag and
a marker to continue the listing from.
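
A sketch of how that continuation works, assuming Python with boto3 (s3cmd
does the equivalent loop for you); each page is one request, and the
continuation token picks up where the previous page left off:

import boto3

s3 = boto3.client("s3")
bucket = "xyztestbucket"           # hypothetical bucket

keys = []
kwargs = {"Bucket": bucket}
while True:
    resp = s3.list_objects_v2(**kwargs)           # one request per iteration
    keys.extend(obj["Key"] for obj in resp.get("Contents", []))
    if not resp.get("IsTruncated"):               # no more pages
        break
    kwargs["ContinuationToken"] = resp["NextContinuationToken"]

print(len(keys), "objects listed")                # 25000 objects -> 25 requests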