OK, so it appears I don't have a problem with the limit of 1000 in the REST API's list operation if I use s3cmd, since s3cmd isn't restricted by it. Regarding multipart upload, I won't be needing more than 1000 parts, as I will leave the 15 MB chunk size alone and I don't have any files anywhere near as big as that (1000 parts at 15 MB is roughly 15 GB).
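For reference, a quick back-of-the-envelope check of that ceiling in Python (15 MB is s3cmd's default --multipart-chunk-size-mb; the 1000-part figure is the thread's, and it is conservative since AWS documents a 10,000-part cap for multipart uploads):

    # Largest file coverable at a fixed chunk size, using the
    # thread's numbers; AWS's own cap is 10,000 parts per upload.
    CHUNK_MB = 15
    MAX_PARTS = 1000

    max_file_mb = CHUNK_MB * MAX_PARTS
    print(f"{max_file_mb} MB, ~{max_file_mb / 1024:.1f} GB")
    # -> 15000 MB, ~14.6 GB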
The 1000-per-request limit doesn't limit the number of objects in a bucket or their names. It's solely to make sure the REST API doesn't get bogged down trying to return 1M objects in a single list response. When a bucket has more than 1000 files, it returns the first 1000, with an IsTruncated tag and a marker from which the next request resumes the listing.
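To make the marker mechanics concrete, here is a minimal sketch of that paging loop using boto3 rather than s3cmd itself (the bucket and prefix are placeholders); s3cmd does the equivalent internally:

    import boto3

    s3 = boto3.client("s3")

    def list_all_keys(bucket, prefix=""):
        """Follow IsTruncated and the marker until the listing is complete."""
        keys, marker = [], ""
        while True:
            resp = s3.list_objects(Bucket=bucket, Prefix=prefix, Marker=marker)
            contents = resp.get("Contents", [])
            keys.extend(obj["Key"] for obj in contents)
            if not resp.get("IsTruncated"):
                return keys
            # Without a Delimiter, NextMarker is absent; the last key
            # returned serves as the marker for the next page.
            marker = resp.get("NextMarker", contents[-1]["Key"])

Twenty-five such requests would cover the 25000-object listing discussed below.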
Thanks Matt. I suspected that --files-from wouldn't work with ls. But your mention of a limit of 1000 on list operations worries me, as my plan was to put about 25000 files into one folder, each named with just its 32-hex-character MD5 and no subfolder hierarchy. It would seem that I could then not get a complete listing of them.
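For what it's worth, a sketch of how such flat MD5 keys might be derived, assuming each name is the hex digest of the file's contents (the hashing scheme is my reading of the plan, not anything s3cmd does for you):

    import hashlib

    def md5_key(path):
        """Hypothetical naming scheme: key = 32-hex-char MD5 of the file."""
        h = hashlib.md5()
        with open(path, "rb") as fh:
            for block in iter(lambda: fh.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()  # e.g. 'd41d8cd98f00b204e9800998ecf8427e'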
[ls] doesn't honor the --files-from option. [ls] simply asks S3 for all
the files in a bucket, possibly recursively, starting from a given prefix.
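Since ls won't take --files-from, one possible workaround is to filter the listing client-side; a sketch with boto3, reusing the testlist.txt and bucket names from the question below:

    import boto3

    s3 = boto3.client("s3")

    # Keys of interest, one per line, as a --files-from list would hold them.
    with open("testlist.txt") as fh:
        wanted = {line.strip() for line in fh if line.strip()}

    # Page through the bucket and keep only the wanted keys.
    found = []
    for page in s3.get_paginator("list_objects").paginate(Bucket="xyztestbucket"):
        for obj in page.get("Contents", []):
            if obj["Key"] in wanted:
                found.append(obj["Key"])

    print("\n".join(sorted(found)))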
Jeremy is correct that it doesn't matter whether a request returns 0 bytes or a list of 1000 objects; either way it's counted as one request. Most operations have a ...
Requests are the latter, sort of. An 'ls' command is one request regardless of how many results it returns (up to 1000). A 'get' request is a download of a single file, although a large file may be downloaded in several parts, in which case each part is a separate request. To get a better idea, you can read the details of how S3 prices requests.
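As a rough illustration of that accounting (the numbers are this thread's assumptions, and the sketch ignores the extra initiate/complete calls a multipart transfer also makes):

    from math import ceil

    CHUNK_MB = 15      # chunk size from the thread
    PAGE_SIZE = 1000   # keys returned per list request

    def list_requests(n_objects):
        # One request per page of up to 1000 keys.
        return max(1, ceil(n_objects / PAGE_SIZE))

    def download_requests(size_mb):
        # One GET for a small file; one per part for a chunked transfer.
        return max(1, ceil(size_mb / CHUNK_MB))

    print(list_requests(25000))    # 25 list requests for 25000 keys
    print(download_requests(100))  # a 100 MB file fetched in 7 parts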
I'm wondering if someone could help explain:
1. Can you tell me if --files-from is an available option for the ls command? I've experimented to find out but without success. (Example: s3cmd -r --files-from=testlist.txt ls s3://xyztestbucket). Probably not, but I just wanted to check. It's not clear from the documentation.