Hi all,

s3cmd 0.9.9-pre5 is now available for download on SourceForge.

Highlights of this release:

* potentially incompatible change in how 'put' and 'sync' work
* added --dry-run parameter for testing 'sync'
* added recursive 'setacl' command (development of this feature has been
sponsored by Joseph Denne from Airlock.com, thanks!)

Details of the above:

1) Non-recursive 'put' changes:
In earlier versions when you ran:
        s3cmd put blah/file1.txt s3://bucket/somewhere/
you'd end up with s3://bucket/somewhere/blah/file1.txt
The *whole* local path was appended to the given S3 URI. That's not
what a unix admin expects - when you run 'cp blah/file1.txt xyz/' the
file ends up in xyz/file1.txt, not in xyz/blah/file1.txt.
From now on s3cmd follows similar logic. So the rules are:

- if there is only one local file specified (eg blah/file1.txt) and the
remote uri doesn't end with '/' (eg s3://bkt/backup/whatever) then the
local file is copied to the given uri, ie to s3://bkt/backup/whatever

        s3cmd put blah/file1.txt s3://bkt/backup/whatever
results in:
        blah/file1.txt -> s3://bkt/backup/whatever

- if one or more local files are specified (eg blah/file1.txt,
foo/file2.jpg) and the remote uri ends with '/' (eg s3://bkt/backup/)
then only the basenames of the local files are appended to the remote uri:
        s3cmd put blah/file1.txt foo/file2.jpg s3://bkt/backup/
results in:
        blah/file1.txt -> s3://bkt/backup/file1.txt
        foo/file2.jpg  -> s3://bkt/backup/file2.jpg
This also works with wildcards, ie "put *.jpg s3://bkt/backup/" is possible.

- when multiple local files are specified and the remote uri doesn't end
with '/' you'll get an error (see the example below).
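
For instance this invocation is rejected, because there is no way to
tell which remote name each file should get (the exact error message
isn't reproduced here):
        s3cmd put blah/file1.txt foo/file2.jpg s3://bkt/backup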

2) Recursive 'put' and 'sync'
When running 'put --recursive' or 'sync' the rules are:

- when the *local* path ends with '/' only its contents are appended to
the given s3 uri.
For instance 's3cmd put --recursive /path/blah/ s3://bkt/backup/' leads to:
        /path/blah/file1.txt      -> s3://bkt/backup/file1.txt
        /path/blah/dir2/file2.jpg -> s3://bkt/backup/dir2/file2.jpg

The same goes for 's3cmd sync /path/blah/ s3://bkt/backup/' except that
'sync' will first fetch a list of remote files and upload only what's
needed, based on size and MD5 comparisons.

- however when the *local* path does *not* end with '/' then the last
component of the path is used remotely as well.
For instance 's3cmd put --recursive /path/blah s3://bkt/backup/' does:
        /path/blah/file1.txt      -> s3://bkt/backup/blah/file1.txt
        /path/blah/dir2/file2.jpg -> s3://bkt/backup/blah/dir2/file2.jpg

Why does it behave like this? To make it possible to upload multiple
local *dirs* at once into their respective remote folders:
        s3cmd put -r dir1 dir2 s3://bkt/backup/
will create s3://bkt/backup/dir1/... and s3://bkt/backup/dir2/...

On the other hand 'put dir1/' behaves the same way as 'put dir1/*',
except that the wildcard '*' won't include 'hidden' files and dirs
starting with a dot (eg .profile). That's not an s3cmd bug, it's how
the unix shell works - see the example below.
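
To illustrate, assume a hypothetical dir1 containing file1.txt and a
hidden .profile:
        s3cmd put --recursive dir1/ s3://bkt/backup/
uploads both file1.txt and .profile, while
        s3cmd put --recursive dir1/* s3://bkt/backup/
uploads only file1.txt, because the shell expands '*' before s3cmd
ever sees it and skips the dotfile.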

Essentially the last component of the path given on the command line
is always appended to the remote uri ("base"). When the local path is
a directory and ends with '/' on the command line, the last component
is "empty" and only the contents of that directory are appended to the
remote uri "base".

I hope it's clear ;-)


3) 'sync' now supports --dry-run
The --dry-run parameter prevents s3cmd from actually transferring any
files to or from S3. It will read the remote and local file lists,
apply --exclude patterns, compile the upload/download lists, print
them out and exit. No file will be uploaded, downloaded or removed.

I suggest you run sync with --dry-run first to check whether it really
does what you meant and whether the paths are composed the way you
wanted. It's also great for debugging --exclude and --rexclude
patterns, as shown below.
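
A typical check before a real run could look like this (the bucket
name and exclude pattern are just placeholders):
        s3cmd sync --dry-run --exclude '*.tmp' /path/blah/ s3://bkt/backup/
Review the printed upload list, then re-run the same command without
--dry-run to actually transfer the files.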

For now it only works with 'sync', however 'put' and 'get' will get
--dry-run support before 0.9.9 final as well.

4) New command 'setacl'
You can change the ACL of existing objects in S3 from private to
public and back. It works recursively when requested:
        ~$ s3cmd setacl --acl-public --recursive s3://bkt/backup
        s3://bkt/backup/file1.txt: ACL set to Public  [1 of 3]
        s3://bkt/backup/dir2/file2.jpg: ACL set to Public  [2 of 3]
        [...]
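
Switching back to private works the same way with --acl-private, shown
here on a single object (output omitted):
        s3cmd setacl --acl-private s3://bkt/backup/file1.txt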


Please let me know if you experience any issues, especially with the
new semantics of put, get and sync. I believe the new path handling is
better and closer to what a unix person expects, but let me know your
thoughts.

I'd like to get 0.9.9 out pretty soon and want to be reasonably sure
that it works for everyone.

Download s3cmd 0.9.9-pre5 from here:
http://sourceforge.net/project/showfiles.php?group_id=178907&package_id=206452

Follow ups and general discussion should go here:
s3tools-general@lists.sourceforge.net

Report bugs here:
s3tools-b...@lists.sourceforge.net

Enjoy!

Michal


