Hi,

I am trying to upload a directory containing 60 GB of JPEGs, each 3-6 KB
in size, to Ceph storage.

First I tried using sync:

s3cmd sync -P /path-to-src/directory s3://bucket

It takes 24+ hours, and at some point the process is killed. I tried a
couple of times and noticed that, while it is running, it uses all of the
source server's memory and swap.

I'm syncing from a 16 GB RAM / 16 GB swap server.

I thought sync might be keeping the files in memory for comparison, so I
switched to put:

s3cmd put -P --recursive /path-to-src/directory s3://bucket

But I still experience the same behavior - s3cmd uses all the memory.

Is there a memory leak in s3cmd, so that it does not release a file from
memory after it has been uploaded?
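In the meantime, a workaround I am considering is splitting the upload into
batches, so each s3cmd invocation handles a bounded file list and its memory
is returned to the OS between batches. This is only a sketch, not something I
have verified at scale: it assumes s3cmd's --files-from option plus standard
find and split, and it echoes the commands instead of running them (drop the
echo for a real upload):

```shell
#!/bin/sh
# Sketch: upload a tree in fixed-size batches so no single s3cmd
# process has to hold the whole file list. Assumes s3cmd supports
# --files-from; echoes the commands rather than running them.
chunked_put() {
    src="$1"; bucket="$2"; batch="${3:-10000}"
    tmpdir=$(mktemp -d) || return 1

    # Build the full file list once, then cut it into chunks of
    # $batch paths each (chunk-aa, chunk-ab, ...).
    find "$src" -type f > "$tmpdir/all"
    split -l "$batch" "$tmpdir/all" "$tmpdir/chunk-"

    # One short-lived s3cmd per chunk; echo keeps the sketch safe
    # to try first.
    for chunk in "$tmpdir"/chunk-*; do
        echo s3cmd put -P --files-from="$chunk" "$bucket"
    done
    rm -rf "$tmpdir"
}

# Usage with the paths from my mail:
# chunked_put /path-to-src/directory s3://bucket
```

With millions of 3-6 KB files this would at least bound how much state any
one process accumulates, even if it doesn't explain the underlying growth.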


Med venlig hilsen / Kind regards,

Christian Bjørnbak

Chefudvikler / Lead Developer
TouristOnline A/S
Islands Brygge 43
2300 København S
Denmark
TLF: +45 32888230
Dir. TLF: +45 32888235
_______________________________________________
S3tools-general mailing list
S3tools-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/s3tools-general
