From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on
behalf of Shain Miley [smi...@npr.org]
Sent: Thursday, October 31, 2013 1:27 PM
To: Mark Kirkwood; de...@umiacs.umd.edu; ceph-us...@ceph.com
Subject: Re: [ceph-users] Radosgw and large files

Mark,

Thanks for the update.

Just an FYI: I ran into an issue using the script when it turned out that the
last part of the file was exactly 0 bytes.
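That 0-byte tail is typically an off-by-one in the part count: computing it as
int(size / chunk_size) + 1 yields an empty final part whenever the file length
is an exact multiple of the chunk size. A minimal sketch of a guard (a
hypothetical helper, not the actual script from this thread):

import math
import os

def part_ranges(path, chunk_size=50 * 1024 * 1024):
    """Yield (offset, length) for each part without a 0-byte tail."""
    size = os.path.getsize(path)
    # math.ceil() gives an exact count; int(size / chunk_size) + 1 would
    # add a zero-byte part whenever size divides evenly by chunk_size.
    num_parts = max(1, int(math.ceil(size / float(chunk_size))))
    for i in range(num_parts):
        offset = i * chunk_size
        yield offset, min(chunk_size, size - offset)

The max(1, ...) keeps a single (legitimately empty) part for a zero-length
file, since completing a multipart upload needs at least one part.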
From: Mark Kirkwood
Sent: Wednesday, October 30, 2013 8:29 PM
To: de...@umiacs.umd.edu; Shain Miley; ceph-us...@ceph.com
Subject: Re: [ceph-users] Radosgw and large files

Along those lines, you might want to use something similar to the attached to
check for any failed/partial uploads that are taking up space (note these
cannot be gc'd away automatically). I just got caught by this.

In fact, the previous code I posted should probably use a try:... except:
block to cancel the upload if anything fails partway through.
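The attached checker isn't preserved in the archive; a minimal boto 2.x sketch
of the same idea, with placeholder credentials and endpoint, might look like:

import boto
import boto.s3.connection

# Placeholder credentials/endpoint - point these at your radosgw.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('mybucket')
for mp in bucket.get_all_multipart_uploads():
    # Each entry is an unfinished upload whose parts still occupy space.
    print('incomplete: key=%s id=%s started=%s'
          % (mp.key_name, mp.id, mp.initiated))
    # mp.cancel_upload()  # uncomment to delete the orphaned parts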
From: Mark Kirkwood
Sent: Monday, October 28, 2013 1:04 AM
To: de...@umiacs.umd.edu; Shain Miley; ceph-us...@ceph.com
Subject: Re: [ceph-users] Radosgw and large files

I was looking at the same thing myself, and Boto seems to work ok (tested a
6G file - some sample code attached).

Regards

Mark
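The sample code itself isn't reproduced on this page; a sketch in the same
spirit - a boto 2.x multipart upload with placeholder endpoint and bucket
names, plus the try:/except: cancellation suggested upthread - might be:

import math
import os

import boto
import boto.s3.connection

CHUNK = 50 * 1024 * 1024  # part size; S3 parts must be >= 5 MB except the last

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('mybucket')

path = 'bigfile.bin'
size = os.path.getsize(path)
mp = bucket.initiate_multipart_upload(os.path.basename(path))
try:
    with open(path, 'rb') as fp:
        for i in range(int(math.ceil(size / float(CHUNK)))):
            fp.seek(i * CHUNK)
            # size= caps the read so each call sends exactly one chunk
            mp.upload_part_from_file(fp, part_num=i + 1,
                                     size=min(CHUNK, size - i * CHUNK))
    mp.complete_upload()
except Exception:
    mp.cancel_upload()  # don't leave orphaned parts behind on the gateway
    raise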
On 27/10/13 11:46, Derek Yarnell wrote:
> Hi Shain,
>
> Yes we have tested and have working S3 Multipart support for files >5GB
> (RHEL64/0.67.4).
From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on
behalf of Shain Miley [smi...@npr.org]
Sent: Saturday, October 26, 2013 7:25 PM
To: de...@umiacs.umd.edu; ceph-us...@ceph.com
Subject: Re: [ceph-users] Radosgw and large files

I'll try the pro version of crossftp as soon as I have a chance.

Here is the output using s3cmd version 1.1.0-beta3:

[...]

Hopefully we can get this working soon.

Thanks again for the help already.

Shain

Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Derek Yarnell [de...@umiacs.umd.edu]
Sent: Saturday, October 26, 2013 6:46 PM
To:
Hi Shain,

Yes we have tested and have working S3 Multipart support for files >5GB
(RHEL64/0.67.4).

However, crossftp, unless you have the pro version, does not appear to
support multipart. Dragondisk gives the error that I have seen when using a
PUT and not multipart: EntityTooLarge. My guess is that it is not using
multipart either.
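For context: S3 caps a single PUT at 5 GB, which is the limit behind that
EntityTooLarge error; anything bigger has to go through multipart. A
hypothetical boto 2.x helper that picks the path by size (multipart_upload
here stands in for a loop like the one upthread):

import os

FIVE_GB = 5 * 1024 ** 3  # S3's single-PUT ceiling

def upload(bucket, path):
    """Plain PUT for small objects, multipart beyond the 5 GB limit."""
    if os.path.getsize(path) <= FIVE_GB:
        key = bucket.new_key(os.path.basename(path))
        key.set_contents_from_filename(path)  # single PUT
    else:
        multipart_upload(bucket, path)  # hypothetical; see the loop upthread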