I am not sure why this is happening. Someone used s3cmd to upload around
130,000 7 MB objects to a single bucket. Now we are tearing down the
cluster to rebuild it better, stronger, and hopefully faster. Before we
destroy it we need to download all of the data. I am running through all
of the key
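For the bulk download itself, a minimal sketch with s3cmd (assuming the
noaa-nexrad-l2 bucket named later in the thread and a hypothetical local
target directory)::
s3cmd sync s3://noaa-nexrad-l2/ /data/noaa-nexrad-l2/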
It looks like the gateway is experiencing a similar race condition to
what we reported before.
The rados object has a size of 0 bytes but the bucket index shows the
object listed and the object metadata shows a size of
7147520 bytes.
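A sketch of how the two sizes being compared here can be checked
(placeholder bucket/object names; the head object stored in rados is
normally named <bucket_marker>_<key>, with the marker visible in the
multipart/shadow names later in the thread)::
radosgw-admin object stat --bucket=<bucket> --object=<key>   # size per bucket index / object metadata
rados -p .rgw.buckets stat '<bucket_marker>_<key>'           # size of the head object actually in rados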
I have a lot of logs but I don't think any of them have the
Hello Yehuda,
Here it is::
radosgw-admin object stat --bucket="noaa-nexrad-l2"
--object="2015/01/01/PAKC/NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959.tar"
{
    "name": "2015\/01\/01\/PAKC\/NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959.tar",
    "size": 7147520,
    "policy":
adow_2015/01/01/KABR/NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar.2~wksHvto9gRgHUJbhm_TZPXJTZUPXLT2.1_1
default.384153.1__multipart_2015/01/01/KABR/NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar.2~wksHvto9gRgHUJbhm_TZPXJTZUPXLT2.1
On 1/15/16 12:05 PM, Yehuda Sadeh-Weinraub wrote:
On Fri, Jan 15, 2016
0150101135959.tar.2~${src_upload_id}.1_1
default.384153.1__shadow_2015/01/01/KABR/NWS_NEXRAD_NXL2DP_KABR_2015010113_20150101135959.tar.2~${dest_upload_id}.1_1
Yehuda
On Fri, Jan 15, 2016 at 1:02 PM, seapasu...@uchicago.edu
wrote:
lacadmin@kh28-10:~$ rados -p .rgw.buckets ls | grep 'pcu5Hz6&
the specific object name and
see if there are pieces of it lying around under a different upload
id.
Yehuda
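A sketch of that check, grepping the full pool listing for the object
name itself so any multipart/shadow pieces turn up regardless of which
upload id they carry (the listing is slow on a pool this size)::
rados -p .rgw.buckets ls > /tmp/rgw_objects.txt
grep 'NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959' /tmp/rgw_objects.txt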
On Fri, Jan 15, 2016 at 1:44 PM, seapasu...@uchicago.edu
wrote:
Sorry, I am a bit confused. The successful list that I provided is from a
different object of the same size to show that I co
": "",
17462378 "bytes_sent": 19,
17462379 "bytes_received": 0,
17462380 "object_size": 0,
17462381 "total_time": 0,
17462382 "user_agent": "Boto\/2.38.0 Pyt
object. I don't suppose you have any
logs from when the object was uploaded?
Yehuda
On Fri, Jan 15, 2016 at 2:12 PM, seapasu...@uchicago.edu
wrote:
Sorry for the confusion::
When I grepped for the prefix of the missing object::
"2015\/01\/01\/PAKC\/NWS_NEXRAD_NXL2DP_PAKC_2015010111
On 1/19/16 4:00 PM, Yehuda Sadeh-Weinraub wrote:
On Fri, Jan 15, 2016 at 5:04 PM, seapasu...@uchicago.edu
wrote:
I have looked all over and I do not see any explicit mention of
"NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959" in the logs nor do I
see a timestamp from No
wrote:
On Wed, Jan 20, 2016 at 10:43 AM, seapasu...@uchicago.edu
wrote:
On 1/19/16 4:00 PM, Yehuda Sadeh-Weinraub wrote:
On Fri, Jan 15, 2016 at 5:04 PM, seapasu...@uchicago.edu
wrote:
I have looked all over and I do not see any explicit mention of
. It would be nice to have some kind of a unit test that reproduces
it.
Yehuda
On Wed, Jan 20, 2016 at 1:34 PM, seapasu...@uchicago.edu
wrote:
So is there any way to prevent this from happening going forward? I mean
ideally this should never be possible, right? Even with a complete object
that is
I haven't been able to reproduce the issue on my end, but I do not fully
understand how the bug exists or why it is happening. I was finally
given the code they are using to upload the files::
http://pastebin.com/N0j86NQJ
I don't know if this helps at all :-(. The other thing is that I have
on
if you set an RGW user to have a bucket quota of 0 buckets you can still
create buckets. The only way I have found to prevent a user from being
able to create buckets is to set the op_mask to read. 1.) It looks like
bucket_policy is not enforced when you have it set to anything below 1.
It looks
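For reference, a sketch of the op_mask workaround described above (using
the s3test uid from later in the thread, and assuming a radosgw-admin
build that exposes --op-mask; the default mask can be restored the same
way)::
radosgw-admin user modify --uid=s3test --op-mask=read
radosgw-admin user modify --uid=s3test --op-mask="read, write, delete"   # restore the default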
So when I create a new user with the admin API and the user already
exists, it just generates a new keypair for that user. Shouldn't the
admin API report that the user already exists? I ask because I can end
up with multiple keypairs for the same user unintentionally, which could
be an issue. I w
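A minimal sketch of a guard around user creation (hypothetical uid; the
same existence check could be made against the admin REST API's user
info endpoint before calling create)::
if radosgw-admin user info --uid=s3test > /dev/null 2>&1; then
    echo "user s3test already exists; not creating a new keypair"
else
    radosgw-admin user create --uid=s3test --display-name="s3test"
fi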
Ah, thanks for the clarification. Sorry. So even setting max_buckets to 0
will not prevent them from creating buckets::
lacadmin@ko35-10:~$ radosgw-admin user modify --uid=s3test --max-buckets=0
{
"user_id": "s3test",
"display_name": "s3test",
"email": "",
"suspended": 0,
"ma
{
"enabled": true,
"max_size_kb": -1,
"max_objects": 2
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}
The
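For reference, quotas like the ones shown in that output are set and
enabled through radosgw-admin; a minimal sketch (hypothetical values,
same s3test uid)::
radosgw-admin quota set --quota-scope=bucket --uid=s3test --max-objects=2
radosgw-admin quota enable --quota-scope=bucket --uid=s3test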
I am trying to deploy ceph 0.94.5 (hammer) across a few nodes using
ceph-deploy and passing the --dmcrypt flag. The first OSD:journal pair
seems to succeed but all remaining OSDs that have a journal on the same
SSD seem to silently fail::
http://pastebin.com/2TGG4tq4
In the end I end up with 5 O
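A sketch of the kind of invocation being described, not the actual
commands from the pastebin (hypothetical host and device names,
hammer-era ceph-deploy syntax with several OSDs sharing one journal
SSD)::
ceph-deploy osd create --dmcrypt kh08-1:sdb:sdk
ceph-deploy osd create --dmcrypt kh08-1:sdc:sdk
ceph-deploy osd create --dmcrypt kh08-1:sdd:sdk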
I have a cluster of around 630 OSDs with 3 dedicated monitors and 2
dedicated gateways. The entire cluster is running hammer (0.94.5
(9764da52395923e0b32908d83a9f7304401fee43)).
(Both of my gateways have stopped responding to curl right now.
root@host:~# timeout 5 curl localhost ; echo $?
124
An exit status of 124 from timeout means curl was killed at the 5-second
limit, i.e. the gateway never answered.
So an update for anyone else having this issue. It looks like radosgw
either has a memory leak or it spools the whole object into RAM or
something.
root@kh11-9:/etc/apt/sources.list.d# free -m
             total       used       free     shared    buffers     cached
Mem:         64397      63775
s isn't hitting some ulimits? cat
/proc/`pidof radosgw`/limits and compare with the num processes/num
FDs in use.
Cheers, Dan
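A sketch of that comparison (run as root; the thread count is a rough
stand-in for "num processes")::
cat /proc/$(pidof radosgw)/limits        # per-process limits, e.g. max open files
ls /proc/$(pidof radosgw)/fd | wc -l     # file descriptors currently open
ps -o nlwp= -p $(pidof radosgw)          # number of threads in the radosgw process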
On Tue, Mar 29, 2016 at 8:35 PM, seapasu...@uchicago.edu
wrote:
So an update for anyone else having this issue. It looks like radosgw either
has a memory leak or i