Thanks for the clarification.
Now I have done exactly as you suggested.
"us-east" is the master zone and "us-west" is the secondary zone.
Each zone has two system users, "us-east" and "us-west".
These system users have the same access/secret keys in both zones.
I have checked the pools to confirm that t
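
For reference, a rough sketch of how the two system users can be created so
the keys end up identical in both zones (the uids and display names below are
placeholders taken from this thread, and the keys have to be passed explicitly
rather than auto-generated):

radosgw-admin user create --uid=us-east --display-name="Zone user us-east" \
  --access-key=EAST_ACCESS_KEY --secret=EAST_SECRET_KEY --system
radosgw-admin user create --uid=us-west --display-name="Zone user us-west" \
  --access-key=WEST_ACCESS_KEY --secret=WEST_SECRET_KEY --system
# Repeat the same two commands on the cluster serving the secondary zone,
# with the same access/secret keys, so replication can authenticate both ways.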
Hello guys,
Could someone comment on the optimal or recommended values for the various
thread settings in ceph.conf?
At the moment I have the following settings:
filestore_op_threads = 8
osd_disk_threads = 8
osd_op_threads = 8
filestore_merge_threshold = 40
filestore_split_multiple = 8
Are
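
For comparison, the defaults for these options are much lower than the values
above (as far as I remember from the docs for recent releases, so please
double-check against your version); whether raising them helps depends mostly
on how many cores you have per OSD:

[osd]
osd op threads = 2              # default 2
osd disk threads = 1            # default 1 (used for scrubbing, snap trimming)
filestore op threads = 2        # default 2
filestore merge threshold = 10  # default 10
filestore split multiple = 2    # default 2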
1.
Does anyone have an answer for this error?
2.
rest-bench --api-host=s3-website-us-east-1.amazonaws.com
--bucket=frank-s3-test --access-key=XXX
--secret=IzuCXXXDDObLU --block-size=8 --protocol=http
--uri_style=path write
3.
host=s3-we
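
One thing that stands out: s3-website-us-east-1.amazonaws.com is, as far as I
know, the static-website endpoint, which doesn't speak the normal S3 API. If
the goal is to benchmark your own radosgw, something like the following should
be closer (the host below is a placeholder; the other flags are the ones
already used above):

rest-bench --api-host=your-rgw-host.example.com \
  --bucket=frank-s3-test --access-key=XXX --secret=XXX \
  --block-size=8 --protocol=http --uri_style=path write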
-- One of my OSDs lost network connectivity for a short while. The OSD
crashed, and now when I try to start it back up the process is killed
because of an illegal instruction. Is there anything that I can do to
get this going again or am I going to need to rebuild it from scratch
(which wouldn't
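
A rough sketch of how to get more detail out of the failing start (the OSD id
is a placeholder, and the log path assumes the default locations):

# Run the OSD in the foreground with verbose debugging so the crash and
# backtrace end up in the log:
ceph-osd -i 3 -f --debug-osd 20 --debug-ms 1
# then look at the tail of /var/log/ceph/ceph-osd.3.log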
On Sat, Nov 22, 2014 at 1:22 PM, Gregory Farnum wrote:
> Can you post the OSD log somewhere? It should have a few more details
> about what's going on here. (This backtrace looks like it's crashing
> in a call to pthreads, which is a little unusual.)
Uploaded to Google Drive:
https://drive.google
Can you post the OSD log somewhere? It should have a few more details
about what's going on here. (This backtrace looks like it's crashing
in a call to pthreads, which is a little unusual.)
-Greg
On Sat, Nov 22, 2014 at 1:01 PM, Jeffrey Ollie wrote:
> -- One of my OSDs lost network connectivity fo
On Sat, Nov 22, 2014 at 11:39 AM, Jeffrey Ollie wrote:
> On Sat, Nov 22, 2014 at 1:22 PM, Gregory Farnum wrote:
>> Can you post the OSD log somewhere? It should have a few more details
>> about what's going on here. (This backtrace looks like it's crashing
>> in a call to pthreads, which is a little unusual.)
On Sat, Nov 22, 2014 at 1:59 PM, Gregory Farnum wrote:
>
> Looks to me like this is the result of us being naughty with rwlock handling:
> http://tracker.ceph.com/issues/10085
> https://github.com/ceph/ceph/pull/2937
>
> It should be fixed soon, and was probably triggered by the disk
> snapshot st
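
For anyone hitting the same thing, a quick sketch of how to check whether your
crash matches and what version your OSDs report once a build with the fix is
out (the grep pattern is approximate; the signal-handler wording in the log
may differ):

# Look for the signal-handler backtrace around the crash in the OSD log:
grep -B2 -A30 "Caught signal" /var/log/ceph/ceph-osd.*.log
# Check the version an individual OSD reports (repeat per OSD id):
ceph tell osd.0 version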
On Sat, Nov 22, 2014 at 2:39 PM, Jeffrey Ollie wrote:
> On Sat, Nov 22, 2014 at 1:59 PM, Gregory Farnum wrote:
>>
>> Looks to me like this is the result of us being naughty with rwlock handling:
>> http://tracker.ceph.com/issues/10085
>> https://github.com/ceph/ceph/pull/2937
>>
>> It should be f