Hello to all,
I have a big issue with Ceph RadosGW.
I did a PoC some days ago with radosgw. It worked well.
Ceph version 0.67.3 under CentOS 6.4
Now I'm installing a new cluster, but I can't get it to work, and I do not understand why.
Here are some elements:
ceph.conf:
[global]
filestore_xattr_use_omap =
Hello to all,
Thanks for your answers.
Well... after an awful night, I found the problem...
It was an MTU mistake!
Nothing to do with Ceph!
So sorry for the noise, and thanks again.
Best Regards - Cordialement
Alexis
Hello to all,
I can't manage to use the Admin Ops REST API for radosgw.
Where can I find an example, in any language (Perl, Python, Bash)?
For instance, how do I get the info for user xxx?
Via the CLI, I do: radosgw-admin user info --uid=xxx
but how do I do it with the REST API?
Thanks for your answers.
Alexis
Great !
Thanks a lot. It works.
I didn't know about the awsauth module.
Thanks again.
2013/10/9 Derek Yarnell :
>> Via the CLI, I do: radosgw-admin user info --uid=xxx
>> but how do I do it with the REST API?
>
> Hi Alexis,
>
> Here is a simple python example on how to use the admin api. You will
> need to get a few packages fr
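For the record, the kind of GET that now works for me looks roughly like this (a sketch; the gateway host and the keys are placeholders, and it assumes the requests and awsauth packages):

import requests
from awsauth import S3Auth

# placeholders: radosgw host and the S3 keys of the admin user
host = 'rgw.example.com'
access_key = 'ADMIN_ACCESS_KEY'
secret_key = 'ADMIN_SECRET_KEY'

# Admin Ops "get user info": GET /admin/user?uid=...
url = 'http://%s/admin/user?format=json&uid=xxx' % host
r = requests.get(url, auth=S3Auth(access_key, secret_key, host))
print(r.status_code)
print(r.json())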
Hello again,
In fact, I still have a problem...
I can do GET requests without problems,
but when I try to do a PUT request to create a user, I get a
"Code: AccessDenied" response.
I do it with this code:
def user_create(uid, access_key, secret_key, email):
url = 'http://%s/admin/use
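For reference, the whole call looks roughly like this (a sketch; the host, keys and parameter values are placeholders):

import requests
from awsauth import S3Auth

def user_create(uid, access_key, secret_key, email):
    # placeholders: gateway host and the keys of the requesting user.
    # NB: that user must have admin caps on the gateway (e.g. granted
    # with: radosgw-admin caps add --uid=admin --caps="users=read,write"),
    # otherwise radosgw answers with AccessDenied.
    host = 'rgw.example.com'
    admin_access = 'ADMIN_ACCESS_KEY'
    admin_secret = 'ADMIN_SECRET_KEY'
    # Admin Ops "create user": PUT /admin/user (display-name is required)
    url = ('http://%s/admin/user?format=json'
           '&uid=%s&display-name=%s&email=%s'
           '&access-key=%s&secret-key=%s'
           % (host, uid, uid, email, access_key, secret_key))
    r = requests.put(url, auth=S3Auth(admin_access, admin_secret, host))
    return r.status_code, r.text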
Hello to all,
Here is my context:
- Ceph cluster composed of 72 OSDs (i.e. 72 disks).
- 4 radosgw gateways
- Round robin DNS for load balancing across the gateways
My goal is to test / bench the S3 API.
Here is my scenario, with 300 clients on 300 different hosts:
1) each client uploading abou
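For each client, the upload step is basically the following (a sketch with boto; the endpoint, keys, bucket name and object size are made-up examples):

import boto
import boto.s3.connection

# placeholders: the round-robin DNS name of the gateways and S3 keys
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('bench-bucket')
key = bucket.new_key('object-0001')
# upload a 4 MB object
key.set_contents_from_string('x' * 4 * 1024 * 1024)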
Hello to all,
Here is my ceph osd tree output :
# id    weight  type name           up/down reweight
-1      20      root example
-12     20      drive ssd
-22     20      datacenter ssd-dc1
-104    10      room ssd-dc1-A
-502    10
Yep ! Thanks !
That was it :)
Thanks a lot.
Best Regards - Cordialement
Alexis GÜNST HORN,
Tel : 0826.206.307 (poste )
Fax : +33.1.83.62.92.89
Hello,
It would be great to have a command like:
ceph-deploy out osd.xx
Physically change the drive, then
ceph-deploy replace osd.xx
What do you think ?
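For context, this is roughly the manual procedure such commands could wrap (a sketch; osd.12, host1 and /dev/sdl are made-up examples):

# drain and remove the dead OSD
ceph osd out 12
service ceph stop osd.12        # on the OSD host
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12

# physically swap the drive, then recreate the OSD
ceph-deploy osd create host1:/dev/sdl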
Best Regards - Cordialement
Alexis
2013/11/20 Mark Kirkwood :
> On 20/11/13 22:27, Robert van Leeuwen wrote:
>>
>> Hi,
>>
>> What is th
Hello to all,
I've read http://ceph.com/docs/master/cephfs/hadoop/
but, as I'm quite a newbie with Hadoop, I can't manage to do what I want.
Has anyone already succeeded in installing an HBase cluster over CephFS?
If so, how?
Are there any tutorials or docs about it?
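For what it's worth, from that page I understand the Hadoop side needs roughly this in core-site.xml (a sketch; mon1.example.com and the paths are placeholders, and I have not verified HBase itself on top of it):

<property>
  <name>fs.default.name</name>
  <value>ceph://mon1.example.com:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>

and then, I suppose, hbase.rootdir in hbase-site.xml pointing to a ceph:// path, but that is exactly the part I'd like confirmation on.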
Thanks a lot.
Alexis
Hello to all,
I use ceph-deploy heavily and it works really well.
I have just one question: is there an option (I haven't found one) or a way to let
ceph-deploy osd create ...
create an OSD with a weight of 0?
My goal is to then reweight the new OSDs step by step, to be sure that
it will not disturb
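One thing that seems to do what I want is the osd crush initial weight option, so that OSDs created by ceph-deploy join the CRUSH map with weight 0 (a sketch; osd.42 and the target weight are just examples):

[osd]
# new OSDs are added to the CRUSH map with weight 0 instead of their size
osd crush initial weight = 0

and then, step by step:

ceph osd crush reweight osd.42 0.5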
Hello,
I can't understand an error I am getting now:
HEALTH_WARN pool .rgw.buckets has too few pgs.
Do you have any ideas ?
Some info :
[root@admin ~]# ceph --version
ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)
[root@admin ~]# ceph osd pool get .rgw.buckets pgp_num
pgp_num:
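If I understand this warning correctly, it means the pool holds far more objects per PG than the cluster average, and the usual cure is to raise pg_num and pgp_num in steps (256 below is just an example value):

ceph osd pool get .rgw.buckets pg_num
ceph osd pool set .rgw.buckets pg_num 256
ceph osd pool set .rgw.buckets pgp_num 256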
Hello,
Here it is :
http://pastie.org/private/u5yut673fv6csobuvain9g
Thanks a lot for your help
Best Regards - Cordialement
Alexis GÜNST HORN,
Tel : 0826.206.307 (poste )
Fax : +33.1.83.62.92.89
Hello to all,
Here is my config :
I have a Ceph cluster with 3 NICs on each node:
* admin network : 192.168.0.0/24
* public network : 10.0.0.1/24
* cluster network : 10.0.0.2/24
So, the "admin" network is used to connect to the nodes via SSH.
The "public" network is the Ceph storage network,
and the
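For reference, the public/cluster split is declared in ceph.conf roughly like this (a sketch; the subnets here are placeholders, not my real ones):

[global]
public network = 10.0.1.0/24
cluster network = 10.0.2.0/24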
Hello to all,
I have a Ceph cluster composed of 4 nodes in 2 different rooms.
room A : osd.1, osd.3, mon.a, mon.c
room B : osd.2, osd.4, mon.b
My CRUSH rule is made to place replicas across rooms.
So normally, if I shut down the whole of room A, my cluster should stay usable.
... but in fact, no.
When I
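By "replicas across rooms" I mean a rule of the kind that ceph osd crush rule create-simple builds, applied to the pool (a sketch; the rule name, root and pool are placeholders):

ceph osd crush rule create-simple replicated-across-rooms default room
ceph osd pool set mypool crush_ruleset <ruleset-id>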
Hello to all,
I have a problem with my Ceph cluster.
Some info to begin with:
- Ceph Bobtail : ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
- CentOS 6.4
Here is the output of ceph osd tree :
http://pastebin.com/C5TM7Jww
I already have several pools:
[root@ceph-admin ~]# ceph