Re: [ceph-users] Federated gateways

2014-04-15 Thread Peter
fixed! thank you for the reply. It was the backslashes in the secret that was the issue. I generated a new gateway user with: radosgw-admin user create --uid=test2 --display-name=test2 --access-key={key} --secret={secret_without_slashes} --name client.radosgw.gateway and that worked. On 04/

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Kenneth Waegeman
Hi, We are back to our MDS problem. This time we used the kernel client to connect to the cluster. We started again filling the cluster with rsync, and got problems after about 48h. The MDS crashed again with core dump like before. I removed the client connection, and restarted the MDS's. Th

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Yan, Zheng
On Tue, Apr 15, 2014 at 8:55 PM, Kenneth Waegeman wrote: > Hi, > > We are back to our MDS problem. > > This time we used the kernel client to connect to the cluster. We started > again filling the cluster with rsync, and got problems after about 48h. The > MDS crashed again with core dump like bef

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Kenneth Waegeman
- Message from "Yan, Zheng" - Date: Tue, 15 Apr 2014 21:24:04 +0800 From: "Yan, Zheng" Subject: Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error) To: Kenneth Waegeman Cc: ceph-users@lists.ceph.com On Tue, Apr 15, 2014 at 8:55 PM, Kenneth Waegeman wrote

[ceph-users] mon server down

2014-04-15 Thread Jonathan Gowar
Hi, I had an OSD fail, I replaced the drive, and that part of the array is now optimal. But in the process there's developed a problem with the mon array. I have 3 mon servers and 1 is marked down. I checked there's a mon process running, and have tried restarting the mon server. worked my w

Re: [ceph-users] [Ceph-community] Ceph with cinder-volume integration failure

2014-04-15 Thread Joao Eduardo Luis
This email belongs in ceph-users (CC'ing). -Joao On 04/14/2014 11:19 AM, Matteo Stettner wrote: Hello, I followed the guide on Ceph to integrate Ceph into OpenStack ( https://ceph.com/docs/master/rbd/rbd-openstack/ ). But I'm running into an error with cinder-volume at the moment. I didn't

Re: [ceph-users] mon server down

2014-04-15 Thread Udo Lembke
Hi, is the mon process running? netstat -an | grep 6789 | grep -i listen Is the filesystem nearly full? df -k Any error output if you start the mon in the foreground (here mon "b")? ceph-mon -i b -d -c /etc/ceph/ceph.conf Udo On 15.04.2014 16:11, Jonathan Gowar wrote: > Hi, > > I had an OSD

Re: [ceph-users] Radosgw and s3cmd

2014-04-15 Thread Yehuda Sadeh
On Tue, Apr 15, 2014 at 9:10 AM, Shashank Puntamkar wrote: > Thanks Yehuda for quick response. > > I added the buckentname.servername in /etc/hosts file on the server on > which I am running radosgw. From that server only, I run the command > "s3cmd mb s3://test". now the error message is change t
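s3cmd addresses buckets virtual-host style (bucketname.hostname), so every bucket name must resolve to the gateway, as described in this thread. A sketch of the /etc/hosts mapping, with hypothetical names and address:

```
# /etc/hosts on the client -- hypothetical names/address; one line per
# bucket, since a hosts file cannot express a wildcard like *.gateway.example.com
192.0.2.10   gateway.example.com
192.0.2.10   test.gateway.example.com
```

A wildcard DNS record on a real nameserver avoids maintaining per-bucket entries by hand.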

Re: [ceph-users] [Ceph-rgw] pool assignment

2014-04-15 Thread ghislain.chevalier
Thanks. Sorry for answering late. I'm going to implement region, zone and placement targets in order to reach my goals. Best regards -Original Message- From: Yehuda Sadeh [mailto:yeh...@inktank.com] Sent: Friday, 11 April 2014 18:34 To: CHEVALIER Ghislain IMT/OLPS Cc: ceph-users@li

Re: [ceph-users] mon server down

2014-04-15 Thread Jonathan Gowar
On Tue, 2014-04-15 at 18:11 +0200, Udo Lembke wrote: > is the mon-process running? > netstat -an | grep 6789 | grep -i listen The process is running, but none in a listening state, instead it's "established". > is the filesystem nearly full? > df -k no. > any error output if you start the mon i

Re: [ceph-users] Federated gateways

2014-04-15 Thread Craig Lewis
Also good to know that s3cmd does not handle those escapes correctly. Thanks! *Craig Lewis* Senior Systems Engineer Office +1.714.602.1309 Email cle...@centraldesktop.com

Re: [ceph-users] mon server down

2014-04-15 Thread Joao Eduardo Luis
On 04/15/2014 04:41 PM, Jonathan Gowar wrote: On Tue, 2014-04-15 at 15:47 +0100, Joao Eduardo Luis wrote: Well, logs would be nice. Set 'debug mon = 10' and 'debug ms = 1' on the monitor, rerun it, share the log. That might be helpful to diagnose the problem. -Joao Thanks, Joao. Seem
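The debug levels Joao suggests would go into ceph.conf on the affected monitor host before rerunning it (section name assumed):

```
[mon]
    debug mon = 10
    debug ms = 1
```

For a monitor that refuses to start, the config file is the reliable route; on a running daemon the same levels can be injected at runtime instead.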

Re: [ceph-users] mon server down

2014-04-15 Thread Joao Eduardo Luis
On 04/15/2014 05:20 PM, Jonathan Gowar wrote: On Tue, 2014-04-15 at 18:11 +0200, Udo Lembke wrote: is the mon-process running? netstat -an | grep 6789 | grep -i listen The process is running, but none in a listening state, instead it's "established". is the filesystem nearly full? df -k no

[ceph-users] question on harvesting freed space

2014-04-15 Thread John-Paul Robinson
Hi, If I have a 1GB RBD image and format it with say xfs or ext4, then I basically have a thin-provisioned disk. It takes up only as much space from the Ceph pool as is needed to hold the data structures of the empty file system. If I add files to my file system and then remove them, how does Cep

Re: [ceph-users] question on harvesting freed space

2014-04-15 Thread Kyle Bader
> I'm assuming Ceph/RBD doesn't have any direct awareness of this since > the file system doesn't traditionally have a "give back blocks" > operation to the block device. Is there anything special RBD does in > this case that communicates the release of the Ceph storage back to the > pool? VMs ru

[ceph-users] ceph mds log

2014-04-15 Thread Qing Zheng
Hi - We have a question on mds journaling. Is it okay to disable mds journaling in order to increase performance, which means setting "mds log = false"? Is this journal required for one to run multi-mds as well as directory splitting? Cheers, -- Qing

Re: [ceph-users] ceph mds log

2014-04-15 Thread Gregory Farnum
Don't do that. I'm pretty sure it doesn't actually work, and if it does it certainly won't perform better than with it on. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Apr 15, 2014 at 1:53 PM, Qing Zheng wrote: > Hi - > > We have a question on mds journaling. > > Is

Re: [ceph-users] question on harvesting freed space

2014-04-15 Thread John-Paul Robinson
Thanks for the insight. Based on that I found the fstrim command for xfs file systems. http://xfs.org/index.php/FITRIM/discard Has anyone had experience using this command with RBD image backends? ~jpr On 04/15/2014 02:00 PM, Kyle Bader wrote: >> I'm assuming Ceph/RBD doesn't have any direct
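For reference, the two usual ways to hand freed blocks back, assuming the whole stack (krbd or virtio-scsi with discard enabled, plus the Ceph release in use) actually supports discard. Device and mountpoint below are hypothetical:

```
# /etc/fstab -- continuous discard on every delete (can add write latency):
/dev/rbd0  /mnt/rbd  xfs  defaults,discard  0 0

# or leave fstab alone and trim in batches, e.g. weekly from cron:
# fstrim /mnt/rbd
```

Batched fstrim is generally cheaper than the always-on mount option.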

Re: [ceph-users] ceph mds log

2014-04-15 Thread Qing Zheng
Thanks, Greg. Are there any recommended Ceph mds configs that can generally help Ceph to achieve better performance? We are currently focusing on multi-mds and directory splitting. Which options should we pay special attention to? Cheers, -- Qing -Original Message- From: Gregory Farnu

Re: [ceph-users] ceph mds log

2014-04-15 Thread Gregory Farnum
Right now the big one will be the cache size configurable, and whether directory fragmentation is enabled. You can experiment with the default log size segments as well, but I don't think they'll do much. -Greg On Tue, Apr 15, 2014 at 3
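The knobs Greg names map to ceph.conf options roughly like the following (option names from that era; values are illustrative, not recommendations):

```
[mds]
    mds cache size = 100000       ; inodes held in the metadata cache
    mds bal frag = true           ; allow directories to be fragmented/split
    mds log max segments = 30     ; journal segments kept before trimming
```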

[ceph-users] force_create_pg not working

2014-04-15 Thread Craig Lewis
I have 1 incomplete PG. The data is gone, but I can upload it again. I just need to make the cluster start working so I can upload it. I've read a bunch of mailing list posts, and found ceph pg force_create_pg. Except, it doesn't work. I run: root@ceph1c:/var/lib/ceph/osd# ceph pg force_c

Re: [ceph-users] Federated gateways

2014-04-15 Thread Brian Andrus
Those backslashes as output by radosgw-admin are escape characters preceding the forward slash. They should be removed when you are connecting with most clients. AFAIK, s3cmd would work fine with your original key, had you stripped out the escape chars. You could also just regenerate or specify a k
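The escaping Brian describes comes from radosgw-admin printing keys as JSON, where a forward slash may appear as the escape sequence \/. A minimal sketch of the two ways to recover the usable secret (the key material here is hypothetical):

```python
import json

# Hypothetical key material; real secrets come from `radosgw-admin user create`.
raw_json = r'{"secret_key": "abc\/def+ghi"}'

# Route 1: decode the JSON, which interprets the \/ escape for you.
secret = json.loads(raw_json)["secret_key"]

# Route 2: if the escaped string was copied by hand, strip the backslashes.
copied = r"abc\/def+ghi"
stripped = copied.replace("\\/", "/")

print(secret)    # the plain key, with a real "/"
print(stripped)  # identical result
```

Either route yields the same plain key; regenerating a key without slashes, as done elsewhere in this thread, sidesteps the problem entirely.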

Re: [ceph-users] force_create_pg not working

2014-04-15 Thread Gregory Farnum
What are the results of "ceph pg 11.483 query"? -Greg On Tue, Apr 15, 2014 at 4:01 PM, Craig Lewis wrote: > I have 1 incomplete PG. The data is gone, but I can upload it again. I > just need to make the cluster start working so I can upload it.

Re: [ceph-users] force_create_pg not working

2014-04-15 Thread Craig Lewis
http://pastebin.com/ti1VYqfr I assume the problem is at the very end: "probing_osds": [0, 2, 3, 4, 11, 13], "down_osds_we_would_probe": [], "peering_blocked_by": []}, O
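A sketch of how that tail of the `ceph pg <pgid> query` output can be read programmatically; the field names mirror the paste quoted above, but the excerpt below is hand-built and trimmed, not real cluster output:

```python
import json

# Hypothetical, trimmed excerpt in the shape of `ceph pg <pgid> query` output,
# mirroring the fields quoted in this thread.
query = json.loads("""
{"recovery_state": [{"name": "Started/Primary/Peering",
                     "probing_osds": [0, 2, 3, 4, 11, 13],
                     "down_osds_we_would_probe": [],
                     "peering_blocked_by": []}]}
""")

state = query["recovery_state"][0]
# Empty down_osds_we_would_probe and peering_blocked_by means peering is not
# waiting on any dead OSD -- the PG is still probing live OSDs for its data.
if not state["down_osds_we_would_probe"] and not state["peering_blocked_by"]:
    print("peering is not blocked by a down OSD")
```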

Re: [ceph-users] mon server down

2014-04-15 Thread Jonathan Gowar
On Tue, 2014-04-15 at 18:02 +0100, Joao Eduardo Luis wrote: > Ahah! You got bit by #5804: http://tracker.ceph.com/issues/5804 > > Best solution for your issue: > > - shutdown 'mon.ceph-3' > - remove 'mon.ceph-3' from the cluster > - recreate 'mon.ceph-3' > - add 'mon.ceph-3' to the cluster > >

Re: [ceph-users] Mixing Ceph OSDs and hypervisor/compute nodes

2014-04-15 Thread Blair Bethwaite
Hi again, On 10 April 2014 16:17, Haomai Wang wrote: > I think you need to bind osd to specified cores and bind qemu-kvm to > other cores. Perhaps, but that would just be an optimisation. More likely we'd just bind the OSDs to the socket that is handling the SAS/RAID controller. Memory size i

Re: [ceph-users] mon server down

2014-04-15 Thread Joao Eduardo Luis
On 04/16/2014 01:04 AM, Jonathan Gowar wrote: On Tue, 2014-04-15 at 18:02 +0100, Joao Eduardo Luis wrote: Ahah! You got bit by #5804: http://tracker.ceph.com/issues/5804 Best solution for your issue: - shutdown 'mon.ceph-3' - remove 'mon.ceph-3' from the cluster - recreate 'mon.ceph-3' - add

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Yan, Zheng
On Tue, Apr 15, 2014 at 9:49 PM, Kenneth Waegeman wrote: > > - Message from "Yan, Zheng" - >Date: Tue, 15 Apr 2014 21:24:04 +0800 >From: "Yan, Zheng" > > Subject: Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error) > To: Kenneth Waegeman > Cc: ceph-users@li

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Stijn De Weirdt
What do you mean by the MDS journal? Where can I find this journal? Can a better CPU solve the slow trimming? (Now 2 hexacore AMD Opteron 4334) The MDS uses a journal to record recent metadata updates. The journal is stored in the metadata pool (object name 200.*). The speed of trimming the log is limited by

Re: [ceph-users] Access denied error

2014-04-15 Thread Punit Dambiwal
Hi, Still I am getting the same error when I run the following: -- curl -i 'http://xxx.xlinux.com/admin/usage?format=json' -X GET -H 'Authorization: AWS YHFQ4D8BM835BCGERHTN:kXpM0XB9UjOadexDu2ZoP8s4nKjuoL0iIZhE\/+Gv' -H 'Host: xxx.xlinux.com' -H 'Content-Length: 0' HTTP/1.1
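Note that the secret in the quoted Authorization header contains \/, the same JSON-escape issue discussed in the Federated gateways thread; the unescaped key must be used for signing. The admin API uses S3-style signing, and a simplified sketch of how the header value is derived follows (real clients also fold content-md5, content-type and canonicalized amz headers into the string to sign; all values below are hypothetical):

```python
import base64
import hashlib
import hmac

def s3_signature(secret_key: str, method: str, date: str, resource: str) -> str:
    # Simplified StringToSign: method, empty content-md5 and content-type,
    # the request date, then the canonicalized resource.
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

# Hypothetical values; the secret must be the unescaped form (no "\/").
sig = s3_signature("abc/def+ghi", "GET", "Tue, 15 Apr 2014 12:00:00 GMT", "/admin/usage")
header = f"AWS YHFQ4D8BM835BCGERHTN:{sig}"
print(header)
```

If the escaped form of the secret is used here, the computed signature will never match the gateway's, producing exactly an access-denied response.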

[ceph-users] SSDs: cache pool/tier versus node-local block cache

2014-04-15 Thread Blair Bethwaite
Hi all, We'll soon be configuring a new cluster, hardware is already purchased - OSD nodes are Dell R720XDs (E5-2630v2, 32GB RAM, PERC 710p, 9x 4TB NL-SAS, 3x 200GB Intel DC S3700, Mellanox CX3 10GE DP). 12 of these to start with. So we have a 3:1 spindle:ssd ratio, but as yet I'm not sure how we

Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)

2014-04-15 Thread Yan, Zheng
On Wed, Apr 16, 2014 at 2:08 PM, Stijn De Weirdt wrote: >>> What do you mean by the MDS journal? Where can I find this journal? >>> Can a better CPU solve the slow trimming? ( Now 2 hexacore AMD Opteron >>> 4334) >>> >> >> MDS uses journal to record recent metadata update. the journal is >> stored

[ceph-users] [Bug]radosgw.log won't be generated when deleted

2014-04-15 Thread wsnote
OS: CentOS 6.5 Ceph version: 0.67.7 When I delete or move /var/log/ceph/radosgw.log, I can continue operating on files through rgw, but then I find there is no log. The log won't be regenerated automatically. Even if I create the file by hand, nothing is written to it. And if I restart radosgw, the log w
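This is the usual unlinked-inode behaviour rather than an rgw-specific bug: the daemon keeps its file descriptor on the deleted log, so a freshly created file at the same path receives nothing until the daemon reopens its logs (restart, or a SIGHUP). A hedged logrotate sketch for this setup (path, schedule and signal handling are assumptions, not tested on 0.67.x):

```
# /etc/logrotate.d/radosgw -- hypothetical: rotate instead of deleting,
# then signal radosgw to reopen its log file.
/var/log/ceph/radosgw.log {
    weekly
    rotate 7
    compress
    missingok
    postrotate
        killall -q -HUP radosgw || true
    endscript
}
```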