Re: [ceph-users] minimal ceph permissions for rados gateway

2013-06-13 Thread John Nielsen
On Jun 13, 2013, at 4:03 PM, Yehuda Sadeh wrote: > On Thu, Jun 13, 2013 at 3:01 PM, John Nielsen wrote: >> On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote: >> >>> On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote: With: caps osd = "allow x, allow pool .pubintent-log rw
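For context: the stock radosgw keyring in the docs of that era simply grants broad caps (osd 'allow rwx', mon 'allow rw'), and this thread is about narrowing them per pool. A rough sketch in the same cap syntax the thread uses, with illustrative pool names rather than the exact list under discussion:

    ceph auth caps client.radosgw.gateway \
        mon 'allow rw' \
        osd 'allow x, allow pool .rgw rwx, allow pool .rgw.control rwx, allow pool .intent-log rw'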

Re: [ceph-users] minimal ceph permissions for rados gateway

2013-06-13 Thread Yehuda Sadeh
On Thu, Jun 13, 2013 at 3:01 PM, John Nielsen wrote: > On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote: > >> On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote: >>> On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote: >>> On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote: > On Jun 12,

Re: [ceph-users] minimal ceph permissions for rados gateway

2013-06-13 Thread John Nielsen
On Jun 12, 2013, at 8:15 PM, Yehuda Sadeh wrote: > On Wed, Jun 12, 2013 at 2:43 PM, John Nielsen wrote: >> On Jun 12, 2013, at 2:51 PM, Yehuda Sadeh wrote: >> >>> On Wed, Jun 12, 2013 at 1:48 PM, John Nielsen wrote: On Jun 12, 2013, at 2:02 PM, Yehuda Sadeh wrote: > On Wed, Ju

[ceph-users] DFS Job Position

2013-06-13 Thread Andy Edmonds
Apologies for interrupting the normal business... Hi all, The ICCLab [1] has another new position open that perhaps you or someone you know might be interested in. Briefly, the position is an Applied Researcher in the area of Cloud Computing (more IaaS than PaaS) and would need particular skills

Re: [ceph-users] Disaster recovery of monitor

2013-06-13 Thread peter
On 2013-06-13 18:57, Joao Eduardo Luis wrote: On 06/13/2013 05:25 PM, pe...@2force.nl wrote: On 2013-06-13 18:06, Gregory Farnum wrote: On Thursday, June 13, 2013, wrote: Hello, We ran into a problem with our test cluster after adding monitors. It now seems that our main monitor doesn't wan

Re: [ceph-users] Disaster recovery of monitor

2013-06-13 Thread Joao Eduardo Luis
On 06/13/2013 05:25 PM, pe...@2force.nl wrote: On 2013-06-13 18:06, Gregory Farnum wrote: On Thursday, June 13, 2013, wrote: Hello, We ran into a problem with our test cluster after adding monitors. It now seems that our main monitor doesn't want to start anymore. The logs are flooded with: 20

Re: [ceph-users] two osd stack on peereng after start osd to recovery

2013-06-13 Thread Gregory Farnum
On Thu, Jun 13, 2013 at 6:33 AM, Sławomir Skowron wrote: > Hi, sorry for late response. > > https://docs.google.com/file/d/0B9xDdJXMieKEdHFRYnBfT3lCYm8/view > > Logs in attachment, and on google drive, from today. > > https://docs.google.com/file/d/0B9xDdJXMieKEQzVNVHJ1RXFXZlU/view > > We have suc
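For anyone hitting a similar peering hang, the usual first diagnostics (generic ones, not specific to this thread) are to look at the overall cluster state and list the stuck placement groups, for example:

    ceph -s
    ceph health detail
    ceph pg dump_stuck inactive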

Re: [ceph-users] Disaster recovery of monitor

2013-06-13 Thread peter
On 2013-06-13 18:06, Gregory Farnum wrote: On Thursday, June 13, 2013, wrote: Hello, We ran into a problem with our test cluster after adding monitors. It now seems that our main monitor doesn't want to start anymore. The logs are flooded with: 2013-06-13 11:41:05.316982 7f7689ca4780  7 mon.a

Re: [ceph-users] Disaster recovery of monitor

2013-06-13 Thread Gregory Farnum
On Thursday, June 13, 2013, wrote: > Hello, > > We ran into a problem with our test cluster after adding monitors. It now > seems that our main monitor doesn't want to start anymore. The logs are > flooded with: > > 2013-06-13 11:41:05.316982 7f7689ca4780 7 mon.a@0(leader).osd e2809 > update_from

Re: [ceph-users] Need help with Ceph error

2013-06-13 Thread Gregory Farnum
Both of those errors are "unable to authenticate". The daemons aren't finding your authentication keys where they expect to (generally in /var/lib/ceph or an appropriate subdir); if you set these up manually you need to copy them over (and perhaps generate them). The documentation on the website sho
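As a rough illustration of what "copy them over" can look like (the daemon id and paths below are examples, not taken from this thread), a daemon's key can be exported from the monitors and placed where the daemon expects it:

    # example for an OSD with id 0; adjust the id and path to your layout
    ceph auth get osd.0 -o /var/lib/ceph/osd/ceph-0/keyring
    # example for the admin key the ceph CLI uses
    ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring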

[ceph-users] Disaster recovery of monitor

2013-06-13 Thread peter
Hello, We ran into a problem with our test cluster after adding monitors. It now seems that our main monitor doesn't want to start anymore. The logs are flooded with:
2013-06-13 11:41:05.316982 7f7689ca4780 7 mon.a@0(leader).osd e2809 update_from_paxos applying incremental 2810
2013-06-13
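An aside for readers in the same spot (these are general monitor diagnostics, not the resolution reached in this thread): turning up mon logging and dumping the monmap the daemon has on disk are common first steps, e.g.:

    # in ceph.conf under [mon], then restart the monitor
    debug mon = 20
    debug paxos = 20

    # inspect the on-disk monmap for mon.a (the id shown in the log above)
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap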

Re: [ceph-users] rbd rm results in osd marked down wrongly with 0.61.3

2013-06-13 Thread Sage Weil
Hi Florian, Sorry, I missed this one. Since this is fully reproducible, can you generate a log of the crash by doing something like ceph osd tell \* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 20' (that is a lot of logging, btw), triggering a crash, and then sending us the log
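Spelled out on its own line, the logging bump Sage is asking for is the following (the reminder to turn the levels back down afterwards is an addition, not part of the original message):

    # very verbose; lower the debug levels again after the crash has been captured
    ceph osd tell \* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 20'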

Re: [ceph-users] ceph mount: Only 240 GB , should be 60TB

2013-06-13 Thread Sage Weil
Ah, the fix for this is 92a49fb0f79f3300e6e50ddf56238e70678e4202, which first appeared in the 3.9 kernel. The mainline 3.8 stable kernel is EOL, but Canonical is still maintaining one for Ubuntu. I can send a note to them. sage On Thu, 13 Jun 2013, Da Chun wrote: > Sage, > > I have the sa
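Since the fix landed in 3.9, a quick sanity check on an affected client is simply the running kernel version, for example:

    uname -r    # kernels older than 3.9 lack the fix unless a distro has backported it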

Re: [ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-13 Thread Sebastien Han
OpenStack doesn't know how to set different caching options for attached block devices. See the following blueprint: https://blueprints.launchpad.net/nova/+spec/enable-rbd-tuning-options This might be implemented for Havana. Cheers. Sébastien Han, Cloud Engineer. "Always give 100%. Unless you're giving
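Until something like that blueprint lands, the cache mode has to be set by hand on the QEMU command line; as a sketch (pool, image and client id are made up), an RBD drive with explicit writeback caching looks like:

    -drive file=rbd:rbd/vm-disk:id=admin,format=raw,if=none,id=drive-virtio0,cache=writeback \
    -device virtio-blk-pci,drive=drive-virtio0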

Re: [ceph-users] rbd rm results in osd marked down wrongly with 0.61.3

2013-06-13 Thread Smart Weblications GmbH - Florian Wiessner
Hi, Is really no one on the list interested in fixing this? Or am I the only one having this kind of bug/problem? Am 11.06.2013 16:19, schrieb Smart Weblications GmbH - Florian Wiessner: > Hi List, > > i observed that an rbd rm results in some osds mark one osd as down > wrongly in cuttlefish.

[ceph-users] Need help with Ceph error

2013-06-13 Thread Sreejith Keeriyattil
Hi Ceph lovers, I really need some help here. I am trying to set up a test Ceph cluster and do a case study on Ceph storage, so that I can propose it to customers who need scalable storage. I started with the documentation provided on your website but am stuck with an error.

Re: [ceph-users] Glance & RBD Vs Glance & RadosGW

2013-06-13 Thread Josh Durgin
On 06/11/2013 08:10 AM, Alvaro Izquierdo Jimeno wrote: Hi all, I want to connect an openstack Folsom glance service to ceph. The first option is setting up the glance-api.conf with 'default_store=rbd' and the user and pool. The second option is defined in https://blueprints.launchpad.net/gla
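For the first option, the relevant Folsom-era glance-api.conf settings look roughly like this (user and pool names here are examples):

    default_store = rbd
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_chunk_size = 8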

Re: [ceph-users] More data corruption issues with RBD (Ceph 0.61.2)

2013-06-13 Thread Josh Durgin
On 06/11/2013 11:59 AM, Guido Winkelmann wrote: Hi, I'm having issues with data corruption on RBD volumes again. I'm using RBD volumes as virtual hard disks for qemu-kvm virtual machines. Inside these virtual machines I have been running a C++ program (attached) that fills a mounted filesystem

Re: [ceph-users] ceph mount: Only 240 GB , should be 60TB

2013-06-13 Thread Da Chun
Sage, I have the same issue with ceph 0.61.3 on Ubuntu 13.04.
ceph@ceph-node4:~/mycluster$ df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu1304--64--vg-root   15G  1.5G   13G  11% /
none                                 4.0K     0  4.0K   0% /sys/fs/