[ceph-users] RGW Keystone interaction (was Ceph.conf)

2015-09-12 Thread Abhishek L
On Thu, Sep 10, 2015 at 3:27 PM, Shinobu Kinjo wrote: > Thank you for letting me know your thoughts, Abhishek!! > > > > The Ceph Object Gateway will query Keystone periodically > > for a list of revoked tokens. These requests are encoded > > and signed. Also, Keystone may be configured
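
A minimal ceph.conf sketch of the RGW Keystone settings this thread refers to, assuming a Hammer-era gateway; the URL, token, and numeric values below are illustrative placeholders, not taken from the thread:

  [client.radosgw.gateway]
      rgw keystone url = http://keystone.example.com:35357
      rgw keystone admin token = ADMIN_TOKEN_PLACEHOLDER
      rgw keystone accepted roles = Member, admin
      rgw keystone token cache size = 500
      rgw keystone revocation interval = 900   # seconds between revoked-token queries
      nss db path = /var/ceph/nss              # NSS db with the certs used to decode the signed list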

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
Thank you for the log archives. I went to the dentist -; Please do not forget to CC ceph-users from next time, because there are a bunch of really **awesome** guys there; can you re-attach the log files again so that they can see them? Shinobu - Original Message - From: "谷枫" To: "Shinobu Kinjo" Sent: Saturday, Se

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Sorry about that. I re-attached the crash log files and the mds logs. The mds log shows that the client session timed out and then the mds closed the socket, right? I think this happened after the ceph-fuse crash, so the root cause is the ceph-fuse crash. _usr_bin_ceph-fuse.0.crash.client1.tar.gz

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
In _usr_bin_ceph-fuse.0.crash.client2.tar, what I'm seeing now is: 3 Date: Sat Sep 12 06:37:47 2015 ... 6 ExecutableTimestamp: 1440614242 ... 7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m 10.3.1.11,10.3.1.12,10.3.1.13 /grdata ... 30 7f32de7fe000-7f32deffe000 rw

[ceph-users] Query about contribution regarding monitoring of Ceph Object Storage

2015-09-12 Thread pragya jain
Hello all, I am carrying out research in the area of cloud computing under the Department of CS, University of Delhi. I would like to contribute my research work regarding monitoring of Ceph Object Storage to the Ceph community. Please help me by providing the appropriate link or contact with whom I can connect

[ceph-users] ceph-disk command execute errors

2015-09-12 Thread darko
Hi, I am working on rebuilding a new cluster. I am using Debian wheezy and the http://ceph.com/debian-hammer/ wheezy main repository. I get the error below when running "ceph-deploy osd activate" from the admin node as my ceph user. Note: I ran into all sorts of weird issues with the keyring and manually copied t
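
For reference, a hedged sketch of the prepare/activate sequence being attempted, using a hypothetical node name and disk rather than the poster's actual ones:

  ceph-deploy osd prepare osdnode1:/dev/sdb      # partition and format the data disk
  ceph-deploy osd activate osdnode1:/dev/sdb1    # activate the prepared data partition
  # or run directly on the OSD host:
  sudo ceph-disk prepare /dev/sdb
  sudo ceph-disk activate /dev/sdb1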

[ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson
Hi, I'm reading the documentation about creating new OSDs and I see: "The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and a path to an SSD journal partition. We recommend storing the journal on a separate drive to maximize throughput. You may dedicate a single drive fo
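
For context, a sketch of the HOST:DISK:JOURNAL form the quoted documentation is describing, with hypothetical device names (data on a spinner, journal on an SSD partition):

  ceph-deploy osd prepare osdnode1:/dev/sdb:/dev/sdc1   # data on /dev/sdb, journal on SSD partition /dev/sdc1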

Re: [ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Christian Balzer
Hello, On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote: > Hi, > > I'm reading the documentation about creating new OSDs and I see: > > "The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, > and a path to an SSD journal partition. We recommend storing the journal

Re: [ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson
Hi, Thanks for the reply. Some follow-ups. On 2015-09-12 at 17:30, Christian Balzer wrote: Hello, On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote: Hi, I'm reading the documentation about creating new OSDs and I see: "The foregoing example assumes a disk dedicated to one Ceph

[ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Hello, I’m having a (strange) issue with OSD bucket persistence / affinity on my test cluster. The cluster is a PoC / test setup, by no means production. It consists of a single OSD / MON host + another MON running on a KVM VM. Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be part of the

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread Johannes Formann
Hi, > I’m having a (strange) issue with OSD bucket persistence / affinity on my > test cluster. > > The cluster is a PoC / test setup, by no means production. It consists of a single OSD > / MON host + another MON running on a KVM VM. > > Out of 12 OSDs I’m trying to get osd.10 and osd.11 to be p

Re: [ceph-users] CRUSH odd bucket affinity / persistence

2015-09-12 Thread deeepdish
Johannes, Thank you — "osd crush update on start = false" did the trick. I wasn’t aware that Ceph has automatic placement logic for OSDs (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/9035). This
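
A sketch of the fix being confirmed here, with hypothetical host/bucket names: the config option stops OSDs from being moved back to their calculated location on start, and the crush commands then pin them explicitly.

  # ceph.conf, [osd] section (or per-OSD sections):
  [osd]
      osd crush update on start = false

  # then place the OSDs by hand, for example:
  ceph osd crush set osd.10 1.0 root=default host=osdhost-ssd
  ceph osd crush set osd.11 1.0 root=default host=osdhost-ssd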

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Sorry Shinobu, I don't understand the meaning of what you pasted. Multiple ceph-fuse crashes happened just now today. ceph-fuse is completely unusable for me now. Maybe I must switch to the kernel mount instead. 2015-09-12 20:08 GMT+08:00 Shinobu Kinjo : > In _usr_bin_ceph-fuse.0.crash.client2.tar > > What I'm

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
Can you give us the package version of ceph-fuse? > Multiple ceph-fuse crashes happened just now today. Did you just mount the filesystem, or was there any activity on the filesystem? e.g. writing / reading data. Can you give us the output, on the cluster side, of: ceph -s Shinobu - Original Message - From: "谷枫" To:
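
The requested information can be gathered roughly like this (assuming a Debian/Ubuntu client, as elsewhere in this thread):

  dpkg -s ceph-fuse | grep Version   # package version on the client
  ceph-fuse --version                # version the binary itself reports
  ceph -s                            # cluster status, run on a node with an admin keyring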

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread 谷枫
Yes, when a ceph-fuse instance crashes, the mount point is gone and I can't remount. Rebooting the server is the only thing I can do. But other clients with ceph-fuse mounts on them are working well; I can write / read data on them. ceph-fuse --version ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9af

Re: [ceph-users] ceph-fuse auto down

2015-09-12 Thread Shinobu Kinjo
So you are using the same version on the other clients, but only one client has the problem? Can you provide: /sys/class/net//statistics/* Just do: tar cvf .tar \ /sys/class/net//statistics/* Can you hold off on rebooting when the same issue happens next? No reboot is necessary. But if you have to reboot, of course you
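
A sketch of the requested collection, with a hypothetical interface name and archive name standing in for the placeholders that were stripped above:

  # eth0 and netstats.tar are examples only; substitute the client's real interface
  tar cvf netstats.tar /sys/class/net/eth0/statistics/*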

[ceph-users] 2 replications, flapping can not stop for a very long time

2015-09-12 Thread zhao.ming...@h3c.com
Hi, I have been testing the reliability of Ceph recently, and I have run into the flapping problem. I have 2 replicas, and I cut off the cluster network; now the flapping cannot stop. I have waited more than 30 min, but the status of the OSDs is still not stable. I want to know, when the monitor receives reports from OSDs
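
The settings involved in this down-reporting path, plus the flags often used to suppress flapping while a network is known to be cut, sketched with illustrative values (not recommendations):

  [osd]
      osd heartbeat grace = 20            # seconds without a heartbeat before a peer is reported down
  [mon]
      mon osd min down reporters = 2      # how many OSDs must report a peer down
      mon osd down out interval = 300     # seconds before a down OSD is also marked out

  # while the cluster network is deliberately cut, flapping can be suppressed with:
  ceph osd set nodown
  ceph osd set noout
  # and cleared afterwards:
  ceph osd unset nodown
  ceph osd unset noout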

Re: [ceph-users] RGW Keystone interaction (was Ceph.conf)

2015-09-12 Thread Shinobu Kinjo
> Looked a bit more into this, the Swift APIs seem to support the use > of an admin tenant, user & token for validating the bearer token, > similar to other OpenStack services which use service tenant > credentials for authenticating. Yes, it's just working as middleware under Keystone. > Though it
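
For comparison, the service-credential style being described would look roughly like this in RGW releases that support it (option names hedged as an assumption; the shared-secret approach in the quoted docs uses rgw keystone admin token instead):

  [client.radosgw.gateway]
      rgw keystone url = http://keystone.example.com:35357
      rgw keystone admin user = rgw-service-user     # hypothetical service account
      rgw keystone admin password = SERVICE_PASSWORD
      rgw keystone admin tenant = service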