Re: [ceph-users] Access denied error for list users

2014-05-21 Thread alain.dechorgnat
There are no details with GET /admin/metadata/user, only ids. For PHP, have a look at http://ceph.com/docs/master/radosgw/s3/php/ -- Alain. From: Shanil S [mailto:xielessha...@gmail.com] Sent: Wednesday, 21 May 2014 05:48 To: DECHORGNAT Alain IMT/OLPS Subject: Re: [ceph-users] Access denied error for
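If the goal is full details for every user, a rough sketch of the two-step approach described here, using the radosgw-admin CLI on a gateway node (the uid "johndoe" is a hypothetical example):

  # list only the user ids (the metadata listing returns ids, not details)
  radosgw-admin metadata list user
  # then fetch the full record for each id returned, one at a time
  radosgw-admin user info --uid=johndoe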

Re: [ceph-users] Data still in OSD directories after removing

2014-05-21 Thread Olivier Bonvalet
Hi, I have a lot of space wasted by this problem (about 10GB per OSD, just for this RBD image). If OSDs can't detect orphan files, should I manually detect them, then remove them? This command can do the job, at least for this image prefix: find /var/lib/ceph/osd/ -name 'rb.0.14bfb5a.238e1
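A sketch of the kind of per-OSD search being described, with a hypothetical block-name prefix in place of the real one (the real prefix comes from 'rbd info <image>' taken before the image is deleted); note it only lists matches, it does not remove anything:

  # find on-disk object files whose names carry the image's block-name prefix
  find /var/lib/ceph/osd/ -name 'rb.0.xxxx.yyyy.*' -ls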

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-21 Thread Kenneth Waegeman
Thanks! I increased the max processes parameter for all daemons quite a lot (up to ulimit -u 3802720). These are the limits for the daemons now: [root@ ~]# cat /proc/17006/limits Limit / Soft Limit / Hard Limit / Units ... Max cpu time unlimited
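A small sketch of the same check without hard-coding a pid (assumes the daemon binary is named ceph-osd; pgrep -o just picks the oldest matching pid):

  # show the limits an already-running OSD daemon actually got
  pid=$(pgrep -o ceph-osd)
  grep -E 'Max processes|Max open files' /proc/$pid/limits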

Re: [ceph-users] Access denied error for list users

2014-05-21 Thread Shanil S
Hi Alain, Thanks for your reply. Do you mean we can't list out all users with complete user details using GET /admin/metadata/user or using GET /admin/user? Yes, I checked http://ceph.com/docs/master/radosgw/s3/php/ and it contains only the bucket operations and not any admin operations like lis

Re: [ceph-users] 70+ OSD are DOWN and not coming up

2014-05-21 Thread Karan Singh
Hello Sage, nodown and noout are set on the cluster. # ceph status cluster 009d3518-e60d-4f74-a26d-c08c1976263c health HEALTH_WARN 1133 pgs degraded; 44 pgs incomplete; 42 pgs stale; 45 pgs stuck inactive; 42 pgs stuck stale; 2602 pgs stuck unclean; recovery 206/2199 objects degraded (9.368%); 40/1
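For reference, a sketch of setting and later clearing the flags mentioned above (they stop OSDs from being marked down/out while you investigate):

  ceph osd set nodown
  ceph osd set noout
  # ... once the OSDs are stable again ...
  ceph osd unset nodown
  ceph osd unset noout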

[ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-21 Thread 10 minus
Hi, I have just started to dabble with ceph - went through the docs http://ceph.com/howto/deploying-ceph-with-ceph-deploy/ I have a 3-node setup with 2 nodes for OSDs and I use the ceph-deploy mechanism. The ceph init scripts expect the cluster config file to be ceph.conf. If I give any other name the init s
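For context, a sketch of the usual ceph-deploy OSD step on a whole disk (hostnames and the sdb device are hypothetical placeholders for this setup):

  # prepare and activate an OSD on a whole disk in one step
  ceph-deploy osd create osdnode1:sdb osdnode2:sdb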

Re: [ceph-users] Ceph Firefly on Centos 6.5 cannot deploy osd

2014-05-21 Thread ceph
Hi, When you just create a cluster, with no OSDs, HEALTH_ERR is "normal". It means that your storage is damaged, but you don't care since you have no storage at this point. About your OSDs, I think you should create a partition on your disks (a single partition, properly aligned, etc.), instead of
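A minimal sketch of that suggestion, assuming the OSD disk is /dev/sdb (a hypothetical device) and a GPT label is acceptable:

  # one full-disk partition, aligned to 1MiB
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart primary xfs 1MiB 100%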

[ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Sharmila Govind
Hi, I am new to Ceph. I have a storage node with 2 OSDs. I am trying to figure out which physical device/partition each of the OSDs is attached to. Is there a command that can be executed on the storage node to find this out? Thanks in advance, Sharmila

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Mike Dawson
Perhaps: # mount | grep ceph - Mike Dawson On 5/21/2014 11:00 AM, Sharmila Govind wrote: Hi, I am new to Ceph. I have a storage node with 2 OSDs. Iam trying to figure out to which pyhsical device/partition each of the OSDs are attached to. Is there are command that can be executed in the st

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Sharmila Govind
Hi Mike, Thanks for your quick response. When I try mount on the storage node this is what I get: root@cephnode4:~# mount /dev/sda1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse

Re: [ceph-users] Data still in OSD directories after removing

2014-05-21 Thread Sage Weil
Hi Olivier, On Wed, 21 May 2014, Olivier Bonvalet wrote: > Hi, > > I have a lot of space wasted by this problem (about 10GB per OSD, just > for this RBD image). > If OSDs can't detect orphans files, should I manually detect them, then > remove them ? > > This command can do the job, at least for

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Mike Dawson
Looks like you may not have any OSDs properly set up and mounted. It should look more like: user@host:~# mount | grep ceph /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64) /dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime,inode64) /dev/sdd1 on /var/lib/ceph/osd/ceph-2

Re: [ceph-users] Problem with radosgw and some file name characters

2014-05-21 Thread Yehuda Sadeh
On Tue, May 20, 2014 at 4:13 AM, Andrei Mikhailovsky wrote: > Anyone have any idea how to fix the problem with getting 403 when trying to > upload files with none standard characters? I am sure I am not the only one > with these requirements. It might be the specific client that you're using and

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Sage Weil
You might also try 'ceph-disk list'. sage On Wed, 21 May 2014, Mike Dawson wrote: > Looks like you may not have any OSDs properly setup and mounted. It should > look more like: > > user@host:~# mount | grep ceph > /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64) > /dev/sdc1 o

[ceph-users] Inter-region data replication through radosgw

2014-05-21 Thread Fabrizio G. Ventola
Hi everybody, I'm reading the doc regarding replication through radosgw. It talks just about inter-region METAdata replication, nothing about data replication. My question is: is it possible to have (everything) geo-replicated through radosgw? Actually we have 2 Ceph clusters (geographically separated) i

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Jimmy Lu
This would give you a pretty good understanding of where the mounts and /dev/sd* are. [jlu@gfsnode1 osd]$ ceph-disk list; pwd; ls -lai /dev/sda : /dev/sda1 other, mounted on /boot /dev/sda2 other /dev/sdb other, unknown, mounted on /ceph/osd120 /dev/sdc other, unknown, mounted on /ceph/osd121 /dev/sd

[ceph-users] CephFS MDS Setup

2014-05-21 Thread Scottix
I am setting up a CephFS cluster and wondering about the MDS setup. I know you are still hesitant to put the stable label on it, but I have a few questions about what would be an adequate setup. I know active/active is not developed yet, so that is pretty much out of the question right now. What about active st
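For what it's worth, a sketch of the single-active-plus-standby layout being asked about, using ceph-deploy (hostnames are hypothetical; in this era any extra ceph-mds daemon simply waits as a standby):

  # create MDS daemons on two hosts; one becomes active, the other standby
  ceph-deploy mds create mdshost1 mdshost2
  # keep the number of active MDS ranks at one
  ceph mds set_max_mds 1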

Re: [ceph-users] CephFS MDS Setup

2014-05-21 Thread Wido den Hollander
On 05/21/2014 09:04 PM, Scottix wrote: I am setting a CephFS cluster and wondering about MDS setup. I know you are still hesitant to put the stable label on it but I have a few questions what would be an adequate setup. I know active active is not developed yet so that is pretty much out of the

[ceph-users] v0.67.9 Dumpling released

2014-05-21 Thread Sage Weil
This Dumpling point release fixes several minor bugs. The most prevalent in the field is one that occasionally prevents OSDs from starting on recently created clusters. We recommend that all v0.67.x Dumpling users upgrade at their convenience. Notable Changes --- * ceph-fuse, libce

[ceph-users] RBD cache pool - not cleaning up

2014-05-21 Thread Michael
Hi All, Experimenting with cache pools for RBD, I created two pools, slowdata-hot backed by slowdata-cold. Set the maximum data stored in the hot pool to 100GB, with data to be flushed to cold above 40% hot usage. Created a 100GB RBD image, mounted it, tested reading/writing, then dumped in 80GB of data. Al
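Roughly the kind of cache-tier wiring being described, using the poster's pool names; the exact thresholds below are illustrative, not taken from the post:

  ceph osd tier add slowdata-cold slowdata-hot
  ceph osd tier cache-mode slowdata-hot writeback
  ceph osd tier set-overlay slowdata-cold slowdata-hot
  ceph osd pool set slowdata-hot target_max_bytes 107374182400    # ~100GB cap
  ceph osd pool set slowdata-hot cache_target_dirty_ratio 0.4     # flush above 40% dirty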

Re: [ceph-users] RBD cache pool - not cleaning up

2014-05-21 Thread Sage Weil
On Wed, 21 May 2014, Michael wrote: > Hi All, > > Experimenting with cache pools for RBD, created two pools, slowdata-hot backed > by slowdata-cold. Set up max data to be stored in hot to be 100GB, data to be > moved to cold above 40% hot usage. Created a 100GB RBD image, mounted it > tested readi

Re: [ceph-users] RBD cache pool - not cleaning up

2014-05-21 Thread Michael
Thanks Sage, the cache system looks pretty great so far. Combined with erasure coding it's really adding a lot of options. -Michael On 21/05/2014 21:54, Sage Weil wrote: On Wed, 21 May 2014, Michael wrote: Hi All, Experimenting with cache pools for RBD, created two pools, slowdata-hot backe

Re: [ceph-users] Data still in OSD directories after removing

2014-05-21 Thread Olivier Bonvalet
On Wednesday, 21 May 2014 at 08:20 -0700, Sage Weil wrote: > > You should definitely not do this! :) Of course ;) > > You're certain that that is the correct prefix for the rbd image you > removed? Do you see the objects lists when you do 'rados -p rbd ls - | > grep '? I'm pretty sure yes
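A sketch of the check Sage is suggesting, with a hypothetical prefix standing in for the elided one:

  # does RADOS itself still list objects carrying the removed image's prefix?
  rados -p rbd ls | grep 'rb.0.xxxx'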

[ceph-users] Quota Management in CEPH

2014-05-21 Thread Vilobh Meshram
Hi All, I want to understand how Ceph users go about quota management when Ceph is used with OpenStack. 1. Is it recommended to use a common pool, say “volumes”, for creating volumes, shared by all tenants? In this case a common keyring, ceph.common.keyring, will be shared across
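For reference, a sketch of the shared-pool variant being described: one "volumes" pool plus a cephx identity restricted to it (the client name and PG count are illustrative, not a recommendation):

  ceph osd pool create volumes 128
  ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes'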

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-21 Thread Gregory Farnum
On Wed, May 21, 2014 at 3:52 AM, Kenneth Waegeman wrote: > Thanks! I increased the max processes parameter for all daemons quite a lot > (until ulimit -u 3802720) > > These are the limits for the daemons now.. > [root@ ~]# cat /proc/17006/limits > Limit Soft Limit Har

Re: [ceph-users] Inter-region data replication through radosgw

2014-05-21 Thread Craig Lewis
On 5/21/14 09:02, Fabrizio G. Ventola wrote: Hi everybody, I'm reading the doc regarding the replication through radosgw. It talks just about inter-region METAdata replication, nothing about data replication. My question is, it's possible to have (everything) geo-replicated through radosgw? Ac
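Data replication between zones in this era was driven by the separate radosgw-agent process; a rough sketch, assuming a sync configuration file has already been written per the federated-gateway docs (the path is hypothetical):

  # run the sync agent against a prepared configuration
  radosgw-agent -c /etc/ceph/radosgw-agent/zone-data-sync.conf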

Re: [ceph-users] Data still in OSD directories after removing

2014-05-21 Thread Josh Durgin
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote: Le mercredi 21 mai 2014 à 08:20 -0700, Sage Weil a écrit : You're certain that that is the correct prefix for the rbd image you removed? Do you see the objects lists when you do 'rados -p rbd ls - | grep '? I'm pretty sure yes : since I didn't s

Re: [ceph-users] 70+ OSD are DOWN and not coming up

2014-05-21 Thread Craig Lewis
On 5/20/14 08:18, Sage Weil wrote: On Tue, 20 May 2014, Karan Singh wrote: Hello Cephers, need your suggestion for troubleshooting. My cluster is terribly struggling, 70+ OSDs are down out of 165. Problem: OSDs are getting marked out of the cluster and are down. The cluster is degraded. On checki

Re: [ceph-users] Quota Management in CEPH

2014-05-21 Thread Josh Durgin
On 05/21/2014 03:29 PM, Vilobh Meshram wrote: Hi All, I want to understand on how do CEPH users go about Quota Management when CEPH is used with Openstack. 1. Is it recommended to use a common pool say “volumes” for creating volumes which is shared by all tenants ? In this case a common

Re: [ceph-users] rbd watchers

2014-05-21 Thread Mandell Degerness
The times I have seen this message, it has always been because there are snapshots of the image that haven't been deleted yet. You can see the snapshots with "rbd snap list ". On Tue, May 20, 2014 at 4:26 AM, James Eckersall wrote: > Hi, > > > > I'm having some trouble with an rbd image. I want
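A sketch of the usual cleanup when leftover snapshots block removal (pool and image names are hypothetical):

  rbd snap list mypool/myimage        # see what snapshots still exist
  rbd snap purge mypool/myimage       # remove all snapshots of the image
  rbd rm mypool/myimage               # then retry removing the image itself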

[ceph-users] Questions about zone and disater recovery

2014-05-21 Thread wsnote
Hi, everyone! I have 2 Ceph clusters, one master zone and one secondary zone. Now I have some questions. 1. Can Ceph have two or more secondary zones? 2. Can the roles of master zone and secondary zone be swapped? I mean, can I change the secondary zone to be master and the master zone to sec

Re: [ceph-users] 70+ OSD are DOWN and not coming up

2014-05-21 Thread Sage Weil
On Wed, 21 May 2014, Craig Lewis wrote: > If you do this over IRC, can you please post a summary to the mailing > list? > > I believe I'm having this issue as well. In the other case, we found that some of the OSDs were behind processing maps (by several thousand epochs). The trick here to gi
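A sketch of how to see whether an OSD is lagging on maps (the osd id is a hypothetical example; the daemon command is run on the host where that OSD lives):

  ceph osd stat                 # reports the cluster's current osdmap epoch
  ceph daemon osd.12 status     # shows the oldest/newest map this daemon has seen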

Re: [ceph-users] Inter-region data replication through radosgw

2014-05-21 Thread wsnote
Hi, Lewis! With your approach, there will be a contradiction because of the limits of a secondary zone. In a secondary zone, one can't do any file operations. Let me give an example. I define the symbols first. The instances of cluster 1: M1: master zone of cluster 1 S2: slave zone for M2 of cluster 2, th