[ceph-users] Cluster always scrubbing.

2015-11-23 Thread Mika c
Hi cephers, We are facing a scrub issue. Our CEPH cluster is running Trusty / Hammer 0.94.1 and has almost 320 OSD disks on 10 nodes, and there are more than 30,000 PGs in the cluster. The cluster worked fine until last week. We found the cluster health status started displaying "active+clean+scrubbing+d
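
A minimal troubleshooting sketch for this kind of stuck-scrub report, assuming a Hammer-era cluster; the commands are standard, but the idea of pausing scrubs to confirm they are the cause is a suggestion, not something taken from the thread:

    # Count PGs currently scrubbing / deep-scrubbing
    ceph health detail
    ceph pg dump | grep scrubbing | wc -l

    # Temporarily pause new scrubs to confirm they are behind the slow requests
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # ...observe, then re-enable
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

    # Keep concurrent scrubs per OSD at the default of 1
    ceph tell osd.* injectargs '--osd_max_scrubs 1'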

Re: [ceph-users] Cluster always scrubbing.

2015-11-23 Thread Mika c
; Thanks > > On Mon, Nov 23, 2015 at 9:32 AM, Mika c wrote: > >> Hi cephers, >> We are facing a scrub issue. Our CEPH cluster is using Trusty / Hammer >> 0.94.1 and have almost 320 OSD disks on 10 nodes. >> And there are more than 30,000 PGs on cluster. >>

Re: [ceph-users] Cluster always scrubbing.

2015-11-24 Thread Mika c
Hi, > > That seems very odd - what do the logs say for the osds with slow requests? > > Thanks > > On Tue, Nov 24, 2015 at 2:20 AM, Mika c wrote: > >> Hi Sean, >> Yes, the cluster has been in scrubbing status (scrub + deep scrub) for almost two >> weeks. >> And

[ceph-users] Try to find the right way to enable rbd-mirror.

2016-05-12 Thread Mika c
Hi cephers, I am wondering if someone familiar with the rbd-mirror function can give me some hints. There are 2 clusters, site1 and site2, both deployed with ceph-deploy. Here are the steps of my test, on Ubuntu 14.04 (kernel 3.19). 1.) On site1 : Copy ceph.conf and ceph.cli
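
For reference, a minimal sketch of the Jewel-era rbd-mirror setup this test is attempting; the pool name, image name, and client name below are placeholders, not values taken from the thread:

    # On both clusters: enable pool-level mirroring
    rbd --cluster site1 mirror pool enable rbd pool
    rbd --cluster site2 mirror pool enable rbd pool

    # Register each cluster as a peer of the other (needs the peer's conf/keyring locally)
    rbd --cluster site1 mirror pool peer add rbd client.admin@site2
    rbd --cluster site2 mirror pool peer add rbd client.admin@site1

    # Only images with exclusive-lock + journaling enabled are replicated
    rbd --cluster site1 feature enable rbd/image1 exclusive-lock journaling

    # Run the rbd-mirror daemon on the backup site (no systemd on Ubuntu 14.04)
    rbd-mirror --cluster site2 --setuser ceph --setgroup ceph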

Re: [ceph-users] Try to find the right way to enable rbd-mirror.

2016-05-13 Thread Mika c
7f707bfff700 20 rbd-mirror: Replayer::set_sources: enter -log end Do I need to upgrade the kernel to 4.4 or higher? Best wishes, Mika 2016-05-12 21:10 GMT+08:00 Jason Dillaman : > On Thu, May 12, 2016 at 6:33 AM, Mika c wrote: > > > 4.) B

Re: [ceph-users] ceph-rest-api's behavior

2017-03-23 Thread Mika c
Hi all, Same question with CEPH 10.2.3 and 11.2.0. Is this command only for client.admin? client.symphony key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg== caps: [mon] allow * caps: [osd] allow * Traceback (most recent call last): File "/usr/bin/ceph-rest-api", line
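
A minimal sketch of running ceph-rest-api as a non-admin client; the client name and key are the ones shown above, but the keyring path and invocation are assumptions:

    # /etc/ceph/ceph.client.symphony.keyring (conventional path, readable by the user running the API)
    [client.symphony]
        key = AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==

    # Start the API with that identity instead of the default client.restapi
    ceph-rest-api -c /etc/ceph/ceph.conf -n client.symphony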

Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Mika c
-03-24 16:21 GMT+08:00 Brad Hubbard : > On Fri, Mar 24, 2017 at 4:06 PM, Mika c wrote: > > Hi all, > > Same question with CEPH 10.2.3 and 11.2.0. > > Is this command only for client.admin ? > > > > client.symphony > >key: AQD0td

Re: [ceph-users] ceph-rest-api's behavior

2017-03-27 Thread Mika c
: > On Fri, Mar 24, 2017 at 8:20 PM, Mika c wrote: > > Hi Brad, > > Thanks for your reply. I have already created the keyring file > and > > put it in /etc/ceph but it is not working. > > What was it called? > > > I have to write confi

[ceph-users] Detail of log level

2016-01-05 Thread Mika c
Dear cephers, Are there any documents that explain the details of the log levels? When using librados to access ceph, the result only displays true or false. Can I get more specific details (like source client IP, object name) from the log? If the answer is yes, then which subsystem log should I add into ce
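
A hedged example of the kind of client-side ceph.conf settings involved; which subsystem actually exposes the client IP and object name is exactly what is being asked, so treat the chosen levels as assumptions:

    [client]
        log file = /var/log/ceph/client.$name.log
        debug ms = 1         # messenger logging shows peer IP:port
        debug rados = 20     # librados call tracing
        debug objecter = 20  # per-operation / per-object detail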

Re: [ceph-users] Upgrading with mon & osd on same host

2016-02-03 Thread Mika c
Hi, > *Do the packages (Debian) restart the services upon upgrade?* No need; restart them yourself. > *Do I need to actually stop all OSDs, or can I upgrade them one by one?* No need to stop. Just upgrade the OSD servers one by one and restart each OSD daemon. Best wishes, Mika 2016-02-03 18:
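
A minimal sketch of that per-node flow on Trusty/Upstart; the OSD id is illustrative and the noout step is an addition, not something stated above:

    ceph osd set noout                 # avoid rebalancing while daemons restart

    # On each OSD node, one at a time:
    sudo apt-get update && sudo apt-get install -y ceph   # packages upgrade without restarting daemons
    sudo restart ceph-osd-all          # Upstart; or restart one daemon: sudo restart ceph-osd id=3
    ceph -s                            # wait for all PGs active+clean before the next node

    ceph osd unset noout               # once every node is done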

Re: [ceph-users] learning about increasing osd / pg_num for a pool

2016-02-19 Thread Mika c
Hi John, 1. How can I see/determine the number of OSDs a pool can access? 2. For ".rgw", ".rgw.buckets", and ".rgw.buckets.index" how should I plan out the PG number for these? We're only doing object storage, so .rgw.buckets will get the most objects, and .rgw.buckets.index will (I assume) get a
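
For question 2, a rough worked example of the usual rule of thumb; the OSD count and data-share estimates are made up for illustration:

    # total PGs ~ (OSDs * 100) / replica size, rounded up to a power of two,
    # then split across pools weighted by expected data.
    # e.g. 30 OSDs, size 3 -> ~1000 PGs to distribute, so roughly:
    #   .rgw.buckets        (most of the data)   -> pg_num 512
    #   .rgw.buckets.index  (small, omap-heavy)  -> pg_num 64
    #   .rgw                (tiny)               -> pg_num 32

    # For question 1: a pool reaches the OSDs selected by its CRUSH ruleset
    ceph osd pool get .rgw.buckets crush_ruleset
    ceph osd crush rule dump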

[ceph-users] Is there an api to list all s3 user

2016-03-15 Thread Mika c
Hi all, I am trying to find an API that can list all S3 users, like the command 'radosgw-admin metadata list user', but I cannot find any related documentation. Does anyone know how to get this information? Any comments will be much appreciated! Best wishes, Mika
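
A hedged sketch of two common approaches; the Admin Ops metadata endpoint and the admin uid are assumptions, not something confirmed in this thread:

    # CLI, machine-readable:
    radosgw-admin metadata list user --format json

    # REST: grant an RGW user metadata caps, then query the admin API with that
    # user's S3 credentials (signed request)
    radosgw-admin caps add --uid=admin --caps="metadata=read"
    # GET /admin/metadata/user is assumed to return the same list of user ids.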

[ceph-users] How to enable civetweb log in Infernalis (or Jewel)

2016-03-22 Thread Mika c
Hi Cephers, Setting "rgw frontends = access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log" works in Firefly and Giant, but in Infernalis and Jewel the setting has no effect and the logs stay empty. Does anyone know how to set the civetweb log in newer CEPH correctl

Re: [ceph-users] How to enable civetweb log in Infernalis (or Jewel)

2016-03-22 Thread Mika c
Hi Cephers, I didn't notice that the user had already changed from root to ceph. After changing the directory permissions, the problem is fixed. Thank you all. Best wishes, Mika 2016-03-22 16:50 GMT+08:00 Mika c : > Hi Cephers, > Setting "rgw frontends = > access_log_fil
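
A minimal sketch of that fix, assuming the default ceph user/group and a Jewel-style systemd unit name for the gateway:

    sudo mkdir -p /var/log/civetweb
    sudo chown ceph:ceph /var/log/civetweb
    # restart the gateway so civetweb reopens its log files
    sudo systemctl restart ceph-radosgw@rgw.$(hostname -s)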

[ceph-users] How many mds node that ceph need.

2016-03-24 Thread Mika c
Hi cephers, If I want to activate more than one MDS node, does Ceph need an odd number of MDSs? Another question: if more than one MDS is activated, how many MDSs can be lost? (Say 3 MDS nodes are activated at the same time.) Best wishes, Mika

[ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-20 Thread Mika c
Hi cephers, I read the post "CEPH Jewel Preview" before. Following its steps, I can map and mount an rbd image to /dev/nbd successfully. But I cannot write any files; the error message is "Read-only file system". I
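
A short sketch for checking where the read-only flag comes from; the device and image names are placeholders:

    # Map without --read-only and note the returned device
    sudo rbd-nbd map rbd/image1

    # 1 means the block device itself is read-only (not just the filesystem);
    # snapshots always map read-only, a plain image should not
    cat /sys/block/nbd0/ro
    sudo blockdev --getro /dev/nbd0

    sudo rbd-nbd unmap /dev/nbd0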

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Mika c
Hi cephers, I had the same issue too, but the command "rbd feature disable" is not working for me. Any comment will be appreciated. $sudo rbd feature disable timg1 deep-flatten fast-diff object-map exclusive-lock rbd: failed to update image features: (22) Invalid argument 2016-04-21 15:53:10.260671
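
One workaround often suggested for this error is to drop the features one at a time, dependents first, rather than all in one command; this is an assumption about the cause, not a confirmed fix from this thread:

    rbd feature disable timg1 fast-diff
    rbd feature disable timg1 object-map
    rbd feature disable timg1 exclusive-lock
    rbd feature disable timg1 deep-flatten
    rbd info timg1    # verify the remaining feature list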

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Mika c
16:56 GMT+08:00 席智勇 : > That's true for me too. > You can disable them by setting them in the conf file. > > #ceph.conf > rbd_default_features = 3 > # means only layering and striping are enabled > > 2016-04-21 16:00 GMT+08:00 Mika c : > >> Hi cephers, >> Had the same issue