Re: [ceph-users] RBD Exclusive locks overwritten

2017-12-19 Thread Garuti, Lorenzo
2017-12-19 16:56 GMT+01:00 Wido den Hollander : > > > On 12/19/2017 04:33 PM, Garuti, Lorenzo wrote: > >> Hi all, >> >> we are having a very strange behavior with exclusive locks. >> We have one image called test inside a pool called app. >> >> > The exclusive lock feature is that only one client

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Vasu Kulkarni
> On Dec 19, 2017, at 8:26 AM, Nghia Than wrote: > > Hi, > > My CEPH is stuck at this for a few days, we added new OSDs and nothing changed: Does the new OSD show up in the osd tree? I see all your OSDs at ~80%; the new ones should be at a much lower percentage, or did they get full too? > > • 17 p

[ceph-users] Added two OSDs, 10% of pgs went inactive

2017-12-19 Thread Daniel K
I'm trying to understand why adding OSDs would cause pgs to go inactive. This cluster has 88 OSDs, and had 6 OSDs with device class "hdd_10TB_7.2k". I added two more OSDs, set the device class to "hdd_10TB_7.2k", and 10% of pgs went inactive. I have an EC pool on these OSDs with the profile: user@a
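
For diagnosing this, a minimal sketch of the usual checks (Luminous syntax; the pool name is a placeholder):

  ceph osd tree                                           # the CLASS column should show hdd_10TB_7.2k for the new OSDs
  ceph pg dump_stuck inactive                             # list inactive PGs and the OSDs they map to
  ceph osd pool get <ec-pool-name> erasure_code_profile   # confirm k+m still fits the OSDs available in that class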

Re: [ceph-users] luminous OSD_ORPHAN

2017-12-19 Thread Brad Hubbard
Version? See http://tracker.ceph.com/issues/22346 for a (limited) explanation. On Tue, Dec 19, 2017 at 6:35 PM, Vladimir Prokofev wrote: > Took a little walk and figured it out. > I just added a dummy osd.20 with weight 0.000 in my CRUSH map and set it. > This alone was enough for my cluster to

[ceph-users] Simple RGW Lifecycle processing questions (luminous 12.2.2)

2017-12-19 Thread Bryan Banister
Hi all, How often does the "lc process" run on RGW buckets in a cluster? Also, is it configurable per bucket or anything? I tried searching man pages and the ceph docs with no luck, so any help is appreciated! Thanks! -Bryan
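
As far as I know (worth verifying against the Luminous docs), lifecycle processing runs once per day within a configurable work window that is set cluster-wide, not per bucket. A sketch of the relevant commands and option:

  radosgw-admin lc list      # per-bucket lifecycle processing status
  radosgw-admin lc process   # force a lifecycle pass immediately
  # in ceph.conf, in the RGW client section:
  # rgw lifecycle work time = 00:00-06:00   (the default daily window, as I understand it)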

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Nghia Than
You may try these commands: ceph pg set_nearfull_ratio 0.86 ; ceph pg set_full_ratio 0.9 On Wed, Dec 20, 2017 at 12:45 AM, Jean-Charles Lopez wrote: > Update your ceph.conf file > > JC > > On Dec 19, 2017, at 09:03, Karun Josy wrote: > > Hi, > > That makes sense. > > How can I adjust
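
Note that on Luminous these ratios moved from the PG map to the OSDMap, so the equivalents there would be roughly (values copied from the commands above):

  ceph osd set-nearfull-ratio 0.86
  ceph osd set-full-ratio 0.9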

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
I added more OSDs a few days ago to reduce usage to under 70% (the nearfull and full ratios are higher than this value), but it is still stuck at backfill_toofull while rebalancing data. I tried to change the backfill full ratio and it shows an error (unchangeable) as below: [root@storcp ~]# ceph tell osd.\* injectargs '
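
If this is a Luminous cluster, the backfillfull threshold is no longer an injectable OSD option but an OSDMap setting, which would explain the "unchangeable" result. A sketch (the 0.92 value is only an example):

  ceph osd set-backfillfull-ratio 0.92
  ceph osd dump | grep ratio        # verify the full/backfillfull/nearfull ratios in effect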

[ceph-users] ceph df showing wrong MAX AVAIL for hybrid CRUSH Rule

2017-12-19 Thread Patrick Fruh
Hi, I have the following configuration of OSDs:
ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 0   hdd 5.45599 1.0  5587G 2259G 3327G 40.45 1.10 234
 1   hdd 5.45599 1.0  5587G 2295G 3291G 41.08 1.11 231
 2   hdd 5.45599 1.0  5587G 2321G 3265G 41.56 1.13 232
 3   hdd 5.45

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread David C
What's your backfill full ratio? You may be able to get healthy by increasing your backfill full ratio (in small increments). But your next immediate task should be to add more OSDs or remove data. On 19 Dec 2017 4:26 p.m., "Nghia Than" wrote: Hi, My CEPH is stuck at this for few days, we adde

[ceph-users] Extending OSD disk partition size

2017-12-19 Thread Ben pollard
Hi, I'm struggling to understand how I can increase an OSD's disk space. I'm running Ceph in a cloud environment and I'm using a persistent storage disk for the OSDs. If I increase the size of the disk, say from 100GB to 150GB, and do a resizepart to grow the OSD partition to the size of the new
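
In case it helps, a rough sketch for a FileStore OSD on XFS (assumptions: the data partition is /dev/sdb1 and the OSD is osd.N mounted at /var/lib/ceph/osd/ceph-N; BlueStore needs a different procedure):

  parted /dev/sdb resizepart 1 100%        # grow the partition to the end of the enlarged disk
  xfs_growfs /var/lib/ceph/osd/ceph-N      # grow the filesystem into the new space (works online)
  systemctl restart ceph-osd@N             # restart so the OSD re-reports its capacity (may not be strictly required)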

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
Update your ceph.conf file. JC > On Dec 19, 2017, at 09:03, Karun Josy wrote: > > Hi, > > That makes sense. > > How can I adjust the osd nearfull ratio? I tried this, however it didn't > change. > > $ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86" > mon.mon-a1: injectargs:mon_o

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Karun Josy
Hi, That makes sense. How can I adjust the osd nearfull ratio? I tried this, however it didn't change. $ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86" mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.86' (not observed, change may require restart) mon.mon-a2: injectargs:mon_os

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
OK so it’s telling you that the near full OSD holds PGs for these three pools. JC > On Dec 19, 2017, at 08:05, Karun Josy wrote: > > No, I haven't. > > Interestingly, the POOL_NEARFULL flag is shown only when there is > OSD_NEARFULL flag. > I have recently upgraded to Luminous 12.2.2, haven'

[ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
Hi, my CEPH cluster has been stuck at this for a few days; we added new OSDs and nothing changed:
- 17 pgs backfill_toofull
- 17 pgs stuck unclean
- recovery 21/5156264 objects degraded (0.000%)
- recovery 52908/5156264 objects misplaced (1.026%)
- 8 near full osd(s)
And here is my ceph health detail:

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Cary
Karun, You can check how much data each OSD has with "ceph osd df":
ID CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 1   hdd 1.84000 1.0  1885G  769G 1115G 40.84 0.97 101
 3   hdd 4.64000 1.0  4679G 2613G 2065G 55.86 1.33 275
 4   hdd 4.6400
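
If the goal is to even out utilization, a sketch of the usual reweight commands (the threshold and OSD id are only examples):

  ceph osd test-reweight-by-utilization 110   # dry run: shows what would be reweighted
  ceph osd reweight-by-utilization 110        # reweight OSDs more than 10% above the mean utilization
  ceph osd reweight osd.22 0.9                # or nudge a single overfull OSD by hand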

Re: [ceph-users] RBD Exclusive locks overwritten

2017-12-19 Thread Jason Dillaman
Starting with the 4.12 kernel (I believe), you can pass the "--exclusive" option to "rbd map" to disable automatic lock passing (or pass it by appending "exclusive" to the map options). On Tue, Dec 19, 2017 at 10:56 AM, Wido den Hollander wrote: > > > On 12/19/2017 04:33 PM, Garuti, Lorenzo wrote
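
Concretely, the two forms mentioned would look roughly like this (pool/image names taken from the thread; requires kernel 4.12+):

  rbd map --exclusive app/test
  rbd map -o exclusive app/test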

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Karun Josy
No, I haven't. Interestingly, the POOL_NEARFULL flag is shown only when there is an OSD_NEARFULL flag. I recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1. Karun Josy On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez wrote: > Hi > > did you set quotas on these pools?

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Jean-Charles Lopez
Hi, did you set quotas on these pools? See this page for an explanation of most error messages: http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full JC > On Dec 19, 2017, at 01:48, Karun J
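
Checking for quotas is straightforward (pool names taken from the original report):

  ceph osd pool get-quota templates
  ceph osd pool get-quota cvm
  ceph osd pool get-quota ecpool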

Re: [ceph-users] RBD Exclusive locks overwritten

2017-12-19 Thread Wido den Hollander
On 12/19/2017 04:33 PM, Garuti, Lorenzo wrote: Hi all, we are having a very strange behavior with exclusive locks. We have one image called test inside a pool called app. The exclusive lock feature means that only one client can write at the same time, so they will exchange the lock when need

Re: [ceph-users] radosgw: Couldn't init storage provider (RADOS)

2017-12-19 Thread Jean-Charles Lopez
Hi, try having a look at:
- network connectivity issues
- firewall configuration issues
- missing or inaccessible keyring file for client.rgw.ceph-rgw1
- missing or inaccessible ceph.conf file
Regards JC Lopez Senior Technical Instructor, Global Storage Consulting Practice Red Hat, Inc. jelo...@
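
A quick way to test several of those points from the RGW host (the keyring path here is an assumption; adjust to your deployment):

  ceph auth get client.rgw.ceph-rgw1                 # does the key exist on the cluster side?
  ceph -n client.rgw.ceph-rgw1 --keyring /var/lib/ceph/radosgw/ceph-rgw.ceph-rgw1/keyring -s   # can that identity reach the mons?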

[ceph-users] Backfill/Recovery speed with small objects

2017-12-19 Thread Michal Fiala
Hello, we are testing a Ceph cluster for storing small files (4KiB - 256KiB). The cluster has 4 OSD servers, each with:
- 16 CPUs
- 32G RAM
- 3x2T HDD + 4x1T HDD (7200 rpm)
- 1400G SSD
- cluster network 1Gbps, public network 1Gbps
BlueStore storage, 100G partition for the BlueStore DB per OSD. OS Ubuntu1
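
For backfill/recovery throughput, the usual knobs are the per-OSD throttles; a sketch (values are examples only, and raising them increases the impact on client I/O):

  ceph tell osd.\* injectargs '--osd_max_backfills 2 --osd_recovery_max_active 3'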

[ceph-users] RBD Exclusive locks overwritten

2017-12-19 Thread Garuti, Lorenzo
Hi all, we are having very strange behavior with exclusive locks. We have one image called test inside a pool called app. This is the output of rbd status app/test:
rbd image 'test':
  size 120 GB in 30720 objects
  order 22 (4096 kB objects)
  block_name_prefix: rbd_data.651bb238e1f29
  format

Re: [ceph-users] Copy RBD image from replicated to erasure pool possible?

2017-12-19 Thread Jason Dillaman
Running "rbd help " will provide documentation for all accepted optionals (there is also bash completion for all optionals as well). On Mon, Dec 18, 2017 at 4:49 PM, Caspar Smit wrote: > Hi all, > > Allthough undocumented, i just tried: > > "rbd -p rbd copy disk1 disk1ec --data-pool ecpool" > > A
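
For example, to check and run the copy discussed in this thread (the copy command is taken verbatim from Caspar's message):

  rbd help copy
  rbd -p rbd copy disk1 disk1ec --data-pool ecpool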

Re: [ceph-users] How to fix mon scrub errors?

2017-12-19 Thread Jens-U. Mozdzen
Hi Burkhard, Quoting Burkhard Linke: Hi, since the upgrade to luminous 12.2.2 the mons are complaining about scrub errors: 2017-12-13 08:49:27.169184 mon.ceph-storage-03 [ERR] scrub mismatch Today two such messages turned up here, too, in a cluster upgraded to 12.2.2 over the weekend.

Re: [ceph-users] Ceph over IP over Infiniband

2017-12-19 Thread Дробышевский , Владимир
Hello, Phil! I've never tried ConnectX-2 or the "repository" software versions, but my setup feels pretty good with Mellanox OFED. AFAIK the latest OFED version (4.x) has dropped ConnectX-2 support, but you can try the 3.4 versio

Re: [ceph-users] using different version of ceph on cluster and client?

2017-12-19 Thread Mark Schouten
Hi, On Tuesday, 19 December 2017 11:37:27 CET 13605702...@163.com wrote: > my ceph cluster is using Jewel on centos 7.3, kernel 3.10; > while our business is running on centos 6.8, kernel 2.6.32, and wants to use rbd; > > is it ok to use Hammer on the client? > or which version of ceph should be installed o

[ceph-users] POOL_NEARFULL

2017-12-19 Thread Karun Josy
Hello, In one of our clusters, health is showing these warnings:
OSD_NEARFULL 1 nearfull osd(s)
    osd.22 is near full
POOL_NEARFULL 3 pool(s) nearfull
    pool 'templates' is nearfull
    pool 'cvm' is nearfull
    pool 'ecpool' is nearfull
One OSD is above 85% used, whi

Re: [ceph-users] luminous OSD_ORPHAN

2017-12-19 Thread Vladimir Prokofev
Took a little walk and figured it out. I just added a dummy osd.20 with weight 0.000 in my CRUSH map and set it. This alone was enough for my cluster to assume that only this osd.20 was orphaned; the others disappeared. Then I just did $ ceph osd crush remove osd.20 and now my cluster has no orphaned O
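
The CRUSH-map round trip described above looks roughly like this, as a sketch (osd.20 is the dummy id from the message; file names are placeholders):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt      # decompile, then add "device 20 osd.20" and a 0-weight item in a host bucket by hand
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new
  ceph osd crush remove osd.20             # afterwards, drop the dummy entry again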