[ceph-users] Cephfs Snapshots - Usable in Single FS per Pool Scenario ?

2018-01-29 Thread Paul Kunicki
I know that snapshots on Cephfs are experimental and that a known issue exists with multiple filesystems on one pool, but I was surprised at the result of the following: I attempted to take a snapshot of a directory in a pool with a single fs on our properly configured Luminous cluster. I found tha
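For context, a CephFS snapshot is taken by creating a directory under the hidden .snap directory of the directory being snapshotted; on Luminous the (experimental) feature usually has to be enabled first. A minimal sketch, assuming a filesystem named "cephfs" mounted at /mnt/cephfs (some releases also require an explicit confirmation flag):
$: ceph fs set cephfs allow_new_snaps true
$: mkdir /mnt/cephfs/mydir/.snap/my-snapshot
$: rmdir /mnt/cephfs/mydir/.snap/my-snapshot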

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
Works great, seemed to have a lot less impact than just letting it peer all PGs at the same time. Used an increment of 0.05 without issue; then a ceph tell 'osd.*' injectargs '--osd-max-backfills 2' seems to keep the HDD at around 85-100% util, but not really affecting the clients. Solid advice,
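For reference, the gradual approach described here looks roughly like this (osd.11 and the 0.05 step size are examples taken from this thread, not a prescription):
$: ceph tell 'osd.*' injectargs '--osd-max-backfills 2'
$: ceph osd reweight 11 0.05
(wait for recovery to settle, then repeat with 0.10, 0.15, ... until the reweight is back at 1.0)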

Re: [ceph-users] BlueStore "allocate failed, wtf" error

2018-01-29 Thread Brad Hubbard
On Tue, Jan 30, 2018 at 7:31 AM, Sergey Malinin wrote: > Hello, > Can anyone help me interpret the below error, which one of our OSDs has been > occasionally throwing since last night. > Thanks. See http://tracker.ceph.com/issues/22510. > > - > Jan 29 03:00:53 osd-host ceph-osd[10964]: 2018-

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
Thanks Steve! So the peering won't actually move any blocks around, but will make sure that all PGs know what state they are in? That means that when I start increasing reweight, PGs will be allocated to the disk, but won't actually recover yet. However, they will be set as "degraded". So when

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread Steve Taylor
There are two concerns with setting the reweight to 1.0. The first is peering and the second is backfilling. Peering is going to block client I/O on the affected OSDs, while backfilling will only potentially slow things down. I don't know what your client I/O looks like, but personally I would p

[ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
Hi! Cluster: 5 HW nodes, 10 HDDs with SSD journals, filestore, 0.94.9 hammer, debian wheezy (scheduled to upgrade once this is fixed). I have a replaced HDD that another admin set to reweight 0 instead of weight 0 (I can't remember the reason). What would be the best way to slowly backfill it?

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
And so I totally forgot to add df tree to the mail. Here's the interesting bit from the first two nodes, where osd.11 has weight but is reweighted to 0. root@osd1:~# ceph osd df tree ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR TYPE NAME -1 181.7 - 109T 50848G 60878G 00

[ceph-users] BlueStore "allocate failed, wtf" error

2018-01-29 Thread Sergey Malinin
Hello, Can anyone help me interpret the below error, which one of our OSDs has been occasionally throwing since last night. Thanks. - Jan 29 03:00:53 osd-host ceph-osd[10964]: 2018-01-29 03:00:53.509185 7fe4ae431700 -1 bluestore(/var/lib/ceph/osd/ceph-9) _balance_bluefs_freespace allocate

Re: [ceph-users] lease_timeout

2018-01-29 Thread Karun Josy
Thank you for looking into it. Yes, I believe it is the same issue as reported in the bug. Sorry I was not specific. - The Health section is not updated. - The Activity values under the Pools section (right side) get stuck: it shows the old data and is not updated. However, the Cluster log section ge

Re: [ceph-users] lease_timeout

2018-01-29 Thread John Spray
On Mon, Jan 29, 2018 at 6:58 PM, Gregory Farnum wrote: > The lease timeout means this (peon) monitor hasn't heard from the leader > monitor in too long; its read lease on the system state has expired. So it > calls a new election since that means the leader is down or misbehaving. Do > the other m

Re: [ceph-users] Signature check failures.

2018-01-29 Thread Gregory Farnum
On Fri, Jan 26, 2018 at 12:14 PM Cary wrote: > Hello, > > We are running Luminous 12.2.2. 6 OSD hosts with 12 1TB OSDs, and 64GB > RAM. Each host has a SSD for Bluestore's block.wal and block.db. > There are 5 monitor nodes as well with 32GB RAM. All servers have > Gentoo with kernel, 4.12.12-ge

Re: [ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Alfredo Deza
On Mon, Jan 29, 2018 at 1:56 PM, Andre Goree wrote: > On 2018/01/29 1:45 pm, Alfredo Deza wrote: >> >> On Mon, Jan 29, 2018 at 1:37 PM, Andre Goree wrote: >>> >>> On 2018/01/29 12:28 pm, Alfredo Deza wrote: On Mon, Jan 29, 2018 at 10:55 AM, Andre Goree wrote: > > > On

Re: [ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Peter Linder
I realize we're probably kind of pushing it. It was the only option I could think of, however, that would satisfy the idea that: have separate servers for HDD and NVMe storage spread out in 3 data centers; always select 1 NVMe and 2 HDD, in separate data centers (make sure NVMe is primary). If on

Re: [ceph-users] lease_timeout

2018-01-29 Thread Gregory Farnum
The lease timeout means this (peon) monitor hasn't heard from the leader monitor in too long; its read lease on the system state has expired. So it calls a new election since that means the leader is down or misbehaving. Do the other monitors have a similar problem at this stage? The manager freez

Re: [ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Andre Goree
On 2018/01/29 1:45 pm, Alfredo Deza wrote: On Mon, Jan 29, 2018 at 1:37 PM, Andre Goree wrote: On 2018/01/29 12:28 pm, Alfredo Deza wrote: On Mon, Jan 29, 2018 at 10:55 AM, Andre Goree wrote: On my OSD node that I built with ceph-ansible, the OSDs are failing to start after a reboot.

Re: [ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Alfredo Deza
On Mon, Jan 29, 2018 at 1:37 PM, Andre Goree wrote: > On 2018/01/29 12:28 pm, Alfredo Deza wrote: >> >> On Mon, Jan 29, 2018 at 10:55 AM, Andre Goree wrote: >>> >>> On my OSD node that I built with ceph-ansible, the OSDs are failing to >>> start >>> after a reboot. >> >> >> This is not uncommon f

Re: [ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Peter Linder
But the OSDs themselves introduce latency also, even if they are NVMe. We find that it is in the same ballpark. Latency does reduce I/O, but for sub-ms ones it is still thousands of IOPS even for a single thread. For a use case with many concurrent writers/readers (VMs), aggregated throughput

Re: [ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Andre Goree
On 2018/01/29 12:28 pm, Alfredo Deza wrote: On Mon, Jan 29, 2018 at 10:55 AM, Andre Goree wrote: On my OSD node that I built with ceph-ansible, the OSDs are failing to start after a reboot. This is not uncommon for ceph-disk unfortunately, and one of the reasons we have introduced ceph-volum

Re: [ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Peter Linder
Your data centers seem to be pretty close, some 13-14km? If it is a more or less straight fiber run then latency should be 0.1-0.2ms or something, clearly not a problem for synchronous replication. It should work rather well. With "only" 2 data centers however, you need to manually decide if t

Re: [ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Wido den Hollander
On 01/29/2018 07:26 PM, Nico Schottelius wrote: Hey Wido, [...] Like I said, latency, latency, latency. That's what matters. Bandwidth usually isn't a real problem. I imagined that. What latency do you have with an 8k ping between hosts? As the link will be set up this week, I cannot tel

Re: [ceph-users] Importance of Stable Mon and OSD IPs

2018-01-29 Thread Gregory Farnum
Ceph assumes monitor IP addresses are stable, as they're the identity for the monitor and clients need to know them to connect. Clients maintain a TCP connection to the monitors while they're running, and monitors publish monitor maps containing all the known monitors in the cluster. These are pus
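For reference, the monmap the clients rely on can be inspected like this (a sketch; the file path is arbitrary):
$: ceph mon getmap -o /tmp/monmap
$: monmaptool --print /tmp/monmap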

Re: [ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Nico Schottelius
Hey Wido, > [...] > Like I said, latency, latency, latency. That's what matters. Bandwidth > usually isn't a real problem. I imagined that. > What latency do you have with an 8k ping between hosts? As the link will be set up this week, I cannot tell yet. However, currently we have on a 65km lin
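Once the link is up, the measurement Wido asks about is a plain large-payload ping, e.g. (the hostname is a placeholder):
$: ping -s 8192 -c 20 host-in-other-dc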

Re: [ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Wido den Hollander
On 01/29/2018 06:33 PM, Nico Schottelius wrote: Good evening list, we are soon expanding our data center [0] to a new location [1]. We are mainly offering VPS / VM Hosting, so rbd is our main interest. We have a low latency 10 Gbit/s link between our other location [2] and we are wondering,

[ceph-users] [Best practise] Adding new data center

2018-01-29 Thread Nico Schottelius
Good evening list, we are soon expanding our data center [0] to a new location [1]. We are mainly offering VPS / VM hosting, so rbd is our main interest. We have a low-latency 10 Gbit/s link between our other location [2] and we are wondering what the best practise for expanding is. Naturally

Re: [ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Alfredo Deza
On Mon, Jan 29, 2018 at 10:55 AM, Andre Goree wrote: > On my OSD node that I built with ceph-ansible, the OSDs are failing to start > after a reboot. This is not uncommon for ceph-disk unfortunately, and one of the reasons we have introduced ceph-volume. There are a few components that can cause
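If the udev-triggered activation raced at boot, one common workaround is to re-trigger activation by hand (a sketch; use the variant matching how the OSDs were deployed):
$: ceph-disk activate-all
$: ceph-volume lvm activate --all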

Re: [ceph-users] consequence of losing WAL/DB device with bluestore?

2018-01-29 Thread Wido den Hollander
On 01/29/2018 06:15 PM, David Turner wrote: +1 for Gregory's response.  With filestore, if you lost a journal SSD and followed the steps you outlined, you are leaving yourself open to corrupt data.  Any write that was ack'd by the journal, but not flushed to the disk would be lost and assumed

Re: [ceph-users] consequence of losing WAL/DB device with bluestore?

2018-01-29 Thread David Turner
+1 for Gregory's response. With filestore, if you lost a journal SSD and followed the steps you outlined, you are leaving yourself open to corrupt data. Any write that was ack'd by the journal, but not flushed to the disk would be lost and assumed to be there by the cluster. With a failed journa

[ceph-users] Upgrading multi-site RGW to Luminous

2018-01-29 Thread David Turner
Apparently RGW daemons running 12.2.2 cannot sync data from RGW daemons running anything other than Luminous. This means that if you run multisite and you don't upgrade both sites at the same time, then you have broken replication. There is a fix for this scheduled for 12.2.3 ( http://tracker.cep

Re: [ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Gregory Farnum
CRUSH is a pseudorandom, probabilistic algorithm. That can lead to problems with extreme input. In this case, you've given it a bucket in which one child contains ~3.3% of the total weight, and there are only three weights. So on only 3% of "draws", as it tries to choose a child bucket to descend
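One way to see how CRUSH copes with such skewed weights is to test the compiled map offline (a sketch; the rule id and replica count are assumptions):
$: ceph osd getcrushmap -o crush.bin
$: crushtool -i crush.bin --test --rule 1 --num-rep 3 --show-utilization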

Re: [ceph-users] Snapshot trimming

2018-01-29 Thread Karun Josy
The problem we are experiencing is described here: https://bugzilla.redhat.com/show_bug.cgi?id=1497332 However, we are running 12.2.2. Across our 6 ceph clusters, this one with the problem was first version 12.2.0, then upgraded to .1 and then to .2. The other 5 ceph installations started as ve

[ceph-users] Hardware considerations on setting up a new Luminous Ceph cluster

2018-01-29 Thread Hervé Ballans
Hi all, I have been managing a high-availability Ceph cluster for our virtualization infrastructure (Proxmox VE) for 3 years now. We use Jewel with rbd. It works perfectly well and meets our data integrity and performance needs. In parallel, we want to add a new Ceph cluster for data storage with

Re: [ceph-users] consequence of losing WAL/DB device with bluestore?

2018-01-29 Thread Gregory Farnum
On Mon, Jan 29, 2018 at 9:37 AM Vladimir Prokofev wrote: > Hello. > > In short: what are the consequences of losing an external WAL/DB > device (assuming it’s an SSD) in bluestore? > > In comparison with filestore - we used to have an external SSD for > journaling multiple HDD OSDs. Hardware failure of

[ceph-users] OSDs failing to start after host reboot

2018-01-29 Thread Andre Goree
On my OSD node that I built with ceph-ansible, the OSDs are failing to start after a reboot. Errors on boot: ~# systemctl status ceph-disk@dev-nvme0n1p16.service ● ceph-disk@dev-nvme0n1p16.service - Ceph disk activation: /dev/nvme0n1p16 Loaded: loaded (/lib/systemd/system/ceph-dis

[ceph-users] consequence of losing WAL/DB device with bluestore?

2018-01-29 Thread Vladimir Prokofev
Hello. In short: what are the consequences of losing an external WAL/DB device (assuming it’s an SSD) in bluestore? In comparison with filestore - we used to have an external SSD for journaling multiple HDD OSDs. Hardware failure of such a device would not be that big of a deal, as we can quickly use xf
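For comparison, the filestore journal-replacement procedure alluded to here is roughly the following (a sketch; osd id 12 is a placeholder, and recreating a journal is only safe if the OSD was stopped cleanly before the journal device was lost, which is exactly the caveat raised in the replies):
$: systemctl stop ceph-osd@12
$: ceph-osd -i 12 --mkjournal
$: systemctl start ceph-osd@12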

Re: [ceph-users] ceph CRUSH automatic weight management

2018-01-29 Thread Wido den Hollander
On 01/29/2018 04:14 PM, Niklas wrote: Thank you for the answer. It seems like a good solution to use uniform in this case. But adding this to the crush map results in a failure to compile the crush map. datacenter stray { alg uniform hash 0 } $: crushtool -c /tmp/crush.text -o /tmp/crush i

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Wido den Hollander
On 01/29/2018 04:21 PM, Jake Grimmett wrote: Hi Nick, many thanks for the tip, I've set "osd_max_pg_per_osd_hard_ratio = 3" and restarted the OSDs. So far it's looking promising: I now have 56% objects misplaced rather than 3021 pgs inactive. The cluster is now working hard to rebalance. PGs s

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Jake Grimmett
Hi Nick, many thanks for the tip, I've set "osd_max_pg_per_osd_hard_ratio = 3" and restarted the OSDs. So far it's looking promising: I now have 56% objects misplaced rather than 3021 pgs inactive. The cluster is now working hard to rebalance. I will report back after things stabilise... many, man
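For reference, the override in question is just a config option plus an OSD restart (a sketch mirroring the value used in this thread):
[osd]
osd_max_pg_per_osd_hard_ratio = 3
followed by e.g. systemctl restart ceph-osd.target on each OSD node.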

Re: [ceph-users] ceph CRUSH automatic weight management

2018-01-29 Thread Niklas
Thank you for the answer. It seems like a good solution to use uniform in this case. But adding this to the crush map results in a failure to compile the crush map. datacenter stray { alg uniform hash 0 } $: crushtool -c /tmp/crush.text -o /tmp/crush in rule 'hdd' step take default has no class
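For reference, the usual crush map edit cycle looks like this (a sketch; file names are arbitrary):
$: ceph osd getcrushmap -o crush.bin
$: crushtool -d crush.bin -o crush.txt
(edit crush.txt, e.g. set "alg uniform" on the chosen bucket)
$: crushtool -c crush.txt -o crush.new
$: ceph osd setcrushmap -i crush.new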

Re: [ceph-users] ceph CRUSH automatic weight management

2018-01-29 Thread Wido den Hollander
On 01/29/2018 03:38 PM, Niklas wrote: When adding new OSDs to a host, the CRUSH weight for the datacenter one level up is changed to reflect the change. What configuration is used to stop ceph from automatic weight management on the levels above the host? In your case (datacenters) you might

[ceph-users] ceph CRUSH automatic weight management

2018-01-29 Thread Niklas
When adding new OSDs to a host, the CRUSH weight for the datacenter one level up is changed to reflect the change. What configuration is used to stop ceph from automatic weight management on the levels above the host?

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Jason Dillaman
Correct, RBD will *always* stripe the image across multiple backing objects. The "striping" feature is really just used for the cases where the striping parameters are not the default of stripe-count = 1 and stripe-unit = object size. Internally that feature is known as striping v2 and colloquially
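A sketch of the difference (pool, image names and sizes are placeholders; on older clients --stripe-unit may have to be given in plain bytes):
$: rbd create mypool/plain --size 10G
$: rbd create mypool/striped --size 10G --stripe-unit 64K --stripe-count 8
$: rbd info mypool/striped
Only the second image gets the striping feature bit, because its layout differs from the default of stripe-count = 1 and stripe-unit = object size.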

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Konstantin Shalygin
On 01/29/2018 08:33 PM, Jason Dillaman wrote: OK -- but that is the normal case of RBD w/o the need for fancy striping (i.e. no need for the special feature bit). The striping feature is only needed when using stripe counts != 1 and stripe units != object size. When you specify the "--stripe-unit

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Jason Dillaman
OK -- but that is the normal case of RBD w/o the need for fancy striping (i.e. no need for the special feature bit). The striping feature is only needed when using stripe counts != 1 and stripe units != object size. When you specify the "--stripe-unit" / "--stripe-count" via the CLI, we just assume

Re: [ceph-users] Snapshot trimming

2018-01-29 Thread Karun Josy
Thank you for your response. We don't think there is an issue with the cluster being behind snap trimming. We just don't think snaptrim is occurring at all. We have 6 individual ceph clusters. When we delete old snapshots for clients, we can see space being made available. In this particular one

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Konstantin Shalygin
On 01/29/2018 07:49 PM, Jason Dillaman wrote: To me, it didn't make sense to set the striping feature bit if fancy striping wasn't really being used. The same logic was applied to the "data-pool" feature bit -- it doesn't make sense to set it if the data pool is really not different from the base i

Re: [ceph-users] Debugging fstrim issues

2018-01-29 Thread Nathan Harper
Hi, Thanks all for your quick responses. In my enthusiasm to test I might have been masking the problem, plus not knowing what the output of 'fstrim' should actually show. Firstly, to answer the question, yes I have the relevant libvirt config, plus have set the correct virtio-scsi settings in
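For reference, the kind of libvirt settings being referred to are roughly the following (a sketch, not the poster's actual config): the disk must sit on a virtio-scsi controller and have discard enabled, e.g.
<controller type='scsi' model='virtio-scsi'/>
<driver name='qemu' type='raw' discard='unmap'/>
In an OpenStack setup this is usually driven by the hw_scsi_model/hw_disk_bus image properties and the hw_disk_discard option in nova.conf.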

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Wido den Hollander
On 01/29/2018 02:07 PM, Nick Fisk wrote: Hi Jake, I suspect you have hit an issue that I and a few others have hit in Luminous. By increasing the number of PGs before all the data has re-balanced, you have probably exceeded the hard PG-per-OSD limit. See this thread https://www.spinics.net/list

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Nick Fisk
Hi Jake, I suspect you have hit an issue that I and a few others have hit in Luminous. By increasing the number of PGs before all the data has re-balanced, you have probably exceeded the hard PG-per-OSD limit. See this thread https://www.spinics.net/lists/ceph-users/msg41231.html Nick > -Orig

Re: [ceph-users] Snapshot trimming

2018-01-29 Thread David Turner
I don't know why you keep asking the same question about snap trimming. You haven't shown any evidence that your cluster is behind on that. Have you looked into fstrim inside of your VMs? On Mon, Jan 29, 2018, 4:30 AM Karun Josy wrote: > fast-diff map is not enabled for RBD images. > Can it be a
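For reference, checking trim from inside a guest is straightforward (a sketch; the mount point is an example):
$: lsblk -D
(the DISC-GRAN/DISC-MAX columns show whether the virtual disk advertises discard support)
$: sudo fstrim -v /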

[ceph-users] Inconsistent PG - failed to pick suitable auth object

2018-01-29 Thread Josef Zelenka
Hi everyone, I'm having issues with one of our clusters, regarding a seemingly unfixable inconsistent pg. We are running Ubuntu 16.04, ceph 10.2.7, 96 osds on 8 nodes. After a power outage, we had some inconsistent pgs; I managed to fix all of them but this one. Here's an excerpt from the logs(

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Jason Dillaman
To me, it didn't make sense to set the striping feature bit if fancy striping wasn't really being used. The same logic was applied to the "data-pool" feature bit -- it doesn't make sense to set it if the data pool is really not different from the base image pool. Therefore, both of these features are

[ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Jake Grimmett
Dear All, Our ceph luminous (12.2.2) cluster has just broken, due either to adding 260 OSD drives in one go, or to increasing the PG number from 1024 to 4096 in one go, or a combination of both... Prior to the upgrade, the cluster consisted of 10 dual v4 Xeon nodes running SL7.4, each node

Re: [ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Peter Linder
We kind of turned the crushmap inside out a little bit. Instead of the traditional "for 1 PG, select OSDs from 3 separate data centers" we did "force selection from only one datacenter (out of 3) and leave enough options only to make sure precisely 1 SSD and 2 HDD are selected". We then orga

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-29 Thread Alfredo Deza
On Fri, Jan 26, 2018 at 5:00 PM, Reed Dier wrote: > Bit late for this to be helpful, but instead of zapping the lvm labels, you > could alternatively destroy the lvm volume by hand. > > lvremove -f / > vgremove > pvremove /dev/ceph-device (should wipe labels) > > > Then you should be able to run
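Spelled out with placeholder names (ceph-vg, osd-lv and /dev/sdx are hypothetical), the cleanup described above is roughly:
$: lvremove -f ceph-vg/osd-lv
$: vgremove ceph-vg
$: pvremove /dev/sdx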

Re: [ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Niklas
Yes. It is a hybrid solution where a placement group is always located on one NVMe drive and two HDD drives. The advantage is great read performance and cost savings. The disadvantage is low write performance. Still, the write performance is good thanks to RocksDB on Intel Optane disks in the HDD servers.

Re: [ceph-users] how to get bucket or object's ACL?

2018-01-29 Thread Josef Zelenka
Hi, this should be possible via the s3cmd tool: s3cmd info s3:/// s3cmd info s3://PP-2015-Tut/ Here is more info - https://kunallillaney.github.io/s3cmd-tutorial/ I have successfully used this tool in the past for ACL management, so I hope it's going to work for you too. JZ On 29/01/18 11:23

Re: [ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Wido den Hollander
On 01/29/2018 01:14 PM, Niklas wrote: Ceph luminous 12.2.2 $: ceph osd pool create hybrid 1024 1024 replicated hybrid $: ceph -s   cluster:     id: e07f568d-056c-4e01-9292-732c64ab4f8e     health: HEALTH_WARN     Degraded data redundancy: 431 pgs unclean, 431 pgs degraded, 431

[ceph-users] CRUSH straw2 can not handle big weight differences

2018-01-29 Thread Niklas
Ceph luminous 12.2.2 $: ceph osd pool create hybrid 1024 1024 replicated hybrid $: ceph -s   cluster:     id: e07f568d-056c-4e01-9292-732c64ab4f8e     health: HEALTH_WARN     Degraded data redundancy: 431 pgs unclean, 431 pgs degraded, 431 pgs undersized   services:     mon: 3 daemo

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Konstantin Shalygin
On 01/29/2018 06:40 PM, Ilya Dryomov wrote: Unless you specify a non-default stripe_unit/stripe_count, striping feature bit is not set and striping-related fields aren't displayed. This behaviour is new in luminous, but jewel and older clients still work with luminous images. Yes, I see it... I

[ceph-users] ceph-helm issue

2018-01-29 Thread Ercan Aydoğan
Hello, I'm following the http://docs.ceph.com/docs/master/start/kube-helm/ document. My ceph-osds are not running (also not visible with get pods -n ceph). I'm guessing it may be related to ceph-overrides.yaml, maybe a public_network and cluster network issu

Re: [ceph-users] Debugging fstrim issues

2018-01-29 Thread Ric Wheeler
I might have missed something in the question. Fstrim does not free up space at the user level that you see with a normal df. It is meant to let the block device know about all of the space unused by the file system. Regards, Ric On Jan 29, 2018 11:56 AM, "Wido den Hollander" wrote: > > > O

Re: [ceph-users] Debugging fstrim issues

2018-01-29 Thread Wido den Hollander
On 01/29/2018 12:29 PM, Nathan Harper wrote: Hi, I don't know if this is strictly a Ceph issue, but hoping someone will be able to shed some light.   We have an Openstack environment (Ocata) backed onto a Jewel cluster. We recently ran into some issues with full OSDs but couldn't work out

Re: [ceph-users] Debugging fstrim issues

2018-01-29 Thread Janne Johansson
2018-01-29 12:29 GMT+01:00 Nathan Harper : > Hi, > I don't know if this is strictly a Ceph issue, but hoping someone will be > able to shed some light. We have an Openstack environment (Ocata) backed > onto a Jewel cluster. > We recently ran into some issues with full OSDs but couldn't work out

Re: [ceph-users] luminous rbd feature 'striping' is deprecated or just a bug?

2018-01-29 Thread Ilya Dryomov
On Mon, Jan 29, 2018 at 8:37 AM, Konstantin Shalygin wrote: > Anybody know about changes in rbd feature 'striping'? May be is deprecated > feature? What I mean: > > I have volume created by Jewel client on Luminous cluster. > > # rbd --user=cinder info > solid_rbd/volume-12b5df1e-df4c-4574-859d-22

[ceph-users] Debugging fstrim issues

2018-01-29 Thread Nathan Harper
Hi, I don't know if this is strictly a Ceph issue, but hoping someone will be able to shed some light. We have an Openstack environment (Ocata) backed onto a Jewel cluster. We recently ran into some issues with full OSDs but couldn't work out what was filling up the pools. It appears that fstr

Re: [ceph-users] Can't make LDAP work

2018-01-29 Thread Theofilos Mouratidis
Hello Matt, I am using luminous 12.2.2. I can find both accounts using both accounts as bindings e.g.: ldapsearch -x -D "CN=myuser,OU=Users,OU=Organic Units,DC=example,DC=com" -W -H ldaps://ldap.example.com:636 -b 'OU=Users,OU=Organic Units,DC=example,DC=com' 'cn=myuser' dn Enter LDAP Password: #

[ceph-users] how to get bucket or object's ACL?

2018-01-29 Thread 13605702...@163.com
Hi, how to get the bucket's or object's ACL on the command line? Thanks. 13605702...@163.com

Re: [ceph-users] swift capabilities support in radosgw

2018-01-29 Thread Syed Armani
Hi Matt, Thanks for your response. Do you think that setting the following two variables will fix the problem? rgw swift url = https://mycloud:8080 rgw swift url prefix = / Cheers, Syed On 26/01/18 8:02 PM, Matt Benjamin wrote: > Hi Syed, > > RGW supports Swift /info in Luminous. > > By defa

Re: [ceph-users] Snapshot trimming

2018-01-29 Thread Karun Josy
fast-diff map is not enabled for RBD images. Can it be a reason for trimming not happening? Karun Josy On Sat, Jan 27, 2018 at 10:19 PM, Karun Josy wrote: > Hi David, > > Thank you for your reply! I really appreciate it. > > The images are in pool id 55. It is an erasure coded pool. > > --

Re: [ceph-users] POOL_NEARFULL

2018-01-29 Thread Konstantin Shalygin
On 01/29/2018 04:25 PM, Karun Josy wrote: In the Luminous version, we have to use the osd set-* commands. Yep. Since Luminous, the _full options are saved in the osdmap. k

Re: [ceph-users] POOL_NEARFULL

2018-01-29 Thread Karun Josy
In the Luminous version, we have to use the osd set-* commands: -- ceph osd set-backfillfull-ratio .89 ceph osd set-nearfull-ratio .84 ceph osd set-full-ratio .96 -- Karun Josy On Thu, Dec 21, 2017 at 4:29 PM, Konstantin Shalygin wrote: > Update your ceph.conf file > > This is also not

Re: [ceph-users] Bluefs WAL : bluefs _allocate failed to allocate on bdev 0

2018-01-29 Thread Dietmar Rieder
Hi, just for the record: a reboot of the OSD node solved the issue; now the WAL is fully purged and the extra 790MB are gone. Sorry for the noise. Dietmar On 01/27/2018 11:08 AM, Dietmar Rieder wrote: > Hi, > > replying to my own message. > > After I restarted the OSD it seems some of the