Re: [ceph-users] Ceph talks/presentations at conferences/events

2018-03-26 Thread Robert Sander
Hi Kai, On 22.03.2018 18:04, Kai Wagner wrote: > > don't know if this is the right place to discuss this but I was just > wondering if there's any specific mailing list + web site where upcoming > events (Ceph/Open Source/Storage) and conferences are discussed and > generally tracked? Maybe the

Re: [ceph-users] Ceph talks/presentations at conferences/events

2018-03-26 Thread Kai Wagner
Hi Robert, thanks, I will forward it to the community list as well. Kai On 03/26/2018 11:03 AM, Robert Sander wrote: > Hi Kai, > > On 22.03.2018 18:04, Kai Wagner wrote: >> don't know if this is the right place to discuss this but I was just >> wondering if there's any specific mailing list + web

[ceph-users] Radosgw halts writes during recovery, recovery info issues

2018-03-26 Thread Josef Zelenka
Hi everyone, I'm currently fighting an issue in a cluster we run for a customer. It's used for a lot of small files (113 M currently) that are pulled via radosgw. We have 3 nodes, 24 OSDs in total. The index and related pools are migrated to a separate root called "ssd"; that root is on SSD drives only
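
A minimal sketch of how such a split is usually wired up on Jewel, assuming a CRUSH root named "ssd" already exists and the zone uses the default pool names:

    # create a replicated rule rooted at "ssd" (rule name is an example)
    ceph osd crush rule create-simple ssd-rule ssd host
    # look up the new rule's id, then point the index pool at it
    ceph osd crush rule dump ssd-rule
    ceph osd pool set default.rgw.buckets.index crush_ruleset <rule-id>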

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-26 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 5:53 PM, Nicolas Huillard wrote: > On Friday 23 March 2018 at 12:14 +0100, Ilya Dryomov wrote: >> On Fri, Mar 23, 2018 at 11:48 AM, wrote: >> > The stock kernel from Debian is perfect >> > Spectre / meltdown mitigations are worthless from a Ceph point of >> > view, >> >

Re: [ceph-users] remove big rbd image is very slow

2018-03-26 Thread Ilya Dryomov
On Sat, Mar 17, 2018 at 5:11 PM, shadow_lin wrote: > Hi list, > My ceph version is jewel 10.2.10. > I tried to use rbd rm to remove a 50 TB image (without object map, because krbd > doesn't support it). It takes about 30 mins to complete just about 3%. Is this > expected? Is there a way to make it faste
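
Without an object map, rbd rm has to issue a delete for every backing object, which is slow on a 50 TB image. A sketch of one common mitigation, raising the number of concurrent delete ops for that single command (pool/image names and the value 20 are examples; the option is a general config override passed on the command line, not an rbd-specific flag):

    rbd rm mypool/bigimage --rbd-concurrent-management-ops 20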

Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-26 Thread Jason Dillaman
RHEL 7.5 has not been released yet, but it should be released very soon. After it's released, it usually takes the CentOS team a little time to put together their matching release. I also suspect that Linux kernel 4.16 is going to be released in the next week or so as well. On Sat, Mar 24, 2018 at

Re: [ceph-users] Radosgw halts writes during recovery, recovery info issues

2018-03-26 Thread Josef Zelenka
forgot to mention - we are running jewel, 10.2.10 On 26/03/18 11:30, Josef Zelenka wrote: Hi everyone, I'm currently fighting an issue in a cluster we run for a customer. It's used for a lot of small files (113 M currently) that are pulled via radosgw. We have 3 nodes, 24 OSDs in total. The ind

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-26 Thread Alex Gorbachev
On Wed, Mar 21, 2018 at 2:26 PM, Kjetil Joergensen wrote: > I retract my previous statement(s). > > My current suspicion is that this isn't a leak as much as it being > load-driven, after enough waiting - it generally seems to settle around some > equilibrium. We do seem to sit on the mempools x 2
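
On releases that expose mempool accounting, per-OSD usage can be inspected over the admin socket; a sketch, assuming osd.0 runs on the local host:

    ceph daemon osd.0 dump_mempools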

Re: [ceph-users] Enable object map kernel module

2018-03-26 Thread Thiago Gonzaga
It seems the binary is not present, is it part of the Ceph packages? tgonzaga@ceph-mon-3:~$ sudo rbd nbd map test /usr/bin/rbd-nbd: exec failed: (2) No such file or directory rbd: rbd-nbd failed with error: /usr/bin/rbd-nbd: exit status: 1 Thanks in advance, Thiago Gonzaga

Re: [ceph-users] Enable object map kernel module

2018-03-26 Thread ceph
This is an extra package: rbd-nbd On 03/26/2018 04:41 PM, Thiago Gonzaga wrote: > It seems the binary is not present, is it part of the Ceph packages? > > tgonzaga@ceph-mon-3:~$ sudo rbd nbd map test > /usr/bin/rbd-nbd: exec failed: (2) No such file or directory > rbd: rbd-nbd failed with error: /usr/bi
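
A sketch of the fix, assuming Debian/Ubuntu or RHEL/CentOS packaging:

    sudo apt-get install rbd-nbd    # Debian/Ubuntu
    sudo yum install rbd-nbd        # RHEL/CentOS
    sudo rbd nbd map test           # should now find /usr/bin/rbd-nbd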

Re: [ceph-users] Enable object map kernel module

2018-03-26 Thread Thiago Gonzaga
thanks :D Thiago Gonzaga On Mon, Mar 26, 2018 at 11:42 AM, wrote: > This is an extra package: rbd-nbd > > On 03/26/2018 04:41 PM, Thi

[ceph-users] Fwd: Fwd: High IOWait Issue

2018-03-26 Thread Sam Huracan
Thanks for your information. Here is the result when I run atop on one Ceph HDD host: http://prntscr.com/iwmc86 Some disks are busy at over 100%, but the SSD journal is used at only 3%; is that normal? Is there any way to optimize use of the SSD journal? Could you give me some keywords? Here is configu

Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-26 Thread Josh Haft
Here's what I'm seeing using basic owner/group permissions. Both directories are mounted on my NFS client with the same options. The only difference is underneath: on the NFS server, 'aclsupport' is mounted via ceph-fuse with fuse_default_permissions=0 (ACLs enabled), and 'noaclsupport' is mounted vi
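
For context, a sketch of the client-side settings usually paired to get ACL enforcement under ceph-fuse (placed in ceph.conf on the NFS server; option names per the CephFS client docs):

    [client]
        fuse_default_permissions = false
        client_acl_type = posix_acl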

Re: [ceph-users] Radosgw ldap info

2018-03-26 Thread Benjeman Meekhof
Hi Marc, I can't speak to your other questions, but as far as the user auth caps go, those are still kept in the radosgw metadata, outside of LDAP. As far as I know, all that LDAP gives you is a way to authenticate users with a user/password combination. So, for example, if you create a user 'ldapuser'
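
A sketch of inspecting and extending that radosgw-side record, assuming a hypothetical uid 'ldapuser':

    radosgw-admin metadata get user:ldapuser
    radosgw-admin caps add --uid=ldapuser --caps="users=read"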

[ceph-users] multiple radosgw daemons per host, and performance

2018-03-26 Thread Robert Stanford
When I am running at full load my radosgw process uses 100% of one CPU core (and has many threads). I have many idle cores. Is it common for people to run several radosgw processes on their gateways, to take advantage of all their cores?
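
Before multiplying daemons, one per-daemon knob worth checking is the gateway thread pool; a sketch, with a hypothetical instance name and an example value:

    [client.rgw.gateway1]
        rgw_thread_pool_size = 512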

[ceph-users] problem while removing images

2018-03-26 Thread Thiago Gonzaga
Hey Guys, Is it normal for all processes to get stuck while an image is deleting? I'm removing a 21 G layering image and it takes ages; while the image is removing I get lots of problems in the log like this. I ran rbd create rbd/test -s 21G --image-feature layering, deleted the image, and while deleting that tr

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-26 Thread Igor Fedotov
Hi Alex, I can see your bug report: https://tracker.ceph.com/issues/23462 If the settings from there also apply to your comment here, then you have the BlueStore cache size limit set to 5 GB, which totals 90 GB of RAM for 18 OSDs for the BlueStore cache alone. There is also additional memory overh

[ceph-users] Requests blocked as cluster is unaware of dead OSDs for quite a long time

2018-03-26 Thread Jared H
I have three datacenters with three storage hosts in each, which house one OSD/MON per host. There are three replicas, one in each datacenter. I want the cluster to be able to survive a nuke dropped on 1 of 3 datacenters, scaling up to 2 of 5 datacenters. I do not need realtime data replication (Ceph is
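
A sketch of a crushmap rule for that layout, assuming buckets of type "datacenter" exist under the default root (pre-Luminous syntax):

    rule one-per-dc {
        ruleset 1
        type replicated
        min_size 3
        max_size 3
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 1 type host
        step emit
    }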

Re: [ceph-users] why we show removed snaps in ceph osd dump pool info?

2018-03-26 Thread linghucongsong
Thanks for the response. Maybe we could show these removed snaps in the cmd ceph osd map dump detail? On 2018-03-26 16:34:06, "Chris Blum" wrote: There was a discussion about this (partly) a few months back: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020514.html AFAIK the de

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-26 Thread Konstantin Shalygin
On 03/26/2018 09:09 PM, Alex Gorbachev wrote: I am seeing these entries under load - there should be plenty of RAM on a node with 128 GB RAM and 18 OSDs. This is self-inflicted, because you have increased: bluestore_cache_size_hdd = 5368709120 (* 18 = 96636764160 bytes). Look at your dashboards. k
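
The arithmetic, plus a sketch of a more conservative setting for a 128 GB node (the 2 GiB figure is an example, not a recommendation):

    # 5368709120 B/OSD * 18 OSDs = 96636764160 B ~= 90 GiB for cache alone
    [osd]
        bluestore_cache_size_hdd = 2147483648   # 2 GiB per HDD OSD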

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-26 Thread Alex Gorbachev
On Mon, Mar 26, 2018 at 3:08 PM, Igor Fedotov wrote: > Hi Alex, > > I can see your bug report: https://tracker.ceph.com/issues/23462 > > If the settings from there also apply to your comment here, then you > have the BlueStore cache size limit set to 5 GB, which totals 90 GB of RAM for 18 > OSD fo

Re: [ceph-users] problem while removing images

2018-03-26 Thread Christian Balzer
Hello, On Mon, 26 Mar 2018 18:20:22 -0300 Thiago Gonzaga wrote: > Hey Guys, > > Is it normal for all processes to get stuck while an image is deleting? > If you'd tell us all the details of your cluster (SW and HW) we might be more helpful and point to specific issues. So to generically answer your

Re: [ceph-users] Fwd: Fwd: High IOWait Issue

2018-03-26 Thread Christian Balzer
On Mon, 26 Mar 2018 23:00:28 +0700 Sam Huracan wrote: > Thanks for your information. > Here is the result when I run atop on one Ceph HDD host: > http://prntscr.com/iwmc86 > This pretty much confirms the iostat output: clearly, deep scrubbing is killing your cluster performance. > There is some disk bu
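
A sketch of the scrub throttles commonly reached for in this situation (example values, not recommendations; on some releases these only take full effect after an OSD restart):

    ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'
    ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.3'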

Re: [ceph-users] Fwd: High IOWait Issue

2018-03-26 Thread Sam Huracan
Hi, We are using RAID cache mode writeback for the SSD journal; I suspect this is the reason the utilization of the SSD journal is so low. Is that right? If anybody has experience with this matter, please confirm. Thanks 2018-03-26 23:00 GMT+07:00 Sam Huracan : > Thanks for your information. > Here is the result when

[ceph-users] What is in the mon leveldb?

2018-03-26 Thread Tracy Reed
Hello all, It seems I have underprovisioned storage space for my mons and my /var/lib/ceph/mon filesystem is getting full. When I first started using ceph this only took up tens of megabytes, and I assumed it would stay that way, so 5G for this filesystem seemed luxurious. Little did I know that mo
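
A sketch of reclaiming space by compacting the mon store, with a hypothetical mon id:

    ceph tell mon.mon01 compact
    # or compact at every daemon start, via ceph.conf:
    #   [mon]
    #   mon_compact_on_start = true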

Re: [ceph-users] What is in the mon leveldb?

2018-03-26 Thread Wido den Hollander
On 03/27/2018 06:40 AM, Tracy Reed wrote: > Hello all, > > It seems I have underprovisioned storage space for my mons and my > /var/lib/ceph/mon filesystem is getting full. When I first started using > ceph this only took up tens of megabytes and I assumed it would stay > that way and 5G for thi