[ceph-users] Re: mount cephfs with autofs

2020-06-16 Thread Eugen Block
Hi, the autofs on our clients is configured to use LDAP, so there's one more layer. This is the current setup: # autofs on LDAP server /etc/sysconfig/autofs:SEARCH_BASE="ou=cephfs,ou=AUTOFS,[...]" # LDAP config -fstype=ceph,name=autofs,secretfile=/etc/ceph/autofs.key,nodev,nosuid ,,:/path/
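For readers without LDAP-backed maps, the same entry works in a plain file-based map; the monitor names below are placeholders standing in for the elided hosts, not values from this thread:

    cephfs  -fstype=ceph,name=autofs,secretfile=/etc/ceph/autofs.key,nodev,nosuid  mon1,mon2,mon3:/path/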

[ceph-users] Re: Calculate recovery time

2020-06-16 Thread Janne Johansson
On Wed, Jun 17, 2020 at 02:14, Seena Fallah wrote: > Hi all. > Is there any way that I could calculate how much time it takes to add > OSD to my cluster and get rebalanced or how much it takes to out OSD > from my cluster? > This is very dependent on all the variables of a cluster, from controlle
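As a rough back-of-the-envelope illustration only (the numbers are invented, not from this thread): if adding an OSD triggers about 2 TiB of backfill and the cluster sustains an aggregate recovery rate of about 200 MiB/s, then

    2 TiB / 200 MiB/s = 2,097,152 MiB / 200 MiB/s ≈ 10,500 s ≈ 3 hours

The achievable rate depends on the factors Janne mentions and on throttles such as osd_max_backfills, so treat any such figure as an order-of-magnitude estimate.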

[ceph-users] Re: mount cephfs with autofs

2020-06-16 Thread Derrick Lin
Hi Eugen, I configured auto.master and a mount point map file auto.ceph with the following entry. I am wondering if you can show me what your autofs entry looks like? Thanks ceph -fstype=ceph,name=cephfs,conf=/etc/ceph/mini_conf.conf,secretfile=/etc/ceph/client.cephfs,noatime :/ On Mon, J
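For context, a minimal file-based setup usually ties the two files together like this (paths, timeout and monitor names are illustrative, not Derrick's actual configuration):

    # /etc/auto.master
    /mnt  /etc/auto.ceph  --timeout=60
    # /etc/auto.ceph -- the entry quoted above, with monitor hosts filled in, e.g.
    ceph  -fstype=ceph,name=cephfs,conf=/etc/ceph/mini_conf.conf,secretfile=/etc/ceph/client.cephfs,noatime  mon1,mon2,mon3:/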

[ceph-users] Calculate recovery time

2020-06-16 Thread Seena Fallah
Hi all. Is there any way I could calculate how long it takes to add an OSD to my cluster and have it rebalance, or how long it takes to take an OSD out of my cluster? Thanks.

[ceph-users] Re: [NFS-Ganesha-Support] bug in nfs-ganesha? and cephfs?

2020-06-16 Thread Marc Roos
Hi Jeff, how did you deduce this from the log file? I can't see where the 'error' is. -Original Message- Subject: Re: [NFS-Ganesha-Support] bug in nfs-ganesha? and cephfs? On Sun, 2020-06-14 at 15:17 +0200, Marc Roos wrote: > When rsyncing to a nfs-ganesha exported cephfs the process ha

[ceph-users] Re: advantage separate cluster network on single interface

2020-06-16 Thread Scottix
We have a dev cluster for testing things on Ceph that only has a single 1 Gbit NIC. Although it can work, we noticed a major impact on latency when the cluster is rebalancing or under heavy load. On Tue, Jun 16, 2020 at 07:54 Olivier AUDRY wrote: > hello > > as far as I know there is no perf advan

[ceph-users] struct crush_bucket **buckets in Ceph CRUSH

2020-06-16 Thread Bobby
Hi, I have a question regarding Ceph CRUSH. I have been going through the crush.h file. It says that struct crush_bucket **buckets (below) is an array of pointers. My understanding is that this particular array of pointers is a collection of addresses of six scalar values, namely __s32 id; __u16
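For reference, the definition being discussed looks roughly like this (paraphrased from memory of src/crush/crush.h; comments and exact layout vary between Ceph releases):

    struct crush_bucket {
            __s32 id;        /* bucket id; negative, unlike device ids */
            __u16 type;      /* user-defined bucket type (host, rack, ...) */
            __u8 alg;        /* bucket algorithm (uniform, list, tree, straw2) */
            __u8 hash;       /* which hash function to use */
            __u32 weight;    /* total weight, 16-bit fixed point */
            __u32 size;      /* number of items */
            __s32 *items;    /* ids of the items contained in this bucket */
    };

So struct crush_bucket **buckets in struct crush_map is an array of pointers to whole bucket structures, one per bucket, rather than an array of addresses of the individual scalar fields.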

[ceph-users] Slow Ops start piling up, Mon Corruption ?

2020-06-16 Thread Daniel Poelzleithner
Hi, we had bad blocks on one OSD and around the same time a network switch outage, which seems to have caused some corruption on the mon service. > # ceph -s cluster: id: d7c5c9c7-a227-4e33-ab43-3f4aa1eb0630 health: HEALTH_WARN 1 daemons have recently crashed
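Unrelated to the mon store problem itself, the "1 daemons have recently crashed" warning can be inspected and cleared with the crash module; a small sketch (the crash id is a placeholder):

    ceph crash ls
    ceph crash info <crash-id>
    ceph crash archive <crash-id>    # or: ceph crash archive-all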

[ceph-users] Re: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

2020-06-16 Thread Cem Zafer
Thanks Simon, as you mentioned, I added the missing pieces and now everything works fine. On Tue, Jun 16, 2020 at 9:30 AM Simon Sutter wrote: > Hello, > > > When you deploy ceph to other nodes with the orchestrator, they "just" > have the containers you deployed to them. > This means in your case,
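For anyone hitting the same message: conf_read_file failing usually just means the client cannot find a ceph.conf. On a cephadm-managed node one common remedy (a sketch, assuming admin access from a node that already works) is to distribute a minimal config and keyring:

    ceph config generate-minimal-conf > /etc/ceph/ceph.conf
    ceph auth get client.admin > /etc/ceph/ceph.client.admin.keyring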

[ceph-users] Announcing go-ceph v0.4.0

2020-06-16 Thread John Mulligan
I'm happy to announce another release of the go-ceph API bindings. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.4.0 The bindings aim to play a similar role to the "pybind" python bindings in the ceph tree but for
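A minimal sketch of what the bindings look like in use, assuming a reachable cluster and a readable /etc/ceph/ceph.conf (this is not taken from the release notes):

    package main

    import (
            "fmt"

            "github.com/ceph/go-ceph/rados"
    )

    func main() {
            conn, err := rados.NewConn() // handle to librados
            if err != nil {
                    panic(err)
            }
            conn.ReadDefaultConfigFile() // picks up /etc/ceph/ceph.conf
            if err := conn.Connect(); err != nil {
                    panic(err)
            }
            defer conn.Shutdown()

            fsid, _ := conn.GetFSID() // trivial call to confirm the connection works
            fmt.Println("connected to cluster", fsid)
    }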

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-16 Thread David Galloway
Our install-deps.sh script is looking for "python36-Cython" vs "python3-Cython" so I guess that needs to be fixed in the nautilus version of install-deps.sh On 6/15/20 7:26 AM, Giulio Fidente wrote: > hi David, thanks for helping > > python3-Cython seems to be already in the centos8 PowerTools re
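The change being suggested presumably boils down to a one-line substitution in that script, something along the lines of (illustrative only, not the actual patch):

    sed -i 's/python36-Cython/python3-Cython/' install-deps.sh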

[ceph-users] Ceph Tech Talk for June 25th

2020-06-16 Thread Mike Perez
Hi everyone, I'm looking for a proposal for this month's Tech Talk on the 25th at 17:00 UTC. If you have something you want to share with the Ceph community, consider sending me your proposal: https://ceph.io/ceph-tech-talks/ -- Mike Perez He/Him Ceph Community Manager Red Hat Los Angele

[ceph-users] Re: Current status of multiple cephfs

2020-06-16 Thread Nathan Fish
We are using multiple filesystems in production (Nautilus). While we have had a number of issues over the past year, I don't think any of them are specific to the use of multiple filesystems. On Tue, Jun 16, 2020 at 8:35 AM Simon Sutter wrote: > > Hello, > > > What is the current status, of using

[ceph-users] Re: advantage separate cluster network on single interface

2020-06-16 Thread Olivier AUDRY
hello, as far as I know there is no performance advantage in doing this. Personally I'm doing it in order to monitor the two bandwidth usages separately. oau On Tuesday, June 16, 2020 at 16:42 +0200, Marcel Kuiper wrote: > Hi > > I wonder if there is any (theoretical) advantage running a separate > backend

[ceph-users] advantage separate cluster network on single interface

2020-06-16 Thread Marcel Kuiper
Hi, I wonder if there is any (theoretical) advantage to running a separate backend network next to the public network (through VLAN separation) over a single interface. I googled a lot, and while some blogs advise doing so, they do not give any argument that supports this statement. Any insights on thi
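For reference, the split in question is just two subnets in ceph.conf; the addresses below are placeholders, and both subnets can be VLANs on the same physical interface, which is exactly the scenario asked about:

    [global]
    public network  = 192.168.10.0/24   # client and mon traffic
    cluster network = 192.168.20.0/24   # replication and backfill traffic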

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-16 Thread kefu chai
On Mon, Jun 15, 2020 at 11:31 PM kefu chai wrote: > > On Mon, Jun 15, 2020 at 7:27 PM Giulio Fidente wrote: > > > > hi David, thanks for helping > > > > python3-Cython seems to be already in the centos8 PowerTools repo: > > > > http://mirror.centos.org/centos-8/8/PowerTools/x86_64/os/Packages/ >

[ceph-users] Re: Poor Windows performance on ceph RBD.

2020-06-16 Thread jcharles
Hello, thanks a lot for your answer. I will try to investigate the 2nd problem, and I will keep you informed if I find something. For the first one, I am experimenting with a cache pool as you pointed me to; I never tried it because the ceph documentation doesn't really encourage this with rbd. Fo
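For anyone following along, the standard cache-tier setup referred to looks roughly like this (pool names are placeholders, and the documentation indeed discourages it for most RBD workloads):

    ceph osd tier add rbd rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd rbd-cache
    ceph osd pool set rbd-cache hit_set_type bloom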

[ceph-users] CephFS health error dir_frag recovery process

2020-06-16 Thread Christopher Wieringa
Hello helpful mailing list folks! After a networking outage, I had an MDS rank failure (originally 3 MDS ranks) that has left my CephFS cluster in bad shape. I worked through most of the Disaster Recovery guide (https://docs.ceph.com/docs/nautilus/cephfs/disaster-recovery-experts/#disaster-re
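For the archives, damage of this kind is usually inspected and repaired with the MDS admin commands below; the daemon/fs name is a placeholder, the exact tell syntax varies a little between releases, and these should only be run after reading the disaster-recovery documentation:

    ceph tell mds.<id> damage ls
    ceph tell mds.<id> scrub start / recursive repair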

[ceph-users] Current status of multiple cephfs

2020-06-16 Thread Simon Sutter
Hello, What is the current status of using multiple cephfs? In Octopus I get lots of warnings that this feature is still not fully tested, but the latest entry regarding multiple cephfs in the mailing list is from about 2018. Is someone using multiple cephfs in production? Thanks in Advanc
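For reference, creating an additional filesystem is a flag plus the usual pool setup (names and PG counts are placeholders); each extra filesystem also needs at least one more MDS daemon to serve it:

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create cephfs2_metadata 32
    ceph osd pool create cephfs2_data 128
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data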

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Dan van der Ster
Those df's and PG numbers all look fine to me. I wouldn't start adjusting pg_num now -- leave the autoscaler module disabled. Some might be concerned about having 190 PGs on an OSD, but this is fine provided you have ample memory (at least 3GB per OSD). Cheers, Dan On Tue, Jun 16, 2020 at 2:23 P
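The memory figure mentioned maps onto the BlueStore osd_memory_target option; a sketch of checking and raising it (the 4 GiB value is only an example):

    ceph config get osd.0 osd_memory_target               # current value on one OSD
    ceph config set osd osd_memory_target 4294967296      # set the osd default to 4 GiB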

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Boris Behrens
Oh ok. Because we have two types of SSDs, 1.8TB and 3.6TB. The 1.8TB ones got around 90-100 PGs and the 3.6TB ones around 150-190 PGs. Here is the output: RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED ssd 956 TiB 360 TiB 595 TiB 596 TiB 62.

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Dan van der Ster
On Tue, Jun 16, 2020 at 2:00 PM Boris Behrens wrote: > > See inline comments > > On Tue, Jun 16, 2020 at 13:29, Zhenshi Zhou wrote: > > > > I did this on my cluster and there was a huge number of pg rebalanced. > > I think setting this option to 'on' is a good idea if it's a brand new > >

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Boris Behrens
See inline comments. On Tue, Jun 16, 2020 at 13:29, Zhenshi Zhou wrote: > > I did this on my cluster and there was a huge number of pg rebalanced. > I think setting this option to 'on' is a good idea if it's a brand new > cluster. > On our new cluster we enabled them, but not on our primary

[ceph-users] Re: Ceph latest install

2020-06-16 Thread Marc Roos
Install rhel7/centos7 minimal. Install the rpms from http://download.ceph.com/rpm-nautilus/el7/ Check this manual: https://ceph.readthedocs.io/en/latest/install/index_manual/#install-manual -Original Message- To: Amudhan P Cc: ceph-users Subject: [ceph-users] Re: Ceph latest install
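The repository step of that manual typically amounts to a file like the following before installing the packages (a sketch of the standard download.ceph.com layout, not copied from the manual):

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph packages for x86_64
    baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc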

[ceph-users] Ceph install guide for Ubuntu

2020-06-16 Thread masud parvez
Could anyone give me the latest Ceph install guide for Ubuntu 20.04?

[ceph-users] Re: Ceph latest install

2020-06-16 Thread masud parvez
I need a proper guide, please help. On Sat, Jun 13, 2020 at 9:28 PM Amudhan P wrote: > > https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/ > > > On Sat, Jun 13, 2020 at 2:31 PM masud parvez > wrote: > >> Could anyone give me the latest version ceph install guide fo
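For Ubuntu 20.04 the upstream route at the time of writing is cephadm; stripped to its core it is roughly the following (the monitor IP is a placeholder, see docs.ceph.com for the full procedure):

    apt install -y cephadm                 # available in the 20.04 repositories
    cephadm bootstrap --mon-ip 192.0.2.10  # bootstraps a one-node cluster to grow from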

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Zhenshi Zhou
I did this on my cluster and a huge number of PGs were rebalanced. I think setting this option to 'on' is a good idea if it's a brand new cluster. On Tue, Jun 16, 2020 at 7:07 PM, Dan van der Ster wrote: > Could you share the output of > > ceph osd pool ls detail > > ? > > This way we can see how the po

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Dan van der Ster
Could you share the output of ceph osd pool ls detail ? This way we can see how the pools are configured and help recommend if pg_autoscaler is worth enabling. Cheers, Dan On Tue, Jun 16, 2020 at 11:51 AM Boris Behrens wrote: > > I read about the "warm" option and we are already discussin

[ceph-users] Ceph CRUSH rules in map

2020-06-16 Thread Bobby
Hi all, I have a question regarding the following rules in the Ceph CRUSH map: enum crush_opcodes { /*! do nothing */ CRUSH_RULE_NOOP = 0, CRUSH_RULE_TAKE = 1, /* arg1 = value to start with */ CRUSH_RULE_CHOOSE_FIRSTN = 2, /* arg1 = num items to pick */ /* arg2
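For context, the opcode list continues roughly as follows in src/crush/crush.h (paraphrased from memory; check the source tree for the authoritative version):

    CRUSH_RULE_CHOOSE_INDEP = 3,       /* same arguments as CHOOSE_FIRSTN */
    CRUSH_RULE_EMIT = 4,               /* no arguments */
    CRUSH_RULE_CHOOSELEAF_FIRSTN = 6,
    CRUSH_RULE_CHOOSELEAF_INDEP = 7,
    /* plus CRUSH_RULE_SET_* opcodes that override per-rule tunables */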

[ceph-users] Re: Many osds down , ceph mon has a lot of scrub logs

2020-06-16 Thread Frank Schilder
Are you observing something similar to this thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z/#FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z ? = Frank Schilder AIT Risø Campus Bygning 109, rum S14 Fro

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Boris Behrens
I read about the "warn" option and we are already discussing this. I don't know if the PGs need tuning. I don't know what the impact is and if there will be any difference if we enable it. The last ceph admin, who has meanwhile left, made a ticket, and I am not particularly familiar with ceph. So I ne

[ceph-users] Re: enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Dan van der Ster
Hi, I agree with "someone" -- it's not a good idea to just naively enable pg_autoscaler on an existing cluster with lots of data and active customers. If you're curious about this feature, it would be harmless to start out by enabling it with pg_autoscale_mode = warn on each pool. This way you ca
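Concretely, the warn approach is set per pool and its suggestions reviewed before acting on them (the pool name is a placeholder):

    ceph osd pool set <pool> pg_autoscale_mode warn
    ceph osd pool autoscale-status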

[ceph-users] enabling pg_autoscaler on a large production storage?

2020-06-16 Thread Boris Behrens
Hi, I would like to enable the pg_autoscaler on our Nautilus cluster. Someone told me that I should be really, really careful NOT to cause customer impact. Maybe someone can share some experience on this? The cluster has 455 OSDs on 19 hosts with ~17000 PGs and ~1 petabyte of raw storage, of which ~600TB

[ceph-users] Re: dealing with spillovers

2020-06-16 Thread Igor Fedotov
Hi Reed, you might want to use the bluefs-bdev-migrate command, which simply moves BlueFS files from a source path to a destination, i.e. from the main device to the DB in your case. It needs neither OSD redeployment nor additional/new device creation. It doesn't guarantee that spillover won't reoccur one day tho
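For the archives, the invocation looks roughly like this, run with the OSD stopped (OSD id and paths are placeholders; check the ceph-bluestore-tool man page for your release):

    systemctl stop ceph-osd@12
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-12 \
        --devs-source /var/lib/ceph/osd/ceph-12/block \
        --dev-target /var/lib/ceph/osd/ceph-12/block.db
    systemctl start ceph-osd@12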

[ceph-users] Re: unable to obtain rotating service keys

2020-06-16 Thread Dan van der Ster
Hi Raymond, I'm pinging this old thread because we hit the same issue last week. Is it possible that when you upgraded to nautilus you ran `ceph osd require-osd-release nautilus` but did not run `ceph mon enable-msgr2` ? We were in that state (intentionally), and started getting the `unable to o
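A quick way to check whether the monitors are advertising v2 addresses (output abbreviated, addresses are placeholders):

    ceph mon dump | grep addrs
    # v1-only monitors show e.g. v1:10.0.0.1:6789/0;
    # after "ceph mon enable-msgr2" they list both v2:...:3300 and v1:...:6789 endpoints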

[ceph-users] Re: Should the fsid in /etc/ceph/ceph.conf match the ceph_fsid in /var/lib/ceph/osd/ceph-*/ceph_fsid?

2020-06-16 Thread Eugen Block
As Paul already answered in your previous thread, you need to correct the fsid in your ceph.conf. The ceph-disk activate-all should work as soon as the config file is correct. Quoting Zhenshi Zhou: Yep, I think the ceph_fsid tells OSDs how to recognize the cluster. It should be the same
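A quick sanity check before re-running activation (the OSD id is a placeholder):

    grep fsid /etc/ceph/ceph.conf
    cat /var/lib/ceph/osd/ceph-0/ceph_fsid
    # once the two values match:
    ceph-disk activate-all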