[ceph-users] Re: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

2020-06-15 Thread Simon Sutter
Hello, When you deploy Ceph to other nodes with the orchestrator, they "just" run the containers you deployed to them. This means in your case you started the monitor container on ceph101, and you must have installed at least the ceph-common package (otherwise the ceph command would not work). If
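A minimal sketch of what the second host typically needs before ceph -s works there (assuming a cephadm deployment; hostnames and the package manager are illustrative):

    # on ceph100: copy the cluster config and admin keyring over
    scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring ceph101:/etc/ceph/
    # on ceph101: install the client tools, then verify
    dnf install -y ceph-common
    ceph -s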

[ceph-users] Many osds down , ceph mon has a lot of scrub logs

2020-06-15 Thread hoannv46
Hi all. My cluster has many OSDs down, and one mon log has many lines like: 2020-06-15 18:00:22.575 7fa2deffe700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2176495218}) 2020-06-15 18:00:22.661 7fa2deffe700 0 log_channel(cluster) log [DBG] : scr

[ceph-users] Re: Should the fsid in /etc/ceph/ceph.conf match the ceph_fsid in /var/lib/ceph/osd/ceph-*/ceph_fsid?

2020-06-15 Thread Zhenshi Zhou
Yep, I think the ceph_fsid tells OSDs how to recognize the cluster. It should be the same as the fsid in ceph.conf. On Tue, Jun 16, 2020 at 6:28 AM, wrote: > I am having a problem on my cluster where OSDs on one host are down after > reboot. When I run ceph-disk activate-all I get an error message stating > "No
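A quick way to compare the two values on an affected host (paths assume the default ceph-disk layout from the original post):

    # fsid the cluster expects, from ceph.conf
    grep fsid /etc/ceph/ceph.conf
    # ceph_fsid each OSD on this host was created with
    cat /var/lib/ceph/osd/ceph-*/ceph_fsid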

[ceph-users] Should the fsid in /etc/ceph/ceph.conf match the ceph_fsid in /var/lib/ceph/osd/ceph-*/ceph_fsid?

2020-06-15 Thread seth . duncan2
I am having a problem on my cluster where OSDs on one host are down after reboot. When I run ceph-disk activate-all I get an error message stating "No cluster conf found in /etc/ceph with fsid e1d7b4ae-2dcd-40ee-bea5-d103fe1fa9c9". When I look at the /etc/ceph/ceph.conf file I can see that the fsid

[ceph-users] Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

2020-06-15 Thread cemzafer
I have installed a simple Ceph system with two nodes (ceph100, ceph101) with cephadm and the ceph orch host add command. I copied the key to the second host (ceph101) with ssh-copy-id -f -i /etc/ceph/ceph.pub. I can execute the ceph -s command from the first host (ceph100), but when I execute the command in t

[ceph-users] Re: help with failed osds after reboot

2020-06-15 Thread Paul Emmerich
On Mon, Jun 15, 2020 at 7:01 PM wrote: > Ceph version 10.2.7 > > ceph.conf > [global] > fsid = 75d6dba9-2144-47b1-87ef-1fe21d3c58a8 > (...) > mount_activate: Failed to activate > ceph-disk: Error: No cluster conf found in /etc/ceph with fsid > e1d7b4ae-2dcd-40ee-bea5-d103fe1fa9c9 > -- Paul

[ceph-users] Re: help with failed osds after reboot

2020-06-15 Thread seth . duncan2
Ceph version 10.2.7 ceph.conf [global] fsid = 75d6dba9-2144-47b1-87ef-1fe21d3c58a8 mon_initial_members = chad, jesse, seth mon_host = 192.168.10.41,192.168.10.40,192.168.10.39 mon warn on legacy crush tunables = false auth_cluster_required = cephx auth_service_required = cephx auth_client_require

[ceph-users] Re: mount cephfs with autofs

2020-06-15 Thread Tony Lill
Also, if you are using a newer kernel, add rsize=16777216,wsize=16777216 (or smaller) if you don't want to run out of memory buffers after a few mount/unmount cycles. On 6/15/20 6:44 AM, Marc Roos wrote: > > > Thanks for these I was missing the x-systemd. entries. I assume these > are necessar
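As an illustration, the options would be appended to a kernel mount roughly like this (monitor address and credentials are placeholders):

    mount -t ceph 192.168.10.41:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,rsize=16777216,wsize=16777216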

[ceph-users] Re: [NFS-Ganesha-Support] bug in nfs-ganesha? and cephfs?

2020-06-15 Thread Jeff Layton
On Sun, 2020-06-14 at 15:17 +0200, Marc Roos wrote: > When rsyncing to a nfs-ganesha exported cephfs the process hangs, and > escalates into "cache pressure" of other cephfs clients[1]. > > When testing the rsync with more debugging on, I noticed that rsync > stalled at the 'set modtime of . '[2

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-15 Thread kefu chai
On Mon, Jun 15, 2020 at 7:27 PM Giulio Fidente wrote: > > hi David, thanks for helping > > python3-Cython seems to be already in the centos8 PowerTools repo: > > http://mirror.centos.org/centos-8/8/PowerTools/x86_64/os/Packages/ > > Is it possible we're not enabling all the additional/extra repos
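For reference, enabling the repo manually on CentOS 8 looks roughly like this (requires dnf-plugins-core; the repo id's capitalization has varied across point releases):

    dnf config-manager --set-enabled PowerTools
    dnf install -y python3-Cython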

[ceph-users] Re: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Matthew Vernon
On 14/06/2020 17:07, Khodayar Doustar wrote: > Now I want to add the other two nodes as monitor and rgw. Can I just > modify the ansible host file and re-run the site.yml? Yes. > I've done some modification in Storage classes, I've added some OSD and > uploaded a lot of data up to now. Is it safe t
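A sketch of the change, assuming a standard ceph-ansible INI inventory (all hostnames are placeholders):

    [mons]
    mon1
    # new monitor node
    mon4

    [rgws]
    # new rgw node
    rgw4

Then re-run the playbook from the ceph-ansible checkout:

    ansible-playbook -i hosts site.yml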

[ceph-users] Ganesha rados recovery on NFS 3

2020-06-15 Thread Maged Mokhtar
Hello all, can the NFS-Ganesha rados recovery for a multi-headed active/active setup work with NFS 3, or does it require NFS 4/4.1 specifics? Thanks for any help /Maged

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-15 Thread Giulio Fidente
hi David, thanks for helping python3-Cython seems to be already in the centos8 PowerTools repo: http://mirror.centos.org/centos-8/8/PowerTools/x86_64/os/Packages/ Is it possible we're not enabling all the additional/extra repos when launching the build? On 6/11/20 8:09 PM, David Galloway wrote:

[ceph-users] Re: mount cephfs with autofs

2020-06-15 Thread Zhenshi Zhou
The systemd autofs will mount cephfs successfully, with both kernel and fuse clients. Marc Roos wrote on Mon, Jun 15, 2020 at 6:44 PM: > > > Thanks for these I was missing the x-systemd. entries. I assume these > are necessary so booting does not 'hang' on trying to mount these? I > thought the _netdev was for

[ceph-users] Re: mount cephfs with autofs

2020-06-15 Thread Marc Roos
Thanks for these, I was missing the x-systemd. entries. I assume these are necessary so booting does not 'hang' on trying to mount these? I thought the _netdev was for this and sufficient? -Original Message- To: Derrick Lin Cc: ceph-users Subject: [ceph-users] Re: mount cephfs with

[ceph-users] Re: Fwd: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Khodayar Doustar
Yes, it's faster, but I'd like to continue managing the cluster with Ansible. Is that possible? On Mon, Jun 15, 2020 at 12:02 PM Marc Roos wrote: > > Just do manual install that is faster. > > > > -Original Message- > To: ceph-users > Subject: [ceph-users] Fwd: Re-run ansible to add moni

[ceph-users] Re: Fwd: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Marc Roos
Just do a manual install, that is faster. -Original Message- To: ceph-users Subject: [ceph-users] Fwd: Re-run ansible to add monitor and RGWs Any ideas on this? -- Forwarded message - From: Khodayar Doustar Date: Sun, Jun 14, 2020 at 6:07 PM Subject: Re-run ansible to a

[ceph-users] Fwd: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Khodayar Doustar
Any ideas on this? -- Forwarded message - From: Khodayar Doustar Date: Sun, Jun 14, 2020 at 6:07 PM Subject: Re-run ansible to add monitor and RGWs To: ceph-users Hi, I installed my Ceph cluster with ceph-ansible a few months ago. I've just added one monitor and one rgw at

[ceph-users] Re: Can't bind mon to v1 port in Octopus.

2020-06-15 Thread mafonso
Just to add that a dump of the live config indeed does not show the v1 port, so it seems to be ignoring the config. I tried the alternative mon host syntaxes without success. root@aio1 ~ # ceph mon dump dumped monmap epoch 2 epoch 2 fsid bb204a5c-957d-4a06-a372-redacted last_changed 2020-06-09T

[ceph-users] Can't bind mon to v1 port in Octopus.

2020-06-15 Thread Miguel Afonso
Hi, I have a single-node lab cluster with Octopus installed via ceph-ansible. Both v1 and v2 were enabled in the ceph-ansible vars with the correct suffixes. The configuration was generated correctly and both ports were included in the mon array. [global] cluster network = 172.16.6.0/24 fsid = bb204
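For comparison, a mon host line that explicitly lists both protocols normally looks like this (the address is a placeholder, not the poster's):

    [global]
    mon host = [v2:172.16.6.10:3300,v1:172.16.6.10:6789]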

[ceph-users] Re: ceph mds slow requests

2020-06-15 Thread Eugen Block
> Can these also be set with 'ceph tell'? No, those options can't be injected; you have to restart the OSDs. Quoting Marc Roos: Can these also be set with 'ceph tell'? -Original Message- From: Andrej Filipcic [mailto:andrej.filip...@ijs.si] Sent: Wednesday, June 10, 2020 12:22 To: ceph
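A sketch of setting the option centrally and restarting (the systemd unit name assumes a non-containerized deployment; 0 stands in for any OSD id):

    ceph config set osd osd_op_queue_cut_off high
    # the new value only takes effect after each OSD restarts
    systemctl restart ceph-osd@0.service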

[ceph-users] Re: poor cephFS performance on Nautilus 14.2.9 deployed by ceph_ansible

2020-06-15 Thread Zhenshi Zhou
I have encountered an issue where clients hang after opening a file. Any other client that visited the same file or directory hung as well. The only way to resolve it was rebooting the client servers. This happened with the kernel client only, on Luminous. After that I chose the fuse client excep

[ceph-users] Re: mount cephfs with autofs

2020-06-15 Thread Dan van der Ster
Hi, With CentOS 7.8 you can use the systemd autofs options in /etc/fstab. Here are two examples from our clusters, first with fuse and second with kernel: none /cephfs fuse.ceph ceph.id=admin,ceph.conf=/etc/ceph/dwight.conf,ceph.client_mountpoint=/,x-systemd.device-timeout=30,x-systemd.mount-time
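The kernel-client variant follows the same pattern; an illustrative sketch, not the poster's actual entry (monitor names are placeholders):

    mon1,mon2,mon3:/ /cephfs ceph name=admin,secretfile=/etc/ceph/secret,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev 0 2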

[ceph-users] Re: mount cephfs with autofs

2020-06-15 Thread Eugen Block
We've been using autofs with CephFS for a couple of years now; what exactly doesn't work, can you share more details? Quoting Derrick Lin: Hi guys, I can mount my cephfs via the mount command and access it without any problem. Now I want to integrate it in autofs, which is used on our cluster.

[ceph-users] Re: poor cephFS performance on Nautilus 14.2.9 deployed by ceph_ansible

2020-06-15 Thread Derrick Lin
Hi guys, I tried to mount via the kernel driver and it works beautifully. I was surprised; below is one of the FIO tests, which wasn't able to run at all on the FUSE mount: # /usr/bin/fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=FIO --filename=fio.test --bs=4M --iodepth=16 --size=50

[ceph-users] mount cephfs with autofs

2020-06-15 Thread Derrick Lin
Hi guys, I can mount my cephfs via mount command and access it without any problem. Now I want to integrate it in autofs which is used on our cluster. It seems this is not a popular approach and I found only this link: https://drupal.star.bnl.gov/STAR/blog/mpoat/how-mount-cephfs I followed the
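For what it's worth, a classic autofs map for a kernel CephFS mount generally looks like this (all names and paths are illustrative):

    # /etc/auto.master
    /mnt/auto  /etc/auto.ceph  --timeout=60

    # /etc/auto.ceph
    cephfs  -fstype=ceph,name=admin,secretfile=/etc/ceph/admin.secret  mon1,mon2,mon3:/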

[ceph-users] Re: Degradation of write-performance after upgrading to Octopus

2020-06-15 Thread majianpeng
In our tests based on v15.2.2, I found that osd_numa_prefer_iface/osd_numa_auto_affinity left only half of the CPUs in use. For 4K random writes this caused a large performance drop, so you can check whether this occurs in your case.
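One way to check whether this applies (a sketch; osd.0 stands in for any OSD id):

    # what the cluster thinks the options are set to
    ceph config get osd.0 osd_numa_auto_affinity
    ceph config get osd.0 osd_numa_prefer_iface
    # which CPUs a running OSD process is actually allowed to use
    taskset -cp "$(pgrep -f ceph-osd | head -1)"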

[ceph-users] Re: ceph mds slow requests

2020-06-15 Thread Eugen Block
Hi, although I've read many threads mentioning these two osd config options, I didn't know what to expect of them. But since you explicitly referred to the slow requests, I decided to give it a try and changed 'osd op queue cut off' to "high" ('osd op queue' was already "wpq"). We've had two dee