[ceph-users] certificate docs.ceph.com

2020-04-22 Thread Nic De Muyer
Hi, It appears the certificate expired today for docs.ceph.com. Just thought I'd mention it here. kr, Nic De Muyer

[ceph-users] Re: : nautilus : progress section in ceph status is stuck

2020-04-22 Thread ceph
What is the output from ceph -s and ceph health detail? - Mehmet On 21 April 2020 07:02:15 MESZ, Khodayar Doustar wrote: >Hi Vasishta, > >Have you checked that osd's systemd log and perfcounters? You can check >its metadata and bluefs logs to see what's going on. > >Thanks, > >Khodayar > >On Mon
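For reference, a minimal sketch of the diagnostics being asked for here (nothing beyond these two commands is implied by the thread; run them on any node with an admin keyring):

ceph -s               # overall cluster status, including the progress section
ceph health detail    # per-check detail behind the HEALTH_* summary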

[ceph-users] Re: MDS : replace a standby-replay daemon by an active one

2020-04-22 Thread Herve Ballans
Hi Eugen, Thanks for your confirmation, it works following your steps. In addition, I had to restart the third MDS service for the change from standby-replay to standby to take effect. Regards, Hervé On 15/04/2020 11:01, Eugen Block wrote: Hi, I didn't find any clear procedure r
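Not spelled out in the quoted excerpt, but a minimal sketch of the kind of steps being described, assuming a filesystem named cephfs and an MDS daemon id mds3 (both hypothetical names):

ceph fs set cephfs allow_standby_replay false    # stop using a standby-replay daemon for this filesystem
ceph fs set cephfs max_mds 2                     # let a standby take over a second active rank
systemctl restart ceph-mds@mds3                  # restart the remaining daemon so it re-registers as a plain standby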

[ceph-users] Re: block.db symlink missing after each reboot

2020-04-22 Thread Jan Fajerski
On Tue, Apr 21, 2020 at 04:38:18PM +0200, Stefan Priebe - Profihost AG wrote: Hi Igor, hmm, I updated the missing lv tags: # lvs -o lv_tags /dev/ceph-3a295647-d5a1-423c-81dd-1d2b32d7c4c5/osd-block-c2676c5f-111c-4603-b411-473f7a7638c2 | tr ',' '\n' | sort LV Tags ceph.block_device=/dev/
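For context, ceph-volume reads these LV tags at activation time to recreate the block.db symlink in the OSD directory, so a missing ceph.db_device/ceph.db_uuid tag is one plausible reason the symlink disappears after a reboot. A hedged sketch of adding a tag back by hand, with hypothetical volume names:

lvchange --addtag 'ceph.db_device=/dev/vgroup/lvdb-1' /dev/<osd-vg>/<osd-block-lv>    # re-add the missing tag
lvs -o lv_tags /dev/<osd-vg>/<osd-block-lv> | tr ',' '\n' | sort                      # verify the tag set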

[ceph-users] Upgrading to Octopus

2020-04-22 Thread Simon Sutter
Hello everybody In Octopus there are some interesting-looking features, so I tried upgrading my CentOS 7 test nodes according to: https://docs.ceph.com/docs/master/releases/octopus/ Everything went fine and the cluster is healthy. To test out the new dashboard functions, I tried to instal

[ceph-users] Re: docs.ceph.com certificate expired?

2020-04-22 Thread Jos Collin
Fixed On 22/04/20 6:57 pm, Bobby wrote: Thanks! When will it be back? On Wed, Apr 22, 2020 at 3:03 PM > wrote: Hello, trying to access the documentation on docs.ceph.com now results in an error: The certificate expir

[ceph-users] Ceph Apply/Commit vs Read/Write Op Latency

2020-04-22 Thread John Petrini
Hello, I was hoping someone could clear up the difference between these metrics. In filestore the difference between Apply and Commit Latency was pretty clear and these metrics gave a good representation of how the cluster was performing. High commit usually meant our journals were performing poor
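A quick way to look at the numbers in question on a live cluster (a sketch; exact column and counter names differ slightly between releases):

ceph osd perf                                     # per-OSD commit/apply latency summary from the mons
ceph daemon osd.0 perf dump | grep -i latency     # raw op latency counters for one OSD, run on that OSD's host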

[ceph-users] Re: Upgrading to Octopus

2020-04-22 Thread Khodayar Doustar
Hi Simon, Have you tried installing them with yum? On Wed, Apr 22, 2020 at 6:16 PM Simon Sutter wrote: > Hello everybody > > > In Octopus there are some interesting-looking features, so I tried > upgrading my CentOS 7 test nodes according to: > https://docs.ceph.com/docs/master/releases/
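On CentOS the dashboard ships as a separate package; a minimal sketch of the suggestion, assuming the Octopus repositories are already configured (package name as used upstream):

yum install ceph-mgr-dashboard       # pulls in the dashboard module and its dependencies
ceph mgr module enable dashboard     # then enable it on the mgr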

[ceph-users] How to remove a deamon from orch

2020-04-22 Thread Ml Ml
Hello list, I somehow have this "mgr.cph02 ceph02 stopped" line here. root@ceph01:~# ceph orch ps NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID mgr.ceph02 ceph02 running (2w) 2w ago - 15.2.0 docker.io/ceph/ceph:v15
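Not stated in the excerpt, but the cephadm orchestrator has a command for exactly this; a hedged sketch, using the daemon name exactly as it appears in the NAME column of ceph orch ps:

ceph orch daemon rm <daemon-name> --force    # --force may be required if the daemon belongs to a managed service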

[ceph-users] How to debug ssh: ceph orch host add ceph01 10.10.1.1

2020-04-22 Thread Ml Ml
Hello list, I did: root@ceph01:~# ceph cephadm set-ssh-config -i /tmp/ssh_conf root@ceph01:~# cat /tmp/ssh_conf Host * User root StrictHostKeyChecking no UserKnownHostsFile /dev/null root@ceph01:~# ceph config-key set mgr/cephadm/ssh_identity_key -i /root/.ssh/id_rsa set mgr/cephadm/ssh_identity
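A few checks that can narrow this down (a sketch; the cephadm subcommands below exist in Octopus, but confirm with ceph cephadm -h on your build):

ceph cephadm get-ssh-config                                       # show the ssh config the mgr will actually use
ceph cephadm get-pub-key                                          # show the stored public key
ssh -F /tmp/ssh_conf -i /root/.ssh/id_rsa root@10.10.1.1 true     # repeat the connection by hand from the active mgr host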

[ceph-users] Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread cody . schmidt
Hey Folks, This is my first ever post here in the CEPH user group, and I will preface it with the fact that I know this covers a lot of what many people ask frequently. Unlike what I assume to be a large majority of CEPH “users” in this forum, I am more of a CEPH “distributor.” My interests lie in how

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Jack
Hi, On 4/22/20 11:47 PM, cody.schm...@iss-integration.com wrote: > Example 1: > 8x 60-Bay (8TB) Storage nodes (480x 8TB SAS Drives) > Storage Node Spec: > 2x 32C 2.9GHz AMD EPYC >- Documentation mentions .5 cores per OSD for throughput optimized. Are > they talking about .5 Physical cores
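To make the quoted guideline concrete (a rough reading, not an authoritative one): taken as physical cores, 60 OSDs per node × 0.5 cores ≈ 30 cores, so a dual 32-core EPYC node (64 cores, 128 threads) has headroom even before hyperthreading; across all 8 nodes that is 480 OSDs × 0.5 ≈ 240 cores against 512 physical cores.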

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Brian Topping
Great set of suggestions, thanks! One to consider: > On Apr 22, 2020, at 4:14 PM, Jack wrote: > > I use 32GB flash-based satadom devices for root device > They are basically SSD, and do not take front slots > As they are never burning up, we never replace them > Ergo, the need to "open" the serv

[ceph-users] Re: Sporadic mgr segmentation fault

2020-04-22 Thread Brad Hubbard
On Tue, Apr 21, 2020 at 11:39 PM XuYun wrote: > > Dear ceph users, > > We are experiencing sporadic mgr crash in all three ceph clusters (version > 14.2.6 and version 14.2.8), the crash log is: > > 2020-04-17 23:10:08.986 7fed7fe07700 -1 > /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_

[ceph-users] Re: Sporadic mgr segmentation fault

2020-04-22 Thread XuYun
Thank you, Brad. We'll try to upgrade to 14.2.9 today. > On 23 April 2020, at 7:21 AM, Brad Hubbard wrote: > > On Tue, Apr 21, 2020 at 11:39 PM XuYun wrote: >> >> Dear ceph users, >> >> We are experiencing sporadic mgr crash in all three ceph clusters (version >> 14.2.6 and version 14.

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
I have seen a lot of people saying not to go with big nodes. What is the exact reason for that? I can understand that if the cluster is not big enough then the total node count could be too small to withstand a node failure, but if the cluster is big enough would

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Jarett DeAngelis
Well, for starters, "more network" = "faster cluster." On Wed, Apr 22, 2020 at 11:18 PM lin.yunfan wrote: > I have seen a lot of people saying not to go with big nodes. > What is the exact reason for that? > I can understand that if the cluster is not big enough then the total > nodes count coul

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
Big nodes are mostly for HDD clusters, and with a 40G or 100G NIC I don't think the network would be the bottleneck. lin.
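A rough back-of-the-envelope check of that claim: 60 HDDs × ~200 MB/s sequential ≈ 12 GB/s ≈ 96 Gbit/s per node, so a single 100G link can in principle be saturated during backfill or large sequential reads; in practice random I/O keeps spinners far below that figure, which is the usual argument that 40G (or 2× 25G) is enough for an HDD node.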

[ceph-users] Re: missing amqp-exchange on bucket-notification with AMQP endpoint

2020-04-22 Thread Andreas Unterkircher
Dear Yuval! The message format you tried to use is the standard one (the one being emitted from boto3, or any other AWS SDK [1]). It passes the arguments using 'x-www-form-urlencoded'. For example: Thank you for your clarification! I've previously tried it as an x-www-form-urlencoded body as we

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Martin Verges
Hello Cody, There are a few simple rules to design a good, stable and performant Ceph cluster. 1) Don't choose big systems. Not only because they are often more expensive, but you also have more impact when a system is down. 2) Throw away all the stuff that isn't required, like RAID controllers, make th

[ceph-users] adding block.db to OSD

2020-04-22 Thread Stefan Priebe - Profihost AG
Hello, is there anything else needed besides running: ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target /dev/vgroup/lvdb-1 I did so some weeks ago and currently I'm seeing that all OSDs originally deployed with --block-db show 10-20% I/O waits while all
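Not an answer from the thread, but a hedged sketch of the fuller sequence this command is usually embedded in, reusing the device names from the question (whether each step is strictly required depends on the release):

systemctl stop ceph-osd@${OSD}                   # stop the OSD before touching its devices
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
    bluefs-bdev-new-db --dev-target /dev/vgroup/lvdb-1
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
    bluefs-bdev-migrate --devs-source /var/lib/ceph/osd/ceph-${OSD}/block \
    --dev-target /var/lib/ceph/osd/ceph-${OSD}/block.db    # optionally move existing RocksDB data onto the new db device
systemctl start ceph-osd@${OSD}

If the OSD was deployed with ceph-volume, the LV tags and block.db symlink may also need updating (see the block.db symlink thread above), otherwise the new db device may not be picked up after a reboot.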

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Martin Verges
From all our cluster calculations, going with smaller systems reduced the TCO because of much cheaper hardware. Having 100 Ceph nodes is not an issue, so you can scale small and large clusters with the exact same hardware. But please, prove me wrong. I would love to see a way to reduce

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Darren Soothill
If you want the lowest cost per TB then you will be going with larger nodes in your cluster, but it does mean your minimum cluster size is going to be many PBs. There are a number of fixed costs associated with a node: motherboard, network cards, disk controllers; the more disks you s