[ceph-users] Re: Orchestrator: Cannot add node after mistake

2020-06-18 Thread John Zachary Dover
Simon, could you post the unhelpful error message? I can’t rewrite cephadm, but I can at least document this error message. Zac Dover Documentation Ceph On Fri, 19 Jun 2020 at 4:28 pm, Simon Sutter wrote: > After some hours of searching around in the docs and finally taking a look > at the sour

[ceph-users] Re: Orchestrator: Cannot add node after mistake

2020-06-18 Thread Simon Sutter
After some hours of searching around in the docs and finally taking a look at the source code of cephadm, I figured out it has to do with the docker software (podman in my case). It turned out that podman is not installed on the new node. I never installed podman on any node so I don't know if
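A minimal sketch of the likely fix, assuming the new node is the CentOS 8 machine described in the original post (podman is available from the CentOS 8 AppStream repo):

    # on the new node, before "ceph orch host add" is run against it
    dnf install -y podman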

[ceph-users] Re: Radosgw huge traffic to index bucket compared to incoming requests

2020-06-18 Thread Simon Leinen
Mariusz Gronczewski writes: > listing itself is bugged in the version I'm running: https://tracker.ceph.com/issues/45955 Ouch! Are your OSDs all running the same version as your RadosGW? The message looks a bit as if your RadosGW might be a newer version than the OSDs, and the new optimized bucket l
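A quick way to check for such a version mismatch, assuming admin access to the cluster:

    # reports the ceph version of every mon, mgr, osd and rgw daemon
    ceph versions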

[ceph-users] Re: Enable msgr2 mon service restarted

2020-06-18 Thread Amit Ghadge
+ceph dev On Sat, 13 Jun 2020, 20:34 Amit Ghadge wrote: > Hello All, > > We saw all nodes' mon services restart at the same time after > enabling msgr2. Does this have an impact on a running production cluster? We > are upgrading from Luminous to Nautilus. > > > > Thanks, > AmitG
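For reference, a minimal sketch of the step in question on a freshly upgraded Nautilus cluster:

    # make the monitors start listening on the v2 port (3300) in addition to 6789
    ceph mon enable-msgr2
    # verify that both v1 and v2 addresses are now registered per mon
    ceph mon dump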

[ceph-users] Re: Can't bind mon to v1 port in Octopus.

2020-06-18 Thread DHilsbos
My understanding is that MONs only configure themselves from the config file at first startup. After that, all MONs use the monmap to learn about themselves and their peers. As such, adding an address to the config file for a running MON, even if you restart / reboot, would not achieve the exp

[ceph-users] Re: ceph grafana dashboards: rbd overview empty

2020-06-18 Thread Marc Roos
Thnx! Zhenshi -Original Message- Cc: ceph-users Subject: Re: [ceph-users] ceph grafana dashboards: rbd overview empty Yep, you should also tell the mgr which pool's rbd statistics you want to export. Follow this: https://ceph.io/rbd/new-in-nautilus-rbd-performance-monitoring/
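A sketch of the setting the linked post describes, assuming a hypothetical pool name "rbd":

    # tell the prometheus mgr module which pools to gather per-image rbd stats for
    ceph config set mgr mgr/prometheus/rbd_stats_pools rbd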

[ceph-users] Re: ceph mds slow requests

2020-06-18 Thread Marc Roos
Yes, I was also thinking of setting this configuration. But I am just wondering: what are the drawbacks, if there are any? Why is the ceph team not making this a default setting if it is so good? -Original Message- To: ceph-users@ceph.io Subject: [ceph-users] Re: ceph mds slow request

[ceph-users] Autoscale recommendation seems too small + it broke my pool...

2020-06-18 Thread Lindsay Mathieson
Nautilus 14.2.9, setup using Proxmox.
* 5 Hosts
* 18 OSDs with a mix of disk sizes (3TB, 1TB, 500GB), all bluestore
* Pool size = 3, pg_num = 512
According to: https://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/#preselection With 18 OSDs I should be using pg_num=1024, bu
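For reference, a sketch of inspecting and overriding the autoscaler, assuming a hypothetical pool name "mypool":

    # show the pg_num the autoscaler currently recommends per pool
    ceph osd pool autoscale-status
    # opt the pool out of autoscaling and set pg_num manually
    ceph osd pool set mypool pg_autoscale_mode off
    ceph osd pool set mypool pg_num 1024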

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread klemen
Thanks for your prompt reply. I was afraid it's not ready yet. Hope it will be soon. The other thing I miss in drive_groups is the ability to specify storage devices directly (/dev/XXX). That would probably also solve my original problem.
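Later cephadm documentation does describe a paths filter for exactly this; a sketch with a hypothetical host name, not verified against the Octopus build discussed here:

    service_type: osd
    service_id: osd_using_paths
    placement:
      hosts:
        - node01
    data_devices:
      paths:
        - /dev/sdb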

[ceph-users] Orchestrator: Cannot add node after mistake

2020-06-18 Thread Simon Sutter
Hello, I made a mistake while deploying a new node on Octopus. The node is a freshly installed CentOS 8 machine. Before I did a "ceph orch host add node08" I pasted the wrong command: ceph orch daemon add osd node08:cl_node08/ceph That did not return anything, so I tried to add the node first with
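For reference, the intended order of operations as a sketch, using the names from the post:

    # register the host with the orchestrator first
    ceph orch host add node08
    # only then create daemons on it
    ceph orch daemon add osd node08:cl_node08/ceph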

[ceph-users] Re: Jewel clients on recent cluster

2020-06-18 Thread Christoph Ackermann

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread Eugen Block
I overlooked the partition part; I don't think drive_groups are ready for that (yet?). In that case I would deploy the OSDs manually with 'ceph-volume lvm create --data {vg name/lv name} --journal /path/to/device' or '... --journal {vg name/lv name}' on your OSD nodes. Quoting Lars Täuber
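One note: for BlueStore the separate-DB flag is --block.db rather than --journal (the FileStore flag). A sketch with hypothetical device names:

    # standalone OSD with its RocksDB on a separate SSD partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1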

[ceph-users] Re: OSD heartbeat failure

2020-06-18 Thread tdados
Hello Neil, You should never, never, never take a snapshot of a ceph cluster (from a VM perspective, as you say). I have my ceph cluster in VirtualBox but I only shut down my cluster with commands like ceph osd noout, norebalance, pause etc. Regarding the osd heartbeat, here are some articles that migh
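The flags mentioned, as a sketch of a clean shutdown sequence (to be unset again once the cluster is back up):

    ceph osd set noout
    ceph osd set norebalance
    ceph osd set pause
    # ...shut down, boot again, then:
    ceph osd unset pause
    ceph osd unset norebalance
    ceph osd unset noout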

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread Lars Täuber
Is there a possibility to specify partitions? I only see whole disks/devices chosen by vendor or model name. Regards, Lars On Thu, 18 Jun 2020 09:52:36 +, Eugen Block wrote: > You'll need to specify drive_groups in a yaml file if you don't deploy > standalone OSDs: > > https://docs.cep

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread Eugen Block
You'll need to specify drive_groups in a yaml file if you don't deploy standalone OSDs: https://docs.ceph.com/docs/master/cephadm/drivegroups/ Quoting kle...@psi-net.si: I'm trying to deploy a ceph cluster with a cephadm tool. I've already successfully done all steps except adding OSDs
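A minimal drive_groups sketch along the lines of the linked docs, with hypothetical ids and filters (spinning disks as data devices, SSDs for block.db):

    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

applied with:

    ceph orch apply osd -i drive_groups.yaml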

[ceph-users] cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread klemen
I'm trying to deploy a ceph cluster with the cephadm tool. I've already successfully done all steps except adding OSDs. My testing equipment consists of three hosts. Each host has SSD storage, where the OS is installed. On that storage I created a partition, which can be used as a ceph block.db. Ho

[ceph-users] Re: Radosgw huge traffic to index bucket compared to incoming requests

2020-06-18 Thread Mariusz Gronczewski
On 2020-06-18 at 10:51:31, Simon Leinen wrote: > Dear Mariusz, > > > we're using Ceph as S3-compatible storage to serve static files > > (mostly css/js/images + some videos) and I've noticed that there > > seems to be huge read amplification for the index pool. > > we have observed tha

[ceph-users] Re: How to force backfill on undersized pgs ?

2020-06-18 Thread Wout van Heeswijk
Hi Kári, The backfilling process will prioritize those backfill requests that are for degraded PGs or undersized PGs: " The next priority is backfill of degraded PGs and is a function of the degradation. A backfill for a PG missing two replicas will have a priority higher than a backfill
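For the question in the subject line, there are also explicit force flags; a sketch, assuming PG ids taken from "ceph pg dump":

    # move the given PGs to the front of the backfill queue
    ceph pg force-backfill <pgid>
    # and to undo it
    ceph pg cancel-force-backfill <pgid>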

[ceph-users] OSD heartbeat failure

2020-06-18 Thread neil.ashby-senior
Hi, I have a Luminous (12.2.25) cluster with several OSDs down. The daemons start but they're reporting as down. I did see in some osd logs that heartbeats were failing, but when I checked, the ports for the heartbeats were incorrect for that osd, although another osd was listening on that port. How do
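A sketch of how one might compare the actual listening ports with what the cluster expects, assuming shell access to the OSD host:

    # list listening TCP sockets owned by ceph-osd processes
    ss -tlnp | grep ceph-osd
    # show the addresses the cluster map records for a given osd
    ceph osd find <osd-id>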

[ceph-users] Re: Radosgw huge traffic to index bucket compared to incoming requests

2020-06-18 Thread Simon Leinen
Dear Mariusz, > we're using Ceph as S3-compatible storage to serve static files (mostly > css/js/images + some videos) and I've noticed that there seems to be > huge read amplification for the index pool. we have observed that too, under Nautilus (14.2.4-14.2.8). > Incoming traffic magnitude is of ar

[ceph-users] Re: Can't bind mon to v1 port in Octopus.

2020-06-18 Thread mafonso
After some more testing it seems that ceph just does not pick up some ceph.conf changes after being bootstrapped. It was possible to bind to the v1 port using `ceph mon set-addrs aio1 [v2:172.16.6.210:3300,v1:172.16.6.210:6789]` It was definitely not an issue with OS syscalls or permissions, just ceph n

[ceph-users] Re: Jewel clients on recent cluster

2020-06-18 Thread Ilya Dryomov
On Wed, Jun 17, 2020 at 8:51 PM Christoph Ackermann wrote: > > Hi all, > > we have a cluster that started on Jewel and is on Octopus nowadays. We would like > to enable Upmap but unfortunately there are some old Jewel clients > active. We cannot force Upmap by: ceph osd set-require-min-compat-client lu
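A quick way to see which feature generation the connected clients report, assuming Luminous or later mons:

    # lists client feature bits grouped by release name
    ceph features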

[ceph-users] Radosgw huge traffic to index bucket compared to incoming requests

2020-06-18 Thread Mariusz Gronczewski
Hi, we're using Ceph as S3-compatible storage to serve static files (mostly css/js/images + some videos) and I've noticed that there seems to be huge read amplification for the index pool. Incoming traffic magnitude is around 15k req/sec (mostly sub-1MB requests) but the index pool is getting hammered:

[ceph-users] Re: Calculate recovery time

2020-06-18 Thread 展荣臻(信泰)
You can calculate the difference in PG counts per OSD before and after, to estimate the amount of data migrated. Use the CRUSH algorithm to compute that difference in PG counts per OSD without having to actually add or remove an OSD. > Date: Thu, 18 Jun 2020 01:18:30 +0430 > From: Seena Falla
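One way to do that offline is osdmaptool; a sketch with hypothetical file names:

    # grab the current osdmap and simulate pg mappings without touching the cluster
    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --test-map-pgs --pool <pool-id>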

[ceph-users] Re: Bucket link problem with tenants

2020-06-18 Thread Shilpa Manjarabad Jagannath
On Wed, Jun 17, 2020 at 8:58 PM Benjamin.Zieglmeier <benjamin.zieglme...@target.com> wrote: > Hello, > > I have a ceph object cluster (12.2.11) that I am unable to figure out how > to link a bucket to a new user when tenants are involved. If no tenant is > mentioned (default tenant) I am able to
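For reference, the tenant-qualified syntax that later radosgw-admin versions document, as a sketch with hypothetical names (whether it works on 12.2.11 is exactly the question here):

    # quote the uid: the $ separates tenant from user and must not hit the shell
    radosgw-admin bucket link --bucket='tenant1/mybucket' --uid='tenant1$newuser'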