Simon,
Could you post the unhelpful error message? I can’t rewrite cephadm, but I
can at least document this error message.
Zac Dover
Documentation
Ceph
On Fri, 19 Jun 2020 at 4:28 pm, Simon Sutter wrote:
> After some hours of searching around in the docs and finally taking a look
> at the source code of cephadm, I figured out it has to do with the container
> software (podman in my case).
After some hours of searching around in the docs and finally taking a look at
the source code of cephadm, I figured out it has to do with the container
software (podman in my case).
It turns out that podman is not installed on the new node.
I never installed podman on any node, so I don't know if
Mariusz Gronczewski writes:
> listing itself is bugged in version
> I'm running: https://tracker.ceph.com/issues/45955
Ouch! Are your OSDs all running the same version as your RadosGW? The
message looks a bit as if your RadosGW might be a newer version than the
OSDs, and the new optimized bucket l
+ceph dev
On Sat, 13 Jun 2020, 20:34 Amit Ghadge, wrote:
> Hello All,
>
> We saw that the mon services on all nodes restarted at the same time after
> enabling msgr2. Does this have an impact on a running production cluster? We
> are upgrading from Luminous to Nautilus.
>
>
>
> Thanks,
> AmitG
>
My understanding is that MONs only configure themselves from the config file at
first startup. After that, all MONs use the monmap to learn about themselves
and their peers.
As such, adding an address to the config file for a running MON, even if you
restart / reboot, would not achieve the exp
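For anyone who wants to verify this, one way to see what the MONs actually have
in their map is to dump it directly (a rough sketch, assuming admin access to
the cluster):

  ceph mon getmap -o /tmp/monmap   # fetch the current monmap
  monmaptool --print /tmp/monmap   # print its contents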
Thnx! Zhenshi
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] ceph grafana dashboards: rbd overview empty
Yep, you also need to tell the mgr which pools you want to export RBD
statistics for.
Follow this,
https://ceph.io/rbd/new-in-nautilus-rbd-performance-monitoring/
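For example, something along these lines should do it (a sketch, assuming the
prometheus mgr module and a pool named "rbd"):

  # tell the prometheus module which pools to collect per-image RBD stats for
  ceph config set mgr mgr/prometheus/rbd_stats_pools "rbd"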
Yes, I was also thinking of setting this configuration. But I am just
wondering what the drawbacks are, if there are any. Why is the Ceph team
not making this the default setting if it is so good?
-Original Message-
To: ceph-users@ceph.io
Subject: [ceph-users] Re: ceph mds slow request
Nautilus 14.2.9, setup using Proxmox.
* 5 Hosts
* 18 OSDs with a mix of disk sizes (3TB, 1TB, 500GB), all bluestore
* Pool size = 3, pg_num = 512
According to:
https://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/#preselection
With 18 OSDs I should be using pg_num=1024, bu
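For reference, the rule of thumb behind that preselection table works out
roughly as follows (shown for illustration, using the commonly cited target of
~100 PGs per OSD):

  total PGs ≈ (OSDs × 100) / replicas = (18 × 100) / 3 = 600
  rounded up to the next power of two → pg_num = 1024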
Thanks for your prompt reply. I was afraid it's not ready yet. Hope it will be
soon.
The next thing I miss in drive_groups is the ability to specify storage
devices directly (/dev/XXX). That would probably also solve my original problem.
Hello,
I made a mistake while deploying a new node on Octopus.
The node is a freshly installed CentOS 8 machine.
Before I did a "ceph orch host add node08", I pasted the wrong command:
ceph orch daemon add osd node08:cl_node08/ceph
That did not return anything, so I tried to add the node first with
I overlooked the partition part; I don't think drive_groups are ready
for that (yet?). In that case I would deploy the OSDs manually with
'ceph-volume lvm create --data {vg name/lv name} --journal
/path/to/device' or '... --journal {vg name/lv name}' on your OSD nodes.
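As a concrete sketch with made-up VG/LV names (note: for bluestore OSDs the
corresponding option is --block.db, while --journal applies to filestore):

  # bluestore OSD: data on an LV, DB on an SSD partition
  ceph-volume lvm create --data vg_hdd/lv_osd0 --block.db /dev/sdb1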
Quoting Lars Täuber:
Hello Neil,
You should never, never, never take a snapshot of a Ceph cluster (from the VM
perspective, as you say). I have my Ceph cluster in VirtualBox, but I only shut
down my cluster after setting flags like noout, norebalance, pause, etc. (see
the sketch below).
Regarding the OSD heartbeat, here are some articles that migh
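For example, setting those flags before a full shutdown looks roughly like
this (a sketch, to be adapted to your own procedure):

  ceph osd set noout
  ceph osd set norebalance
  ceph osd set pause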
Is there a possibility to specify partitions? I only see whole disks/devices
chosen by vendor or model name.
Regards,
Lars
On Thu, 18 Jun 2020 09:52:36 +,
Eugen Block wrote:
> You'll need to specify drive_groups in a yaml file if you don't deploy
> standalone OSDs:
>
> https://docs.ceph.com/docs/master/cephadm/drivegroups/
You'll need to specify drive_groups in a yaml file if you don't deploy
standalone OSDs:
https://docs.ceph.com/docs/master/cephadm/drivegroups/
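A minimal sketch of such a yaml file (the service id and filter values here
are made up; see the link above for the full set of options):

  service_type: osd
  service_id: default_drives
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

which is then applied with something like 'ceph orch apply osd -i osd_spec.yml'.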
Quoting kle...@psi-net.si:
I'm trying to deploy a Ceph cluster with the cephadm tool. I've
already successfully done all steps except adding OSDs
I'm trying to deploy a Ceph cluster with the cephadm tool. I've already
successfully done all steps except adding OSDs. My testing equipment consists
of three hosts. Each host has SSD storage, where the OS is installed. On that
storage I created a partition, which can be used as a ceph block.db. Ho
On 2020-06-18, at 10:51:31,
Simon Leinen wrote:
> Dear Mariusz,
>
> > we're using Ceph as S3-compatible storage to serve static files
> > (mostly css/js/images + some videos) and I've noticed that there
> > seem to be huge read amplification for index pool.
>
> we have observed tha
Hi Kári,
The backfilling process will prioritize those backfill requests that are
for degraded or undersized PGs:
"
The next priority is backfill of degraded PGs and is a function of the
degradation. A backfill for a PG missing two replicas will have a
priority higher than a backfill
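If a particular PG needs to jump the queue, there is also a manual override
(a sketch; <pgid> is a placeholder):

  ceph pg force-backfill <pgid>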
Hi, I have a Luminous (12.2.25) cluster with several OSDs down. The daemons
start but they're reported as down. I did see in some OSD logs that heartbeats
were failing, but when I checked, the heartbeat ports were incorrect for that
OSD, although another OSD was listening on them. How do
Dear Mariusz,
> we're using Ceph as S3-compatible storage to serve static files (mostly
> css/js/images + some videos) and I've noticed that there seem to be
> huge read amplification for index pool.
we have observed that too, under Nautilus (14.2.4-14.2.8).
> Incoming traffic magnitude is of ar
After some more testing it seems that ceph just does not pick up some
ceph.conf changes after being bootstrapped. It was possible to bind to the v1
port using `ceph mon set-addrs aio1 [v2:172.16.6.210:3300,v1:172.16.6.210:6789]`.
It was definitely not an issue with OS syscalls or permissions, just ceph n
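To verify the result, checking the monmap afterwards is probably the quickest
way (a sketch):

  ceph mon dump   # both the v2 and v1 addresses should now be listed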
On Wed, Jun 17, 2020 at 8:51 PM Christoph Ackermann
wrote:
>
> Hi all,
>
> we have a cluster that started on Jewel and is on Octopus nowadays. We would
> like to enable Upmap, but unfortunately there are some old Jewel clients
> still active. We cannot force Upmap with: ceph osd
> set-require-min-compat-client lu
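For reference, the usual sequence is something along these lines (a sketch;
the second command should only be run once no jewel clients show up anymore):

  ceph features                                      # list connected client releases
  ceph osd set-require-min-compat-client luminous    # only once jewel clients are gone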
Hi,
we're using Ceph as S3-compatible storage to serve static files (mostly
css/js/images + some videos) and I've noticed that there seem to be
huge read amplification for index pool.
Incoming traffic magnitude is around 15k req/sec (mostly sub-1MB
requests), but the index pool is getting hammered:
You can calculate the difference in the PG count per OSD before and after to
estimate the amount of data that will be migrated.
Using the CRUSH algorithm you can compute that difference in PG counts per OSD
without having to actually add or remove an OSD.
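A rough sketch of doing that offline with osdmaptool (filenames are arbitrary):

  ceph osd getmap -o osdmap.bin                      # export the current osdmap
  osdmaptool osdmap.bin --test-map-pgs-dump > before.txt
  osdmaptool osdmap.bin --import-crush newcrush.bin  # inject the modified crush map
  osdmaptool osdmap.bin --test-map-pgs-dump > after.txt
  diff before.txt after.txt                          # PGs that would move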
> Date: Thu, 18 Jun 2020 01:18:30 +0430
> From: Seena Falla
On Wed, Jun 17, 2020 at 8:58 PM Benjamin.Zieglmeier <
benjamin.zieglme...@target.com> wrote:
> Hello,
>
> I have a Ceph object cluster (12.2.11) and I am unable to figure out how
> to link a bucket to a new user when tenants are involved. If no tenant is
> mentioned (the default tenant) I am able to