Hi,
We have happily tested the upgrade from v15.2.16 to v16.2.7 with cephadm on a
test cluster made of 3 nodes and everything went smoothly.
Today we started the very same operation on the production one (20 OSD servers,
720 HDDs) and the upgrade process doesn’t do anything at all…
To be more s
Hi Frank,
Did you check the shadow tree (the one with tildes in the names, seen
with `ceph osd crush tree --show-shadow`)? Maybe the host was removed
from the outer tree, but not from the one used for device-class selection.
There were bugs in this area before, e.g. https://tracker.ceph.com/issues/48065
I
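For reference, a quick side-by-side check of the two trees looks roughly like this (just a sketch; nothing here modifies the map):

# plain tree vs. the tree including the shadow (~hdd / ~ssd) buckets
ceph osd crush tree
ceph osd crush tree --show-shadow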
Hello!
I have a Ceph cluster with 6 nodes and 6 HDDs in each one. The status of
my cluster is OK and the pool is 45.25% used (95.55 TB of 211.14 TB). I don't
have any problems.
I want to change the position of various disks in the disk controllers of some
nodes, and I don't know the right way to do it.
-
Hi,
It's interesting that crushtool doesn't include the shadow tree -- I
am pretty sure that used to be included. I don't suggest editing the
crush map, compiling, then re-injecting -- I don't know what it will
do in this case.
What you could do instead is something like:
* ceph osd getcrushmap -
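The rest of the suggestion is cut off above, but a read-only inspection along those lines would look something like this (a sketch; file names are placeholders):

# dump the binary crush map and decompile it for inspection; nothing is re-injected
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
less crushmap.txt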
Hello, I've been having problems with my MDSs and they got stuck in the
up:replay state.
The journal was ok and everything seemed ok, so I reset the journal and now all
MDS fail to start with the following error:
2022-05-18 12:27:40.092 7f8748561700 -1
/home/abuild/rpmbuild/BUILD/ceph-14.2.16-402-g
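For context, the usual journal inspection/backup/reset commands look roughly like this (a sketch, assuming a single rank and a filesystem named cephfs):

# inspect and back up the journal before touching it
cephfs-journal-tool --rank=cephfs:0 journal inspect
cephfs-journal-tool --rank=cephfs:0 journal export backup.bin
# the reset step described above
cephfs-journal-tool --rank=cephfs:0 journal reset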
Do you see anything suspicious in /var/log/ceph/cephadm.log? Also
check the mgr logs for any hints.
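A few things worth looking at, roughly (a sketch; adjust to your cluster):

# progress of the orchestrated upgrade
ceph orch upgrade status
# follow the cephadm log channel from the active mgr
ceph -W cephadm
# state of the mgr daemons themselves
ceph orch ps --daemon-type mgr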
Quoting Lo Re Giuseppe:
Hi,
We have happily tested the upgrade from v15.2.16 to v16.2.7 with
cephadm on a test cluster made of 3 nodes and everything went
smoothly.
Today we started t
Dear Ceph community,
Let's say I want to make different sub-directories of my CephFS
separately available on a client system,
i.e. without exposing the parent directories (because they contain other
sensitive data, for instance).
I can simply mount specific different folders, as primitively
ill
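For what it's worth, a minimal sketch of exposing only one sub-directory to a client (all names below are placeholders):

# create a client key that is restricted to /projects/projectA
ceph fs authorize cephfs client.projectA /projects/projectA rw
# on the client, mount only that sub-directory
mount -t ceph mon1:6789:/projects/projectA /mnt/projectA -o name=projectA,secretfile=/etc/ceph/projectA.secret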
Hi Mathias,
I have noticed in the past that moving directories within the same mount
point can take a very long time using the system mv command. I use a
python script to archive old user directories by moving them to a
different part of the filesystem which is not exposed to the users. I
use the re
Hey all,
We will be having a Ceph science/research/big cluster call on Tuesday
May 24th. Please note we're doing this on a Tuesday, not the usual
Wednesday we've done in the past. If anyone wants to discuss something
specific they can add it to the pad linked below. If you have questions
or co
Hi Jimmy,
On Fri, Apr 22, 2022 at 11:02 AM Jimmy Spets wrote:
>
> Does cephadm automatically reduce ranks to 1 or does that have to be done
> manually?
Automatically.
--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD7
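If you want to watch it happen during an upgrade, something like this should do (a sketch, assuming the filesystem is named cephfs):

# cephadm lowers this to 1 before upgrading the MDS daemons and restores it afterwards
ceph fs get cephfs | grep max_mds
ceph fs status cephfs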
Hello,
Do I have to set some global flag for this operation?
Thanks!
From: Stefan Kooman
Sent: Wednesday, May 18, 2022 14:13
To: Jorge JP
Subject: Re: [ceph-users] Best way to change disk in controller disk without
affect cluster
On 5/18/22 13:06, Jorge JP
See this PR
https://github.com/ceph/ceph/pull/19973
From: Josh Baergen
Sent: Wednesday, May 18, 2022 10:54 AM
To: Richard Bade
Cc: Ceph Users
Subject: [ceph-users] Re: osd_disk_thread_ioprio_class deprecated?
Hi Richard,
> Could anyone confirm this? And whic
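A quick way to check whether an option is still known to your release (a sketch):

# prints the option's description if it still exists, errors out otherwise
ceph config help osd_disk_thread_ioprio_class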
First question: why do you want to do this?
There are some deployment scenarios in which moving the drives will Just Work,
and others in which it won’t. If you try, I suggest shutting the system down
all the way, exchanging just two drives, then powering back on, and seeing if all
is well befo
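Roughly, the sequence I have in mind (a sketch; do one node at a time):

# keep OSDs from being marked out while the node is down
ceph osd set noout
# shut the node down cleanly, swap the two drives, power it back on,
# then verify the OSDs came back where you expect them
ceph osd tree
ceph -s
# re-enable normal behaviour once everything is back up
ceph osd unset noout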
Thanks Janne for the information in detail.
We have an RHCS 4.2 non-collocated setup in one DC only. There are a few RBD
volumes mapped to a MariaDB database.
Also, an S3 endpoint with a bucket is being used to upload objects. No
multisite zone has been implemented yet.
My requirement is to take ba
Hi,
I don’t know what could cause that error, but could you share more
details? You seem to have multiple active MDSs, is that correct? Could
they be overloaded? What happened exactly, did one MDS fail or all of
them? Do the standby MDS report anything different?
Quoting Kuko Armas:
H
Hello,
In fact, S3 should be replicated to another region or AZ, and backups should
be managed with versioning on the bucket.
But in our case, we needed to secure the backups of databases (on K8S) into
our external backup solution (EMC Networker).
We implemented Ganesha and created an NFS export link t
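On recent cephadm-managed clusters the export can be created roughly like this (a sketch only; the exact syntax varies between releases, and the cluster/bucket names are placeholders):

# export an RGW bucket through the NFS (Ganesha) cluster "nfs-backup"
ceph nfs export create rgw --cluster-id nfs-backup --pseudo-path /db-backups --bucket db-backups
# the share can then be mounted on the backup host
mount -t nfs ganesha-host:/db-backups /mnt/db-backups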
Hello,
In my opinion, the better way is to deploy a batch fio pod (PVC volume on
your Rook Ceph) on your K8S.
The IO profile depends on your workload, but you can try 8 KB (the PostgreSQL
default) random read/write and sequential.
This way, you will be as close as possible to the client side.
Export to JSON the r
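For example, something like this inside the pod (a sketch; paths, sizes and runtimes are placeholders):

# 8k random read/write against the PVC-backed mount, JSON output for later comparison
fio --name=pg-like --directory=/data --rw=randrw --rwmixread=70 --bs=8k \
    --size=4G --numjobs=4 --iodepth=16 --direct=1 --runtime=300 --time_based \
    --group_reporting --output-format=json --output=randrw-8k.json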
> See this PR
> https://github.com/ceph/ceph/pull/19973
> Doing "git log -Sosd_disk_thread_ioprio_class -u
> src/common/options.cc" in the Ceph source indicates that they were
> removed in commit 3a331c8be28f59e2b9d952e5b5e864256429d9d5 which first
> appeared in Mimic.
Thanks Matthew and Josh for
16.2.9 is a hotfix release to address a bug in 16.2.8 that can cause the
MGRs to deadlock.
See https://tracker.ceph.com/issues/55687.
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.9.tar.gz
* Containers at https://qua
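For cephadm-managed clusters, moving to the hotfix is the usual pair of commands (a sketch):

ceph orch upgrade start --ceph-version 16.2.9
ceph orch upgrade status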
Hi,
We are doing exactly the same, exporting the bucket as an NFS share and running
our backup software on it to get the data to tape.
Given the data volumes, replication to another S3 disk-based endpoint is not
viable for us.
Regards,
Giuseppe
On 18.05.22, 23:14, "stéphane chalansonnet" wrote:
Hello,
Hi,
I didn't notice anything suspicious in the mgr logs, nor in cephadm.log
(attaching an extract of the latest).
What I have noticed is that one of the mgr containers, the active one, gets
restarted about every 3 minutes (as reported by ceph -w)
"""
2022-05-18T15:30:49.883238+0200 mon.
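To pin down the restarts, something like this might help (a sketch; replace the fsid and daemon name with yours):

# recent cephadm-channel messages from the cluster log
ceph log last cephadm
# crash reports, if the mgr is actually crashing rather than being restarted
ceph crash ls
# systemd journal of the active mgr container on its host
journalctl -u ceph-<fsid>@mgr.<hostname>.<id> --since "1 hour ago"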