That wasn't really clear in the docs :(
> On 21-12-2023 17:26 CET, Patrick Donnelly wrote:
>
>
> On Thu, Dec 21, 2023 at 3:05 AM Sake Ceph wrote:
> >
> > Hi David
> >
> > Reducing max_mds didn't work. So I executed a fs reset:
> > ceph fs set atlassian-prod allow_standby_replay false
> > cep
Hi Wes,
thanks, `ceph tell mon.* sessions` got me the answer very quickly :-)
Cheers
/Simon
On Thu, 21 Dec 2023 at 18:27, Wesley Dillingham wrote:
> You can ask the monitor to dump its sessions (which should expose the IPs
> and the release / features); you can then track down by IP those w
Hi,
On 21.12.23 15:13, Nico Schottelius wrote:
I would strongly recommend k8s+rook for new clusters; it also allows
running Alpine Linux as the host OS.
Why would I want to learn Kubernetes before I can deploy a new Ceph
cluster when I have no need for K8s at all?
Regards
--
Robert Sander
Hei
Hi,
On 21.12.23 19:11, Albert Shih wrote:
What is the advantage of podman vs docker? (I mean not in general, but for
Ceph.)
Docker comes with the Docker daemon, which has to be restarted when it gets
an update, and that restarts all containers. For a storage system that is
not the best procedure.
Everyth
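For illustration, a rough sketch of what the daemonless side looks like under cephadm with podman, where each Ceph daemon is wrapped in its own systemd unit (the unit names are examples and the fsid is a placeholder, not values from this thread):

systemctl list-units 'ceph-*'                  # one unit per daemon, e.g. ceph-<fsid>@osd.3.service
systemctl restart ceph-<fsid>@osd.3.service    # restarts only that OSD's container, nothing else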
On 21/12/2023 13:50, Drew Weaver wrote:
Howdy,
I am going to be replacing an old cluster pretty soon and I am looking for a
few suggestions.
#1 cephadm or ceph-ansible for management?
#2 Since the whole... CentOS thing... what distro appears to be the most
straightforward to use with Ceph? I
You can ask the monitor to dump its sessions (which should expose the IPs
and the release / features); you can then track down by IP those with the
undesirable features/release:
ceph daemon mon.`hostname -s` sessions
Assuming your mon is named after the short hostname, you may need to do
this for e
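A minimal sketch of how that session dump can be searched for old clients (assuming the output contains a per-client release string; the exact JSON field names vary between Ceph releases, so verify against your version):

ceph tell 'mon.*' sessions > sessions.json        # dump sessions from all mons
grep -io 'release[^,]*' sessions.json | sort | uniq -c    # rough count of clients per release
grep -i -B5 'jewel' sessions.json                 # full entries (incl. IPs) of the oldest clients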
On Thu, Dec 21, 2023 at 3:05 AM Sake Ceph wrote:
>
> Hi David
>
> Reducing max_mds didn't work. So I executed a fs reset:
> ceph fs set atlassian-prod allow_standby_replay false
> ceph fs set atlassian-prod cluster_down true
> ceph mds fail atlassian-prod.pwsoel13142.egsdfl
> ceph mds fail atlassi
On Thu, Dec 21, 2023 at 2:49 AM David C. wrote:
> I would start by decrementing max_mds by 1:
> ceph fs set atlassian-prod max_mds 2
This will have no positive effect. The monitors will not alter the
number of ranks (i.e. stop a rank) if the cluster is degraded.
--
Patrick Donnelly, Ph.D.
He /
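Before touching max_mds it helps to look at the rank states; a short sketch using standard commands (the output shape depends on your release):

ceph fs status atlassian-prod    # per-rank state: active, replay, damaged, ...
ceph mds stat                    # compact summary of ranks vs. max_mds
ceph health detail               # FS_DEGRADED / MDS_DAMAGED details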
On Thu, Dec 21, 2023 at 2:11 AM Sake Ceph wrote:
>
> Starting a new thread, forgot the subject in the previous one.
> So our FS is down. Got the following error, what can I do?
>
> # ceph health detail
> HEALTH_ERR 1 filesystem is degraded; 1 mds daemon damaged
> [WRN] FS_DEGRADED: 1 filesystem is degraded
>
[rook@rook-ceph-tools-5ff8d58445-gkl5w .aws]$ ceph features
{
    "mon": [
        {
            "features": "0x3f01cfbf7ffd",
            "release": "luminous",
            "num": 3
        }
    ],
    "osd": [
        {
            "features": "0x3f01cfbf7ffd",
            "release": "lu
Hi,
Our cluster is currently running quincy, and I want to set the minimum
client version to luminous to enable the upmap balancer, but when I tried to,
I got this:
# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 2 connected client(s)
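A hedged sketch of one way to proceed from that error (the commands are standard, but check the exact override flag on your release before forcing anything):

ceph features                                   # per-component feature/release summary
ceph tell 'mon.*' sessions | grep -i release    # identify the connected clients older than luminous
# once the old clients are upgraded or disconnected (or the risk is accepted):
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it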
Hey Drew,
Drew Weaver writes:
> #1 cephadm or ceph-ansible for management?
> #2 Since the whole... CentOS thing... what distro appears to be the most
> straightforward to use with Ceph? I was going to try and deploy it on Rocky
> 9.
I would strongly recommend k8s+rook for new clusters, also
Hi,
On 12/21/23 14:50, Drew Weaver wrote:
#1 cephadm or ceph-ansible for management?
cephadm.
The ceph-ansible project writes in its README:
NOTE: cephadm is the new official installer, you should consider
migrating to cephadm.
https://github.com/ceph/ceph-ansible
#2 Since the whole...
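Following up on the cephadm recommendation above, a rough sketch of a cephadm bootstrap on a fresh host (the IP and hostname are placeholders, not values from this thread):

cephadm bootstrap --mon-ip 192.0.2.10            # first mon + mgr, generates a minimal ceph.conf
ceph orch host add <hostname> <ip>               # add the remaining hosts to the orchestrator
ceph orch apply osd --all-available-devices      # let cephadm create OSDs on all unused disks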
Howdy,
I am going to be replacing an old cluster pretty soon and I am looking for a
few suggestions.
#1 cephadm or ceph-ansible for management?
#2 Since the whole... CentOS thing... what distro appears to be the most
straightforward to use with Ceph? I was going to try and deploy it on Rocky 9
Hello Ceph users,
We've been having an issue with RGW for a couple of days and we would
appreciate some help, ideas, or guidance to figure out the issue.
We run a multi-site setup which has been working pretty fine so far. We
don't actually have data replication enabled yet, only metadata
replicatio
Hi David
Reducing max_mds didn't work. So I executed a fs reset:
ceph fs set atlassian-prod allow_standby_replay false
ceph fs set atlassian-prod cluster_down true
ceph mds fail atlassian-prod.pwsoel13142.egsdfl
ceph mds fail atlassian-prod.pwsoel13143.qlvypn
ceph fs reset atlassian-prod
ceph fs r