Hi,
I have been setting up a new cluster with a combination of cephadm and
ceph orch.
I have run into a problem with rgw daemons that do not start.
I have been following the documentation:
https://docs.ceph.com/en/latest/cephadm/install/ - the RGW section
ceph orch apply rgw ikea cn-dc9-1
Doh,
I found my problem: I had somehow managed to swap the zonegroup and
zone in my setup.
Fixed that and it works.
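For anyone hitting the same symptom: the mismatch can be spotted by comparing what the cluster's multisite configuration actually contains against what the service spec names. A minimal check, sketched with standard commands (the argument order of `ceph orch apply rgw` follows the documentation version linked above, where "ikea" is the realm and "cn-dc9-1" the zone):

```shell
# List the zonegroups and zones the rgw daemons will read at startup.
# A spec that puts a zone name where the zonegroup belongs (or vice
# versa) leaves the daemons unable to start.
radosgw-admin zonegroup list
radosgw-admin zone list

# Once zonegroup and zone are the right way around, re-apply the spec.
ceph orch apply rgw ikea cn-dc9-1
```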
---
- Karsten
On 14-10-2020 09:10, Karsten Nielsen wrote:
Hi,
I have been setting up a new cluster with a combination of cephadm and
ceph orch.
I have run into a problem
Hi,
I have set up a ceph cluster with cephadm with the docker backend.
I want to move /var/lib/docker to a separate device to get better
performance and less load on the OS device.
I tried that by stopping docker, copying the content of /var/lib/docker to
the new device, and mounting the new device to /v
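The move being described can be sketched roughly as follows. This is a generic outline, not the original poster's exact steps: the device name /dev/sdX1, the temporary mount point, the filesystem type, and the use of rsync are all assumptions.

```shell
# Stop everything that uses /var/lib/docker before copying.
systemctl stop docker

# Mount the new device at a temporary location and copy the old
# contents across, preserving permissions, hardlinks, ACLs and xattrs.
mount /dev/sdX1 /mnt/newdocker
rsync -aHAX /var/lib/docker/ /mnt/newdocker/

# Remount the new device over /var/lib/docker and make it permanent.
umount /mnt/newdocker
mount /dev/sdX1 /var/lib/docker
echo '/dev/sdX1 /var/lib/docker ext4 defaults 0 2' >> /etc/fstab

systemctl start docker
```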
Hi cephers,
I am a happy user of ceph and cephadm; we run it in a hyperconverged
setup.
We are moving to running BGP on the nodes and have ECMP.
My networking team is asking to run VRFs on the nodes: we will have a VRF
for client traffic and a VRF for the storage network.
Is that supported
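For context, a Linux VRF is a per-interface L3 routing domain backed by its own routing table. Setting one up looks roughly like this (the VRF name, table number, and interface name are hypothetical; whether the ceph daemons can be bound inside such a VRF is exactly the open question here):

```shell
# Create a VRF device bound to routing table 10 for storage traffic.
ip link add vrf-storage type vrf table 10
ip link set vrf-storage up

# Enslave the storage NIC to the VRF; its routes move into table 10
# and sockets must be bound to the VRF device to use them.
ip link set eth1 master vrf-storage
```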
Hi,
I am running ceph 16.2.6 installed with cephadm.
I have enabled prometheus to be able to scrape metrics from an external
prometheus server.
I have 3 nodes with mgr daemons; all reply to the query against
node:9283/metrics, but 2 return an empty reply - the non-active mgrs.
Is there a node:9283/
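This matches how ceph-mgr behaves: only the active mgr serves metrics, and the standbys answer with an empty body. A common approach is simply to scrape all three endpoints and let the empty replies be harmless, e.g. (node names are placeholders):

```yaml
# prometheus.yml scrape job: the standby mgrs return an empty page,
# so only the active mgr actually yields samples.
scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets:
          - 'node1:9283'
          - 'node2:9283'
          - 'node3:9283'
```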
ed for use outside the
internal Prometheus deployments, but we definitely intended (at some
point when time permitted) to try and submit patches that would work
for both use-cases, since it's painful to continually update the
dashboards on every release.
On Tue, Sep 28, 2021 at 12:45 PM Karsten
: [__address__]
regex: '.*'
target_label: instance
replacement: 'ceph-mgr'
Kind Regards, Ernesto
On Wed, Sep 29, 2021 at 4:08 PM Karsten Nielsen wrote:
OK, thanks for that explanation. It would be awesome if you got time to do
the patches upstream. It does seem like a lot of work.
Hi all,
I am troubleshooting an issue that I am not really sure how to deal
with.
We have set up a ceph cluster, version 16.2.6, with cephadm, running with
podman containers.
Our hosts run ceph and kubernetes.
Our hosts run all NVMe, 512GB mem and a single AMD EPYC 7702P CPU.
We run baremetal an
On 11-01-2022 09:36, Anthony D'Atri wrote:
Our hosts run all NVMe
Which drives, specifically? And how many OSDs per? How many PGs per
OSD?
There are 3 types of devices:
* HPE NS204i-p Gen10+ Boot Controller
- stores the /var/lib/ceph folder
* HPE 7.68TB NVMe x4 RI SFF SC U.3 SSD
- We have 3
Hi,
Last week I upgraded my ceph cluster from Luminous to Mimic 13.2.6.
It was running fine for a while, but yesterday my mds went into a crash loop.
I have 1 active and 1 standby mds for my cephfs, both of which are in the
same crash loop.
I am running ceph based on https://hub.docker.com/r/cep
-00 [INF] Standby daemon mds.k8s-node-02
assigned to filesystem recovery-fs as rank 0
2019-11-05 09:43:47.821976 mon.k8s-node-00 [INF] Health check cleared:
MDS_ALL_DOWN (was: 1 filesystem is offline)
...
...
-Original message-
From: Karsten Nielsen
Sent: Tue 05-11-2019 10:29
Subject
-Original message-
From: Yan, Zheng
Sent: Wed 06-11-2019 08:15
Subject:Re: [ceph-users] mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen wrote:
> >
> > Hi,
> >
> > Last week I up
-Original message-
From: Yan, Zheng
Sent: Wed 06-11-2019 14:16
Subject:Re: [ceph-users] mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen wrote:
> >
> > -Original message-
> >
-Original message-
From: Yan, Zheng
Sent: Thu 07-11-2019 07:21
Subject:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Thu, Nov 7, 2019 at 5:50 AM Karsten Nielsen wrote:
> >
> > -Original message-
> >
: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> I have tracked down the root cause. See https://tracker.ceph.com/issues/42675
>
> Regards
> Yan, Zheng
>
> On Thu, Nov 7, 2019 at 4:01 PM Karsten Nielsen wrote:
> >
> > -Origina
-Original message-
From: Yan, Zheng
Sent: Thu 07-11-2019 14:20
Subject:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Thu, Nov 7, 2019 at 6:40 PM Karsten Nielsen wrote:
> >
> > That is awesome.
> >
>
-2019 14:20
Subject:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Thu, Nov 7, 2019 at 6:40 PM Karsten Nielsen wrote:
> >
> > That is awesome.
> >
> > Now I just need to figure out where the lost+found files needs t
-Original message-
From: Yan, Zheng
Sent: Mon 11-11-2019 15:09
Subject:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Mon, Nov 11, 2019 at 5:09 PM Karsten Nielsen wrote:
> >
> > I started a job that moved some
-Original message-
From: Karsten Nielsen
Sent: Tue 12-11-2019 10:30
Subject:[ceph-users] Re: mds crash loop
To: Yan, Zheng ;
CC: ceph-users@ceph.io;
> -Original message-
> From: Yan, Zheng
> Sent: Mon 11-11-2019 15:09
> Subject: Re: [ceph-us
I have a problem with my mds that is in a crash loop. With the help of Yan,
Zheng I have made a few attempts to save it, but it seems that it is not going
the way it should.
I am reading through this documentation.
https://docs.ceph.com/docs/mimic/cephfs/disaster-recovery/
If I use the last step t
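For reference, the journal-truncation procedure from that disaster-recovery document can be sketched roughly as below. This is a summary of the documented steps, not advice specific to this cluster; the reset steps are destructive (unflushed metadata is lost), so the export must be taken first and kept safe.

```shell
# Back up the journal before touching anything.
cephfs-journal-tool journal export backup.bin

# Recover whatever dentries can be salvaged from the damaged journal.
cephfs-journal-tool event recover_dentries summary

# Truncate the journal (the step that discards unflushed metadata),
# then clear stale entries from the session table.
cephfs-journal-tool journal reset
cephfs-table-tool all reset session
```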
:Re: [ceph-users] Re: mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Tue, Nov 12, 2019 at 6:18 PM Karsten Nielsen wrote:
> >
> > -Original message-
> > From: Karsten Nielsen
> > Sent: Tue 12-11-2019 10:30
> > Subject: