I guess RBD mirror seems to be the way to go.
Build a cluster on the new site and configure mirroring.
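A rough sketch of what the journal-based setup could look like on Nautilus (the pool, image and peer names below are placeholders, and an rbd-mirror daemon has to be running on the new site):

  rbd mirror pool enable mypool image                      # on both clusters
  rbd mirror pool peer add mypool client.mirror@old-site   # on the new site
  rbd feature enable mypool/myimage journaling
  rbd mirror image enable mypool/myimage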
Another idea is to stop your origin cluster, rent a car, transport it to the new
location and - hopefully - start it all again at the new site.
Just my 2 cents
Mehmet
On 19 February 2020, 14:23:02 CET, ... wrote:
If it helps at all, I've posted a log with a recent backtrace
ceph-post-file: 589aa7aa-7a80-49a2-ba55-376e467c4550
In fact, the log seems to span two different lifetimes of the same OSD:
the first aborted, then rather than being repaired, the OSD was
recreated and 12 hours later aborted again.
Hi Andras,
I think you're missing one (at least!) more important aspect in your
calculations, which is the write block size. BlueStore compresses each
write block independently. The Ceph object size in your case is
presumably 4 MiB, which (as far as I understand EC functioning) is split
into 9 parts.
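To make the arithmetic concrete (just a sketch; the 6+3 split and the HDD defaults are my assumptions): a 4 MiB object striped over a 6+3 profile puts roughly 4 MiB / 6 ≈ 683 KiB of data on each shard, and BlueStore then compresses that in blobs of at most bluestore_compression_max_blob_size (512 KiB by default on HDD), so the unit being compressed is far smaller than the whole object. The defaults in effect can be checked against a running OSD (osd.0 is just an example ID):

  ceph config show osd.0 bluestore_compression_mode
  ceph config show osd.0 bluestore_compression_max_blob_size_hdd
  ceph config show osd.0 bluestore_min_alloc_size_hdd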
Hi,
the PG balancer is not working at all, and if I call its status command the plugin
does not respond, it just hangs forever. Restarting the mgr doesn't help.
I now have a PG distribution issue; how do I fix this?
v 14.2.5
kind regards
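A typical first round of checks for this (just a sketch, assuming the upmap balancer is what's wanted):

  ceph mgr module ls                               # is the balancer module enabled?
  ceph balancer status                             # the call that reportedly hangs
  ceph osd set-require-min-compat-client luminous  # required for upmap mode
  ceph balancer mode upmap
  ceph balancer on
  ceph osd df tree                                 # verify the PG/usage distribution afterwards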
Thore,
Thank you for your reply.
Unless the issue was specifically with a Ceph telemetry server or its
subnet, we had no network issues at that time; at least none were reported by
monitoring or by customers. It is very weird, unless the telemetry module
has a bug of some kind and hangs on its o
Hello,
I run a Ceph Nautilus cluster where we use RBD for data storage and
retrieval. The RBD image metadata are in a replicated pool with 1+2
copies. The data are placed in a pool that uses erasure coding with a
4+2 profile.
Now I am unsure about the exact meaning of min_size for both pools. The
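As a rough rule of thumb (the pool names below are placeholders, not the actual pools): a replicated pool of size 3 normally has min_size 2, so I/O continues with two copies left; for an EC 4+2 pool, size is k+m = 6 and the Nautilus default is min_size = k+1 = 5, meaning the pool keeps serving I/O with one shard missing and stops when two are gone. The current values can be checked with:

  ceph osd pool get rbd-metadata size
  ceph osd pool get rbd-metadata min_size
  ceph osd pool get rbd-data-ec min_size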
On Fri, Feb 21, 2020 at 05:28:12AM -, alexander.v.lit...@gmail.com wrote:
> This evening I was awakened by an error message
>
>   cluster:
>     id:     9b4468b7-5bf2-4964-8aec-4b2f4bee87ad
>     health: HEALTH_ERR
>             Module 'telemetry' has failed: ('Connection aborted.', error(101,
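If it is only the telemetry module that is stuck, the usual way to clear the resulting HEALTH_ERR (a generic sketch, not specific to this cluster) is to switch it off or disable the module:

  ceph telemetry off
  ceph mgr module disable telemetry
  ceph mgr module ls          # confirm it is no longer listed as enabled/failed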
On Fri, 2020-02-21 at 15:19 +0700, Konstantin Shalygin wrote:
> On 2/21/20 3:04 PM, Andreas Haupt wrote:
> > As you can see, only the first, old RGW (ceph-s3) is listed. Is there
> > any place where the RGWs need to get "announced"? Any idea, how to
> > debug this?
>
> Did you try to restart the active mgr?
On 2/21/20 3:04 PM, Andreas Haupt wrote:
> As you can see, only the first, old RGW (ceph-s3) is listed. Is there
> any place where the RGWs need to get "announced"? Any idea, how to
> debug this?
Did you try to restart the active mgr?
k
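For completeness, restarting or failing over the active mgr could look like this (daemon and host names are placeholders):

  ceph mgr fail cephmon1                  # hand over to a standby mgr
  # or, on the host running the active mgr:
  systemctl restart ceph-mgr@cephmon1
  ceph -s | grep -A 8 services            # check whether the rgw line shows up now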
Dear all,
we recently added two additional RGWs to our Ceph cluster (version
14.2.7). They work flawlessly; however, they do not show up in 'ceph
status':
[cephmon1] /root # ceph -s | grep -A 6 services
  services:
    mon: 3 daemons, quorum cephmon1,cephmon2,cephmon3 (age 14h)
    mgr: cephmon1(a
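Since Nautilus builds that 'services' section from the mgr's service map, it may also be worth checking what has actually been registered (a generic check, nothing specific to this setup):

  ceph service dump          # lists registered services, including the rgw instances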