> Hello! Any news?
>
Yes, it will be around 18° today; Israel was heckled at the EU song contest...
Thanks for the help!
I wanted to give an update on how the issues I was having were resolved. I didn't realize that I had created several competing OSD specifications via the dashboard. After cleaning those up, OSD creation is now working as expected.
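For anyone who runs into the same thing, the cleanup boils down to something like the following (sketched here with the cephadm CLI; the spec name osd.dashboard_spec_1 is just a placeholder for whatever competing specs show up in the listing):

  # list every OSD service spec the orchestrator knows about
  ceph orch ls osd --export
  # remove a competing/unwanted spec by its service name
  # (this removes the spec from the orchestrator; verify the behaviour
  #  against the cephadm docs for your release before running it)
  ceph orch rm osd.dashboard_spec_1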
-Mike
> On Tue, 23 Apr 2024 00:06:19 -, c
Hello Christopher,
We had something similar on a Pacific multi-site setup.
In our case the problem was leftover bucket metadata; it was solved by running "radosgw-admin metadata list ..." and "radosgw-admin metadata rm ..." on the master zone for a non-existent bucket.
Best regards,
Konstantin
On Tue, 2024-04-30 at
Hi!
I hope someone can help us out here :)
We need to move from 3 datacenters to 2 datacenters (+ 1 small server room
reachable via a layer 3 VPN).
Right now we have a ceph-mon in each datacenter, which is fine. But we have to move
and will only have 2 datacenters in the future (that are connected, so d
Dear Ceph users,
I'm pretty new to this list, but I've been using Ceph with satisfaction since
2020. Over the years I have solved a few problems by consulting the list archive,
but now we are stuck on a problem that seems to have no answer.
After a power failure, we have a bunch of OSDs that during r
- DigitalOcean credits
  - things to ask
    - what would promotional material require
    - how much are credits worth
  - Neha to ask
- 19.1.0 centos9 container status
  - close to being ready
  - will be building centos 8 and 9 containers simultaneously
  - should test o
I'm sorry, I made a small mistake: our release is Mimic, obviously, as
stated in the logged error, and all the Ceph components are aligned to Mimic.
On 06/05/2024 10:04, sergio.rabell...@unito.it wrote:
Dear Ceph users,
I'm pretty new to this list, but I've been using Ceph with satisfaction s
Hi,
a 17.2.7 cluster with two filesystems suddenly has non-working MDSs:
# ceph -s
  cluster:
    id:     f54eea86-265a-11eb-a5d0-457857ba5742
    health: HEALTH_ERR
            22 failed cephadm daemon(s)
            2 filesystems are degraded
            1 mds daemon damaged
            insuff
Hello.
We're running a containerized deployment of Reef with a focus on RGW. We
noticed that while the Grafana graphs for other categories - OSDs, Pools,
etc - have data, the graphs for the Object Gateway category are empty.
I did some looking last week and found reference to something about an
This is a known issue; please see https://tracker.ceph.com/issues/60986.
If you can reproduce it, please enable the MDS debug logs; that will help us
debug it quickly:
debug_mds = 25
debug_ms = 1
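One way to apply these with the standard CLI, if you are not editing ceph.conf directly (level 25 is very verbose, so remember to revert afterwards):

  # raise the MDS debug levels cluster-wide
  ceph config set mds debug_mds 25
  ceph config set mds debug_ms 1
  # reproduce the problem and collect the logs, then drop back to the defaults
  ceph config rm mds debug_mds
  ceph config rm mds debug_ms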
Thanks
- Xiubo
On 5/7/24 00:26, Robert Sander wrote:
Hi,
a 17.2.7 cluster with two fil
This is the same issue as https://tracker.ceph.com/issues/60986, and the same
one Robert Sander reported.
On 5/6/24 05:11, E Taka wrote:
Hi all,
we have a serious problem with CephFS. A few days ago the CephFS file
systems became inaccessible, with the message "MDS_DAMAGE: 1 mds daemon
damaged".
The cephfs-jour
Hi,
would an update to 18.2 help?
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Local court (Amtsgericht) Berlin-Charlottenburg - HRB 93818 B
Managing director: Peer Heinlein - Registered office: Berlin
Possibly, because we have seen this only in Ceph 17.
And if you can reproduce it, please provide the MDS debug logs; with those we
can quickly find the root cause.
Thanks
- Xiubo
On 5/7/24 12:19, Robert Sander wrote:
Hi,
would an update to 18.2 help?
Regards
Thanks Sake,
That recovered just under 4 GB of space for us.
Sorry about the delay getting back to you (I've been *really* busy) :-)
Cheers
Dulux-Oz