> On 8 Aug 2021, at 20:10, Tony Liu wrote:
>
> That's what I thought. I am confused by this.
>
> # ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk
> osdmap e18381 pool 'vm' (4) object
> 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk' -> pg 4.c7a78d40 (4.0) -> up
> ([4,17,6], p4) acting
Hello,
Did you follow the fix/recommendation for applying the patches, as per the
documentation in the CVE security post [1]?
Best regards
[1] https://docs.ceph.com/en/latest/security/CVE-2021-20288/
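For reference, the post boils down to roughly the following; this is a hedged summary and [1] remains the authoritative source:

ceph health detail | grep -i global_id              # AUTH_INSECURE_GLOBAL_ID_RECLAIM* warnings
# only once every client and daemon has been upgraded:
ceph config set mon auth_allow_insecure_global_id_reclaim false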
> On 9 Aug 2021, at 02:26, Richard Bade wrote:
>
> Hi Daniel,
> I had a similar issue last week
Hi Tobias and Richard.
Thank you for answering my questions. The link to the issue report that Tobias
suggested led me to further investigation. It was hard to tell which kernel
version the system was using, but looking at the result of "ceph health detail"
and ldd librados
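In case it helps others, a hedged sketch of those checks; the rbd binary path is illustrative and may differ on your distribution:

uname -r                              # kernel client (krbd/kcephfs) version
ldd /usr/bin/rbd | grep librados      # which librados the userspace tools link against
ceph features                         # feature/release levels reported by connected clients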
Hello,
I have a Ceph cluster with 5 nodes, with 23 OSDs of the hdd class distributed
across them. The disk sizes are:
15 x 12TB = 180TB
8 x 18TB = 144TB
Result of execute "ceph df" command:
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED   RAW USED  %RAW USED
hdd    295 TiB  163 TiB  131 T
Hi,
Might anyone have any insight into this issue? I have been unable to resolve
it so far, and it prevents many "ceph orch" commands from working and breaks
many aspects of the web user interface.
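Not a root-cause analysis, but a hedged set of first steps that often unstick the orchestrator and dashboard; the module and command names assume a cephadm deployment on a recent release:

ceph mgr module ls            # confirm cephadm and dashboard are enabled
ceph mgr fail                 # fail over to a standby mgr, restarting both modules
ceph log last cephadm         # recent orchestrator log entries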
Hi,
On 09.08.21 at 12:56, Jorge JP wrote:
> 15 x 12TB = 180TB
> 8 x 18TB = 144TB
How are these distributed across your nodes and what is the failure
domain? I.e. how will Ceph distribute data among them?
> The raw size of this cluster (HDD) should be 295TB after format but the size
> of my "p
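A hedged aside on the two figures above: drive vendors quote decimal terabytes (TB) while "ceph df" reports binary tebibytes (TiB), so the numbers already agree:

awk 'BEGIN { print (15*12e12 + 8*18e12) / 2^40 }'   # -> ~294.7, i.e. the 295 TiB of raw capacity shown by ceph df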
Hello, this is my osd tree:
ID   CLASS  WEIGHT     TYPE NAME
-1          312.14557  root default
-3           68.97755      host pveceph01
 3   hdd     10.91409          osd.3
14   hdd     16.37109          osd.14
15   hdd     16.37109          osd.15
20   hdd     10.91409          osd.20
23   h
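A hedged way to answer the failure-domain question raised above:

ceph osd crush rule dump      # the "chooseleaf ... type host" step is the failure domain
ceph osd df tree              # per-host and per-OSD weight, size and utilisation in one view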
Hello all,
a year ago we started with a 3-node cluster for Ceph with 21 HDDs and 3
SSDs, which we installed with cephadm, configuring the disks with
`ceph orch apply osd --all-available-devices`.
Over time the usage grew quite significantly: now we have another
5 nodes with 8-12 HDDs and 1-2 SSDs
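As a hedged illustration of an alternative to --all-available-devices as a cluster grows, a drive-group spec gives finer control; the service id, host pattern and rotational filters below are placeholders, not the poster's actual setup:

cat > osd_spec.yml <<EOF
service_type: osd
service_id: hdd_osds          # placeholder name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1             # HDDs for data
  db_devices:
    rotational: 0             # SSDs for WAL/DB
EOF
ceph orch apply -i osd_spec.yml --dry-run   # preview before applying for real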
I have had the same issue with the Windows client.
I had to issue
ceph config set mon auth_expose_insecure_global_id_reclaim false
which allows the other clients to connect.
I think you need to restart the monitors as well, because the first few times I
tried this, I still couldn't co
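For completeness, a hedged sketch of that sequence on a cephadm-managed cluster; adjust the restart step to your deployment:

ceph config set mon auth_expose_insecure_global_id_reclaim false
ceph orch restart mon         # or restart each ceph-mon@<host> unit via systemctl
ceph health detail            # check whether the insecure global_id warnings clear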
On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert wrote:
>
> I have had the same issue with the Windows client.
> I had to issue
> ceph config set mon auth_expose_insecure_global_id_reclaim false
> which allows the other clients to connect.
> I think you need to restart the monitors as well,
Hi
Today we suddenly experienced multiple MDS crashes during the day, with an error
we have not seen before. We run Octopus 15.2.13 with 4 ranks, 4
standby-replay MDSes, and 1 passive standby. Any input on how to troubleshoot or
resolve this would be most welcome.
---
root@hk-cephnode-54:~# ce
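Some hedged starting points for triaging crashes like this, assuming the crash module is enabled; the crash id is a placeholder:

ceph crash ls-new             # recent, unacknowledged crashes
ceph crash info <crash-id>    # backtrace and metadata for a single crash
ceph fs status                # which ranks are currently active / standby-replay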
Thank you Konstantin!
Tony
From: Konstantin Shalygin
Sent: August 9, 2021 01:20 AM
To: Tony Liu
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rbd object mapping
On 8 Aug 2021, at 20:10, Tony Liu <tonyliu0...@hotmail.com> wrote:
That's wh
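A hedged aside on the mapping shown in the quoted ceph osd map output: with a power-of-two pg_num, the PG is simply the object-name hash masked by pg_num - 1 (Ceph's stable_mod reduces to that mask in the power-of-two case); pg_num = 64 here is only an assumption for illustration.

printf 'pg 4.%x\n' $(( 0xc7a78d40 & (64 - 1) ))    # -> pg 4.0, matching "(4.0)" above
ceph osd pool get vm pg_num                        # confirm the pool's actual pg_num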
Hi,
We are seeing very similar behavior on 16.2.5, and also have noticed
that an undeploy/deploy cycle fixes things. Before we go rummaging
through the source code trying to determine the root cause, has
anybody else figured this out? It seems odd that a repeatable issue
(I've seen other mailing l
Hello
I have a 4-node Ceph cluster on Azure. Each node is an E32s_v4 VM, which has
32 vCPUs and 256 GB of memory. The network between nodes is 15 Gbit/s, measured
with iperf.
The OS is CentOS 8.2. The Ceph version is Pacific, deployed with
ceph-ansible.
Three nodes have the OSDs and the fourth n
I wanted to respond to the original thread I saw archived on this topic, but I
wasn't subscribed to the mailing list yet, so I don't have the thread in my
inbox to reply to. Hopefully, those involved in that thread still see this.
This issue looks the same as https://tracker.ceph.com/issues/51027, which