--- Begin Message ---
December 9, 2022 3:05 PM, "Aaron Lauterer" wrote:
> On 12/7/22 18:23, Alwin Antreich wrote:
>
>> December 7, 2022 2:22 PM, "Aaron Lauterer" wrote:
>> On 12/7/22 12:15, Alwin Antreich wrote:
>>>
>
> [...]
>
>>> 'ceph-volume' is used to gather the info, except for the cr
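For illustration only, a rough Python sketch of how ceph-volume's JSON report could be consumed to collect per-OSD LV details; the field names are assumptions and may differ between Ceph releases, and this is not the implementation from the patch:

# Rough sketch: gather LV details for one OSD from ceph-volume's JSON
# report. Field names are assumptions and may vary by Ceph release.
import json
import subprocess

def osd_lv_info(osd_id):
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(out)
    # The report is keyed by OSD id (as a string); each entry lists the
    # logical volumes backing that OSD (block, and optionally db/wal).
    return [
        {"lv_name": lv.get("lv_name"),
         "lv_path": lv.get("lv_path"),
         "type": lv.get("type")}
        for lv in report.get(str(osd_id), [])
    ]

if __name__ == "__main__":
    print(osd_lv_info(0))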
--- Begin Message ---
December 7, 2022 2:22 PM, "Aaron Lauterer" wrote:
> On 12/7/22 12:15, Alwin Antreich wrote:
>
>> Hi,
>
> December 6, 2022 4:47 PM, "Aaron Lauterer" wrote:
>> To get more details for a single OSD, we add two new endpoints:
>
> * nodes/{node}/ceph/osd/{osdid}/metadata
> *
--- Begin Message ---
Hi,
December 6, 2022 4:47 PM, "Aaron Lauterer" wrote:
> To get more details for a single OSD, we add two new endpoints:
> * nodes/{node}/ceph/osd/{osdid}/metadata
> * nodes/{node}/ceph/osd/{osdid}/lv-info
As an idea for a different name for lv-info,
`nodes/{node}/ceph/osd
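As a quick illustration of how the two new calls could be exercised over the PVE HTTP API, a small Python sketch follows; host, node name, OSD id and API token are placeholder values, and the shape of the returned data is not shown in this thread:

# Placeholder sketch for trying the proposed endpoints via the HTTP API.
# Host, node name, OSD id and the API token below are made-up values.
import requests

HOST = "https://pve.example.com:8006"
TOKEN = "root@pam!monitoring=00000000-0000-0000-0000-000000000000"

def api_get(path):
    r = requests.get(
        HOST + "/api2/json" + path,
        headers={"Authorization": "PVEAPIToken=" + TOKEN},
        verify=False,  # self-signed certificate in a test setup
    )
    r.raise_for_status()
    return r.json()["data"]

node, osdid = "pve1", 0
print(api_get(f"/nodes/{node}/ceph/osd/{osdid}/metadata"))
print(api_get(f"/nodes/{node}/ceph/osd/{osdid}/lv-info"))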
--- Begin Message ---
On October 19, 2022 2:16:44 PM GMT+02:00, Stefan Sterz
wrote:
>When using a hyper-converged cluster, it was previously possible to add
>the pool used by the ceph-mgr modules (".mgr" since Quincy, or
>"device_health_metrics" previously) as an RBD storage. This would lead
>to al
--- Begin Message ---
On October 12, 2022 3:22:18 PM GMT+02:00, Stefan Sterz
wrote:
>When using a hyper-converged cluster, it was previously possible to add
>the pool used by the ceph-mgr modules (".mgr" since Quincy, or
>"device_health_metrics" previously) as an RBD storage. This would lead
>to al
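The patch itself is cut off above; purely as a sketch of the idea, pools used internally by the ceph-mgr modules could be filtered out before offering pools as RBD storage, roughly like this in Python (how the pool list is obtained here is an assumption, not the actual Perl implementation):

# Sketch of the idea only: skip the pools that the ceph-mgr modules use
# internally when listing pools suitable for an RBD storage.
import json
import subprocess

MGR_INTERNAL_POOLS = {".mgr", "device_health_metrics"}

def rbd_capable_pools():
    out = subprocess.run(
        ["ceph", "osd", "pool", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return [p for p in json.loads(out) if p not in MGR_INTERNAL_POOLS]

print(rbd_capable_pools())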
--- Begin Message ---
Hi,
Ceph 16.2.9 has been in the testing repository for some time now. Would it be
possible to push it to main?
Thanks in advance.
Cheers,
Alwin
--- End Message ---
--- Begin Message ---
Signed-off-by: Alwin Antreich
---
pve-storage-rbd.adoc | 19 +++
1 file changed, 19 insertions(+)
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index cd3fb2e..5f8619a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -106,6 +106,25 @
--- Begin Message ---
February 4, 2022 10:50 AM, "Aaron Lauterer" wrote:
> If an OSD is removed under the wrong conditions, it could lead to
> blocked IO or, in the worst case, data loss.
>
> Check against the global flags that limit Ceph's ability to heal
> itself (norebalance, norecover, noout)
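As an illustration of such a check, a small Python sketch that reads the currently set global flags and refuses to continue while any of the listed ones are active; reading them from `ceph osd dump --format json` is one way to do it and not necessarily what the patch implements:

# Sketch only: refuse an OSD removal while global flags that block
# Ceph's self-healing are set. How the patch reads them is an assumption.
import json
import subprocess

BLOCKING_FLAGS = {"norebalance", "norecover", "noout"}

def blocking_flags_set():
    out = subprocess.run(
        ["ceph", "osd", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # "flags" is a comma-separated string in the osd dump report.
    flags = set(json.loads(out).get("flags", "").split(","))
    return flags & BLOCKING_FLAGS

active = blocking_flags_set()
if active:
    raise SystemExit(f"refusing OSD removal, global flags set: {sorted(active)}")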