Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info

2022-12-09 Thread Alwin Antreich via pve-devel
--- Begin Message --- December 9, 2022 3:05 PM, "Aaron Lauterer" wrote: > On 12/7/22 18:23, Alwin Antreich wrote: > >> December 7, 2022 2:22 PM, "Aaron Lauterer" wrote: >> On 12/7/22 12:15, Alwin Antreich wrote: >>> > > [...] > >>> 'ceph-volume' is used to gather the infos, except for the cr

Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info

2022-12-07 Thread Alwin Antreich via pve-devel
--- Begin Message --- December 7, 2022 2:22 PM, "Aaron Lauterer" wrote: > On 12/7/22 12:15, Alwin Antreich wrote: > >> Hi, > > December 6, 2022 4:47 PM, "Aaron Lauterer" wrote: >> To get more details for a single OSD, we add two new endpoints: > > * nodes/{node}/ceph/osd/{osdid}/metadata > *

Re: [pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info

2022-12-07 Thread Alwin Antreich via pve-devel
--- Begin Message --- Hi, December 6, 2022 4:47 PM, "Aaron Lauterer" wrote: > To get more details for a single OSD, we add two new endpoints: > * nodes/{node}/ceph/osd/{osdid}/metadata > * nodes/{node}/ceph/osd/{osdid}/lv-info As an idea for a different name for lv-info, `nodes/{node}/ceph/osd
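The two endpoints quoted above follow a fixed path scheme. A minimal illustrative sketch (only the paths come from the patch description; the helper name is hypothetical):

```python
def osd_endpoints(node: str, osdid: int) -> dict:
    """Build the API paths for the two new per-OSD endpoints
    discussed in this thread (paths from the patch description)."""
    base = f"/nodes/{node}/ceph/osd/{osdid}"
    return {
        "metadata": f"{base}/metadata",
        "lv-info": f"{base}/lv-info",
    }

# The resulting paths could then be queried, e.g. via `pvesh get <path>`.
print(osd_endpoints("pve1", 3)["metadata"])
```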

Re: [pve-devel] [PATCH manager v2 2/2] ui: remove ceph-mgr pools from rbd pool selection

2022-10-19 Thread Alwin Antreich via pve-devel
--- Begin Message --- On October 19, 2022 2:16:44 PM GMT+02:00, Stefan Sterz wrote: >when using a hyper-converged cluster it was previously possible to add >the pool used by the ceph-mgr modules (".mgr" since quincy or >"device_health_metrics" previously) as an RBD storage. this would lead >to al
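The filtering this patch describes can be sketched as follows. The pool names come from the commit message; the function name is illustrative, not the actual UI code:

```python
# Pools reserved for the ceph-mgr modules, per the commit message:
# ".mgr" since Quincy, "device_health_metrics" before that.
MGR_POOLS = {".mgr", "device_health_metrics"}

def rbd_pool_candidates(pools):
    """Return only the pools that are safe to offer as RBD storage."""
    return [p for p in pools if p not in MGR_POOLS]

print(rbd_pool_candidates([".mgr", "vmpool", "device_health_metrics"]))
```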

Re: [pve-devel] [PATCH manager] ui: remove ceph-mgr pools from rbd pool selection

2022-10-13 Thread Alwin Antreich via pve-devel
--- Begin Message --- On October 12, 2022 3:22:18 PM GMT+02:00, Stefan Sterz wrote: >when using a hyper-converged cluster it was previously possible to add >the pool used by the ceph-mgr modules (".mgr" since quincy or >"device_health_metrics" previously) as an RBD storage. this would lead >to al

[pve-devel] Ceph 16.2.9

2022-06-22 Thread Alwin Antreich via pve-devel
--- Begin Message --- Hi, I have seen ceph 16.2.9 in the testing repository for some time now. Would it be possible to push it to main? Thanks in advance. Cheers, Alwin --- End Message ---


[pve-devel] [PATCH docs] storage: rbd: add optional ceph client configuration

2022-03-06 Thread Alwin Antreich via pve-devel
--- Begin Message --- Signed-off-by: Alwin Antreich --- pve-storage-rbd.adoc | 19 +++ 1 file changed, 19 insertions(+) diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc index cd3fb2e..5f8619a 100644 --- a/pve-storage-rbd.adoc +++ b/pve-storage-rbd.adoc @@ -106,6 +106,25 @
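For context, the preview above only shows the diff stats. A sketch of the kind of setup the doc patch covers, assuming the stock storage.cfg RBD syntax and a per-storage client configuration looked up under /etc/pve/priv/ceph/ (an assumption, since the patch body is truncated here):

```
# /etc/pve/storage.cfg -- example external-cluster RBD entry
rbd: ceph-external
    monhost 10.0.0.1 10.0.0.2 10.0.0.3
    pool rbd
    content images
    username admin

# Assumption: a matching ceph client configuration for this storage
# would then live at /etc/pve/priv/ceph/ceph-external.conf
```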

Re: [pve-devel] [PATCH manager] ui: osd: warn if removal could be problematic

2022-02-04 Thread Alwin Antreich via pve-devel
--- Begin Message --- February 4, 2022 10:50 AM, "Aaron Lauterer" wrote: > If an OSD is removed during the wrong conditions, it could lead to > blocked IO or worst case data loss. > > Check against global flags that limit the capabilities of Ceph to heal > itself (norebalance, norecover, noout)
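The condition check Aaron describes can be sketched as follows. The flag names come from the patch description; the helper name is hypothetical and not the actual UI code:

```python
# Global OSD flags that limit Ceph's ability to heal itself after a
# removal, per the patch description.
PROBLEM_FLAGS = {"norebalance", "norecover", "noout"}

def risky_flags(active_flags):
    """Return the subset of currently set global flags that would make
    removing an OSD potentially problematic (blocked IO, data loss)."""
    return sorted(PROBLEM_FLAGS.intersection(active_flags))

# In practice the active flags would come from e.g. `ceph osd dump`.
print(risky_flags({"noout", "noscrub"}))
```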