--- Begin Message ---
December 9, 2022 3:05 PM, "Aaron Lauterer" wrote:
> On 12/7/22 18:23, Alwin Antreich wrote:
>
>> December 7, 2022 2:22 PM, "Aaron Lauterer" wrote:
>> On 12/7/22 12:15, Alwin Antreich wrote:
>>>
>
> [...]
>
--- Begin Message ---
December 7, 2022 2:22 PM, "Aaron Lauterer" wrote:
> On 12/7/22 12:15, Alwin Antreich wrote:
>
>> Hi,
>
> December 6, 2022 4:47 PM, "Aaron Lauterer" wrote:
>> To get more details for a single OSD, we add two new endpoints:
--- Begin Message ---
Hi,
December 6, 2022 4:47 PM, "Aaron Lauterer" wrote:
> To get more details for a single OSD, we add two new endpoints:
> * nodes/{node}/ceph/osd/{osdid}/metadata
> * nodes/{node}/ceph/osd/{osdid}/lv-info
As an idea for a different name for lv-info,
`nodes/{node}/ceph/osd
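For anyone testing the series: the new endpoints should be reachable via
pvesh, roughly like this (node name and OSD id made up):

    pvesh get /nodes/pve1/ceph/osd/0/metadata
    pvesh get /nodes/pve1/ceph/osd/0/lv-info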
--- Begin Message ---
On October 19, 2022 2:16:44 PM GMT+02:00, Stefan Sterz wrote:
>when using a hyper-converged cluster it was previously possible to add
>the pool used by the ceph-mgr modules (".mgr" since quincy or
>"device_health_metrics" previously) as an RBD storage. this would lead
>to al
--- Begin Message ---
On October 12, 2022 3:22:18 PM GMT+02:00, Stefan Sterz wrote:
>when using a hyper-converged cluster it was previously possible to add
>the pool used by the ceph-mgr modules (".mgr" since quincy or
>"device_health_metrics" previously) as an RBD storage. this would lead
>to al
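For context: the pool in question is easy to spot with plain Ceph tooling
(names as per the commit message):

    # ".mgr" since Quincy, "device_health_metrics" before
    ceph osd pool ls | grep -E '^\.mgr$|^device_health_metrics$'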
--- Begin Message ---
Hi,
I have seen ceph 16.2.9 in the testing repository for some time now. Would it
be possible to push it to main?
Thanks in advance.
Cheers,
Alwin
--- End Message ---
--- Begin Message ---
Signed-off-by: Alwin Antreich
---
pve-storage-rbd.adoc | 19 +++
1 file changed, 19 insertions(+)
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index cd3fb2e..5f8619a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -106,6 +106,25 @@
--- Begin Message ---
February 4, 2022 10:50 AM, "Aaron Lauterer" wrote:
> If an OSD is removed under the wrong conditions, it could lead to
> blocked IO or, in the worst case, data loss.
>
> Check against global flags that limit the capabilities of Ceph to heal
> itself (norebalance, norecover, noout)
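For reference, these flags can be checked (and cleared) by hand with standard
Ceph tooling before removing an OSD:

    # show the currently set global flags
    ceph osd dump | grep flags
    # or via the health output
    ceph health detail
    # clear a flag once it is no longer needed, e.g.
    ceph osd unset noout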
Hi Moayad,
February 10, 2021 8:16 AM, "Moayad Almalat" wrote:
> Signed-off-by: Moayad Almalat
> ---
> qm.adoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/qm.adoc b/qm.adoc
> index 333b2e6..1108908 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -203,7 +203,7 @@ either t
Signed-off-by: Alwin Antreich
---
pveceph.adoc | 45 ++---
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/pveceph.adoc b/pveceph.adoc
index fd3fded..42dfb02 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -466,12 +466,16 @@ WARNING: **Do
Signed-off-by: Alwin Antreich
---
pveceph.adoc | 36
1 file changed, 36 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index 42dfb02..da8d35e 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -540,6 +540,42 @@ pveceph pool destroy
NOTE: Deleting the
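The CLI side of this is used roughly as follows (pool name made up; the
--remove_storages flag, if present in this version, also drops matching
storage definitions):

    pveceph pool destroy testpool
    # optionally, also remove matching PVE storage entries
    pveceph pool destroy testpool --remove_storages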
On Tue, Jan 12, 2021 at 11:21:47AM +0100, Alwin Antreich wrote:
> Information of a single pool can be queried.
>
> Signed-off-by: Alwin Antreich
> ---
> PVE/API2/Ceph/Pools.pm | 99 ++
> PVE/CLI/pveceph.pm | 4 ++
> 2 files ch
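Assuming the series lands as proposed, querying a single pool would look
roughly like this (node and pool names made up):

    pvesh get /nodes/pve1/ceph/pools/testpool
    # or via the CLI wrapper added in PVE/CLI/pveceph.pm
    pveceph pool get testpool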
for better handling and since the pool endpoints got more entries.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Makefile | 1 +
PVE/API2/Ceph.pm | 378 +--
PVE/API2/Ceph/Pools.pm | 395 +
PVE/CLI/pveceph.pm
* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if user specifies min_size 1
* disallow min_size 1 on pool create
* calculate min_size replica by size
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 249
t of PGs.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Pools.pm | 96 +-
PVE/CLI/pveceph.pm | 4 ++
2 files changed, 90 insertions(+), 10 deletions(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 01c11100..014e6be7 100644
--- a
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Pools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
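The split/merge behaviour mentioned above can be observed with the autoscaler
status:

    # shows per-pool PG counts, targets and the autoscale mode
    ceph osd pool autoscale-status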
this is used to fine-tune the ceph autoscaler
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 18 ++
1 file changed, 18 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index bd395956..9b8b68dd 100644
--- a/www/manager6/ceph/Pool.js
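The GUI knobs map to plain Ceph pool options; roughly (pool name and values
made up):

    # give the autoscaler a size hint, absolute or as a ratio
    ceph osd pool set testpool target_size_bytes 107374182400  # 100 GiB
    ceph osd pool set testpool target_size_ratio 0.2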
Use the parameter extraction instead of the unneeded ref copy for params.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Pools.pm | 10 +++---
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index fac21301..b9e295f5 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
calculated from the size (round(size / 2)). When size is applied to the pool
after min_size, the manually specified min_size will be overwritten.
Signed-off-by: Alwin Antreich
---
PVE/Ceph/Tools.pm | 61
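In CLI terms, the ordering issue looks like this (pool name made up):

    # with size 3, Ceph derives min_size round(3/2) = 2 on its own
    ceph osd pool set testpool size 3
    # an explicitly wanted min_size therefore has to be (re)applied afterwards
    ceph osd pool set testpool min_size 2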
Letting the columns flex requires a flat column-header structure.
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 138 ++
1 file changed, 82 insertions(+), 56 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index
tuning for the pg_autoscaler
[0]
https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105
Alwin Antreich (10):
api: ceph: subclass pools
ceph: setpool, use parameter extraction instead
ceph: add titles to ceph_pool_common_options
ceph: add
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Pools.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index b9e295f5..24562456 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -112,10 +112,12 @@ my
Information of a single pool can be queried.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Pools.pm | 99 ++
PVE/CLI/pveceph.pm | 4 ++
2 files changed, 103 insertions(+)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 24562456
On Wed, Dec 16, 2020 at 12:59:04PM +0100, Alwin Antreich wrote:
> the check_connection is done by querying the exports of the nfs server
> in question. With nfs v4 those exports aren't listed anymore since nfs
> v4 employs a pseudo-filesystem starting from root (/).
>
> rpc
the check_connection is done by querying the exports of the nfs server
in question. With nfs v4 those exports aren't listed anymore since nfs
v4 employs a pseudo-filesystem starting from root (/).
rpcinfo allows querying the existence of an nfs v4 service.
Signed-off-by: Alwin Antreich
---
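For illustration, the probe amounts to something like this (server name made
up):

    # ask the portmapper whether NFS version 4 is registered over TCP
    rpcinfo -T tcp nfs-server.example.com nfs 4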
Signed-off-by: Alwin Antreich
---
local-zfs.adoc | 2 --
1 file changed, 2 deletions(-)
diff --git a/local-zfs.adoc b/local-zfs.adoc
index 89ab8bd..e794286 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -42,8 +42,6 @@ management.
* Designed for high storage capacities
-* Protection
* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if user specifies min_size 1
* disallow min_size 1 on pool create
* calculate min_size replica by size
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 276
Use parameter extraction instead of the unneeded ref copy for params.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/POOLS.pm | 131 +++--
PVE/CLI/pveceph.pm | 3 +
PVE/Ceph/Tools.pm | 21 +++
3 files changed, 123 insertions(+), 32 deletions(-)
diff --git a/PVE/
Run an extra rados command to verify the current setting.
Signed-off-by: Alwin Antreich
---
PVE/Ceph/Tools.pm | 49 +--
1 file changed, 47 insertions(+), 2 deletions(-)
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index d95e8676..9505f0bf 100644
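The manual equivalent of such a verification would be reading the option back
(pool name and option made up):

    ceph osd pool get testpool min_size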
to 1, since Ceph creates a pool with 1 PG for device health metrics. And
the autoscaler may adjust the PGs of a pool anyway.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/POOLS.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph
for better handling and since the pool endpoints got more entries.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/Makefile | 1 +
PVE/API2/Ceph.pm | 380 +-
PVE/API2/Ceph/POOLS.pm | 404 +
PVE/CLI/pveceph.pm
Information of a single pool can be queried.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/POOLS.pm | 113 +++--
1 file changed, 108 insertions(+), 5 deletions(-)
diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
index 744f2bce..19fc1b7e 100644
Letting the columns flex requires a flat column-header structure.
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 131 ++
1 file changed, 75 insertions(+), 56 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index
The default pg_autoscale_mode can be configured in Ceph directly. With
Nautilus the default mode is warn, and with Octopus it changed to on.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph/POOLS.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph
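For completeness, the cluster-wide default can be set directly in Ceph, e.g.:

    # use the more conservative mode as the default for new pools
    ceph config set global osd_pool_default_pg_autoscale_mode warn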
view of ceph pools
- rework the create input panel
- add an edit button using the reworked input panel
- fix broken add_storages
- extend the setpool function to avoid a race condition
- remove the pg_autoscale_mode default to allow Ceph's setting to
take precedence
Signed-off-by: Alwin Antreich
---
pveceph.adoc | 4
1 file changed, 4 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index 84a45d5..39da354 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -462,6 +462,10 @@ state.
NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 18 +-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 28b0b4a5..93ed667e 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
on existing buttons.
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index d1fb2f3e..28b0b4a5 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6
Information of a single pool can be queried.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 105 +++
1 file changed, 105 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index c3a3091d..e44714f6 100644
--- a/PVE/API2/Ceph.pm
+++ b
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 82 ++-
1 file changed, 56 insertions(+), 26 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 11bcf9d5..d1fb2f3e 100644
--- a/www/manager6/ceph/Pool.js
+++ b
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 98 ++
PVE/CLI/pveceph.pm | 1 +
2 files changed, 99 inser
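Once this is in, adjusting a pool after creation should look roughly like this
(pool name and values made up; option names follow the common pool options):

    pveceph pool set testpool --min_size 2
    pveceph pool set testpool --pg_autoscale_mode on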
ll
confuse task tracking of the REST environment.
Signed-off-by: Alwin Antreich
---
Note:
v1 -> v2:
* reorder patches, since pool create & set share common pool
options.
* include new setpool API
PVE/API2/Ceph.pm | 17 +++
PVE/C
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 0aeb5075..69fe3d6d 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -718,6 +718,13 @@ my $ceph_pool_common_options = sub
to keep the pool create & set in sync.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 40 +---
1 file changed, 1 insertion(+), 39 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7cdbdccd..0aeb5075 100644
--- a/PVE/API2/Ceph.pm
+
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/Pool.js | 13 +
1 file changed, 13 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 19eb01e9..11bcf9d5 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -39,6 +39,19 @@
Signed-off-by: Alwin Antreich
---
Note: I forgot to include the patch on the first send-email
www/manager6/ceph/Pool.js | 13 +
1 file changed, 13 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 19eb01e9..11bcf9d5 100644
--- a/www/manager6
Signed-off-by: Alwin Antreich
---
www/manager6/ceph/OSD.js | 17 +
1 file changed, 17 insertions(+)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 88109315..e9224743 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -77,6 +77,23 @@
Different defaults for nautilus (warn) and octopus (on); the more
conservative setting is used.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 48d0484f..f8b1b22f 100644
--- a/PVE/API2/Ceph.pm
to reduce code duplication and make it easier to add more options for
pool commands.
Signed-off-by: Alwin Antreich
---
PVE/API2/Ceph.pm | 17 +++
PVE/Ceph/Tools.pm | 74 +--
2 files changed, 52 insertions(+), 39 deletions(-)
diff --git a/PVE
On Fri, Sep 25, 2020 at 02:51:45PM +0200, Alwin Antreich wrote:
> Signed-off-by: Alwin Antreich
> ---
> Note: The footnote in the title section broke the link building for that
> footnote when used with a variable on the beginning of the url.
> The parser seems to loo
Signed-off-by: Alwin Antreich
---
Note: The footnote in the title section broke the link building for that
footnote when used with a variable at the beginning of the url.
The parser seems to look for an http(s) and otherwise considers it
text. But interestingly it worked with
* use a variable instead of hardcoded url+release name
* ceph migrated to readthedocs with a minor uri change
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
Signed-off-by: Alwin Antreich
---
pve-storage-cephfs.adoc | 2 +-
pveceph.adoc
ceph migrated their documentation to readthedocs with a minor uri change
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
Signed-off-by: Alwin Antreich
---
pveceph.adoc | 28 ++--
1 file changed, 14 insertions(+), 14
* use codename instead of hardcoded release name
* ceph migrated to readthedocs with a minor uri change
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQZJG75IST7HFDW7OB5MNCITQOVAAUR4/
Signed-off-by: Alwin Antreich
---
pve-storage-cephfs.adoc | 2 +-
1 file changed, 1 insertion
On Fri, Jul 24, 2020 at 02:38:50PM +0200, Thomas Lamprecht wrote:
> Am 7/24/20 um 2:24 PM schrieb Alwin Antreich:
> > On Fri, Jul 24, 2020 at 11:54:10AM +0200, Thomas Lamprecht wrote:
> >> Am 7/24/20 um 11:46 AM schrieb Alwin Antreich:
> >>> On Fri, Jul 24, 20
On Fri, Jul 24, 2020 at 11:54:10AM +0200, Thomas Lamprecht wrote:
> Am 7/24/20 um 11:46 AM schrieb Alwin Antreich:
> > On Fri, Jul 24, 2020 at 11:34:33AM +0200, Thomas Lamprecht wrote:
> >> Am 7/23/20 um 3:25 PM schrieb Alwin Antreich:
> >>> In some situations
On Fri, Jul 24, 2020 at 11:34:33AM +0200, Thomas Lamprecht wrote:
> Am 7/23/20 um 3:25 PM schrieb Alwin Antreich:
> > In some situations Ceph's auto-detection doesn't recognize the device
> > class correctly. The option allows to set it directly on osd create,
> > in
In some situations Ceph's auto-detection doesn't recognize the device
class correctly. The option allows setting it directly on osd create,
instead of altering it afterwards. This way the cluster doesn't need to
shift data back and forth unnecessarily.
Signed-off-by: Alwin Antreich
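For comparison, the two routes look roughly like this (device path, class and
OSD id made up):

    # set the device class at creation time (option as proposed here)
    pveceph osd create /dev/sdf --crush-device-class nvme
    # versus correcting it afterwards, which makes Ceph move data
    ceph osd crush rm-device-class osd.7
    ceph osd crush set-device-class nvme osd.7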