Printing a lot of very detailed JSON output on the CLI is not very
useful.
Printing the `ceph -s` overview is much better suited to giving an overview
of the ceph cluster status.
Signed-off-by: Aaron Lauterer
---
v1 -> v2:
* added a check if Ceph is initialized to avoid an ugly error msg
* removed eval (if t
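A minimal sketch of the approach, assuming PVE::Tools::run_command and an
/etc/ceph/ceph.conf existence check for "Ceph is initialized" (the exact
helper the patch uses is an assumption here):

use strict;
use warnings;
use PVE::Tools;

# Print the concise, human-readable `ceph -s` overview instead of
# dumping the detailed JSON cluster status on the CLI.
sub print_ceph_status {
    # check that Ceph is initialized first, to avoid an ugly error message
    die "Ceph is not initialized on this node\n"
        if !-e '/etc/ceph/ceph.conf';

    PVE::Tools::run_command(['ceph', '-s']);
}

print_ceph_status();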
The first three patches are rather minor things, improving warnings/mails.
The fourth patch makes parse errors for section configs visible to the caller,
which can then decide if/how to handle them. And the fifth patch uses this new
information for aborting a backup when there are parse errors in t
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f75e4b16..3b864514 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -394,13 +394,15 @@ sub sendmail {
}
$html_log_part .= escape_h
so that callers can know about them. This is useful in places where we'd rather
abort than continue with a faulty configuration. For example, when reading the
storage configuration before executing a backup job.
Originally-by: Thomas Lamprecht
Signed-off-by: Fabian Ebner
---
I skimmed over the
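The pattern the fourth patch establishes can be sketched roughly like this;
all names below are illustrative, not the actual section-config API:

use strict;
use warnings;

# Collect parse errors instead of only warning about them, and hand
# them to the caller, who can decide whether to abort.
sub parse_config {
    my ($raw) = @_;
    my ($cfg, @errors) = ({});
    for my $line (split /\n/, $raw) {
        next if $line =~ /^\s*(?:#|$)/;
        if ($line =~ /^(\w+):\s*(\S+)$/) {
            $cfg->{$1} = $2;
        } else {
            push @errors, "unable to parse line: $line";
        }
    }
    $cfg->{errors} = \@errors if @errors; # make errors visible to the caller
    return $cfg;
}

# a backup job would rather abort than run with a faulty configuration:
my $cfg = parse_config("storage: local\nthis line is bogus\n");
die "parse error\n" if $cfg->{errors};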
The actual error is already printed on the CLI and in the task log, so
there's no real need to make the error message in storage_info() more than
"parse error\n". It also can/will end up in the mail subject, which is another
reason to keep it simple.
Signed-off-by: Fabian Ebner
---
Needs a depen
Fixes the case where reading from /etc/vzdump.conf fails.
Also convert the options read from /etc/vzdump.conf before the loop. That
avoids showing a wrong warning when 'prune-backups' is configured in
/etc/vzdump.conf, and 'maxfiles' isn't. Previously, because 'maxfiles' from the
schema defaults was
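The ordering problem can be sketched like this; convert_maxfiles and the
option layout are hypothetical stand-ins for the code in PVE/VZDump.pm:

use strict;
use warnings;

# Convert the legacy 'maxfiles' option to 'prune-backups' *before* the
# per-option loop, so a 'maxfiles' value injected by the schema defaults
# can no longer trigger a wrong "both options set" warning when only
# 'prune-backups' is configured in /etc/vzdump.conf.
sub convert_maxfiles {
    my ($opts) = @_;
    if (defined(my $max = delete $opts->{maxfiles})) {
        $opts->{'prune-backups'} //= { 'keep-last' => $max };
    }
}

my $opts = { maxfiles => 3 }; # as if read from /etc/vzdump.conf
convert_maxfiles($opts);

for my $key (sort keys %$opts) {
    # after the up-front conversion, 'maxfiles' is gone and nothing fires
    warn "'maxfiles' and 'prune-backups' are both set\n"
        if $key eq 'maxfiles' && exists $opts->{'prune-backups'};
}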
Errors from storage_info() are newline-terminated, because perl would append
the line number otherwise. Chomp those errors, because sendmail() relies
on the presence of a newline to decide if it's multiple problems or only one.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 1 +
1 file changed,
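For context, the underlying die() behavior, sketched:

use strict;
use warnings;

# die() appends " at FILE line N." when the message has no trailing
# newline; that is why storage_info() errors are newline-terminated.
eval { die "storage 'local' is not online" };
print $@;        # gets " at FILE line N." appended by perl

eval { die "storage 'local' is not online\n" };
my $err = $@;    # kept verbatim, with the trailing newline

# sendmail() uses the presence of a newline to tell multiple problems
# apart from a single one, so chomp single errors before handing them on.
chomp $err;
print "backup failed: $err\n";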
On 21/12/2020 11:07, Aaron Lauterer wrote:
> Printing a lot of very detailed JSON output on the CLI is not very
> useful.
>
> Printing the `ceph -s` overview is much better suited to giving an overview
> of the ceph cluster status.
>
> Signed-off-by: Aaron Lauterer
> ---
> v1 -> v2:
> * added check
On 12/21/20 3:25 PM, Thomas Lamprecht wrote:
> On 21/12/2020 11:07, Aaron Lauterer wrote:
>> Printing a lot of very detailed JSON output on the CLI is not very
>> useful.
>> Printing the `ceph -s` overview is much better suited to giving an overview
>> of the ceph cluster status.
>> Signed-off-by: Aaron Lautere
On 21/12/2020 14:48, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/VZDump.pm | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
>
applied, thanks!
On 21/12/2020 14:48, Fabian Ebner wrote:
> Errors from storage_info() are newline-terminated, because perl would append
> the line number otherwise. Chomp those errors, because sendmail() relies
> on the presence of a newline to decide if it's multiple problems or only one.
>
> Signed-off-by: Fabi
On 21/12/2020 14:48, Fabian Ebner wrote:
> Fixes the case where reading from /etc/vzdump.conf fails.
>
> Also convert the options read from /etc/vzdump.conf before the loop. That
> avoids showing a wrong warning when 'prune-backups' is configured in
> /etc/vzdump.conf, and 'maxfiles' isn't. Previous
On 21/12/2020 14:48, Fabian Ebner wrote:
> so that callers can know about them. This is useful in places where we'd
> rather
> abort than continue with a faulty configuration. For example, when reading the
> storage configuration before executing a backup job.
>
> Originally-by: Thomas Lamprecht
On 21/12/2020 14:48, Fabian Ebner wrote:
> The actual error is already printed on the CLI and in the task log, so
> there's no real need to make the error message in storage_info() more than
> "parse error\n". It also can/will end up in the mail subject, which is another
> reason to keep it simple.
we now push it to the correct hash if Ceph is installed
Signed-off-by: Aaron Lauterer
---
PVE/Report.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/Report.pm b/PVE/Report.pm
index a4a3d779..5ee3453d 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -79,7 +79,8 @@ my $
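The gist of the fix, as a sketch; the report_def layout is inferred from
the diff context above and may differ in detail:

use strict;
use warnings;

# Only add the Ceph commands when Ceph is actually installed, and push
# them into the hash entry they belong to.
my $report_def = {
    volumes => [ 'lvs', 'zpool status' ],
};

if (-e '/etc/ceph/ceph.conf') {
    push @{$report_def->{volumes}},
        'pveceph status', 'ceph osd status', 'ceph df';
}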
Signed-off-by: Aaron Lauterer
---
PVE/Report.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/Report.pm b/PVE/Report.pm
index f8d5e663..228fbb29 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -73,7 +73,8 @@ my $init_report_cmds = sub {
],
};
-push
add:
* HA status
* ceph osd df tree
* ceph conf file and conf db
* ceph versions
removed:
* ceph status, as pveceph status is now printing the same information
Signed-off-by: Aaron Lauterer
---
@Thomas, we did discuss using the cluster/ceph/metadata endpoint off
list for more information about
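Sketched against the same (assumed) report definition, with the command
list taken from the changelog above; the concrete hash keys in
PVE/Report.pm may differ:

use strict;
use warnings;

my $report_def = {
    cluster => [ 'ha-manager status' ],   # added: HA status
};

if (-e '/etc/ceph/ceph.conf') {
    push @{$report_def->{volumes}},
        'ceph osd df tree',               # added
        'cat /etc/ceph/ceph.conf',        # added: ceph conf file
        'ceph config dump',               # added: conf db
        'ceph versions';                  # added
    # 'ceph status' was dropped: `pveceph status` prints the same now
}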
On 03/12/2020 08:36, Dominic Jäger wrote:
Please try changing the AddressOnParent values so that they are unique.
As you mentioned, the disks should then be attached with different numbers
scsi0, scsi1, scsi2...
Hi,
I wonder if the current Proxmox OVF parser isn't wrong.
It seems that "AddressOn
On 21/12/2020 16:32, alexandre derumier wrote:
> On 03/12/2020 08:36, Dominic Jäger wrote:
>> Please try changing the AddressOnParent values so that they are unique.
>> As you mentioned, the disks should then be attached with different numbers
>> scsi0, scsi1, scsi2...
>
> Hi,
>
> I wonder if th
On 21/12/2020 16:13, Aaron Lauterer wrote:
> we now push it to the correct hash if Ceph is installed
>
> Signed-off-by: Aaron Lauterer
> ---
> PVE/Report.pm | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>
good find!
applied, thanks!
On 21/12/2020 16:13, Aaron Lauterer wrote:
> add:
> * HA status
> * ceph osd df tree
> * ceph conf file and conf db
> * ceph versions
>
> removed:
> * ceph status, as pveceph status is now printing the same information
>
> Signed-off-by: Aaron Lauterer
> ---
>
> @Thomas, we did discuss using th
On 21/12/2020 16:13, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer
> ---
> PVE/Report.pm | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>
applied, thanks!
On 21/12/2020 16:13, Aaron Lauterer wrote:
> @@ -76,7 +77,9 @@ my $init_report_cmds = sub {
>
> if (-e '/etc/ceph/ceph.conf') {
> # TODO: add (now working) rbd ls over all pools? really needed?
> - push @{$report_def->{volumes}}, 'ceph status', 'ceph osd status', 'ceph
> df', 'pve
mmm, it seems that
AddressOnParent is indeed the disk location on the controller,
but in the provided ovf example, there are 2 different controllers (parent=4 &&
parent=5)
(I don't know how vmware manages disks, 1 controller with multiple disks, or
1 controller per disk. Maybe it's related to vm mac
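To make the numbering question concrete: an importer that keys disks by
controller *and* slot would avoid the collision, roughly like this
(purely illustrative, not the actual OVF importer code):

use strict;
use warnings;

# Two disks may share AddressOnParent=0 when they sit on different
# controllers (Parent=4 and Parent=5); numbering by (Parent,
# AddressOnParent) still yields distinct scsiX indices.
my @disks = (
    { Parent => 4, AddressOnParent => 0 },
    { Parent => 5, AddressOnParent => 0 },
);

my $idx = 0;
for my $d (
    sort {
        $a->{Parent} <=> $b->{Parent}
            || $a->{AddressOnParent} <=> $b->{AddressOnParent}
    } @disks
) {
    printf "scsi%d <- controller %d, slot %d\n",
        $idx++, $d->{Parent}, $d->{AddressOnParent};
}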