[pve-devel] [PATCH storage] Fix #2346: rbd: storage shows wrong %-usage

2019-09-02 Thread Alwin Antreich
The patch uses the value from the new field 'stored' if it is available.

In Ceph 14.2.2 the storage calculation changed to a per-pool basis. This
introduced an additional field 'stored' that holds the amount of data
that has been written to the pool, while the field 'used' now holds the
data after replication for the pool.

The new calculation will be used only if all OSDs are running with the
on-disk format introduced by Ceph 14.2.2.
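For illustration, a minimal standalone Perl sketch (assuming Perl >= 5.10 for
'//') of how the choice of field changes the reported usage; the pool numbers
are made up and the hash simply mirrors the 'max_avail', 'bytes_used' and
'stored' fields described above:

#!/usr/bin/perl
use strict;
use warnings;

# made-up stats for a 3/2 replicated pool, field names as in 'ceph df detail'
my $stats = {
    max_avail  => 100 * 1024**3,   # space left for data w/o replication
    stored     => 40 * 1024**3,    # client data written (new in 14.2.2)
    bytes_used => 120 * 1024**3,   # data including replication
};

for my $used ($stats->{bytes_used}, $stats->{stored} // $stats->{bytes_used}) {
    my $total = $used + $stats->{max_avail};
    printf "used %3.0f GiB of %3.0f GiB -> %4.1f%%\n",
        $used / 1024**3, $total / 1024**3, 100 * $used / $total;
}

With 'bytes_used' this example pool reports roughly 54.5% usage, with 'stored'
roughly 28.6%, which matches what clients actually wrote.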

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 8433715..5e351a9 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -521,7 +521,7 @@ sub status {
 # max_avail -> max available space for data w/o replication in the pool
 # bytes_used -> data w/o replication in the pool
 my $free = $d->{stats}->{max_avail};
-my $used = $d->{stats}->{bytes_used};
+my $used = $d->{stats}->{stored} ? $d->{stats}->{stored} : $d->{stats}->{bytes_used};
 my $total = $used + $free;
 my $active = 1;
 
-- 
2.20.1




[pve-devel] [PATCH storage v2] Fix #2346: rbd storage shows wrong %-usage

2019-09-03 Thread Alwin Antreich
The patch uses the value from the field 'stored' if it is available.

In Ceph 14.2.2 the storage calculation changed to a per-pool basis. This
introduced an additional field 'stored' that holds the amount of data
that has been written to the pool, while the field 'used' now holds the
data after replication for the pool.

The new calculation will be used only if all OSDs are running with the
on-disk format introduced by Ceph 14.2.2.

Signed-off-by: Alwin Antreich 
---
v1 -> v2: now checks if the key is defined instead of just truthy
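
The distinction matters when 'stored' is present but 0, or missing entirely;
a small illustration (not plugin code, sample values made up):

#!/usr/bin/perl
use strict;
use warnings;

my @cases = (
    { stored => 0, bytes_used => 123 },   # empty pool, Ceph >= 14.2.2
    { bytes_used => 456 },                # older Ceph, no 'stored' field
);

for my $stats (@cases) {
    my $v1 = $stats->{stored} ? $stats->{stored} : $stats->{bytes_used}; # truth check
    my $v2 = $stats->{stored} // $stats->{bytes_used};                   # defined check
    print "v1: $v1, v2: $v2\n";
}

The v1 ternary falls back to 'bytes_used' for an empty pool (stored == 0),
while the v2 defined-or only falls back when 'stored' is missing.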

 PVE/Storage/RBDPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 8433715..214b732 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -521,7 +521,7 @@ sub status {
 # max_avail -> max available space for data w/o replication in the pool
 # bytes_used -> data w/o replication in the pool
 my $free = $d->{stats}->{max_avail};
-my $used = $d->{stats}->{bytes_used};
+my $used = $d->{stats}->{stored} // $d->{stats}->{bytes_used};
 my $total = $used + $free;
 my $active = 1;
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server] Fix #2171: VM statefile was not activated

2019-10-07 Thread Alwin Antreich
Machine states that were created on snapshots with memory could not be
restored on rollback. The state volume was not activated so KVM couldn't
load the state.

This patch moves the path generation into vm_start and de-/activates the
state volume.
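
Condensed, the intended flow is roughly the following sketch; it only reuses
the PVE::Storage calls visible in the diff below, the helper name is made up,
and error handling plus the rest of vm_start are left out:

use PVE::Storage;

# sketch: build the '-loadstate' argument for a snapshot state volume
sub loadstate_args {
    my ($storecfg, $statefile) = @_;

    my $sfile = $statefile;
    if (!-e $statefile) {
        # not a plain file/device path, so treat it as a storage volume ID:
        # activate it (e.g. map the RBD image) and resolve it to a path
        PVE::Storage::activate_volumes($storecfg, [$statefile]);
        $sfile = PVE::Storage::path($storecfg, $statefile);
    }
    return ('-loadstate', $sfile);
}

# once KVM is running, the volume is released again:
# PVE::Storage::deactivate_volumes($storecfg, [$statefile]) if !-e $statefile;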

Signed-off-by: Alwin Antreich 
---
 PVE/QemuConfig.pm |  3 +--
 PVE/QemuServer.pm | 10 +-
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index edbf1a7..e9796a3 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
 my ($class, $vmid, $vmstate, $data) = @_;
 
 my $storecfg = PVE::Storage::config();
-my $statefile = PVE::Storage::path($storecfg, $vmstate);
-PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, undef, $data->{forcemachine});
+PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef, $data->{forcemachine});
 }
 
 sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8376260..39315b3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5420,6 +5420,7 @@ sub vm_start {
 
my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, $conf, $defaults, $forcemachine);
 
+
my $migrate_port = 0;
my $migrate_uri;
if ($statefile) {
@@ -5466,7 +5467,12 @@ sub vm_start {
push @$cmd, '-S';
 
} else {
-   push @$cmd, '-loadstate', $statefile;
+   my $sfile = $statefile;
+   if (!-e $statefile) {
+   PVE::Storage::activate_volumes($storecfg, [$statefile]);
+   $sfile = PVE::Storage::path($storecfg, $statefile);
+   }
+   push @$cmd, '-loadstate', $sfile;
}
} elsif ($paused) {
push @$cmd, '-S';
@@ -5622,6 +5628,8 @@ sub vm_start {
PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
PVE::Storage::vdisk_free($storecfg, $vmstate);
PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif ($statefile && (!-e $statefile)) {
+   PVE::Storage::deactivate_volumes($storecfg, [$statefile]);
}
 
PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
-- 
2.20.1




Re: [pve-devel] [PATCH qemu-server] Fix #2171: VM statefile was not activated

2019-10-08 Thread Alwin Antreich
On Tue, Oct 08, 2019 at 08:36:57AM +0200, Fabian Grünbichler wrote:
> On October 7, 2019 2:41 pm, Alwin Antreich wrote:
> > Machine states that were created on snapshots with memory could not be
> > restored on rollback. The state volume was not activated so KVM couldn't
> > load the state.
> > 
> > This patch moves the path generation into vm_start and de-/activates the
> > state volume.
> 
> alternatively, the following could also work and re-use more code so 
> that we don't miss the next special handling of some corner case. 
> rolling back from a snapshot with state is just like resuming, but we 
> want to keep the statefile instead of deleting it.
I will send another version with your alternative.

> 
> (untested):
> 
> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> index edbf1a7..b70c276 100644
> --- a/PVE/QemuConfig.pm
> +++ b/PVE/QemuConfig.pm
> @@ -358,9 +358,7 @@ sub __snapshot_rollback_vm_stop {
>  sub __snapshot_rollback_vm_start {
>  my ($class, $vmid, $vmstate, $data) = @_;
>  
> -my $storecfg = PVE::Storage::config();
> -my $statefile = PVE::Storage::path($storecfg, $vmstate);
> -PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, 
> undef, $data->{forcemachine});
> +PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, 
> undef, $data->{forcemachine});
>  }
>  
>  sub __snapshot_rollback_get_unused {
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 8376260..f2d19e1 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5418,6 +5418,11 @@ sub vm_start {
>   print "Resuming suspended VM\n";
>   }
>  
> + if ($statefile && $statefile ne 'tcp' && $statefile ne 'unix') {
> + # re-use resume code
> + $conf->{vmstate} = $statefile;
> + }
> +
>   my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, 
> $conf, $defaults, $forcemachine);
>  
>   my $migrate_port = 0;
> @@ -5465,8 +5470,6 @@ sub vm_start {
>   push @$cmd, '-incoming', $migrate_uri;
>   push @$cmd, '-S';
>  
> - } else {
> - push @$cmd, '-loadstate', $statefile;
>   }
>   } elsif ($paused) {
>   push @$cmd, '-S';
> @@ -5616,11 +5619,16 @@ sub vm_start {
>   property => "guest-stats-polling-interval",
>   value => 2) if (!defined($conf->{balloon}) || 
> $conf->{balloon});
>  
> - if ($is_suspended && (my $vmstate = $conf->{vmstate})) {
> - print "Resumed VM, removing state\n";
> - delete $conf->@{qw(lock vmstate runningmachine)};
> + if (my $vmstate = $conf->{vmstate}) {
>   PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
> - PVE::Storage::vdisk_free($storecfg, $vmstate);
> + delete $conf->{vmstate};
> +
> + if ($is_suspended) {
> + print "Resumed VM, removing state\n";
> + delete $conf->@{qw(lock runningmachine)};
> + PVE::Storage::vdisk_free($storecfg, $vmstate);
> + }
> +
>   PVE::QemuConfig->write_config($vmid, $conf);
>   }
>  
> 
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> >  PVE/QemuConfig.pm |  3 +--
> >  PVE/QemuServer.pm | 10 +-
> >  2 files changed, 10 insertions(+), 3 deletions(-)
> > 
> > diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> > index edbf1a7..e9796a3 100644
> > --- a/PVE/QemuConfig.pm
> > +++ b/PVE/QemuConfig.pm
> > @@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
> >  my ($class, $vmid, $vmstate, $data) = @_;
> >  
> >  my $storecfg = PVE::Storage::config();
> > -my $statefile = PVE::Storage::path($storecfg, $vmstate);
> > -PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, 
> > undef, $data->{forcemachine});
> > +PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, 
> > undef, $data->{forcemachine});
> >  }
> >  
> >  sub __snapshot_rollback_get_unused {
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index 8376260..39315b3 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -5420,6 +5420,7 @@ sub vm_start {
> >  
> > my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, 
> > $conf, $defaults, $forcemachine);
> >  
> > +
> > my $migrate_port = 0;
> > my $migrate_uri;
> > if ($s

Re: [pve-devel] [PATCH qemu-server] Fix #2171: VM statefile was not activated

2019-10-08 Thread Alwin Antreich
On Tue, Oct 08, 2019 at 12:31:06PM +0200, Fabian Grünbichler wrote:
> On October 8, 2019 11:25 am, Alwin Antreich wrote:
> > On Tue, Oct 08, 2019 at 08:36:57AM +0200, Fabian Grünbichler wrote:
> >> On October 7, 2019 2:41 pm, Alwin Antreich wrote:
> >> > Machine states that were created on snapshots with memory could not be
> >> > restored on rollback. The state volume was not activated so KVM couldn't
> >> > load the state.
> >> > 
> >> > This patch moves the path generation into vm_start and de-/activates the
> >> > state volume.
> >> 
> >> alternatively, the following could also work and re-use more code so 
> >> that we don't miss the next special handling of some corner case. 
> >> rolling back from a snapshot with state is just like resuming, but we 
> >> want to keep the statefile instead of deleting it.
> > I will send another version with your alternative.
> > 
> >> 
> >> (untested):
> >> 
> >> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> >> index edbf1a7..b70c276 100644
> >> --- a/PVE/QemuConfig.pm
> >> +++ b/PVE/QemuConfig.pm
> >> @@ -358,9 +358,7 @@ sub __snapshot_rollback_vm_stop {
> >>  sub __snapshot_rollback_vm_start {
> >>  my ($class, $vmid, $vmstate, $data) = @_;
> >>  
> >> -my $storecfg = PVE::Storage::config();
> >> -my $statefile = PVE::Storage::path($storecfg, $vmstate);
> >> -PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, 
> >> undef, $data->{forcemachine});
> >> +PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, 
> >> undef, $data->{forcemachine});
> >>  }
> >>  
> >>  sub __snapshot_rollback_get_unused {
> >> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> >> index 8376260..f2d19e1 100644
> >> --- a/PVE/QemuServer.pm
> >> +++ b/PVE/QemuServer.pm
> >> @@ -5418,6 +5418,11 @@ sub vm_start {
> >>print "Resuming suspended VM\n";
> >>}
> >>  
> >> +  if ($statefile && $statefile ne 'tcp' && $statefile ne 'unix') {
> >> +  # re-use resume code
> >> +  $conf->{vmstate} = $statefile;
> >> +  }
> >> +
> >>my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, 
> >> $conf, $defaults, $forcemachine);
> >>  
> >>my $migrate_port = 0;
> >> @@ -5465,8 +5470,6 @@ sub vm_start {
> >>push @$cmd, '-incoming', $migrate_uri;
> >>push @$cmd, '-S';
> >>  
> >> -  } else {
> >> -  push @$cmd, '-loadstate', $statefile;
> >>}
> >>} elsif ($paused) {
> >>push @$cmd, '-S';
> >> @@ -5616,11 +5619,16 @@ sub vm_start {
> >>property => "guest-stats-polling-interval",
> >>value => 2) if (!defined($conf->{balloon}) || 
> >> $conf->{balloon});
> >>  
> >> -  if ($is_suspended && (my $vmstate = $conf->{vmstate})) {
> >> -  print "Resumed VM, removing state\n";
> >> -  delete $conf->@{qw(lock vmstate runningmachine)};
> >> +  if (my $vmstate = $conf->{vmstate}) {
> >>PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
> >> -  PVE::Storage::vdisk_free($storecfg, $vmstate);
> >> +  delete $conf->{vmstate};
> >> +
> >> +  if ($is_suspended) {
> >> +  print "Resumed VM, removing state\n";
> >> +  delete $conf->@{qw(lock runningmachine)};
> >> +  PVE::Storage::vdisk_free($storecfg, $vmstate);
> >> +  }
> >> +
> >>PVE::QemuConfig->write_config($vmid, $conf);
> >>}
> >>  
> >> 
> >> > 
> >> > Signed-off-by: Alwin Antreich 
> >> > ---
> >> >  PVE/QemuConfig.pm |  3 +--
> >> >  PVE/QemuServer.pm | 10 +-
> >> >  2 files changed, 10 insertions(+), 3 deletions(-)
> >> > 
> >> > diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> >> > index edbf1a7..e9796a3 100644
> >> > --- a/PVE/QemuConfig.pm
> >> > +++ b/PVE/QemuConfig.pm
> >> > @@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
> >> >  my ($class, $vmid, $vmstate, $data) = @_;
> >> >  
> >&

[pve-devel] [PATCH qemu-server v2] Fix #2171: VM statefile was not activated

2019-10-10 Thread Alwin Antreich
Machine states that were created on snapshots with memory could not be
restored on rollback. The state volume was not activated so KVM couldn't
load the state.

This patch removes the path generation on rollback. It uses the vmstate
and de-/activates the state volume in vm_start. This in turn disallows
the use of path based statefiles when used with the '--stateuri' option
on 'qm start'. Only 'tcp', 'unix' and our storage based URIs can be
used now.

Signed-off-by: Alwin Antreich 
---
 PVE/QemuConfig.pm | 3 +--
 PVE/QemuServer.pm | 8 +---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index edbf1a7..e9796a3 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
 my ($class, $vmid, $vmstate, $data) = @_;
 
 my $storecfg = PVE::Storage::config();
-my $statefile = PVE::Storage::path($storecfg, $vmstate);
-PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, undef, $data->{forcemachine});
+PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef, $data->{forcemachine});
 }
 
 sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ac9dfde..d4feae9 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5340,6 +5340,7 @@ sub vm_start {
die "you can't start a vm if it's a template\n" if 
PVE::QemuConfig->is_template($conf);
 
my $is_suspended = PVE::QemuConfig->has_lock($conf, 'suspended');
+   $conf->{vmstate} = $statefile if ($statefile && $statefile ne 'tcp' && $statefile ne 'unix');
 
PVE::QemuConfig->check_lock($conf)
if !($skiplock || $is_suspended);
@@ -5465,8 +5466,6 @@ sub vm_start {
push @$cmd, '-incoming', $migrate_uri;
push @$cmd, '-S';
 
-   } else {
-   push @$cmd, '-loadstate', $statefile;
}
} elsif ($paused) {
push @$cmd, '-S';
@@ -5616,12 +5615,15 @@ sub vm_start {
property => "guest-stats-polling-interval",
value => 2) if (!defined($conf->{balloon}) || $conf->{balloon});
 
-   if ($is_suspended && (my $vmstate = $conf->{vmstate})) {
+   my $vmstate = $conf->{vmstate};
+   if ($is_suspended && $vmstate) {
print "Resumed VM, removing state\n";
delete $conf->@{qw(lock vmstate runningmachine)};
PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
PVE::Storage::vdisk_free($storecfg, $vmstate);
PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif ($vmstate) {
+   PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
}
 
PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
-- 
2.20.1




Re: [pve-devel] [PATCH qemu-server v2] Fix #2171: VM statefile was not activated

2019-10-11 Thread Alwin Antreich
On Fri, Oct 11, 2019 at 07:10:53AM +0200, Thomas Lamprecht wrote:
> On 10/10/19 3:58 PM, Alwin Antreich wrote:
> > Machine states that were created on snapshots with memory could not be
> > restored on rollback. The state volume was not activated so KVM couldn't
> > load the state.
> > 
> > This patch removes the path generation on rollback. It uses the vmstate
> > and de-/activates the state volume in vm_start. This in turn disallows
> > the use of path based statefiles when used with the '--stateuri' option
> > on 'qm start'. Only 'tcp', 'unix' and our storage based URIs can be
> > used now.
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> >  PVE/QemuConfig.pm | 3 +--
> >  PVE/QemuServer.pm | 8 +---
> >  2 files changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> > index edbf1a7..e9796a3 100644
> > --- a/PVE/QemuConfig.pm
> > +++ b/PVE/QemuConfig.pm
> > @@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
> >  my ($class, $vmid, $vmstate, $data) = @_;
> >  
> >  my $storecfg = PVE::Storage::config();
> > -my $statefile = PVE::Storage::path($storecfg, $vmstate);
> > -PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, 
> > undef, $data->{forcemachine});
> > +PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, 
> > undef, $data->{forcemachine});
> >  }
> >  
> >  sub __snapshot_rollback_get_unused {
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index ac9dfde..d4feae9 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -5340,6 +5340,7 @@ sub vm_start {
> > die "you can't start a vm if it's a template\n" if 
> > PVE::QemuConfig->is_template($conf);
> >  
> > my $is_suspended = PVE::QemuConfig->has_lock($conf, 'suspended');
> > +   $conf->{vmstate} = $statefile if ($statefile && $statefile ne 'tcp' && 
> > $statefile ne 'unix');
> 
> why? I mean you then get it out of this hash in the same submethod, i.e,
> same scope, below again? 
No, config_to_command takes care of activation and path generation of
the vmstate, the same way as for the resume of a hibernated VM.

> 
> And even if you'd need it and you just decided to not explain why
> in the commit message it would be still better to get it ...
> 
> >  
> > PVE::QemuConfig->check_lock($conf)
> > if !($skiplock || $is_suspended);
> > @@ -5465,8 +5466,6 @@ sub vm_start {
> > push @$cmd, '-incoming', $migrate_uri;
> > push @$cmd, '-S';
> >  
> > -   } else {
> > -   push @$cmd, '-loadstate', $statefile;
> 
> ... here, as we really have exact the condition you checked
> above: $statefile truthy, but neither 'tcp' or 'unix'...
> 
> But as said, I'd rather not have it in the $conf (which can get written out
> again) but maybe rather:
> 
> $statefile //= $conf->{vmstate};
> 
> and then just use $statefile... (I mean rename it to $vmstate, if you want)
My first version had this intention. After talking with Fabian G., I
made v2 to re-use the same method as the resume of a hibernated VM.
I have no bias here, either way is fine for me.

> 
> > }
> > } elsif ($paused) {
> > push @$cmd, '-S';
> > @@ -5616,12 +5615,15 @@ sub vm_start {
> > property => "guest-stats-polling-interval",
> > value => 2) if (!defined($conf->{balloon}) || 
> > $conf->{balloon});
> >  
> > -   if ($is_suspended && (my $vmstate = $conf->{vmstate})) {
> > +   my $vmstate = $conf->{vmstate};
> > +   if ($is_suspended && $vmstate) {
> > print "Resumed VM, removing state\n";
> > delete $conf->@{qw(lock vmstate runningmachine)};
> > PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
> > PVE::Storage::vdisk_free($storecfg, $vmstate);
> > PVE::QemuConfig->write_config($vmid, $conf);
> > +   } elsif ($vmstate) {
> > +   PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
> > }
> 
> to be more clear that we always want to deactivate and for nicer code
> in general I'd do:
> 
> if ($vmstate) {
> # always deactive vmstate volume again!
> PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
> if ($is_suspended) {
> print "Resumed VM, removing state\n";
> delete $conf->@{qw(lock vmstate runningmachine)};
> PVE::Storage::vdisk_free($storecfg, $vmstate);
> PVE::QemuConfig->write_config($vmid, $conf);
> }
> }
> 
> 
> 
> As then you have a clear linear flow in the if branches.
> (note: $vmstate is $statefile, or whatever we call it then)
Yes, that looks better than my version. :)

Thanks for reviewing.



Re: [pve-devel] [PATCH qemu-server v2] Fix #2171: VM statefile was not activated

2019-10-11 Thread Alwin Antreich
On Fri, Oct 11, 2019 at 12:17:28PM +0200, Thomas Lamprecht wrote:
> On 10/11/19 12:02 PM, Alwin Antreich wrote:
> > On Fri, Oct 11, 2019 at 07:10:53AM +0200, Thomas Lamprecht wrote:
> >> On 10/10/19 3:58 PM, Alwin Antreich wrote:
> >>> Machine states that were created on snapshots with memory could not be
> >>> restored on rollback. The state volume was not activated so KVM couldn't
> >>> load the state.
> >>>
> >>> This patch removes the path generation on rollback. It uses the vmstate
> >>> and de-/activates the state volume in vm_start. This in turn disallows
> >>> the use of path based statefiles when used with the '--stateuri' option
> >>> on 'qm start'. Only 'tcp', 'unix' and our storage based URIs can be
> >>> used now.
> >>>
> >>> Signed-off-by: Alwin Antreich 
> >>> ---
> >>>  PVE/QemuConfig.pm | 3 +--
> >>>  PVE/QemuServer.pm | 8 +---
> >>>  2 files changed, 6 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
> >>> index edbf1a7..e9796a3 100644
> >>> --- a/PVE/QemuConfig.pm
> >>> +++ b/PVE/QemuConfig.pm
> >>> @@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
> >>>  my ($class, $vmid, $vmstate, $data) = @_;
> >>>  
> >>>  my $storecfg = PVE::Storage::config();
> >>> -my $statefile = PVE::Storage::path($storecfg, $vmstate);
> >>> -PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, 
> >>> undef, undef, $data->{forcemachine});
> >>> +PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, 
> >>> undef, $data->{forcemachine});
> >>>  }
> >>>  
> >>>  sub __snapshot_rollback_get_unused {
> >>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> >>> index ac9dfde..d4feae9 100644
> >>> --- a/PVE/QemuServer.pm
> >>> +++ b/PVE/QemuServer.pm
> >>> @@ -5340,6 +5340,7 @@ sub vm_start {
> >>>   die "you can't start a vm if it's a template\n" if 
> >>> PVE::QemuConfig->is_template($conf);
> >>>  
> >>>   my $is_suspended = PVE::QemuConfig->has_lock($conf, 'suspended');
> >>> + $conf->{vmstate} = $statefile if ($statefile && $statefile ne 'tcp' && 
> >>> $statefile ne 'unix');
> >>
> >> why? I mean you then get it out of this hash in the same submethod, i.e,
> >> same scope, below again? 
> > No, the config_to_command takes care of activation and path generation
> > of the vmstate. The same way as the resume of a hibernated VM.
> > 
> 
> then write/explain such things in the commit message...
I will add this to the next version.

> 
> >>
> >> And even if you'd need it and you just decided to not explain why
> >> in the commit message it would be still better to get it ...
> >>
> >>>  
> >>>   PVE::QemuConfig->check_lock($conf)
> >>>   if !($skiplock || $is_suspended);
> >>> @@ -5465,8 +5466,6 @@ sub vm_start {
> >>>   push @$cmd, '-incoming', $migrate_uri;
> >>>   push @$cmd, '-S';
> >>>  
> >>> - } else {
> >>> - push @$cmd, '-loadstate', $statefile;
> >>
> >> ... here, as we really have exact the condition you checked
> >> above: $statefile truthy, but neither 'tcp' or 'unix'...
> >>
> >> But as said, I'd rather not have it in the $conf (which can get written out
> >> again) but maybe rather:
> >>
> >> $statefile //= $conf->{vmstate};
> >>
> >> and then just use $statefile... (I mean rename it to $vmstate, if you want)
> > My first version was in this intention. After talking with Fabain G., I
> > made the v2, to re-use the same method as the resume of an hibernated
> > VM. I have no bias here, either way is fine for me.
> 
> but you can still do it here even if you put it in the config, here is the
> correct place to do:
> 
> $conf->{vmstate} //= $statefile; 
Do you mean by "correct place", the else clause with the "--loadstate"?
It can't go there because the config_to_command has to happen before, as
it assigns the @$cmd. The other options are then pushed in addition to the
@$cmd, if the statefile is equal to tcp or unix.

> 
> I.e., I never said you should go back and handle it in your v1.. I approve
> with Fabians suggestion. The above postif just seemed out-of-place, especially
> if we have a $statefile handling already here..
Prior to this patch, the $statefile contained a path to the state
file/image on rollback of a snapshot, and the loadstate could take the
$statefile directly. But the storage was not activated, which made the
loadstate fail e.g. on RBD. Because of the config_to_command I placed the
assignment close to the top, where the config is loaded.



Re: [pve-devel] [PATCH qemu-server v2] Fix #2171: VM statefile was not activated

2019-10-14 Thread Alwin Antreich
On Mon, Oct 14, 2019 at 11:44:59AM +0200, Thomas Lamprecht wrote:
> On 10/11/19 1:45 PM, Alwin Antreich wrote:
> > On Fri, Oct 11, 2019 at 12:17:28PM +0200, Thomas Lamprecht wrote:
> >> On 10/11/19 12:02 PM, Alwin Antreich wrote:
> >>> On Fri, Oct 11, 2019 at 07:10:53AM +0200, Thomas Lamprecht wrote:
> >>>> On 10/10/19 3:58 PM, Alwin Antreich wrote:
> >>>>> Machine states that were created on snapshots with memory could not be
> >>>>> restored on rollback. The state volume was not activated so KVM couldn't
> >>>>> load the state.
> >>>>>
> >>>>> This patch removes the path generation on rollback. It uses the vmstate
> >>>>> and de-/activates the state volume in vm_start. This in turn disallows
> >>>>> the use of path based statefiles when used with the '--stateuri' option
> >>>>> on 'qm start'. Only 'tcp', 'unix' and our storage based URIs can be
> 
> this is also API breakage, or? Why not a simple path check fallback in 
> cfg2cmd?
That's what I am not sure about, see my earlier email. Should I check
for file/device paths or just drop it?
https://pve.proxmox.com/pipermail/pve-devel/2019-October/039465.html

> 
> >>>>> used now.
> >>>>>
> >>>>> Signed-off-by: Alwin Antreich 
> >>>>> ---
> >>>>>  PVE/QemuConfig.pm | 3 +--
> >>>>>  PVE/QemuServer.pm | 8 +---
> >>>>>  2 files changed, 6 insertions(+), 5 deletions(-)
> >>>>>
> 
> >>>>
> 
> >>>>> PVE::QemuConfig->check_lock($conf)
> >>>>> if !($skiplock || $is_suspended);
> >>>>> @@ -5465,8 +5466,6 @@ sub vm_start {
> >>>>> push @$cmd, '-incoming', $migrate_uri;
> >>>>> push @$cmd, '-S';
> >>>>>  
> >>>>> -   } else {
> >>>>> -   push @$cmd, '-loadstate', $statefile;
> >>>>
> >>>> ... here, as we really have exact the condition you checked
> >>>> above: $statefile truthy, but neither 'tcp' or 'unix'...
> >>>>
> >>>> But as said, I'd rather not have it in the $conf (which can get written 
> >>>> out
> >>>> again) but maybe rather:
> >>>>
> >>>> $statefile //= $conf->{vmstate};
> >>>>
> >>>> and then just use $statefile... (I mean rename it to $vmstate, if you 
> >>>> want)
> >>> My first version was in this intention. After talking with Fabain G., I
> >>> made the v2, to re-use the same method as the resume of an hibernated
> >>> VM. I have no bias here, either way is fine for me.
> >>
> >> but you can still do it here even if you put it in the config, here is the
> >> correct place to do:
> >>
> >> $conf->{vmstate} //= $statefile; 
> > Do you mean by "correct place", the else clause with the "--loadstate"?
> > It can't go there because the config_to_command has to happen before, as
> > it assigns the @$cmd. The other options are then pushed in addition to the
> > @$cmd, if the statefile is equal to tcp or unix.
> > 
> 
> but we could move that below?
> Do a @extra_cmd (or the like) and have then one unified statefile handling?
> I mean this method is already very big and the more things inside are spread
> out the harder it's to maintain it..
Yes, I will go ahead and put this into v3.

> 
> Also this makes your "It uses the vmstate and de-/activates the state volume
> in vm_start" sentence from the commit message false, as it's activated in
> config_to_command not vm_start ...
True, I will rewrite the commit message in v3 to also better reflect the
outcome of the discussion.



[pve-devel] [PATCH qemu-server v3] Fix #2171: VM statefile was not activated

2019-10-17 Thread Alwin Antreich
On rollback of a snapshot with state, the storage wasn't activated and
KVM failed to load the state on a storage like RBD. The $statefile
contained a path to the state file/device on rollback of a snapshot.

This patch assigns a statefile given as a storage-based URI
(storage:vmid/vm-image) to $conf->{vmstate}, so config_to_command can
activate the volume, generate its path and add it to '-loadstate'. Any
file/device based statefile is added directly to '-loadstate'.
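
Condensed, the dispatch described above is roughly the following sketch; the
helper name is made up, and the real change is spread over vm_start and
config_to_command as the diff below shows:

# sketch: decide what to do with $statefile (from 'qm start --stateuri'
# or from snapshot rollback); @$cmd_ext collects extra KVM arguments
sub statefile_dispatch {
    my ($conf, $cmd_ext, $statefile, $migrate_uri) = @_;

    if ($statefile eq 'tcp' || $statefile eq 'unix') {
        push @$cmd_ext, '-incoming', $migrate_uri, '-S';  # incoming live migration
    } elsif (-e $statefile) {
        push @$cmd_ext, '-loadstate', $statefile;         # plain file/device path
    } else {
        # storage based URI (storage:vmid/vm-image): config_to_command will
        # activate the volume, resolve its path and add '-loadstate' itself
        $conf->{vmstate} = $statefile;
    }
    return;
}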

Signed-off-by: Alwin Antreich 
---
Note:   V1 -> V2: re-use resume code for rollback, incorporate
  suggestions of Fabian
  https://pve.proxmox.com/pipermail/pve-devel/2019-October/039459.html

V2 -> V3: move config_to_command below command additions,
  incorporate suggestions of Thomas
  https://pve.proxmox.com/pipermail/pve-devel/2019-October/039564.html

 PVE/QemuConfig.pm |  3 +--
 PVE/QemuServer.pm | 36 +++-
 2 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index edbf1a7..e9796a3 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -359,8 +359,7 @@ sub __snapshot_rollback_vm_start {
 my ($class, $vmid, $vmstate, $data) = @_;
 
 my $storecfg = PVE::Storage::config();
-my $statefile = PVE::Storage::path($storecfg, $vmstate);
-PVE::QemuServer::vm_start($storecfg, $vmid, $statefile, undef, undef, undef, $data->{forcemachine});
+PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef, $data->{forcemachine});
 }
 
 sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8dda594..b4e1ec8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5416,9 +5416,8 @@ sub vm_start {
print "Resuming suspended VM\n";
}
 
-   my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, $conf, $defaults, $forcemachine);
-
my $migrate_uri;
+   my $cmd_ext;
if ($statefile) {
if ($statefile eq 'tcp') {
my $localip = "localhost";
@@ -5447,8 +5446,8 @@ sub vm_start {
my $pfamily = PVE::Tools::get_host_address_family($nodename);
my $migrate_port = PVE::Tools::next_migrate_port($pfamily);
$migrate_uri = "tcp:${localip}:${migrate_port}";
-   push @$cmd, '-incoming', $migrate_uri;
-   push @$cmd, '-S';
+   push @$cmd_ext, '-incoming', $migrate_uri;
+   push @$cmd_ext, '-S';
 
} elsif ($statefile eq 'unix') {
# should be default for secure migrations as a ssh TCP forward
@@ -5459,16 +5458,24 @@ sub vm_start {
 
$migrate_uri = "unix:$socket_addr";
 
-   push @$cmd, '-incoming', $migrate_uri;
-   push @$cmd, '-S';
+   push @$cmd_ext, '-incoming', $migrate_uri;
+   push @$cmd_ext, '-S';
 
+   } elsif (-e $statefile) {
+   push @$cmd_ext, '-loadstate', $statefile;
} else {
-   push @$cmd, '-loadstate', $statefile;
+   # config_to_command takes care of activation and path
+   # generation of storage URIs (storage:vmid/vm-image) and adds
+   # the statefile to -loadstate
+   $conf->{vmstate} = $statefile;
}
} elsif ($paused) {
-   push @$cmd, '-S';
+   push @$cmd_ext, '-S';
}
 
+   my ($cmd, $vollist, $spice_port) = config_to_command($storecfg, $vmid, $conf, $defaults, $forcemachine);
+   push @$cmd, $cmd_ext if $cmd_ext;
+
# host pci devices
 for (my $i = 0; $i < $MAX_HOSTPCI_DEVICES; $i++)  {
   my $d = parse_hostpci($conf->{"hostpci$i"});
@@ -5613,12 +5620,15 @@ sub vm_start {
property => "guest-stats-polling-interval",
value => 2) if (!defined($conf->{balloon}) || $conf->{balloon});
 
-   if ($is_suspended && (my $vmstate = $conf->{vmstate})) {
-   print "Resumed VM, removing state\n";
-   delete $conf->@{qw(lock vmstate runningmachine)};
+   if (my $vmstate = $conf->{vmstate}) {
+   # always deactive vmstate volume again
PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
-   PVE::Storage::vdisk_free($storecfg, $vmstate);
-   PVE::QemuConfig->write_config($vmid, $conf);
+   if ($is_suspended) {
+   print "Resumed VM, removing state\n";
+   delete $conf->@{qw(lock vmstate runningmachine)};
+   PVE::Storage::vdisk_free($storecfg, $vmstate);
+   PVE::QemuConfig->write_config($vmid, $conf);
+   }
}
 
PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
-- 
2.20.1




[pve-devel] [PATCH docs 08/11] pveceph: Reorganize TOC for new sections

2019-11-04 Thread Alwin Antreich
Put the previously added sections into subsections for a better outline
of the TOC.

With the rearrangement of the first-level titles to second level, the
general description of a service needs to move under the new first-level
title. This also adds/corrects some statements in those descriptions.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 79 ++--
 1 file changed, 45 insertions(+), 34 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 9806401..2972a68 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -234,11 +234,8 @@ configuration file.
 
 
 [[pve_ceph_monitors]]
-Creating Ceph Monitors
---
-
-[thumbnail="screenshot/gui-ceph-monitor.png"]
-
+Ceph Monitor
+---
 The Ceph Monitor (MON)
 footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
@@ -247,6 +244,12 @@ used the installation wizard. You won't need more than 3 
monitors as long
 as your cluster is small to midsize, only really large clusters will
 need more than that.
 
+
+Creating Monitors
+~
+
+[thumbnail="screenshot/gui-ceph-monitor.png"]
+
 On each node where you want to place a monitor (three monitors are 
recommended),
 create it by using the 'Ceph -> Monitor' tab in the GUI or run.
 
@@ -256,12 +259,9 @@ create it by using the 'Ceph -> Monitor' tab in the GUI or 
run.
 pveceph mon create
 
 
-This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
-do not want to install a manager, specify the '-exclude-manager' option.
-
 
-Destroying Ceph Monitor
---
+Destroying Monitors
+~~~
 
 [thumbnail="screenshot/gui-ceph-monitor-destroy.png"]
 
@@ -280,16 +280,19 @@ NOTE: At least three Monitors are needed for quorum.
 
 
 [[pve_ceph_manager]]
-Creating Ceph Manager
---
+Ceph Manager
+
+The Manager daemon runs alongside the monitors, providing an interface for
+monitoring the cluster. Since the Ceph luminous release at least one ceph-mgr
+footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+required.
+
+Creating Manager
+
 
 [thumbnail="screenshot/gui-ceph-manager.png"]
 
-The Manager daemon runs alongside the monitors, providing an interface for
-monitoring the cluster. Since the Ceph luminous release the
-ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
-is required. During monitor installation the ceph manager will be installed as
-well.
+You can install multiple Manager, but at any time only one Manager is active.
 
 [source,bash]
 
@@ -300,8 +303,8 @@ NOTE: It is recommended to install the Ceph Manager on the 
monitor nodes. For
 high availability install more then one manager.
 
 
-Destroying Ceph Manager
---
+Destroying Manager
+~~
 
 [thumbnail="screenshot/gui-ceph-manager-destroy.png"]
 
@@ -321,8 +324,15 @@ the cluster status or usage require a running Manager.
 
 
 [[pve_ceph_osds]]
-Creating Ceph OSDs
---
+Ceph OSDs
+-
+Ceph **O**bject **S**torage **D**aemons are storing objects for Ceph over the
+network. In a Ceph cluster, you will usually have one OSD per physical disk.
+
+NOTE: By default an object is 4 MiB in size.
+
+Creating OSDs
+~
 
 [thumbnail="screenshot/gui-ceph-osd-status.png"]
 
@@ -346,8 +356,7 @@ ceph-volume lvm zap /dev/sd[X] --destroy
 
 WARNING: The above command will destroy data on the disk!
 
-Ceph Bluestore
-~~
+.Ceph Bluestore
 
 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
 introduced, the so called Bluestore
@@ -386,8 +395,7 @@ internal journal or write-ahead log. It is recommended to 
use a fast SSD or
 NVRAM for better performance.
 
 
-Ceph Filestore
-~~
+.Ceph Filestore
 
 Before Ceph Luminous, Filestore was used as default storage type for Ceph OSDs.
 Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
@@ -399,8 +407,8 @@ Starting with Ceph Nautilus, {pve} does not support 
creating such OSDs with
 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
 
 
-Destroying Ceph OSDs
-
+Destroying OSDs
+~~~
 
 [thumbnail="screenshot/gui-ceph-osd-destroy.png"]
 
@@ -431,14 +439,17 @@ WARNING: The above command will destroy data on the disk!
 
 
 [[pve_ceph_pools]]
-Creating Ceph Pools

-
-[thumbnail="screenshot/gui-ceph-pools.png"]
-
+Ceph Pools
+--
 A pool is a logical group for storing objects. It holds **P**lacement
 **G**roups (`PG`, `pg_num`), a collection of objects.
 
+
+Creating Pools
+~~
+
+[thumbnail="screenshot/gui-ceph-pools.png"]
+
 When no options are given, we set a 

[pve-devel] [PATCH docs 10/11] Fix #1958: pveceph: add section Ceph maintenance

2019-11-04 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 54 
 1 file changed, 54 insertions(+)

diff --git a/pveceph.adoc b/pveceph.adoc
index 087c4d0..127e3bb 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -331,6 +331,7 @@ network. In a Ceph cluster, you will usually have one OSD 
per physical disk.
 
 NOTE: By default an object is 4 MiB in size.
 
+[[pve_ceph_osd_create]]
 Creating OSDs
 ~
 
@@ -407,6 +408,7 @@ Starting with Ceph Nautilus, {pve} does not support 
creating such OSDs with
 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
 
 
+[[pve_ceph_osd_destroy]]
 Destroying OSDs
 ~~~
 
@@ -724,6 +726,58 @@ pveceph pool destroy NAME
 
 
 
+Ceph maintenance
+
+Replace OSDs
+
+One of the common maintenance tasks in Ceph is to replace a disk of an OSD. If
+a disk already failed, you can go ahead and run through the steps in
+xref:pve_ceph_osd_destroy[Destroying OSDs]. As no data is accessible from the
+disk. Ceph will recreate those copies on the remaining OSDs if possible.
+
+For replacing a still functioning disk. From the GUI run through the steps as
+shown in xref:pve_ceph_osd_destroy[Destroying OSDs]. The only addition is to
+wait till the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
+
+On the command line use the below commands.
+
+ceph osd out osd.<id>
+
+
+You can check with the below command if the OSD can be already removed.
+
+ceph osd safe-to-destroy osd.<id>
+
+
+Once the above check tells you that it is save to remove the OSD, you can
+continue with below commands.
+
+systemctl stop ceph-osd@<id>.service
+pveceph osd destroy <id>
+
+
+Replace the old with the new disk and use the same procedure as described in
+xref:pve_ceph_osd_create[Creating OSDs].
+
+NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
+`size + 1` nodes are available.
+
+Run fstrim (discard)
+
+It is a good measure to run fstrim (discard) regularly on VMs or containers.
+This releases data blocks that the filesystem isn’t using anymore. It reduces
+data usage and the resource load.
+
+Scrub & Deep Scrub
+~~
+Ceph insures data integrity by 'scrubbing' placement groups. Ceph check every
+object in a PG for its health. There are two forms of Scrubbing, daily
+(metadata compare) and weekly. The latter reads the object and uses checksums
+to ensure data integrity. If a running scrub interferes with business needs,
+you can adjust the time of execution of Scrub footnote:[Ceph scrubbing
+https://docs.ceph.com/docs/nautilus/rados/configuration/osd-config-ref/#scrubbing].
+
+
 Ceph monitoring and troubleshooting
 ---
 A good start is to continuosly monitor the ceph health from the start of
-- 
2.20.1




[pve-devel] [PATCH docs 00/11] pveceph: update doc

2019-11-04 Thread Alwin Antreich
In patch 11 I have added an attribute to asciidoc-pve.conf to replace
Ceph's codename. I hope this is the right location for this.

Review and suggestions are very welcome. Thanks. :)

Alwin Antreich (11):
  pveceph: old style commands to subcommands
  pveceph: add section - Destroying Ceph OSDs
  pveceph: add section - Destroying Ceph Monitor
  pveceph: add Ceph Monitor screenshot
  pveceph: add section - Destroying Ceph Manager
  pveceph: add section - Destroying Ceph Pools
  pveceph: switch note for Creating Ceph Manager
  pveceph: Reorganize TOC for new sections
  pveceph: rename CephFS subtitles
  Fix #1958: pveceph: add section Ceph maintenance
  pveceph: add attribute ceph_codename

 pveceph.adoc  | 246 ++
 asciidoc/asciidoc-pve.conf|   2 +-
 .../screenshot/gui-ceph-manager-destroy.png   | Bin 0 -> 153596 bytes
 images/screenshot/gui-ceph-manager.png| Bin 0 -> 153389 bytes
 .../screenshot/gui-ceph-monitor-destroy.png   | Bin 0 -> 154084 bytes
 images/screenshot/gui-ceph-osd-destroy.png| Bin 0 -> 146184 bytes
 images/screenshot/gui-ceph-pools-destroy.png  | Bin 0 -> 141532 bytes
 7 files changed, 203 insertions(+), 45 deletions(-)
 create mode 100644 images/screenshot/gui-ceph-manager-destroy.png
 create mode 100644 images/screenshot/gui-ceph-manager.png
 create mode 100644 images/screenshot/gui-ceph-monitor-destroy.png
 create mode 100644 images/screenshot/gui-ceph-osd-destroy.png
 create mode 100644 images/screenshot/gui-ceph-pools-destroy.png

-- 
2.20.1




[pve-devel] [PATCH docs 09/11] pveceph: rename CephFS subtitles

2019-11-04 Thread Alwin Antreich
to reflect the same active voice style as the other subtitles in pveceph

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 2972a68..087c4d0 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -662,7 +662,7 @@ refer to the ceph documentation. footnote:[Configuring 
multiple active MDS
 daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
 
 [[pveceph_fs_create]]
-Create a CephFS
+Creating CephFS
 ~~~
 
 With {pve}'s CephFS integration into you can create a CephFS easily over the
@@ -695,8 +695,8 @@ 
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
 
-Destroy CephFS
-~~
+Destroying CephFS
+~
 
 WARNING: Destroying a CephFS will render all its data unusable, this cannot be
 undone!
-- 
2.20.1




[pve-devel] [PATCH docs 07/11] pveceph: switch note for Creating Ceph Manager

2019-11-04 Thread Alwin Antreich
to be more consistent with other sections, the note for creating the
Ceph Manager was moved below the command.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index a4f2e4e..9806401 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -291,14 +291,14 @@ ceph-mgr footnote:[Ceph Manager 
http://docs.ceph.com/docs/luminous/mgr/] daemon
 is required. During monitor installation the ceph manager will be installed as
 well.
 
-NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
-high availability install more then one manager.
-
 [source,bash]
 
 pveceph mgr create
 
 
+NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
+high availability install more then one manager.
+
 
 Destroying Ceph Manager
 --
-- 
2.20.1




[pve-devel] [PATCH docs 01/11] pveceph: old style commands to subcommands

2019-11-04 Thread Alwin Antreich
Replace remaining old style single commands with current subcommands

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index ebf9ef7..cfb86a8 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -253,7 +253,7 @@ create it by using the 'Ceph -> Monitor' tab in the GUI or 
run.
 
 [source,bash]
 
-pveceph createmon
+pveceph mon create
 
 
 This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
@@ -275,7 +275,7 @@ high availability install more then one manager.
 
 [source,bash]
 
-pveceph createmgr
+pveceph mgr create
 
 
 
@@ -289,7 +289,7 @@ via GUI or via CLI as follows:
 
 [source,bash]
 
-pveceph createosd /dev/sd[X]
+pveceph osd create /dev/sd[X]
 
 
 TIP: We recommend a Ceph cluster size, starting with 12 OSDs, distributed 
evenly
@@ -315,7 +315,7 @@ This is the default when creating OSDs since Ceph Luminous.
 
 [source,bash]
 
-pveceph createosd /dev/sd[X]
+pveceph osd create /dev/sd[X]
 
 
 .Block.db and block.wal
@@ -326,7 +326,7 @@ specified separately.
 
 [source,bash]
 
-pveceph createosd /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
+pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
 
 
 You can directly choose the size for those with the '-db_size' and '-wal_size'
@@ -385,7 +385,7 @@ You can create pools through command line or on the GUI on 
each PVE host under
 
 [source,bash]
 
-pveceph createpool <name>
+pveceph pool create <name>
 
 
 If you would like to automatically also get a storage definition for your pool,
-- 
2.20.1




[pve-devel] [PATCH docs 11/11] pveceph: add attribute ceph_codename

2019-11-04 Thread Alwin Antreich
To change the codename for Ceph in one place, the patch adds the
asciidoc attribute 'ceph_codename'. It replaces the outdated references
to luminous and switches the links in pveceph.adoc from http to https.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc   | 26 +-
 asciidoc/asciidoc-pve.conf |  2 +-
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 127e3bb..3316389 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -58,15 +58,15 @@ and VMs on the same node is possible.
 To simplify management, we provide 'pveceph' - a tool to install and
 manage {ceph} services on {pve} nodes.
 
-.Ceph consists of a couple of Daemons footnote:[Ceph intro 
http://docs.ceph.com/docs/luminous/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro 
https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD 
storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)
 
 TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
+footnote:[Ceph architecture 
https://docs.ceph.com/docs/{ceph_codename}/architecture/]
 and vocabulary
-footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
 
 
 Precondition
@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there 
should be at least
 three (preferably) identical servers for the setup.
 
 Check also the recommendations from
-http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's 
website].
+https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's
 website].
 
 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -237,7 +237,7 @@ configuration file.
 Ceph Monitor
 ---
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you
 used the installation wizard. You won't need more than 3 monitors as long
@@ -284,7 +284,7 @@ Ceph Manager
 
 The Manager daemon runs alongside the monitors, providing an interface for
 monitoring the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon 
is
 required.
 
 Creating Manager
@@ -479,7 +479,7 @@ mark the checkbox "Add storages" in the GUI or use the 
command line option
 
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-http://docs.ceph.com/docs/luminous/rados/operations/pools/]
+https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
 manual.
 
 
@@ -515,7 +515,7 @@ advantage that no central index service is needed. CRUSH 
works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.
 
 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map 
http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map 
https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
 
 This map can be altered to reflect different replication hierarchies. The 
object
 replicas can be separated (eg. failure domains), while maintaining the desired
@@ -661,7 +661,7 @@ Since Luminous (12.2.x) you can also have multiple active 
metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
+daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]
 
 [[pveceph_fs_create]]
 Creating CephFS
@@ -693,7 +693,7 @@ This creates a CephFS named `'cephfs'' using a pool for its 
data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
+https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
 
@@ -775,7 +775,7 @@ object 

[pve-devel] [PATCH docs v2 00/10] pveceph: update doc

2019-11-06 Thread Alwin Antreich
In patch 10 I have added an attribute to asciidoc-pve.conf to replace
Ceph's codename. I hope this is the right location for this.

Changes V1 -> V2:
Added corrections and suggestions from Aaron.
Doesn't ship the applied patch "pveceph: old style commands to subcommands"

Review and suggestions are very welcome. Thanks. :)

Alwin Antreich (10):
  pveceph: add section - Destroying Ceph OSDs
  pveceph: add section - Destroying Ceph Monitor
  pveceph: add Ceph Monitor screenshot
  pveceph: add section - Destroying Ceph Manager
  pveceph: add section - Destroying Ceph Pools
  pveceph: switch note for Creating Ceph Manager
  pveceph: Reorganize TOC for new sections
  pveceph: correct CephFS subtitle
  Fix #1958: pveceph: add section Ceph maintenance
  pveceph: add attribute ceph_codename

 pveceph.adoc  | 252 ++
 asciidoc/asciidoc-pve.conf|   1 +
 .../screenshot/gui-ceph-manager-destroy.png   | Bin 0 -> 153596 bytes
 images/screenshot/gui-ceph-manager.png| Bin 0 -> 153389 bytes
 .../screenshot/gui-ceph-monitor-destroy.png   | Bin 0 -> 154084 bytes
 images/screenshot/gui-ceph-osd-destroy.png| Bin 0 -> 146184 bytes
 images/screenshot/gui-ceph-pools-destroy.png  | Bin 0 -> 141532 bytes
 7 files changed, 205 insertions(+), 48 deletions(-)
 create mode 100644 images/screenshot/gui-ceph-manager-destroy.png
 create mode 100644 images/screenshot/gui-ceph-manager.png
 create mode 100644 images/screenshot/gui-ceph-monitor-destroy.png
 create mode 100644 images/screenshot/gui-ceph-osd-destroy.png
 create mode 100644 images/screenshot/gui-ceph-pools-destroy.png

-- 
2.20.1




[pve-devel] [PATCH docs v2 09/10] Fix #1958: pveceph: add section Ceph maintenance

2019-11-06 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 55 
 1 file changed, 55 insertions(+)

diff --git a/pveceph.adoc b/pveceph.adoc
index 66ea111..0d62943 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -331,6 +331,7 @@ network. It is recommended to use one OSD per physical disk.
 
 NOTE: By default an object is 4 MiB in size.
 
+[[pve_ceph_osd_create]]
 Create OSDs
 ~~~
 
@@ -407,6 +408,7 @@ Starting with Ceph Nautilus, {pve} does not support 
creating such OSDs with
 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
 
 
+[[pve_ceph_osd_destroy]]
 Destroy OSDs
 
 
@@ -721,6 +723,59 @@ pveceph pool destroy NAME
 
 
 
+Ceph maintenance
+
+Replace OSDs
+
+One of the common maintenance tasks in Ceph is to replace a disk of an OSD. If
+a disk is already in a failed state, then you can go ahead and run through the
+steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate those
+copies on the remaining OSDs if possible.
+
+To replace a still functioning disk, on the GUI go through the steps in
+xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
+the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
+
+On the command line use the following commands.
+
+ceph osd out osd.<id>
+
+
+You can check with the command below if the OSD can be safely removed.
+
+ceph osd safe-to-destroy osd.<id>
+
+
+Once the above check tells you that it is save to remove the OSD, you can
+continue with following commands.
+
+systemctl stop ceph-osd@<id>.service
+pveceph osd destroy <id>
+
+
+Replace the old disk with the new one and use the same procedure as described
+in xref:pve_ceph_osd_create[Create OSDs].
+
+NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
+`size + 1` nodes are available.
+
+Run fstrim (discard)
+
+It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
+This releases data blocks that the filesystem isn’t using anymore. It reduces
+data usage and the resource load.
+
+Scrub & Deep Scrub
+~~
+Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
+object in a PG for its health. There are two forms of Scrubbing, daily
+(metadata compare) and weekly. The weekly reads the objects and uses checksums
+to ensure data integrity. If a running scrub interferes with business needs,
+you can adjust the time when scrubs footnote:[Ceph scrubbing
+https://docs.ceph.com/docs/nautilus/rados/configuration/osd-config-ref/#scrubbing]
+are executed.
+
+
 Ceph monitoring and troubleshooting
 ---
 A good start is to continuosly monitor the ceph health from the start of
-- 
2.20.1




[pve-devel] [PATCH docs v2 08/10] pveceph: correct CephFS subtitle

2019-11-06 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index e97e2e6..66ea111 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -659,8 +659,8 @@ refer to the ceph documentation. footnote:[Configuring 
multiple active MDS
 daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
 
 [[pveceph_fs_create]]
-Create a CephFS
-~~~
+Create CephFS
+~
 
 With {pve}'s CephFS integration into you can create a CephFS easily over the
 Web GUI, the CLI or an external API interface. Some prerequisites are required
-- 
2.20.1




[pve-devel] [PATCH docs v2 07/10] pveceph: Reorganize TOC for new sections

2019-11-06 Thread Alwin Antreich
Put the previously added sections into subsections for a better outline
of the TOC.

With the rearrangement of the first-level titles to second level, the
general description of a service needs to move under the new first-level
title. This also adds/corrects some statements in those descriptions.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 95 +---
 1 file changed, 53 insertions(+), 42 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index dbfe909..e97e2e6 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -212,8 +212,8 @@ This sets up an `apt` package repository in
 `/etc/apt/sources.list.d/ceph.list` and installs the required software.
 
 
-Creating initial Ceph configuration

+Create initial Ceph configuration
+-
 
 [thumbnail="screenshot/gui-ceph-config.png"]
 
@@ -234,11 +234,8 @@ configuration file.
 
 
 [[pve_ceph_monitors]]
-Creating Ceph Monitors
---
-
-[thumbnail="screenshot/gui-ceph-monitor.png"]
-
+Ceph Monitor
+---
 The Ceph Monitor (MON)
 footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
@@ -247,6 +244,12 @@ used the installation wizard. You won't need more than 3 
monitors as long
 as your cluster is small to midsize, only really large clusters will
 need more than that.
 
+
+Create Monitors
+~~~
+
+[thumbnail="screenshot/gui-ceph-monitor.png"]
+
 On each node where you want to place a monitor (three monitors are 
recommended),
 create it by using the 'Ceph -> Monitor' tab in the GUI or run.
 
@@ -256,12 +259,9 @@ create it by using the 'Ceph -> Monitor' tab in the GUI or 
run.
 pveceph mon create
 
 
-This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
-do not want to install a manager, specify the '-exclude-manager' option.
 
-
-Destroying Ceph Monitor
---
+Destroy Monitors
+
 
 [thumbnail="screenshot/gui-ceph-monitor-destroy.png"]
 
@@ -280,16 +280,19 @@ NOTE: At least three Monitors are needed for quorum.
 
 
 [[pve_ceph_manager]]
-Creating Ceph Manager
---
+Ceph Manager
+
+The Manager daemon runs alongside the monitors. It provides an interface to
+monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
+footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+required.
+
+Create Manager
+~~
 
 [thumbnail="screenshot/gui-ceph-manager.png"]
 
-The Manager daemon runs alongside the monitors, providing an interface for
-monitoring the cluster. Since the Ceph luminous release the
-ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
-is required. During monitor installation the ceph manager will be installed as
-well.
+Multiple Managers can be installed, but at any time only one Manager is active.
 
 [source,bash]
 
@@ -300,8 +303,8 @@ NOTE: It is recommended to install the Ceph Manager on the 
monitor nodes. For
 high availability install more then one manager.
 
 
-Destroying Ceph Manager
---
+Destroy Manager
+~~~
 
 [thumbnail="screenshot/gui-ceph-manager-destroy.png"]
 
@@ -321,8 +324,15 @@ the cluster status or usage require a running Manager.
 
 
 [[pve_ceph_osds]]
-Creating Ceph OSDs
---
+Ceph OSDs
+-
+Ceph **O**bject **S**torage **D**aemons are storing objects for Ceph over the
+network. It is recommended to use one OSD per physical disk.
+
+NOTE: By default an object is 4 MiB in size.
+
+Create OSDs
+~~~
 
 [thumbnail="screenshot/gui-ceph-osd-status.png"]
 
@@ -333,8 +343,8 @@ via GUI or via CLI as follows:
 pveceph osd create /dev/sd[X]
 
 
-TIP: We recommend a Ceph cluster size, starting with 12 OSDs, distributed 
evenly
-among your, at least three nodes (4 OSDs on each node).
+TIP: We recommend a Ceph cluster size, starting with 12 OSDs, distributed
+evenly among your, at least three nodes (4 OSDs on each node).
 
 If the disk was used before (eg. ZFS/RAID/OSD), to remove partition table, boot
 sector and any OSD leftover the following command should be sufficient.
@@ -346,8 +356,7 @@ ceph-volume lvm zap /dev/sd[X] --destroy
 
 WARNING: The above command will destroy data on the disk!
 
-Ceph Bluestore
-~~
+.Ceph Bluestore
 
 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
 introduced, the so called Bluestore
@@ -362,8 +371,8 @@ pveceph osd create /dev/sd[X]
 .Block.db and block.wal
 
 If you want to use a separate DB/WAL device for your OSDs, you can specify it
-through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, 
if not
-specified separately.
+through the '-db_dev' and 

[pve-devel] [PATCH docs v2 06/10] pveceph: switch note for Creating Ceph Manager

2019-11-06 Thread Alwin Antreich
to be more consistent with other sections, the note for creating the
Ceph Manager was moved below the command.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 5933cc8..dbfe909 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -291,14 +291,14 @@ ceph-mgr footnote:[Ceph Manager 
http://docs.ceph.com/docs/luminous/mgr/] daemon
 is required. During monitor installation the ceph manager will be installed as
 well.
 
-NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
-high availability install more then one manager.
-
 [source,bash]
 
 pveceph mgr create
 
 
+NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
+high availability install more then one manager.
+
 
 Destroying Ceph Manager
 --
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs v2 10/10] pveceph: add attribute ceph_codename

2019-11-06 Thread Alwin Antreich
To change the codename for Ceph in one place, the patch adds the
asciidoc attribute 'ceph_codename'. It replaces the outdated references to
Luminous and switches the links in pveceph.adoc from http to https.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc   | 30 +++---
 asciidoc/asciidoc-pve.conf |  1 +
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 0d62943..68a4e8a 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -58,15 +58,15 @@ and VMs on the same node is possible.
 To simplify management, we provide 'pveceph' - a tool to install and
 manage {ceph} services on {pve} nodes.
 
-.Ceph consists of a couple of Daemons footnote:[Ceph intro 
http://docs.ceph.com/docs/luminous/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro 
https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as a RBD 
storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)
 
 TIP: We highly recommend to get familiar with Ceph's architecture
-footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
+footnote:[Ceph architecture 
https://docs.ceph.com/docs/{ceph_codename}/architecture/]
 and vocabulary
-footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].
+footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].
 
 
 Precondition
@@ -76,7 +76,7 @@ To build a hyper-converged Proxmox + Ceph Cluster there 
should be at least
 three (preferably) identical servers for the setup.
 
 Check also the recommendations from
-http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's 
website].
+https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's
 website].
 
 .CPU
 Higher CPU core frequency reduce latency and should be preferred. As a simple
@@ -237,7 +237,7 @@ configuration file.
 Ceph Monitor
 ---
 The Ceph Monitor (MON)
-footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
+footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
 maintains a master copy of the cluster map. For high availability you need to
 have at least 3 monitors. One monitor will already be installed if you
 used the installation wizard. You won't need more than 3 monitors as long
@@ -284,7 +284,7 @@ Ceph Manager
 
 The Manager daemon runs alongside the monitors. It provides an interface to
 monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
-footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon 
is
 required.
 
 Create Manager
@@ -361,7 +361,7 @@ WARNING: The above command will destroy data on the disk!
 
 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
 introduced, the so called Bluestore
-footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
+footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
 This is the default when creating OSDs since Ceph Luminous.
 
 [source,bash]
@@ -460,7 +460,7 @@ NOTE: The default number of PGs works for 2-5 disks. Ceph 
throws a
 
 It is advised to calculate the PG number depending on your setup, you can find
 the formula and the PG calculator footnote:[PG calculator
-http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
+https://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
 never be decreased.
 
 
@@ -478,7 +478,7 @@ mark the checkbox "Add storages" in the GUI or use the 
command line option
 
 Further information on Ceph pool handling can be found in the Ceph pool
 operation footnote:[Ceph pool operation
-http://docs.ceph.com/docs/luminous/rados/operations/pools/]
+https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
 manual.
 
 
@@ -512,7 +512,7 @@ advantage that no central index service is needed. CRUSH 
works with a map of
 OSDs, buckets (device locations) and rulesets (data replication) for pools.
 
 NOTE: Further information can be found in the Ceph documentation, under the
-section CRUSH map footnote:[CRUSH map 
http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].
+section CRUSH map footnote:[CRUSH map 
https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].
 
 This map can be altered to reflect different replication hierarchies. The 
object
 replicas can be separated (eg. failure domains), while maintaining the desired
@@ -658,7 +658,7 @@ Since Luminous (12.2.x) you can also have multiple active 
metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active M

Re: [pve-devel] [PATCH docs] qm: spice foldersharing: Add experimental warning

2019-11-06 Thread Alwin Antreich
On Wed, Nov 06, 2019 at 03:20:59PM +0100, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer 
> ---
>  qm.adoc | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 9ee4460..c0fe892 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -856,6 +856,8 @@ Select the folder to share and then enable the checkbox.
>  
>  NOTE: Folder sharing currently only works in the Linux version of 
> Virt-Viewer.
>  
> +CAUTION: Experimental! This feature does not work reliably.
Maybe use a s/reliably/reliably yet/ to indicate that this might change
in the future?


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs] qm: spice foldersharing: Add experimental warning

2019-11-06 Thread Alwin Antreich
On Wed, Nov 06, 2019 at 03:45:26PM +0100, Aaron Lauterer wrote:
> Hmm, What about:
> 
> Currently this feature does not work reliably.
> 
> 
> On 11/6/19 3:29 PM, Alwin Antreich wrote:
> > On Wed, Nov 06, 2019 at 03:20:59PM +0100, Aaron Lauterer wrote:
> > > Signed-off-by: Aaron Lauterer 
> > > ---
> > >   qm.adoc | 2 ++
> > >   1 file changed, 2 insertions(+)
> > > 
> > > diff --git a/qm.adoc b/qm.adoc
> > > index 9ee4460..c0fe892 100644
> > > --- a/qm.adoc
> > > +++ b/qm.adoc
> > > @@ -856,6 +856,8 @@ Select the folder to share and then enable the 
> > > checkbox.
> > >   NOTE: Folder sharing currently only works in the Linux version of 
> > > Virt-Viewer.
> > > +CAUTION: Experimental! This feature does not work reliably.
> > Maybe use a s/reliably/reliably yet/ to indicate that this might change
> > in the future?
+1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] Fix: pveceph: broken ref anchor pveceph_mgr_create

2019-11-07 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index ef257ac..99c610a 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -286,7 +286,7 @@ monitor the cluster. Since the Ceph luminous release at 
least one ceph-mgr
 footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon 
is
 required.
 
-[i[pveceph_create_mgr]]
+[[pveceph_create_mgr]]
 Create Manager
 ~~
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] Fix: pveceph: spelling in section Trim/Discard

2019-11-07 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 99c610a..122f063 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -762,9 +762,9 @@ Trim/Discard
 
 It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
 This releases data blocks that the filesystem isn’t using anymore. It reduces
-data usage and the resource load. Most modern operating systems issue such
-discard commands to their disks regurarly. You only need to ensure that the
-Virtual Machines enable the xref:qm_hard_disk_discard[disk discard option].
+data usage and resource load. Most modern operating systems issue such discard
+commands to their disks regularly. You only need to ensure that the Virtual
+Machines enable the xref:qm_hard_disk_discard[disk discard option].
 
 [[pveceph_scrub]]
 Scrub & Deep Scrub
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager] ceph: Create symlink on standalone MGR creation

2019-12-03 Thread Alwin Antreich
Ceph MGR fails to start when installed on a node without an existing
symlink to /etc/pve/ceph.conf.

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MGR.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/Ceph/MGR.pm b/PVE/API2/Ceph/MGR.pm
index d3d86c0d..ffae7495 100644
--- a/PVE/API2/Ceph/MGR.pm
+++ b/PVE/API2/Ceph/MGR.pm
@@ -108,6 +108,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_installed('ceph_mgr');
PVE::Ceph::Tools::check_ceph_inited();
+   PVE::Ceph::Tools::setup_pve_symlinks();
 
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH proxmox-ve] Update kernel links for install CD (rescue boot)

2019-12-03 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
Note: Thanks to Stoiko, who built an ISO to test the patch.
  This works with LVM based installs, but fails currently for ZFS
  with "Compression algorithm inherit not supported. Unable to find
  bootdisk automatically"

 debian/postinst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/debian/postinst b/debian/postinst
index 1e17a89..a0d88f6 100755
--- a/debian/postinst
+++ b/debian/postinst
@@ -7,8 +7,8 @@ case "$1" in
   configure)
 # setup kernel links for installation CD (rescue boot)
 mkdir -p /boot/pve
-ln -sf /boot/pve/vmlinuz-5.0 /boot/pve/vmlinuz
-ln -sf /boot/pve/initrd.img-5.0 /boot/pve/initrd.img
+ln -sf /boot/pve/vmlinuz-5.3 /boot/pve/vmlinuz
+ln -sf /boot/pve/initrd.img-5.3 /boot/pve/initrd.img
 ;;
 esac
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager] API: OSD: Fix #2496 Check OSD Network

2019-12-13 Thread Alwin Antreich
Some comments inline.

On Fri, Dec 13, 2019 at 03:56:42PM +0100, Aaron Lauterer wrote:
> It's possible to have a situation where the cluster network (used for
> inter-OSD traffic) is not configured on a node. The OSD can still be
> created but can't communicate.
> 
> This check will abort the creation if there is no IP within the subnet
> of the cluster network present on the node. If there is no dedicated
> cluster network the public network is used. The chances of that not
> being configured is much lower but better be on the safe side and check
> it if there is no cluster network.
> 
> Signed-off-by: Aaron Lauterer 
> ---
>  PVE/API2/Ceph/OSD.pm | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
> index 5f70cf58..59cc9567 100644
> --- a/PVE/API2/Ceph/OSD.pm
> +++ b/PVE/API2/Ceph/OSD.pm
> @@ -275,6 +275,14 @@ __PACKAGE__->register_method ({
>   # extract parameter info and fail if a device is set more than once
>   my $devs = {};
>  
> + my $ceph_conf = cfs_read_file('ceph.conf');
The public/cluster networks could have been migrated into the MON DB. In
this case they would not appear in the ceph.conf.

ATM this might be unlikely, as there is an ugly warning with every command
execution. But it is still possible.
```
Configuration option 'cluster_network' may not be modified at runtime
```
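If we want to cover that case as well, a fallback could ask the MON config DB
when the option is missing from the parsed file. Only a rough sketch — it
assumes the 'ceph config get' interface is available on the cluster, and real
code would go through PVE::Tools::run_command instead of backticks:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: prefer ceph.conf, fall back to the MON config DB.
# 'ceph config get mon <option>' availability is an assumption here.
sub get_global_net_option {
    my ($ceph_conf, $option) = @_;

    my $value = $ceph_conf->{global}->{$option};
    return $value if defined($value) && length($value);

    my $from_mon_db = `ceph config get mon $option 2>/dev/null`;
    chomp $from_mon_db;
    return length($from_mon_db) ? $from_mon_db : undef;
}
```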

> +
> + # check if network is configured
> + my $osd_network = $ceph_conf->{global}->{cluster_network}
> + // $ceph_conf->{global}->{public_network};
An OSD needs both networks: the public network for communication with the
MONs & clients, and the cluster network for replication. In our default
setup, both are the same network.

I have tested the OSD creation with the cluster network down. During
creation, it only needs the public network to create the OSD on the MON.
But the OSD can't start and therefore isn't placed on the CRUSH map.
Once it can start, it will be added to the correct location on the map.

IMHO, the code needs to check both.
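Roughly along these lines (untested sketch, reusing the helpers the patch
already touches; the variable names are only placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;

use PVE::Cluster qw(cfs_read_file);
use PVE::Network;

# Sketch: require a locally configured address in each OSD network.
# With no dedicated cluster network, only the public network is checked.
my $ceph_conf = cfs_read_file('ceph.conf');
my $public = $ceph_conf->{global}->{public_network};
my $cluster = $ceph_conf->{global}->{cluster_network} // $public;

my %seen;
for my $net (grep { defined($_) && !$seen{$_}++ } ($public, $cluster)) {
    my $local_ips = PVE::Network::get_local_ip_from_cidr($net);
    die "No network interface configured for subnet $net. Check your network config.\n"
        if !scalar(@$local_ips);
}
```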

> + die "No network interface configured for subnet $osd_network. Check ".
> + "your network config.\n" if 
> !@{PVE::Network::get_local_ip_from_cidr($osd_network)};
> +
>   # FIXME: rename params on next API compatibillity change (7.0)
>   $param->{wal_dev_size} = delete $param->{wal_size};
>   $param->{db_dev_size} = delete $param->{db_size};
> @@ -330,7 +338,6 @@ __PACKAGE__->register_method ({
>   my $fsid = $monstat->{monmap}->{fsid};
>  $fsid = $1 if $fsid =~ m/^([0-9a-f\-]+)$/;
>  
> - my $ceph_conf = cfs_read_file('ceph.conf');
>   my $ceph_bootstrap_osd_keyring = 
> PVE::Ceph::Tools::get_config('ceph_bootstrap_osd_keyring');
>  
>   if (! -f $ceph_bootstrap_osd_keyring && 
> $ceph_conf->{global}->{auth_client_required} eq 'cephx') {
> -- 
> 2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH ceph 1/2] fix Ceph version string handling

2020-01-09 Thread Alwin Antreich
Hello Martin,

On Thu, Jan 09, 2020 at 02:23:07PM +0100, Martin Verges wrote:
> Hello Thomas,
> 
> 1. we provide 14.2.5, the latest stable release available (we have upcoming
> > 14.2.6 already on track)
> >
> 
> Good to know, does not seem to be a knowledge that some proxmox users have
> nor was it the case in the past.
I hope this knowledge spreads (it's hardly hidden). ;)
A good point to start from is the release notes and our documentation [0].

I guess the past needs a little clarification (Debian Stretch + Mimic);
Fabian had a discussion upstream [1]. As there was no supported way to
build it available, we decided at that point not to burden our users
with an experimental build of Ceph, which would also have changed
fundamental parts of the OS (eg. glibc).

> 
> 
> > If you have custom patches which improve the experience I'd suggest
> > up-streaming them to Ceph or, if they affect our management tooling for
> > ceph, telling us here or at bugzilla.proxmox.com and/or naturally
> > up-streaming them to PVE.
> >
> 
> As a founding member of the Ceph foundation, we always provide all patches
> to the Ceph upstream and as always they will be included in future releases
> of Ceph or backported to older versions.
Thanks.

> 
> The Ceph integration from a client perspective should work as with every
> > other
> > "external" ceph server setup. IMO, it makes no sense to mix our management
> > interface for Ceph with externally untested builds. We sync releases of
> > Ceph
> > on our side with releases of the management stack, that would be
> > circumvented
> > completely, as would be the testing of the Ceph setup.
> >
> > If people want to use croit that's naturally fine for us, they can use the
> > croit managed ceph cluster within PVE instances as RBD or CephFS client
> > just
> > fine, as it is and was always the case. But, mixing croit packages with PVE
> > management makes not much sense to me, I'm afraid.
> >
> 
> I agree that user should stick to the versions a vendor provides, in your
> case the proxmox Ceph versions. But as I already wrote, we get a lot of
> proxmox users on our table that use proxmox and Ceph and some seem to have
> an issue.
I urge those users to also speak to us. If we don't know about possible
issues, then we can't help.

> 
> As my fix does not affect any proxmox functionality in a negative way, no
> will it break anything. Why would you hesitate to allow users to choose the
> Ceph versions of their liking? It just enables proxmox to don't break on
> such versions.
Proxmox VE's Ceph management is written explicitly for the
hyper-converged use case. This intent binds the management of Ceph to
the Proxmox VE clustered nodes and not to a separate Ceph cluster.

We provide packages specifically tested on Proxmox VE. And for its use
case, as Ceph client or cluster (RBD/CephFS services).

As a user, using packages provided by a third party circumvents our
testing, possibly breaks usage (e.g., API/CLI changes) and, in the end,
the user may be left with an installation in an unknown state.

When you use Proxmox VE as a client, the dashboard (or CLI) should not
be used. Only due to the nature of Ceph's commands does some functionality
work on the dashboard. For sure, this separation could be made
more visible.

I hope this explains why we are currently against applying this
patch of yours.

--
Cheers,
Alwin

[0] https://pve.proxmox.com/wiki/Roadmap

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_package_repositories_ceph
https://pve.proxmox.com/pve-docs/chapter-pveceph.html

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-June/027366.html

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container] Fix: fsck: rbd volume not mapped

2020-01-13 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 98e2c6e..9dee68d 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -247,7 +247,7 @@ __PACKAGE__->register_method ({
die "unable to run fsck for '$volid' (format == $format)\n"
if $format ne 'raw';
 
-   $path = PVE::Storage::path($storage_cfg, $volid);
+   $path = PVE::Storage::map_volume($storage_cfg, $volid);
 
} else {
if (($volid =~ m|^/.+|) && (-b $volid)) {
@@ -264,6 +264,7 @@ __PACKAGE__->register_method ({
die "cannot run fsck on active container\n";
 
PVE::Tools::run_command($command);
+   PVE::Storage::unmap_volume($storage_cfg, $volid);
};
 
PVE::LXC::Config->lock_config($vmid, $do_fsck);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v2] Fix: fsck: rbd volume not mapped

2020-01-17 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
V1 -> V2: run unmap only if it has a storage id.

 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 98e2c6e..ec071c5 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -247,7 +247,7 @@ __PACKAGE__->register_method ({
die "unable to run fsck for '$volid' (format == $format)\n"
if $format ne 'raw';
 
-   $path = PVE::Storage::path($storage_cfg, $volid);
+   $path = PVE::Storage::map_volume($storage_cfg, $volid);
 
} else {
if (($volid =~ m|^/.+|) && (-b $volid)) {
@@ -264,6 +264,7 @@ __PACKAGE__->register_method ({
die "cannot run fsck on active container\n";
 
PVE::Tools::run_command($command);
+   PVE::Storage::unmap_volume($storage_cfg, $volid) if $storage_id;
};
 
PVE::LXC::Config->lock_config($vmid, $do_fsck);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
This seems to me like a totally new attempt, since so much time has passed. :)

Zstandard (zstd) [0] is a data compression algorithm, added in addition to
gzip and lzo for our backup/restore.

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/
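
To make the intended use of the new archive_info() helper easier to review,
here is a rough usage illustration. It is not part of the series; the output
comments assume the pve-storage patches below are applied:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use PVE::Storage;

# Illustrative only: ask the new helper about a backup archive name.
my $archive = 'vzdump-qemu-100-2020_01_31-12_00_00.vma.zst';
my $info = PVE::Storage::archive_info($archive);

print "type: $info->{type}\n";                   # qemu
print "format: $info->{format}\n";               # vma
print "compression: $info->{compression}\n";     # zst
print "decompressor: @{$info->{decompressor}}\n";
```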


__pve-container__

Alwin Antreich (1):
  Fix: #2124 add zstd support

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)


__qemu-server__

Alwin Antreich (2):
  restore: replace archive format/compression regex
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)


__pve-storage__

Alwin Antreich (3):
  backup: more compact regex for backup file filter
  storage: merge archive format/compressor detection
  Fix: #2124 storage: add zstd support

 PVE/Storage.pm| 86 +++
 PVE/Storage/Plugin.pm |  4 +-
 2 files changed, 65 insertions(+), 25 deletions(-)


__pve-guest-common__

Alwin Antreich (1):
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


__pve-manager__

Alwin Antreich (2):
  Fix #2124: Add support for zstd
  Fix #2124: Add zstd pkg as install dependency

 PVE/VZDump.pm| 6 --
 debian/control   | 1 +
 www/manager6/form/CompressionSelector.js | 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)

-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v2] Fix: #2124 add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
v1 -> v2: less code changes for container restores

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index c13f30d..65d5068 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -79,6 +79,7 @@ sub restore_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v2 3/3] Fix: #2124 storage: add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 10 +++---
 PVE/Storage/Plugin.pm |  4 ++--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bf12634..51c8bc9 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -514,7 +514,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!) {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
@@ -1271,7 +1271,7 @@ sub archive_info {
 
 if (!defined($comp) || !defined($format)) {
my $volid = basename($archive);
-   if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/)
 {
+   if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/)
 {
$type = $1;
 
if ($8 eq 'tgz') {
@@ -1295,6 +1295,10 @@ sub archive_info {
'vma' => [ "lzop", "-d", "-c", $archive ],
'tar' => [ "tar", "--lzop", $archive ],
},
+   zst => {
+   'vma' => [ "zstd", "-d", "-c", $archive ],
+   'tar' => [ "tar", "--zstd", $archive ],
+   },
 };
 
 my $info;
@@ -1369,7 +1373,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken 
pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: 
$output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 58a801a..c300c58 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -423,7 +423,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ 
m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo))?$!) {
+} elsif ($volname =~ 
m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo|zst))?$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -910,7 +910,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!;
+   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!;
 
$info = { volid => "$sid:backup/$1", format => $2 };
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v2 1/3] compact regex for backup file filter

2020-01-31 Thread Alwin Antreich
this more compact form of the regex should allow easier addition of new
file extensions.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 2 +-
 PVE/Storage/Plugin.pm | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0bd103e..1688077 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -514,7 +514,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0c39cbd..58a801a 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -423,7 +423,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ 
m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+} elsif ($volname =~ 
m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo))?$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -910,7 +910,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ 
m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!;
 
$info = { volid => "$sid:backup/$1", format => $2 };
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH guest-common v2] Fix: #2124 add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..a661552 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager v2 1/2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
Adds zstd to the compression selection for backup in the GUI and the
.zst extension to the backup file filter.

Signed-off-by: Alwin Antreich 
---

 PVE/VZDump.pm| 6 --
 www/manager6/form/CompressionSelector.js | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 3caa7ab8..21032fd6 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -592,6 +592,8 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   return ('zstd', 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -603,7 +605,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ 
m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!)
 {
+   if ($fn =~ 
m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!)
 {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -863,7 +865,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/www/manager6/form/CompressionSelector.js 
b/www/manager6/form/CompressionSelector.js
index 8938fc0e..e8775e71 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')']
 ]
 });
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v2 2/3] storage: merge archive format/compressor

2020-01-31 Thread Alwin Antreich
detection into a separate function to reduce code duplication and allow
for easier modification.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 78 --
 1 file changed, 57 insertions(+), 21 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1688077..390b343 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1265,6 +1265,52 @@ sub foreach_volid {
 }
 }
 
+sub archive_info {
+my ($archive, $comp, $format) = @_;
+my $type;
+
+if (!defined($comp) || !defined($format)) {
+   my $volid = basename($archive);
+   if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/)
 {
+   $type = $1;
+
+   if ($8 eq 'tgz') {
+   $format = 'tar';
+   $comp = 'gz';
+   } else {
+   $format = $10;
+   $comp = $12 if defined($12);
+   }
+   } else {
+   die "ERROR: couldn't determine format and compression type\n";
+   }
+}
+
+my $decompressor = {
+   gz  => {
+   'vma' => [ "zcat", $archive ],
+   'tar' => [ "tar", "-z", $archive ],
+   },
+   lzo => {
+   'vma' => [ "lzop", "-d", "-c", $archive ],
+   'tar' => [ "tar", "--lzop", $archive ],
+   },
+};
+
+my $info;
+$info->{'format'} = $format;
+$info->{'type'} = $type;
+$info->{'compression'} = $comp;
+
+if (defined($comp) && defined($format)) {
+   my $dcomp = $decompressor->{$comp}->{$format};
+   pop(@$dcomp) if !defined($archive);
+   $info->{'decompressor'} = $dcomp;
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1310,16 +1356,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ $decompressor, ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1360,20 +1402,14 @@ sub extract_vzdump_config {
 my ($cfg, $volid) = @_;
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ 
/vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/)
 {
+if ($type eq 'lxc' || $type eq 'openvz') {
return extract_vzdump_config_tar($archive, 
qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ 
/vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/)
 {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
if ($format eq 'tar') {
return extract_vzdump_config_tar($archive, 
qr!\(\./qemu-server\.conf\)!);
} else {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v2 2/2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ff7dcab..8af1cb6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7217,7 +7217,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
+   next if $item->{format} !~ m/^vma\.(gz|lzo|zst)$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager v2 2/2] Fix #2124: Add zstd pkg as install dependency

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index bcc6bb6e..497395da 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v2 1/2] restore: replace archive regex

2020-01-31 Thread Alwin Antreich
to reduce the code duplication, as archive_info provides the same
information as well.

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7374bf1..ff7dcab 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5879,28 +5879,9 @@ sub tar_restore_cleanup {
 sub restore_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6212,14 +6193,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::archive_info(undef, $comp, 'vma');
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager v2 1/2] Fix #2124: Add support for zstd

2020-02-04 Thread Alwin Antreich
On Mon, Feb 03, 2020 at 05:51:38PM +0100, Stefan Reiter wrote:
> On 1/31/20 5:00 PM, Alwin Antreich wrote:
> > Adds the zstd to the compression selection for backup on the GUI and the
> > .zst extension to the backup file filter.
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> > 
> >   PVE/VZDump.pm| 6 --
> >   www/manager6/form/CompressionSelector.js | 3 ++-
> >   2 files changed, 6 insertions(+), 3 deletions(-)
> > 
> > diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> > index 3caa7ab8..21032fd6 100644
> > --- a/PVE/VZDump.pm
> > +++ b/PVE/VZDump.pm
> > @@ -592,6 +592,8 @@ sub compressor_info {
> > } else {
> > return ('gzip --rsyncable', 'gz');
> > }
> > +} elsif ($opt_compress eq 'zstd') {
> > +   return ('zstd', 'zst');
> 
> Did some testing, two things I noticed, first one regarding this patch
> especially:
> 
> 1) By default zstd uses only one core. I feel like this should be increased
> (or made configurable as with pigz?). Also, zstd has an '--rsyncable flag'
> like gzip, might be good to include that too (according to the man page it
> only has a 'negligible impact on compression ratio').
Thanks for spotting, I put this into my v3.
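For the record, the rough direction (sketch only; the option name and the
default for the thread count are assumptions, not the final patch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the zstd branch; a thread-count option analogous to the
# existing 'pigz' one is assumed and not part of the current series.
sub compressor_info_sketch {
    my ($opt_compress, $zstd_threads) = @_;
    $zstd_threads //= 1;

    return ('lzop', 'lzo') if $opt_compress eq 'lzo';
    return ('gzip --rsyncable', 'gz') if $opt_compress eq 'gzip';
    if ($opt_compress eq 'zstd') {
        # '--rsyncable' should only have a negligible impact on the ratio
        return ("zstd --rsyncable --threads=${zstd_threads}", 'zst');
    }
    die "internal error - unknown compression option '$opt_compress'";
}

my ($cmd, $ext) = compressor_info_sketch('zstd', 4);
print "$cmd -> .$ext\n";    # zstd --rsyncable --threads=4 -> .zst
```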

> 
> 2) The task log is somewhat messed up... It seems zstd prints a status as
> well, additionally to our own progress meter:
True, I will silence the output. In turn, this also makes it similar to
the lzo/gzip compression output.

> 
> 
> _03-17_05_09.vma.zst : 13625 MB... progress 94% (read 32298172416 bytes,
> duration 34 sec)
> 
> _03-17_05_09.vma.zst : 13668 MB...
> _03-17_05_09.vma.zst : 13721 MB...
> _03-17_05_09.vma.zst : 13766 MB...
> _03-17_05_09.vma.zst : 13821 MB...
> _03-17_05_09.vma.zst : 13869 MB...
> _03-17_05_09.vma.zst : 13933 MB... progress 95% (read 32641777664 bytes,
> duration 35 sec)
> 
> _03-17_05_09.vma.zst : 14014 MB...
> _03-17_05_09.vma.zst : 14091 MB...
> 
> 
> Looks a bit unsightly IMO.
> 
> But functionality wise it works fine, tried with a VM and a container, so
> 
> Tested-by: Stefan Reiter 
Thanks for testing.

> 
> for the series.
> 
> >   } else {
> > die "internal error - unknown compression option '$opt_compress'";
> >   }
> > @@ -603,7 +605,7 @@ sub get_backup_file_list {
> >   my $bklist = [];
> >   foreach my $fn (<$dir/${bkname}-*>) {
> > next if $exclude_fn && $fn eq $exclude_fn;
> > -   if ($fn =~ 
> > m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!)
> >  {
> > +   if ($fn =~ 
> > m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!)
> >  {
> > $fn = "$dir/$1"; # untaint
> > my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
> > push @$bklist, [$fn, $t];
> > @@ -863,7 +865,7 @@ sub exec_backup_task {
> > debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
> > unlink $d->[0];
> > my $logfn = $d->[0];
> > -   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
> > +   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/\.log/;
> > unlink $logfn;
> > }
> > }
> > diff --git a/www/manager6/form/CompressionSelector.js 
> > b/www/manager6/form/CompressionSelector.js
> > index 8938fc0e..e8775e71 100644
> > --- a/www/manager6/form/CompressionSelector.js
> > +++ b/www/manager6/form/CompressionSelector.js
> > @@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
> >   comboItems: [
> >   ['0', Proxmox.Utils.noneText],
> >   ['lzo', 'LZO (' + gettext('fast') + ')'],
> > -['gzip', 'GZIP (' + gettext('good') + ')']
> > +['gzip', 'GZIP (' + gettext('good') + ')'],
> > +['zstd', 'ZSTD (' + gettext('better') + ')']
> >   ]
> >   });
> > 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2020-02-05 Thread Alwin Antreich
On Thu, Nov 14, 2019 at 03:01:37PM +0100, Thomas Lamprecht wrote:
> On 11/14/19 6:30 AM, Dietmar Maurer wrote:
> >> The main reason for this is to identify backups residing on an old backup 
> >> store like an archive.
> >>  
> >> But I am open. Would you prefer having a manifest included in the archive 
> >> or as a separate file on the same storage?
> > 
> > The backup archive already contains the full VM config. I thought the 
> > manifest should be
> > an extra file on the same storage.
> > 
> 
> An idea for the backup note/description feature request is to have
> a simple per backup file where that info is saved, having the same
> base name as the backup archive and the log, so those can easily get
> moved/copied around all at once by using an extension glob for the
> file ending.
> 
> Simple manifest works too, needs to always have the cluster storage
> lock though, whereas a per backup file could do with a vmid based one
> (finer granularity avoids lock contention). Also it makes it less easier
> to copy a whole archive to another storage/folder.
If I didn't miss an email, then this feature request (#438 [0]) seems to
be still open (I'm the assignee).

In which direction should this feature go? Per backup manifest?

Or maybe extending the vzdump CLI with an info command that displays
some information parsed from the backup logfile itself, since the VM/CT
name is already in the log. Would that be a possibility too?

Example from backup logfiles:
```
2020-02-04 15:58:55 INFO: VM Name: testvm
2020-01-13 15:39:35 INFO: CT Name: test
```
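A minimal sketch of what such an info command could extract (illustration
only, not a final interface):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: pull the guest name out of a vzdump log, based on the
# 'VM Name:' / 'CT Name:' lines shown above.
sub backup_guest_name {
    my ($logfile) = @_;

    open(my $fh, '<', $logfile) or die "unable to open '$logfile' - $!\n";
    while (defined(my $line = <$fh>)) {
        if ($line =~ m/INFO:\s+(?:VM|CT) Name:\s+(\S+)\s*$/) {
            close($fh);
            return $1;
        }
    }
    close($fh);
    return undef;
}

my $logfile = shift @ARGV or die "usage: $0 <vzdump-task-log>\n";
print backup_guest_name($logfile) // 'unknown', "\n";
```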

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=438

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] pveceph: reword memory precondition

2020-02-17 Thread Alwin Antreich
and add the memory target for OSDs (included since Luminous), as well as
distinguish the memory usage between the OSD backends.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index b3bbadf..8dc8568 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -86,9 +86,16 @@ provide enough resources for stable and durable Ceph 
performance.
 .Memory
 Especially in a hyper-converged setup, the memory consumption needs to be
 carefully monitored. In addition to the intended workload from virtual machines
-and container, Ceph needs enough memory available to provide good and stable
-performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
-will be used by an OSD. OSD caching will use additional memory.
+and containers, Ceph needs enough memory available to provide excellent and
+stable performance.
+
+As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
+by an OSD. Especially during recovery, rebalancing or backfilling.
+
+The daemon itself will use additional memory. The Bluestore backend of the
+daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
+legacy Filestore backend uses the OS page cache and the memory consumption is
+generally related to PGs of an OSD daemon.
 
 .Network
 We recommend a network bandwidth of at least 10 GbE or more, which is used
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 2/3] Fix: ceph: mon_address not considered by new MON

2020-03-11 Thread Alwin Antreich
The public_addr option for creating a new MON is only valid for manual
startup (since Ceph Jewel) and is just ignored by ceph-mon during setup.
As the MON is started after creation through systemd without an IP
specified, it tries to auto-select an IP.

Before this patch the public_addr was only explicitly written to the
ceph.conf if no public_network was set. The mon_address is only needed
in the config on the first start of the MON.

The ceph-mon itself tries to select an IP under the following conditions:
- no public_network or public_addr is in the ceph.conf
* startup fails

- public_network is in the ceph.conf
* with a single network, take the first available IP
* on multiple networks, walk through the list in order and start on
  the first network where an IP is found

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MON.pm | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 18b563c9..3baeac52 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -255,7 +255,7 @@ __PACKAGE__->register_method ({
run_command("monmaptool --create --clobber --addv 
$monid '[v2:$monaddr:3300,v1:$monaddr:6789]' --print $monmap");
}
 
-   run_command("ceph-mon --mkfs -i $monid --monmap $monmap 
--keyring $mon_keyring --public-addr $ip");
+   run_command("ceph-mon --mkfs -i $monid --monmap $monmap 
--keyring $mon_keyring");
run_command("chown ceph:ceph -R $mondir");
};
my $err = $@;
@@ -275,11 +275,8 @@ __PACKAGE__->register_method ({
}
$monhost .= " $ip";
$cfg->{global}->{mon_host} = $monhost;
-   if (!defined($cfg->{global}->{public_network})) {
-   # if there is no info about the public_network
-   # we have to set it explicitly for the monitor
-   $cfg->{$monsection}->{public_addr} = $ip;
-   }
+   # The IP is needed in the ceph.conf for the first boot
+   $cfg->{$monsection}->{public_addr} = $ip;
 
cfs_write_file('ceph.conf', $cfg);
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 3/3] Fix #2422: allow multiple Ceph public networks

2020-03-11 Thread Alwin Antreich
Multiple public networks can be defined in the ceph.conf. The networks
need to be routed to each other.

On first service start the Ceph MON will register itself with one of the
IPs configured locally, matching one of the public networks defined in
the ceph.conf.

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MON.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 3baeac52..5128fea2 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -33,11 +33,19 @@ my $find_mon_ip = sub {
 }
 $pubnet //= $cfg->{global}->{public_network};
 
+my $public_nets = [ PVE::Tools::split_list($pubnet) ];
+warn "Multiple ceph public networks detected on $node: $pubnet\n".
+"Networks must be capable of routing to each other.\n" if 
scalar(@$public_nets) > 1;
+
 if (!$pubnet) {
return $overwrite_ip // PVE::Cluster::remote_node_ip($node);
 }
 
-my $allowed_ips = PVE::Network::get_local_ip_from_cidr($pubnet);
+my $allowed_ips;
+foreach my $net (@$public_nets) {
+push @$allowed_ips, @{ PVE::Network::get_local_ip_from_cidr($net) };
+}
+
 die "No active IP found for the requested ceph public network '$pubnet' on 
node '$node'\n"
if scalar(@$allowed_ips) < 1;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/3] ceph: remove unused variable assignment

2020-03-11 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Ceph/Services.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index c17008cf..7015cafe 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -63,7 +63,6 @@ sub get_cluster_service {
 sub ceph_service_cmd {
 my ($action, $service) = @_;
 
-my $pve_ceph_cfgpath = PVE::Ceph::Tools::get_config('pve_ceph_cfgpath');
 if ($service && $service =~ 
m/^(mon|osd|mds|mgr|radosgw)(\.(${\SERVICE_REGEX}))?$/) {
$service = defined($3) ? "ceph-$1\@$3" : "ceph-$1.target";
 } else {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 1/2] Disable memory hotplugging for custom NUMA topologies

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:44PM +0100, Stefan Reiter wrote:
> This cannot work, since we adjust the 'memory' property of the VM config
> on hotplugging, but then the user-defined NUMA topology won't match for
> the next start attempt.
> 
> Check needs to happen here, since it otherwise fails early with "total
> memory for NUMA nodes must be equal to vm static memory".
> 
> With this change the error message reflects what is actually happening
> and doesn't allow VMs with exactly 1GB of RAM either.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> Came up after investigating:
> https://forum.proxmox.com/threads/task-error-total-memory-for-numa-nodes-must-be-equal-to-vm-static-memory.67251/
> 
> Spent way too much time 'fixing' it before realizing that it can never work
> like this anyway...
> 
>  PVE/QemuServer/Memory.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index d500b3b..ae9598b 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -225,6 +225,12 @@ sub config {
>  if ($hotplug_features->{memory}) {
>   die "NUMA needs to be enabled for memory hotplug\n" if !$conf->{numa};
>   die "Total memory is bigger than ${MAX_MEM}MB\n" if $memory > $MAX_MEM;
> +
> + for (my $i = 0; $i < $MAX_NUMA; $i++) {
> + die "cannot enable memory hotplugging with custom NUMA topology\n"
s/hotplugging/hotplug/ or s/hotplugging/hot plugging/
The word hotplugging doesn't seem to exist in the dictionaries.

> + if $conf->{"numa$i"};
> + }
> +
>   my $sockets = 1;
>   $sockets = $conf->{sockets} if $conf->{sockets};
>  
> -- 
> 2.25.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 2/2] Die on misaligned memory for hotplugging

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:45PM +0100, Stefan Reiter wrote:
> ...instead of booting with an invalid config once and then silently
> changing the memory size for consequent VM starts.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> This confused me for a bit, I don't think that's very nice behaviour as it
> stands.
> 
>  PVE/QemuServer/Memory.pm | 7 ++-
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index ae9598b..b7cf5d5 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -321,11 +321,8 @@ sub config {
>   push @$cmd, "-object" , $mem_object;
>   push @$cmd, "-device", 
> "pc-dimm,id=$name,memdev=mem-$name,node=$numanode";
>  
> - #if dimm_memory is not aligned to dimm map
> - if($current_size > $memory) {
> -  $conf->{memory} = $current_size;
> -  PVE::QemuConfig->write_config($vmid, $conf);
> - }
> + die "memory size ($memory) must be aligned to $dimm_size for 
> hotplugging\n"
same nit as in my mail to patch 1/2

> + if $current_size > $memory;
>   });
>  }
>  }
> -- 
> 2.25.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Patch about Intel processors flaw

2018-01-05 Thread Alwin Antreich
Hello Gilberto,

On Thu, Jan 04, 2018 at 03:36:36PM -0200, Gilberto Nunes wrote:
> Hi list
>
> Is there any patch to PVE Kernel about Intel processors flaw??
Please, follow up on the thread on our forum.
https://forum.proxmox.com/threads/fuckwit-kaiser-kpti.39025/

>
>
> ---
> Gilberto Ferreira
>
> (47) 3025-5907
> (47) 99676-7530
>
> Skype: gilberto.nunes36
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
---
Cheers,
Alwin

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 3/3] Fix typo in sub s/krdb_feature_disable/krbd_feature_disable

2018-02-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index ef9fc4a..83c1924 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -101,7 +101,7 @@ my $rados_mon_cmd = sub {
 };
 
 # needed for volumes created using ceph jewel (or higher)
-my $krdb_feature_disable = sub {
+my $krbd_feature_disable = sub {
 my ($scfg, $storeid, $name) = @_;
 
 return 1 if !$scfg->{krbd};
@@ -459,7 +459,7 @@ sub clone_image {
 
 run_rbd_command($cmd, errmsg => "rbd clone '$basename' error");
 
-&$krdb_feature_disable($scfg, $storeid, $name);
+&$krbd_feature_disable($scfg, $storeid, $name);
 
 return $newvol;
 }
@@ -476,7 +476,7 @@ sub alloc_image {
 my $cmd = &$rbd_cmd($scfg, $storeid, 'create', '--image-format' , 2, 
'--size', int(($size+1023)/1024), $name);
 run_rbd_command($cmd, errmsg => "rbd create $name' error");
 
-&$krdb_feature_disable($scfg, $storeid, $name);
+&$krbd_feature_disable($scfg, $storeid, $name);
 
 return $name;
 }
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 2/3] krbd: remove 'exclusive-lock' from blacklist, kernel-4.13

2018-02-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 804dded..ef9fc4a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -109,7 +109,7 @@ my $krdb_feature_disable = sub {
 my ($major, undef, undef, undef) = ceph_version();
 return 1 if $major < 10;
 
-my $krbd_feature_blacklist = ['deep-flatten', 'fast-diff', 'object-map', 
'exclusive-lock'];
+my $krbd_feature_blacklist = ['deep-flatten', 'fast-diff', 'object-map'];
 my (undef, undef, undef, undef, $features) = rbd_volume_info($scfg, 
$storeid, $name);
 
 my $active_features = { map { $_ => 1 } PVE::Tools::split_list($features)};
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 1/3] Fix #1574: could not disable krbd-incompatible image features

2018-02-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2ca14ef..804dded 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -109,8 +109,16 @@ my $krdb_feature_disable = sub {
 my ($major, undef, undef, undef) = ceph_version();
 return 1 if $major < 10;
 
-my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', $name, 
'deep-flatten,fast-diff,object-map,exclusive-lock');
-run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+my $krbd_feature_blacklist = ['deep-flatten', 'fast-diff', 'object-map', 
'exclusive-lock'];
+my (undef, undef, undef, undef, $features) = rbd_volume_info($scfg, 
$storeid, $name);
+
+my $active_features = { map { $_ => 1 } PVE::Tools::split_list($features)};
+my $incompatible_features = join(',', grep { %$active_features{$_} } 
@$krbd_feature_blacklist);
+
+if ($incompatible_features) {
+   my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', 
$name, $incompatible_features);
+   run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+}
 };
 
 my $ceph_version_parser = sub {
@@ -221,6 +229,7 @@ sub rbd_volume_info {
 my $parent = undef;
 my $format = undef;
 my $protected = undef;
+my $features = undef;
 
 my $parser = sub {
my $line = shift;
@@ -233,13 +242,15 @@ sub rbd_volume_info {
$format = $1;
} elsif ($line =~ m/protected:\s(\S+)/) {
$protected = 1 if $1 eq "True";
-   }
+   } elsif ($line =~ m/features:\s(.+)/) {
+   $features = $1;
+   }
 
 };
 
 run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => 
$parser);
 
-return ($size, $parent, $format, $protected);
+return ($size, $parent, $format, $protected, $features);
 }
 
 # Configuration
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v2] Fix #1574: could not disable krbd-incompatible image features

2018-03-01 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2ca14ef..06e0a0a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -109,8 +109,16 @@ my $krdb_feature_disable = sub {
 my ($major, undef, undef, undef) = ceph_version();
 return 1 if $major < 10;

-my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', $name, 
'deep-flatten,fast-diff,object-map,exclusive-lock');
-run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+my $krbd_feature_blacklist = ['deep-flatten', 'fast-diff', 'object-map', 
'exclusive-lock'];
+my (undef, undef, undef, undef, $features) = rbd_volume_info($scfg, 
$storeid, $name);
+
+my $active_features = { map { $_ => 1 } PVE::Tools::split_list($features)};
+my $incompatible_features = join(',', grep { %$active_features{$_} } 
@$krbd_feature_blacklist);
+
+if ($incompatible_features) {
+   my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', 
$name, $incompatible_features);
+   run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+}
 };

 my $ceph_version_parser = sub {
@@ -221,6 +229,7 @@ sub rbd_volume_info {
 my $parent = undef;
 my $format = undef;
 my $protected = undef;
+my $features = undef;

 my $parser = sub {
my $line = shift;
@@ -233,13 +242,15 @@ sub rbd_volume_info {
$format = $1;
} elsif ($line =~ m/protected:\s(\S+)/) {
$protected = 1 if $1 eq "True";
+   } elsif ($line =~ m/features:\s(.+)/) {
+   $features = $1;
}

 };

 run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => 
$parser);

-return ($size, $parent, $format, $protected);
+return ($size, $parent, $format, $protected, $features);
 }

 # Configuration
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] NACK: [PATCH storage 2/3] krbd: remove 'exclusive-lock' from blacklist, kernel-4.13

2018-03-01 Thread Alwin Antreich
On Thu, Mar 01, 2018 at 10:59:15AM +0100, Fabian Grünbichler wrote:
> this is problematic because it potentially hides bugs in our application
> logic, with currently no benefit.
>
> with exclusive-locks disabled, mapping on multiple hosts is possible,
> but mounting the same image is not (e.g., when attempting to mount, all
> but the first successful node will fail).
>
> with exclusive-locks enabled, mapping and accessing/mounting is
> sometimes possible (it seems a bit racy?), but since ext4 is not a
> cluster FS, this will cause undesired behaviour / inconsistencies /
> corruption.
>
> OTOH, with exclusive-locks enabled we would have the option of exclusive
> mapping - if we find a way to make this work with Qemu live-migration it
> might solve all of our problems
>
> TL;DR: I think we should postpone this pending further investigations
> into potential pros and cons
>
I did some testing on this topic. Qemu dies when live migrating, as rbd
cannot get the lock on the new destination. An offline migration is
possible.

I guess, if we want to use it, then we either need to get qemu to work
differently on migration or only allow offline migration when
exclusive mapping is requested.
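
To illustrate the second option (rough sketch, the option name is made up and
this is not a worked-out patch):

    # refuse a live migration when the image is mapped with an exclusive lock
    die "online migration not possible with exclusive krbd mapping\n"
        if $scfg->{'krbd-exclusive'} && $running;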

I am up for discussion.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 2/2 v2] Fix typo in sub s/krdb_feature_disable/krbd_feature_disable

2018-03-02 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 06e0a0a..9e0e720 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -101,7 +101,7 @@ my $rados_mon_cmd = sub {
 };

 # needed for volumes created using ceph jewel (or higher)
-my $krdb_feature_disable = sub {
+my $krbd_feature_disable = sub {
 my ($scfg, $storeid, $name) = @_;

 return 1 if !$scfg->{krbd};
@@ -459,7 +459,7 @@ sub clone_image {

 run_rbd_command($cmd, errmsg => "rbd clone '$basename' error");

-&$krdb_feature_disable($scfg, $storeid, $name);
+&$krbd_feature_disable($scfg, $storeid, $name);

 return $newvol;
 }
@@ -476,7 +476,7 @@ sub alloc_image {
 my $cmd = &$rbd_cmd($scfg, $storeid, 'create', '--image-format' , 2, 
'--size', int(($size+1023)/1024), $name);
 run_rbd_command($cmd, errmsg => "rbd create $name' error");

-&$krdb_feature_disable($scfg, $storeid, $name);
+&$krbd_feature_disable($scfg, $storeid, $name);

 return $name;
 }
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 1/2 v2] Fix #1574: could not disable krbd-incompatible image features

2018-03-02 Thread Alwin Antreich
Prevent an error when disabling features of an rbd image that already has
these flags disabled. This aborted the CT/VM cloning halfway through, with
a leftover rbd image but no vmid.conf for it.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2ca14ef..06e0a0a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -109,8 +109,16 @@ my $krdb_feature_disable = sub {
 my ($major, undef, undef, undef) = ceph_version();
 return 1 if $major < 10;

-my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', $name, 
'deep-flatten,fast-diff,object-map,exclusive-lock');
-run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+my $krbd_feature_blacklist = ['deep-flatten', 'fast-diff', 'object-map', 
'exclusive-lock'];
+my (undef, undef, undef, undef, $features) = rbd_volume_info($scfg, 
$storeid, $name);
+
+my $active_features = { map { $_ => 1 } PVE::Tools::split_list($features)};
+my $incompatible_features = join(',', grep { %$active_features{$_} } 
@$krbd_feature_blacklist);
+
+if ($incompatible_features) {
+   my $feature_cmd = &$rbd_cmd($scfg, $storeid, 'feature', 'disable', 
$name, $incompatible_features);
+   run_rbd_command($feature_cmd, errmsg => "could not disable 
krbd-incompatible image features of rbd volume $name");
+}
 };

 my $ceph_version_parser = sub {
@@ -221,6 +229,7 @@ sub rbd_volume_info {
 my $parent = undef;
 my $format = undef;
 my $protected = undef;
+my $features = undef;

 my $parser = sub {
my $line = shift;
@@ -233,13 +242,15 @@ sub rbd_volume_info {
$format = $1;
} elsif ($line =~ m/protected:\s(\S+)/) {
$protected = 1 if $1 eq "True";
+   } elsif ($line =~ m/features:\s(.+)/) {
+   $features = $1;
}

 };

 run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => 
$parser);

-return ($size, $parent, $format, $protected);
+return ($size, $parent, $format, $protected, $features);
 }

 # Configuration
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] NACK: [PATCH storage 2/3] krbd: remove 'exclusive-lock' from blacklist, kernel-4.13

2018-03-05 Thread Alwin Antreich
On Thu, Mar 01, 2018 at 04:23:32PM +0100, Alwin Antreich wrote:
> On Thu, Mar 01, 2018 at 10:59:15AM +0100, Fabian Grünbichler wrote:
> > this is problematic because it potentially hides bugs in our application
> > logic, with currently no benefit.
> >
> > with exclusive-locks disabled, mapping on multiple hosts is possible,
> > but mounting the same image is not (e.g., when attempting to mount, all
> > but the first successful node will fail).
> >
> > with exclusive-locks enabled, mapping and accessing/mounting is
> > sometimes possible (it seems a bit racy?), but since ext4 is not a
> > cluster FS, this will cause undesired behaviour / inconsistencies /
> > corruption.
> >
> > OTOH, with exclusive-locks enabled we would have the option of exclusive
> > mapping - if we find a way to make this work with Qemu live-migration it
> > might solve all of our problems
> >
> > TL;DR: I think we should postpone this pending further investigations
> > into potential pros and cons
> >
> I did some testing on this topic. Qemu dies when live migrating, as rbd
> cannot get the lock on the new destination. An offline migration is
> possible.
>
> I guess, if we want to use it, then we either need to get qemu to work
> differently on migration or only allow offline migration when
> exclusive mapping is requested.
>
> I am up for discussion.
>
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
To be revisited once the kernel rbd client is able to downgrade/upgrade the
exclusive lock.

https://bugzilla.proxmox.com/show_bug.cgi?id=1686

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container 2/2] Remove obsolete read from storage.cfg in vm_start api call

2018-03-09 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/API2/LXC/Status.pm | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 976f264..b98dc24 100644
--- a/src/PVE/API2/LXC/Status.pm
+++ b/src/PVE/API2/LXC/Status.pm
@@ -185,8 +185,6 @@ __PACKAGE__->register_method({
 
}
 
-   my $storage_cfg = cfs_read_file("storage.cfg");
-
PVE::LXC::vm_start($vmid, $conf, $skiplock);
 
return;
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container 1/2] Fix #1547: on migration abort, the CT starts again

2018-03-09 Thread Alwin Antreich
When a migration fails, the final_cleanup phase now starts the container
on the source node again, if it was a migration in restart_mode and the
CT was running.

Signed-off-by: Alwin Antreich 
---
 src/PVE/API2/LXC/Status.pm |  8 +---
 src/PVE/LXC.pm | 14 ++
 src/PVE/LXC/Migrate.pm |  7 +++
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 39882e2..976f264 100644
--- a/src/PVE/API2/LXC/Status.pm
+++ b/src/PVE/API2/LXC/Status.pm
@@ -187,13 +187,7 @@ __PACKAGE__->register_method({
 
my $storage_cfg = cfs_read_file("storage.cfg");
 
-   PVE::LXC::update_lxc_config($vmid, $conf);
-
-   local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
-
-   my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];
-
-   run_command($cmd);
+   PVE::LXC::vm_start($vmid, $conf, $skiplock);
 
return;
};
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 80d79e1..1b0da19 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1515,6 +1515,20 @@ sub userns_command {
 return [];
 }
 
+sub vm_start {
+my ($vmid, $conf, $skiplock) = @_;
+
+update_lxc_config($vmid, $conf);
+
+local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+
+my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];
+
+PVE::Tools::run_command($cmd);
+
+return;
+}
+
 # Helper to stop a container completely and make sure it has stopped 
completely.
 # This is necessary because we want the post-stop hook to have completed its
 # unmount-all step, but post-stop happens after lxc puts the container into the
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index ee78a5f..dfe8f55 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -354,6 +354,13 @@ sub final_cleanup {
if (my $err = $@) {
$self->log('err', $err);
}
+   # in restart mode, we start the container on the source node
+   # on migration error
+   if ($self->{opts}->{restart} && $self->{was_running}) {
+   $self->log('info', "start container on source node");
+   my $skiplock = 1;
+   PVE::LXC::vm_start($vmid, $self->{vmconf}, $skiplock);
+   }
 } else {
my $cmd = [ @{$self->{rem_ssh}}, 'pct', 'unlock', $vmid ];
$self->cmd_logerr($cmd, errmsg => "failed to clear migrate lock");
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container] Fix pct skiplock

2018-03-13 Thread Alwin Antreich
The method vm_start sets an environment variable that is not picked up
anymore by systemd. This patch keeps the environment variable and
introduces a skiplock file that is picked up by the
lxc-pve-prestart-hook.

Signed-off-by: Alwin Antreich 
---
 src/PVE/LXC.pm| 9 -
 src/lxc-pve-prestart-hook | 5 -
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7adbcd1..2e3e4ca 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1545,7 +1545,14 @@ sub vm_start {
 
 update_lxc_config($vmid, $conf);
 
-local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+if ($skiplock) {
+   # to stay compatible with old behaviour
+   local $ENV{PVE_SKIPLOCK}=1;
+
+   my $file = "/var/lib/lxc/$vmid/skiplock";
+   open(my $fh, '>', $file) || die "failed to open $file for writing: 
$!\n";
+   close($fh);
+}
 
 my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];
 
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index fd29423..abe61aa 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -57,13 +57,16 @@ __PACKAGE__->register_method ({
return undef if $param->{name} !~ m/^\d+$/;
 
my $vmid = $param->{name};
+   my $file = "/var/lib/lxc/$vmid/skiplock";
+   my $skiplock = $ENV{PVE_SKIPLOCK} || 1 if -e $file;
+   unlink $file if -e $file;
 
PVE::Cluster::check_cfs_quorum(); # only start if we have quorum
 
return undef if ! -f PVE::LXC::Config->config_file($vmid);
 
my $conf = PVE::LXC::Config->load_config($vmid);
-   if (!$ENV{PVE_SKIPLOCK} && !PVE::LXC::Config->has_lock($conf, 
'mounted')) {
+   if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
PVE::LXC::Config->check_lock($conf);
}
 
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v2] Fix pct skiplock

2018-03-14 Thread Alwin Antreich
The method vm_start sets an environment variable that is not picked up
anymore by systemd. This patch removes the environment variable and
introduces a skiplock file that is picked up by the
lxc-pve-prestart-hook.

Signed-off-by: Alwin Antreich 
---
note: after discussion with Fabian, I removed the ENV variable. But I left the
path of the skiplock file as it is, because we use it in other places in the
code too. I also added a cleanup step in case the container start fails.

 src/PVE/LXC.pm| 27 ++-
 src/lxc-pve-prestart-hook |  5 -
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7adbcd1..acb5cfd 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -393,7 +393,7 @@ sub update_lxc_config {

 # some init scripts expect a linux terminal (turnkey).
 $raw .= "lxc.environment = TERM=linux\n";
-
+
 my $utsname = $conf->{hostname} || "CT$vmid";
 $raw .= "lxc.uts.name = $utsname\n";

@@ -454,7 +454,7 @@ sub update_lxc_config {
}
$raw .= "lxc.cgroup.cpuset.cpus = ".$cpuset->short_string()."\n";
 }
-
+
 File::Path::mkpath("$dir/rootfs");

 PVE::Tools::file_set_contents("$dir/config", $raw);
@@ -566,7 +566,7 @@ sub destroy_lxc_container {

 sub vm_stop_cleanup {
 my ($storage_cfg, $vmid, $conf, $keepActive) = @_;
-
+
 eval {
if (!$keepActive) {

@@ -1172,19 +1172,19 @@ sub mountpoint_mount {
 my $type = $mountpoint->{type};
 my $quota = !$snapname && !$mountpoint->{ro} && $mountpoint->{quota};
 my $mounted_dev;
-
+
 return if !$volid || !$mount;

 $mount =~ s!/+!/!g;

 my $mount_path;
 my ($mpfd, $parentfd, $last_dir);
-
+
 if (defined($rootdir)) {
($rootdir, $mount_path, $mpfd, $parentfd, $last_dir) =
__mount_prepare_rootdir($rootdir, $mount);
 }
-
+
 my ($storage, $volname) = PVE::Storage::parse_volume_id($volid, 1);

 die "unknown snapshot path for '$volid'" if !$storage && 
defined($snapname);
@@ -1288,7 +1288,7 @@ sub mountpoint_mount {
warn "cannot enable quota control for bind mounts\n" if $quota;
return wantarray ? ($volid, 0, undef) : $volid;
 }
-
+
 die "unsupported storage";
 }

@@ -1545,11 +1545,20 @@ sub vm_start {

 update_lxc_config($vmid, $conf);

-local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+my $skiplock_flag_fn = "/var/lib/lxc/$vmid/skiplock";
+
+if ($skiplock) {
+   open(my $fh, '>', $skiplock_flag_fn) || die "failed to open 
$skiplock_flag_fn for writing: $!\n";
+   close($fh);
+}

 my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];

-PVE::Tools::run_command($cmd);
+eval { PVE::Tools::run_command($cmd); };
+if (my $err = $@) {
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;
+   die $err if $err;
+}

 return;
 }
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index fd29423..eba23ee 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -57,13 +57,16 @@ __PACKAGE__->register_method ({
return undef if $param->{name} !~ m/^\d+$/;

my $vmid = $param->{name};
+   my $skiplock_flag_fn = "/var/lib/lxc/$vmid/skiplock";
+   my $skiplock = 1 if -e $skiplock_flag_fn;
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;

PVE::Cluster::check_cfs_quorum(); # only start if we have quorum

return undef if ! -f PVE::LXC::Config->config_file($vmid);

my $conf = PVE::LXC::Config->load_config($vmid);
-   if (!$ENV{PVE_SKIPLOCK} && !PVE::LXC::Config->has_lock($conf, 
'mounted')) {
+   if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
PVE::LXC::Config->check_lock($conf);
}

--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v3] Fix pct skiplock

2018-03-14 Thread Alwin Antreich
The method vm_start sets an environment variable that is not picked up
anymore by systemd. This patch removes the environment variable and
introduces a skiplock file that is picked up by the
lxc-pve-prestart-hook.

Signed-off-by: Alwin Antreich 
---
note: after discussion with Fabian, I removed the ENV variable. But I left the
path of the skiplock file as it is, because we use it in other places in the
code too. I also added a cleanup step in case the container start fails.

 src/PVE/LXC.pm| 13 +++--
 src/lxc-pve-prestart-hook |  5 -
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7adbcd1..acb5cfd 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1545,11 +1545,20 @@ sub vm_start {

 update_lxc_config($vmid, $conf);

-local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+my $skiplock_flag_fn = "/var/lib/lxc/$vmid/skiplock";
+
+if ($skiplock) {
+   open(my $fh, '>', $skiplock_flag_fn) || die "failed to open 
$skiplock_flag_fn for writing: $!\n";
+   close($fh);
+}

 my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];

-PVE::Tools::run_command($cmd);
+eval { PVE::Tools::run_command($cmd); };
+if (my $err = $@) {
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;
+   die $err if $err;
+}

 return;
 }
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index fd29423..eba23ee 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -57,13 +57,16 @@ __PACKAGE__->register_method ({
return undef if $param->{name} !~ m/^\d+$/;

my $vmid = $param->{name};
+   my $skiplock_flag_fn = "/var/lib/lxc/$vmid/skiplock";
+   my $skiplock = 1 if -e $skiplock_flag_fn;
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;

PVE::Cluster::check_cfs_quorum(); # only start if we have quorum

return undef if ! -f PVE::LXC::Config->config_file($vmid);

my $conf = PVE::LXC::Config->load_config($vmid);
-   if (!$ENV{PVE_SKIPLOCK} && !PVE::LXC::Config->has_lock($conf, 
'mounted')) {
+   if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
PVE::LXC::Config->check_lock($conf);
}

--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v4] Fix pct skiplock

2018-03-14 Thread Alwin Antreich
The method vm_start sets an environment variable that is not picked up
anymore by systemd. This patch removes the environment variable and
introduces a skiplock file that is picked up by the
lxc-pve-prestart-hook.

Signed-off-by: Alwin Antreich 
---
note: this version changes the path of the skiplock file to /run/lxc/.

 src/PVE/LXC.pm| 13 +++--
 src/lxc-pve-prestart-hook |  5 -
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7adbcd1..4398cfd 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1545,11 +1545,20 @@ sub vm_start {

 update_lxc_config($vmid, $conf);

-local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
+
+if ($skiplock) {
+   open(my $fh, '>', $skiplock_flag_fn) || die "failed to open 
$skiplock_flag_fn for writing: $!\n";
+   close($fh);
+}

 my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];

-PVE::Tools::run_command($cmd);
+eval { PVE::Tools::run_command($cmd); };
+if (my $err = $@) {
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;
+   die $err if $err;
+}

 return;
 }
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index fd29423..79297da 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -57,13 +57,16 @@ __PACKAGE__->register_method ({
return undef if $param->{name} !~ m/^\d+$/;

my $vmid = $param->{name};
+   my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
+   my $skiplock = 1 if -e $skiplock_flag_fn;
+   unlink $skiplock_flag_fn if -e $skiplock_flag_fn;

PVE::Cluster::check_cfs_quorum(); # only start if we have quorum

return undef if ! -f PVE::LXC::Config->config_file($vmid);

my $conf = PVE::LXC::Config->load_config($vmid);
-   if (!$ENV{PVE_SKIPLOCK} && !PVE::LXC::Config->has_lock($conf, 
'mounted')) {
+   if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
PVE::LXC::Config->check_lock($conf);
}

--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v5] Fix pct skiplock

2018-03-14 Thread Alwin Antreich
The method vm_start sets an environment variable that is not picked up
anymore by systemd. This patch removes the environment variable and
introduces a skiplock file that is picked up by the
lxc-pve-prestart-hook.

Signed-off-by: Alwin Antreich 
---
note: made changes according to Dietmar's comments
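
For context, a minimal sketch of how a caller (e.g. the restart-mode
migration path) is expected to use this; error handling omitted:

    my $conf = PVE::LXC::Config->load_config($vmid);
    # skiplock = 1 writes /run/lxc/skiplock-$vmid, which the
    # lxc-pve-prestart-hook picks up (and removes) to skip the lock check
    PVE::LXC::vm_start($vmid, $conf, 1);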

 src/PVE/LXC.pm| 13 +++--
 src/lxc-pve-prestart-hook |  5 -
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7adbcd1..2a3950c 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1545,11 +1545,20 @@ sub vm_start {

 update_lxc_config($vmid, $conf);

-local $ENV{PVE_SKIPLOCK}=1 if $skiplock;
+my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
+
+if ($skiplock) {
+   open(my $fh, '>', $skiplock_flag_fn) || die "failed to open 
$skiplock_flag_fn for writing: $!\n";
+   close($fh);
+}

 my $cmd = ['systemctl', 'start', "pve-container\@$vmid"];

-PVE::Tools::run_command($cmd);
+eval { PVE::Tools::run_command($cmd); };
+if (my $err = $@) {
+   unlink $skiplock_flag_fn;
+   die $err if $err;
+}

 return;
 }
diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index fd29423..61a8ef3 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -57,13 +57,16 @@ __PACKAGE__->register_method ({
return undef if $param->{name} !~ m/^\d+$/;

my $vmid = $param->{name};
+   my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
+   my $skiplock = 1 if -e $skiplock_flag_fn;
+   unlink $skiplock_flag_fn if $skiplock;

PVE::Cluster::check_cfs_quorum(); # only start if we have quorum

return undef if ! -f PVE::LXC::Config->config_file($vmid);

my $conf = PVE::LXC::Config->load_config($vmid);
-   if (!$ENV{PVE_SKIPLOCK} && !PVE::LXC::Config->has_lock($conf, 
'mounted')) {
+   if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
PVE::LXC::Config->check_lock($conf);
}

--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container 2/2] Addition to #1544, implement delete lock in lxc api path

2018-03-16 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
note: not sure if there needs to be some extra handling for deleting the lock
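
For illustration, the intended way to trigger this would be something like
(hypothetical example, vmid 100):

    pct set 100 --delete lock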

 src/PVE/LXC/Config.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 7b27be1..08139df 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -870,6 +870,8 @@ sub update_pct_config {
}
} elsif ($opt eq 'unprivileged') {
die "unable to delete read-only option: '$opt'\n";
+   } elsif ($opt eq 'lock') {
+   delete $conf->{$opt};
} else {
die "implement me (delete: $opt)"
}
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container 1/2] Fix #1544: add skiplock to lxc api path

2018-03-16 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/API2/LXC/Config.pm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index 2b622b3..2d69049 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -80,6 +80,7 @@ __PACKAGE__->register_method({
{
node => get_standard_option('pve-node'),
vmid => get_standard_option('pve-vmid', { completion => 
\&PVE::LXC::complete_ctid }),
+   skiplock => get_standard_option('skiplock'),
delete => {
type => 'string', format => 'pve-configid-list',
description => "A list of settings you want to delete.",
@@ -107,6 +108,10 @@ __PACKAGE__->register_method({
 
my $digest = extract_param($param, 'digest');
 
+   my $skiplock = extract_param($param, 'skiplock');
+   raise_param_exc({ skiplock => "Only root may use this option." })
+   if $skiplock && $authuser ne 'root@pam';
+
die "no options specified\n" if !scalar(keys %$param);
 
my $delete_str = extract_param($param, 'delete');
@@ -155,7 +160,7 @@ __PACKAGE__->register_method({
my $code = sub {
 
my $conf = PVE::LXC::Config->load_config($vmid);
-   PVE::LXC::Config->check_lock($conf);
+   PVE::LXC::Config->check_lock($conf) if !$skiplock;
 
PVE::Tools::assert_if_modified($digest, $conf->{digest});
 
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH librados2-perl] Split method pve_rados_connect

2018-03-30 Thread Alwin Antreich
To be able to connect through librados2 without a config file, the
method pve_rados_connect is split up into pve_rados_connect and
pve_rados_conf_read_file.

Signed-off-by: Alwin Antreich 
---
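note: with this change the connection setup inside PVE::RADOS::new() becomes,
roughly, the following sequence (simplified sketch of the hunk below):

    my $conn = pve_rados_create() || die "unable to create RADOS object\n";
    pve_rados_conf_read_file($conn, $ceph_conf) if -e $ceph_conf;
    pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
    pve_rados_connect($conn);
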
 PVE/RADOS.pm |  9 -
 RADOS.xs | 26 +-
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aa6a102..ad1c2db 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -1,6 +1,6 @@
 package PVE::RADOS;
 
-use 5.014002;
+use 5.014002; # FIXME: update version??
 use strict;
 use warnings;
 use Carp;
@@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
 require Exporter;
 
 my $rados_default_timeout = 5;
+my $ceph_default_conf = '/etc/ceph/ceph.conf';
 
 
 our @ISA = qw(Exporter);
@@ -164,6 +165,12 @@ sub new {
$conn = pve_rados_create() ||
die "unable to create RADOS object\n";
 
+   my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
+
+   if (-e $ceph_conf) {
+   pve_rados_conf_read_file($conn, $ceph_conf);
+   }
+
pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
 
foreach my $k (keys %params) {
diff --git a/RADOS.xs b/RADOS.xs
index a9f6bc3..ad3cf96 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -47,19 +47,35 @@ CODE:
 }
 
 void
-pve_rados_connect(cluster) 
+pve_rados_conf_read_file(cluster, path)
 rados_t cluster
-PROTOTYPE: $
+SV *path
+PROTOTYPE: $$
 CODE:
 {
-DPRINTF("pve_rados_connect\n");
+char *p = NULL;
 
-int res = rados_conf_read_file(cluster, NULL);
+if (SvOK(path)) {
+   p = SvPV_nolen(path);
+}
+
+DPRINTF("pve_rados_conf_read_file %s\n", p);
+
+int res = rados_conf_read_file(cluster, p);
 if (res < 0) {
 die("rados_conf_read_file failed - %s\n", strerror(-res));
 }
+}
+
+void
+pve_rados_connect(cluster)
+rados_t cluster
+PROTOTYPE: $
+CODE:
+{
+DPRINTF("pve_rados_connect\n");
 
-res = rados_connect(cluster);
+int res = rados_connect(cluster);
 if (res < 0) {
 die("rados_connect failed - %s\n", strerror(-res));
 }
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH librados2-perl] Split method pve_rados_connect

2018-04-04 Thread Alwin Antreich
On Tue, Apr 03, 2018 at 02:13:18PM +0200, Dietmar Maurer wrote:
> comments inline
>
> > On March 30, 2018 at 12:25 PM Alwin Antreich  wrote:
> >
> >
> > To be able to connect through librados2 without a config file, the
> > method pve_rados_connect is split up into pve_rados_connect and
> > pve_rados_conf_read_file.
> >
> > Signed-off-by: Alwin Antreich 
> > ---
> >  PVE/RADOS.pm |  9 -
> >  RADOS.xs | 26 +-
> >  2 files changed, 29 insertions(+), 6 deletions(-)
> >
> > diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
> > index aa6a102..ad1c2db 100644
> > --- a/PVE/RADOS.pm
> > +++ b/PVE/RADOS.pm
> > @@ -1,6 +1,6 @@
> >  package PVE::RADOS;
> >
> > -use 5.014002;
> > +use 5.014002; # FIXME: update version??
>
> why this FIXME?
On Debian Stretch there is a newer perl version, v5.24.1; I thought that we
might want to change it.

>
> >  use strict;
> >  use warnings;
> >  use Carp;
> > @@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
> >  require Exporter;
> >
> >  my $rados_default_timeout = 5;
> > +my $ceph_default_conf = '/etc/ceph/ceph.conf';
> >
> >
> >  our @ISA = qw(Exporter);
> > @@ -164,6 +165,12 @@ sub new {
> > $conn = pve_rados_create() ||
> > die "unable to create RADOS object\n";
> >
> > +   my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
> > +
> > +   if (-e $ceph_conf) {
> > +   pve_rados_conf_read_file($conn, $ceph_conf);
> > +   }
> > +
>
> What if $params{ceph_conf} is set, but file does not exist? IMHO this should
> raise an error
> instead of using the default?
This surely needs handling, but I would prefer a warning: when all other
keys for the connection are available in %params, the connection could
still be made and default values would be used. If other keys were
missing, rados_connect would die later on anyway.
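
For illustration, a rough sketch of the warn-and-continue variant I have in
mind (untested):

    my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
    if (-e $ceph_conf) {
        pve_rados_conf_read_file($conn, $ceph_conf);
    } else {
        warn "ceph config '$ceph_conf' not found, relying on the remaining parameters\n";
    }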

>
> > pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
> >
> > foreach my $k (keys %params) {
> > diff --git a/RADOS.xs b/RADOS.xs
> > index a9f6bc3..ad3cf96 100644
> > --- a/RADOS.xs
> > +++ b/RADOS.xs
> > @@ -47,19 +47,35 @@ CODE:
> >  }
> >
> >  void
> > -pve_rados_connect(cluster)
> > +pve_rados_conf_read_file(cluster, path)
> >  rados_t cluster
> > -PROTOTYPE: $
> > +SV *path
> > +PROTOTYPE: $$
> >  CODE:
> >  {
> > -DPRINTF("pve_rados_connect\n");
> > +char *p = NULL;
> >
> > -int res = rados_conf_read_file(cluster, NULL);
> > +if (SvOK(path)) {
> > +   p = SvPV_nolen(path);
> > +}
> > +
> > +DPRINTF("pve_rados_conf_read_file %s\n", p);
> > +
> > +int res = rados_conf_read_file(cluster, p);
>
>
> I thought we only want to call this if p != NULL ?
I kept this to stay with the default behaviour of ceph: if there
is no config, then ceph searches
- $CEPH_CONF (environment variable)
- /etc/ceph/ceph.conf
- ~/.ceph/config
- ceph.conf (in the current working directory)

Currently our code also expects the config under
/etc/ceph/ceph.conf; I tried to keep the behaviour similar to that.

>
> >  if (res < 0) {
> >  die("rados_conf_read_file failed - %s\n", strerror(-res));
> >  }
> > +}
> > +
> > +void
> > +pve_rados_connect(cluster)
> > +rados_t cluster
> > +PROTOTYPE: $
> > +CODE:
> > +{
> > +DPRINTF("pve_rados_connect\n");
> >
> > -res = rados_connect(cluster);
> > +int res = rados_connect(cluster);
> >  if (res < 0) {
> >  die("rados_connect failed - %s\n", strerror(-res));
> >  }
> > --
> > 2.11.0
> >
> >
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH librados2-perl] Split method pve_rados_connect

2018-04-04 Thread Alwin Antreich
On Tue, Apr 03, 2018 at 10:25:53AM +0200, Thomas Lamprecht wrote:
>
> Am 03/30/2018 um 12:25 PM schrieb Alwin Antreich:
> > To be able to connect through librados2 without a config file, the
> > method pve_rados_connect is split up into pve_rados_connect and
> > pve_rados_conf_read_file.
> >
> > Signed-off-by: Alwin Antreich 
> > ---
> >   PVE/RADOS.pm |  9 -
> >   RADOS.xs | 26 +-
> >   2 files changed, 29 insertions(+), 6 deletions(-)
> >
> > diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
> > index aa6a102..ad1c2db 100644
> > --- a/PVE/RADOS.pm
> > +++ b/PVE/RADOS.pm
> > @@ -1,6 +1,6 @@
> >   package PVE::RADOS;
> > -use 5.014002;
> > +use 5.014002; # FIXME: update version??
> >   use strict;
> >   use warnings;
> >   use Carp;
> > @@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
> >   require Exporter;
> >   my $rados_default_timeout = 5;
> > +my $ceph_default_conf = '/etc/ceph/ceph.conf';
> >   our @ISA = qw(Exporter);
> > @@ -164,6 +165,12 @@ sub new {
> > $conn = pve_rados_create() ||
> > die "unable to create RADOS object\n";
> > +   my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
> > +
> > +   if (-e $ceph_conf) {
> > +   pve_rados_conf_read_file($conn, $ceph_conf);
> > +   }
> > +
> > pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
> > foreach my $k (keys %params) {
> > diff --git a/RADOS.xs b/RADOS.xs
> > index a9f6bc3..ad3cf96 100644
> > --- a/RADOS.xs
> > +++ b/RADOS.xs
> > @@ -47,19 +47,35 @@ CODE:
>
> This whole hunk does not apply here...
> A quick look gave me one whitespace problem (see below), but that alone did
> not fix it for me...
> Are you sure you sent all commits between this and origin/master ?
>
> git log origin/master..
I will add a whitespace cleanup before my v2 of this patch, as it seems
not to apply without it.

>
>
> >   }
> >   void
> > -pve_rados_connect(cluster)
> > +pve_rados_conf_read_file(cluster, path)
> >   rados_t cluster
> > -PROTOTYPE: $
> > +SV *path
> > +PROTOTYPE: $$
> >   CODE:
> >   {
> > -DPRINTF("pve_rados_connect\n");
> > +char *p = NULL;
> > -int res = rados_conf_read_file(cluster, NULL);
> > +if (SvOK(path)) {
> > +   p = SvPV_nolen(path);
> > +}
> > +
> > +DPRINTF("pve_rados_conf_read_file %s\n", p);
> > +
> > +int res = rados_conf_read_file(cluster, p);
> >   if (res < 0) {
> >   die("rados_conf_read_file failed - %s\n", strerror(-res));
> >   }
> > +}
> > +
> > +void
> > +pve_rados_connect(cluster)
> > +rados_t cluster
> > +PROTOTYPE: $
> > +CODE:
> > +{
> > +DPRINTF("pve_rados_connect\n");
>
> The empty line above contains a trailing whitespace in origin/master, which
> your patch does not contain.
see above

>
> > -res = rados_connect(cluster);
> > +int res = rados_connect(cluster);
> >   if (res < 0) {
> >   die("rados_connect failed - %s\n", strerror(-res));
> >   }
>

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 2/3] Split method pve_rados_connect

2018-04-04 Thread Alwin Antreich
To be able to connect through librados2 without a config file, the
method pve_rados_connect is split up into pve_rados_connect and
pve_rados_conf_read_file.

Signed-off-by: Alwin Antreich 
---
changes from v1 -> v2:
 - die if the supplied ceph config in %params does not exist
 - removed the FIXME, as it stands for the minimal perl version needed
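
With this, passing a config path that does not exist now fails early, e.g.
(illustrative):

    PVE::RADOS->new(ceph_conf => '/etc/pve/priv/ceph/missing.conf');
    # dies with: Supplied ceph config doesn't exist, ...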

 PVE/RADOS.pm | 11 +++
 RADOS.xs | 26 +-
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aff8141..2ed92b7 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
 require Exporter;

 my $rados_default_timeout = 5;
+my $ceph_default_conf = '/etc/ceph/ceph.conf';


 our @ISA = qw(Exporter);
@@ -164,6 +165,16 @@ sub new {
$conn = pve_rados_create() ||
die "unable to create RADOS object\n";

+   if (defined($params{ceph_conf}) && (!-e $params{ceph_conf})) {
+   die "Supplied ceph config doesn't exist, $params{ceph_conf}";
+   }
+
+   my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
+
+   if (-e $ceph_conf) {
+   pve_rados_conf_read_file($conn, $ceph_conf);
+   }
+
pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);

foreach my $k (keys %params) {
diff --git a/RADOS.xs b/RADOS.xs
index 66fa65a..ad3cf96 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -47,19 +47,35 @@ CODE:
 }

 void
-pve_rados_connect(cluster)
+pve_rados_conf_read_file(cluster, path)
 rados_t cluster
-PROTOTYPE: $
+SV *path
+PROTOTYPE: $$
 CODE:
 {
-DPRINTF("pve_rados_connect\n");
+char *p = NULL;

-int res = rados_conf_read_file(cluster, NULL);
+if (SvOK(path)) {
+   p = SvPV_nolen(path);
+}
+
+DPRINTF("pve_rados_conf_read_file %s\n", p);
+
+int res = rados_conf_read_file(cluster, p);
 if (res < 0) {
 die("rados_conf_read_file failed - %s\n", strerror(-res));
 }
+}
+
+void
+pve_rados_connect(cluster)
+rados_t cluster
+PROTOTYPE: $
+CODE:
+{
+DPRINTF("pve_rados_connect\n");

-res = rados_connect(cluster);
+int res = rados_connect(cluster);
 if (res < 0) {
 die("rados_connect failed - %s\n", strerror(-res));
 }
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 1/3] white space cleanup

2018-04-04 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
note: it seems this one is needed for the following patches to apply

 PVE/RADOS.pm | 12 ++--
 RADOS.xs | 32 
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aa6a102..aff8141 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -43,7 +43,7 @@ my $writedata = sub {
 my ($fh, $cmd, $data) = @_;

 local $SIG{PIPE} = 'IGNORE';
-
+
 my $bin = pack "a L/a*", $cmd, $data || '';
 my $res = syswrite $fh, $bin;

@@ -63,7 +63,7 @@ my $readdata = sub {
 return undef if $allow_eof && length($head) == 0;

 die "partial read\n" if length($head) < 5;
-
+
 my ($cmd, $len) = unpack "a L", $head;

 my $data = '';
@@ -86,7 +86,7 @@ my $kill_worker = sub {
 close($self->{child}) if defined($self->{child});

 # only kill if we created the process
-return if $self->{pid} != $$;
+return if $self->{pid} != $$;

 kill(9, $self->{cpid});
 waitpid($self->{cpid}, 0);
@@ -140,7 +140,7 @@ sub new {

 if ($cpid) { # parent
close $parent;
-
+
$self->{cpid} = $cpid;
$self->{child} = $child;

@@ -182,7 +182,7 @@ sub new {

for (;;) {
my ($cmd, $data) = &$readdata($parent, 1);
-
+
last if !$cmd || $cmd eq 'Q';

my $res;
@@ -203,7 +203,7 @@ sub new {
}
&$writedata($parent, '>', $res);
}
-
+
exit(0);
 }

diff --git a/RADOS.xs b/RADOS.xs
index a9f6bc3..66fa65a 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -13,14 +13,14 @@

 MODULE = PVE::RADOSPACKAGE = PVE::RADOS

-rados_t
-pve_rados_create()
+rados_t
+pve_rados_create()
 PROTOTYPE:
 CODE:
-{
-rados_t clu = NULL;
+{
+rados_t clu = NULL;
 int ret = rados_create(&clu, NULL);
-
+
 if (ret == 0)
 RETVAL = clu;
 else {
@@ -31,7 +31,7 @@ CODE:
 OUTPUT: RETVAL

 void
-pve_rados_conf_set(cluster, key, value)
+pve_rados_conf_set(cluster, key, value)
 rados_t cluster
 char *key
 char *value
@@ -41,13 +41,13 @@ CODE:
 DPRINTF("pve_rados_conf_set %s = %s\n", key, value);

 int res = rados_conf_set(cluster, key, value);
-if (res < 0) {
+if (res < 0) {
 die("rados_conf_set failed - %s\n", strerror(-res));
 }
 }

 void
-pve_rados_connect(cluster)
+pve_rados_connect(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -58,7 +58,7 @@ CODE:
 if (res < 0) {
 die("rados_conf_read_file failed - %s\n", strerror(-res));
 }
-
+
 res = rados_connect(cluster);
 if (res < 0) {
 die("rados_connect failed - %s\n", strerror(-res));
@@ -66,7 +66,7 @@ CODE:
 }

 void
-pve_rados_shutdown(cluster)
+pve_rados_shutdown(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -76,7 +76,7 @@ CODE:
 }

 SV *
-pve_rados_mon_command(cluster, cmds)
+pve_rados_mon_command(cluster, cmds)
 rados_t cluster
 AV *cmds
 PROTOTYPE: $$
@@ -99,7 +99,7 @@ CODE:
 cmd[cmdlen] = SvPV_nolen(arg);
 DPRINTF("pve_rados_mon_command%zd %s\n", cmdlen, cmd[cmdlen]);
 cmdlen++;
-}
+}

 int ret = rados_mon_command(cluster, cmd, cmdlen,
 NULL, 0,
@@ -112,15 +112,15 @@ CODE:
 rados_buffer_free(outs);
 die(msg);
 }
-
+
 RETVAL = newSVpv(outbuf, outbuflen);

 rados_buffer_free(outbuf);
 }
 OUTPUT: RETVAL

-HV *
-pve_rados_cluster_stat(cluster)
+HV *
+pve_rados_cluster_stat(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -130,7 +130,7 @@ CODE:
 DPRINTF("pve_rados_cluster_stat");

 int ret = rados_cluster_stat(cluster, &result);
-
+
 if(ret != 0) {
 warn("rados_cluster_stat failed (ret=%d)\n", ret);
 XSRETURN_UNDEF;
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 3/3] allow to specify the userid with rados_create

2018-04-04 Thread Alwin Antreich
This allows connecting to a cluster with a different user besides admin.

Signed-off-by: Alwin Antreich 
---
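note: for illustration, a caller could then connect with a non-default user
like this (the user name and config path are just examples):

    my $rados = PVE::RADOS->new(
        userid    => 'rbduser',                          # hypothetical non-default user
        ceph_conf => '/etc/pve/priv/ceph/mystore.conf',  # dies if missing (patch 2/3)
    );
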
 PVE/RADOS.pm |  4 +++-
 RADOS.xs | 13 ++---
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index 2ed92b7..d53f655 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -14,6 +14,7 @@ require Exporter;

 my $rados_default_timeout = 5;
 my $ceph_default_conf = '/etc/ceph/ceph.conf';
+my $ceph_default_user = 'admin';


 our @ISA = qw(Exporter);
@@ -162,7 +163,8 @@ sub new {

my $conn;
eval {
-   $conn = pve_rados_create() ||
+   my $ceph_user = delete $params{userid} || $ceph_default_user;
+   $conn = pve_rados_create($ceph_user) ||
die "unable to create RADOS object\n";

if (defined($params{ceph_conf}) && (!-e $params{ceph_conf})) {
diff --git a/RADOS.xs b/RADOS.xs
index ad3cf96..f3f5516 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -14,12 +14,19 @@
 MODULE = PVE::RADOSPACKAGE = PVE::RADOS

 rados_t
-pve_rados_create()
-PROTOTYPE:
+pve_rados_create(user)
+SV *user
+PROTOTYPE: $
 CODE:
 {
+char *u = NULL;
 rados_t clu = NULL;
-int ret = rados_create(&clu, NULL);
+
+if (SvOK(user)) {
+   u = SvPV_nolen(user);
+}
+
+int ret = rados_create(&clu, u);

 if (ret == 0)
 RETVAL = clu;
--
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 librados2-perl 1/3] white space cleanup

2018-04-05 Thread Alwin Antreich
On Thu, Apr 05, 2018 at 08:14:05AM +0200, Thomas Lamprecht wrote:
> Still doesn't applies
>
> I recreated your patch with s/\s+$//g and compared, all the empty "stay
> lines"
> (i.e., lines from context which are not touched by the patch) miss their
> initial
> space (in patches the first character in a line is special: '-' means
> remove,
> '+' means add line, ' ' (space) means this line stays the same).
>
> So you patch, as is, cannot work!
>
> Am 04/04/2018 um 05:37 PM schrieb Alwin Antreich:
> > Signed-off-by: Alwin Antreich 
> > ---
> > note: seems that, this one is need for the following patches to apply
>
> FYI: It's only needed if you had it in your git tree, else it wouldn't be
> needed,
> git can operate just fine on trailing whitespaces in otherwise empty lines.
> (although it complains when applying the mail, but still does the work)
>
> If you insist that you, in fact, did not had any commit, changing those
> trailing
> whitespaces, in your history the only other reason I could immagine is that
> your mail got scrambled by your MTA (unlikely if you use git send-email) or
> by you when doing a last minute correction during sending (never do this,
> I know everyone including myself does sometimes - being 100% sure it was
> correct, but fact is: this action is heavily cursed and in >80% of the cases
> you will end up with a patch which won't apply or has a new bug)
>
> In your case it's highly probably your MTA stripping away trailing spaces
> (bad MTA!) as it's just to consistent in your patch...
>
> Other patches apply just fine, and I use a pretty vanilla setup which worked
> for a few years good, so I guess/hope that it is not a fault here with me.
> :)
>
Strange, I tried it with a fresh clone and it applied fine locally. I
send my patches with git send-email. But what I didn't check is whether it
applies from the pve-devel list. I will check. :-)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 1/3] white space cleanup

2018-04-05 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/RADOS.pm | 12 ++--
 RADOS.xs | 32 
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aa6a102..aff8141 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -43,7 +43,7 @@ my $writedata = sub {
 my ($fh, $cmd, $data) = @_;
 
 local $SIG{PIPE} = 'IGNORE';
- 
+
 my $bin = pack "a L/a*", $cmd, $data || '';
 my $res = syswrite $fh, $bin;
 
@@ -63,7 +63,7 @@ my $readdata = sub {
 return undef if $allow_eof && length($head) == 0;
 
 die "partial read\n" if length($head) < 5;
-
+
 my ($cmd, $len) = unpack "a L", $head;
 
 my $data = '';
@@ -86,7 +86,7 @@ my $kill_worker = sub {
 close($self->{child}) if defined($self->{child});
 
 # only kill if we created the process
-return if $self->{pid} != $$; 
+return if $self->{pid} != $$;
 
 kill(9, $self->{cpid});
 waitpid($self->{cpid}, 0);
@@ -140,7 +140,7 @@ sub new {
 
 if ($cpid) { # parent
close $parent;
- 
+
$self->{cpid} = $cpid;
$self->{child} = $child;
 
@@ -182,7 +182,7 @@ sub new {
 
for (;;) {
my ($cmd, $data) = &$readdata($parent, 1);
-   
+
last if !$cmd || $cmd eq 'Q';
 
my $res;
@@ -203,7 +203,7 @@ sub new {
}
&$writedata($parent, '>', $res);
}
- 
+
exit(0);
 }
 
diff --git a/RADOS.xs b/RADOS.xs
index a9f6bc3..66fa65a 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -13,14 +13,14 @@
 
 MODULE = PVE::RADOSPACKAGE = PVE::RADOS
 
-rados_t 
-pve_rados_create() 
+rados_t
+pve_rados_create()
 PROTOTYPE:
 CODE:
-{  
-rados_t clu = NULL; 
+{
+rados_t clu = NULL;
 int ret = rados_create(&clu, NULL);
-
+
 if (ret == 0)
 RETVAL = clu;
 else {
@@ -31,7 +31,7 @@ CODE:
 OUTPUT: RETVAL
 
 void
-pve_rados_conf_set(cluster, key, value) 
+pve_rados_conf_set(cluster, key, value)
 rados_t cluster
 char *key
 char *value
@@ -41,13 +41,13 @@ CODE:
 DPRINTF("pve_rados_conf_set %s = %s\n", key, value);
 
 int res = rados_conf_set(cluster, key, value);
-if (res < 0) {  
+if (res < 0) {
 die("rados_conf_set failed - %s\n", strerror(-res));
 }
 }
 
 void
-pve_rados_connect(cluster) 
+pve_rados_connect(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -58,7 +58,7 @@ CODE:
 if (res < 0) {
 die("rados_conf_read_file failed - %s\n", strerror(-res));
 }
- 
+
 res = rados_connect(cluster);
 if (res < 0) {
 die("rados_connect failed - %s\n", strerror(-res));
@@ -66,7 +66,7 @@ CODE:
 }
 
 void
-pve_rados_shutdown(cluster) 
+pve_rados_shutdown(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -76,7 +76,7 @@ CODE:
 }
 
 SV *
-pve_rados_mon_command(cluster, cmds) 
+pve_rados_mon_command(cluster, cmds)
 rados_t cluster
 AV *cmds
 PROTOTYPE: $$
@@ -99,7 +99,7 @@ CODE:
 cmd[cmdlen] = SvPV_nolen(arg);
 DPRINTF("pve_rados_mon_command%zd %s\n", cmdlen, cmd[cmdlen]);
 cmdlen++;
-} 
+}
 
 int ret = rados_mon_command(cluster, cmd, cmdlen,
 NULL, 0,
@@ -112,15 +112,15 @@ CODE:
 rados_buffer_free(outs);
 die(msg);
 }
- 
+
 RETVAL = newSVpv(outbuf, outbuflen);
 
 rados_buffer_free(outbuf);
 }
 OUTPUT: RETVAL
 
-HV * 
-pve_rados_cluster_stat(cluster) 
+HV *
+pve_rados_cluster_stat(cluster)
 rados_t cluster
 PROTOTYPE: $
 CODE:
@@ -130,7 +130,7 @@ CODE:
 DPRINTF("pve_rados_cluster_stat");
 
 int ret = rados_cluster_stat(cluster, &result);
-  
+
 if(ret != 0) {
 warn("rados_cluster_stat failed (ret=%d)\n", ret);
 XSRETURN_UNDEF;
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 0/3] resend of last series

2018-04-05 Thread Alwin Antreich
I am resending the last series, as my vim removed trailing whitespace
after I annotated the patch (added v2), before sending it with git.

I hope the patches apply now. :)

Alwin Antreich (3):
  white space cleanup
  Split method pve_rados_connect
  allow to specify the userid with rados_create

 PVE/RADOS.pm | 27 +---
 RADOS.xs | 67 
 2 files changed, 65 insertions(+), 29 deletions(-)

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 3/3] allow to specify the userid with rados_create

2018-04-05 Thread Alwin Antreich
This allows connecting to a cluster with a different user besides admin.

Signed-off-by: Alwin Antreich 
---
 PVE/RADOS.pm |  4 +++-
 RADOS.xs | 13 ++---
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index 2ed92b7..d53f655 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -14,6 +14,7 @@ require Exporter;
 
 my $rados_default_timeout = 5;
 my $ceph_default_conf = '/etc/ceph/ceph.conf';
+my $ceph_default_user = 'admin';
 
 
 our @ISA = qw(Exporter);
@@ -162,7 +163,8 @@ sub new {
 
my $conn;
eval {
-   $conn = pve_rados_create() ||
+   my $ceph_user = delete $params{userid} || $ceph_default_user;
+   $conn = pve_rados_create($ceph_user) ||
die "unable to create RADOS object\n";
 
if (defined($params{ceph_conf}) && (!-e $params{ceph_conf})) {
diff --git a/RADOS.xs b/RADOS.xs
index ad3cf96..f3f5516 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -14,12 +14,19 @@
 MODULE = PVE::RADOSPACKAGE = PVE::RADOS
 
 rados_t
-pve_rados_create()
-PROTOTYPE:
+pve_rados_create(user)
+SV *user
+PROTOTYPE: $
 CODE:
 {
+char *u = NULL;
 rados_t clu = NULL;
-int ret = rados_create(&clu, NULL);
+
+if (SvOK(user)) {
+   u = SvPV_nolen(user);
+}
+
+int ret = rados_create(&clu, u);
 
 if (ret == 0)
 RETVAL = clu;
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 librados2-perl 2/3] Split method pve_rados_connect

2018-04-05 Thread Alwin Antreich
To be able to connect through librados2 without a config file, the
method pve_rados_connect is split up into pve_rados_connect and
pve_rados_conf_read_file.

Signed-off-by: Alwin Antreich 
---
 PVE/RADOS.pm | 11 +++
 RADOS.xs | 26 +-
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aff8141..2ed92b7 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
 require Exporter;
 
 my $rados_default_timeout = 5;
+my $ceph_default_conf = '/etc/ceph/ceph.conf';
 
 
 our @ISA = qw(Exporter);
@@ -164,6 +165,16 @@ sub new {
$conn = pve_rados_create() ||
die "unable to create RADOS object\n";
 
+   if (defined($params{ceph_conf}) && (!-e $params{ceph_conf})) {
+   die "Supplied ceph config doesn't exist, $params{ceph_conf}";
+   }
+
+   my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
+
+   if (-e $ceph_conf) {
+   pve_rados_conf_read_file($conn, $ceph_conf);
+   }
+
pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
 
foreach my $k (keys %params) {
diff --git a/RADOS.xs b/RADOS.xs
index 66fa65a..ad3cf96 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -47,19 +47,35 @@ CODE:
 }
 
 void
-pve_rados_connect(cluster)
+pve_rados_conf_read_file(cluster, path)
 rados_t cluster
-PROTOTYPE: $
+SV *path
+PROTOTYPE: $$
 CODE:
 {
-DPRINTF("pve_rados_connect\n");
+char *p = NULL;
 
-int res = rados_conf_read_file(cluster, NULL);
+if (SvOK(path)) {
+   p = SvPV_nolen(path);
+}
+
+DPRINTF("pve_rados_conf_read_file %s\n", p);
+
+int res = rados_conf_read_file(cluster, p);
 if (res < 0) {
 die("rados_conf_read_file failed - %s\n", strerror(-res));
 }
+}
+
+void
+pve_rados_connect(cluster)
+rados_t cluster
+PROTOTYPE: $
+CODE:
+{
+DPRINTF("pve_rados_connect\n");
 
-res = rados_connect(cluster);
+int res = rados_connect(cluster);
 if (res < 0) {
 die("rados_connect failed - %s\n", strerror(-res));
 }
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 storage 2/2] Refactor of method build_cmd and path

2018-04-09 Thread Alwin Antreich
The methods build_cmd and path use similar code to generate the ceph command
line or qemu config parameters. They now use the private method
ceph_connect_option for parameter generation.
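
To illustrate the refactoring: ceph_connect_option returns a plain hash (keys
as in the patch, values made up for an external cluster) that build_cmd then
maps onto CLI switches:

    my $cmd_option = {
        mon_host       => '192.168.1.2:6789',                     # example value
        auth_supported => 'cephx',
        userid         => 'admin',
        keyring        => '/etc/pve/priv/ceph/teststore.keyring', # example path
    };

    my $cmd = ['/usr/bin/rbd', '-p', 'rbd'];
    push @$cmd, '-m', $cmd_option->{mon_host} if $cmd_option->{mon_host};
    push @$cmd, '--auth_supported', $cmd_option->{auth_supported} if $cmd_option->{auth_supported};
    push @$cmd, '-n', "client.$cmd_option->{userid}" if $cmd_option->{userid};
    push @$cmd, '--keyring', $cmd_option->{keyring} if $cmd_option->{keyring};
    push @$cmd, 'ls';    # the requested operation plus any extra options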

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 113 +--
 1 file changed, 40 insertions(+), 73 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 62c1933..f8388c6 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -11,8 +11,6 @@ use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
-my $pveceph_config = '/etc/pve/ceph.conf';
-
 my $rbd_unittobytes = {
 "k"  => 1024,
 "M"  => 1024*1024,
@@ -40,62 +38,12 @@ my $hostlist = sub {
 } @monhostlist);
 };
 
-my $build_cmd = sub {
-my ($binary, $scfg, $storeid, $op, @options) = @_;
-
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
-my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
-my $cmd = [$binary, '-p', $pool];
-my $pveceph_managed = !defined($scfg->{monhost});
-
-if ($pveceph_managed) {
-   push @$cmd, '-c', $pveceph_config;
-} else {
-   push @$cmd, '-m', $hostlist->($scfg->{monhost}, ',');
-   push @$cmd, '--auth_supported', -e $keyring ? 'cephx' : 'none';
-}
-
-if (-e $keyring) {
-   push @$cmd, '-n', "client.$username";
-   push @$cmd, '--keyring', $keyring;
-}
-
-my $cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
-
-if (-e $cephconfig) {
-   if ($pveceph_managed) {
-   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
-   } else {
-   push @$cmd, '-c', $cephconfig;
-   }
-}
-
-push @$cmd, $op;
-
-push @$cmd, @options if scalar(@options);
-
-return $cmd;
-};
-
-my $rbd_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
-};
-
-my $rados_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
-};
-
 my $ceph_connect_option = sub {
 my ($scfg, $storeid, %options) = @_;
 
 my $cmd_option = {};
 my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $pveceph_config = '/etc/pve/ceph.conf';
 my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 my $pveceph_managed = !defined($scfg->{monhost});
 
@@ -120,11 +68,43 @@ my $ceph_connect_option = sub {
}
 }
 
-
 return $cmd_option;
 
 };
 
+my $build_cmd = sub {
+my ($binary, $scfg, $storeid, $op, @options) = @_;
+
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
+my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
+
+my $cmd = [$binary, '-p', $pool];
+
+push @$cmd, '-c', $cmd_option->{ceph_conf} if ($cmd_option->{ceph_conf});
+push @$cmd, '-m', $cmd_option->{mon_host} if ($cmd_option->{mon_host});
+push @$cmd, '--auth_supported', $cmd_option->{auth_supported} if 
($cmd_option->{auth_supported});
+push @$cmd, '-n', "client.$cmd_option->{userid}" if 
($cmd_option->{userid});
+push @$cmd, '--keyring', $cmd_option->{keyring} if 
($cmd_option->{keyring});
+
+push @$cmd, $op;
+
+push @$cmd, @options if scalar(@options);
+
+return $cmd;
+};
+
+my $rbd_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
+};
+
+my $rados_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
+};
+
 my $librados_connect = sub {
 my ($scfg, $storeid, $options) = @_;
 
@@ -353,38 +333,25 @@ sub parse_volname {
 sub path {
 my ($class, $scfg, $volname, $storeid, $snapname) = @_;
 
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
 my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 $name .= '@'.$snapname if $snapname;
 
 my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
 return ("/dev/rbd/$pool/$name", $vmid, $vtype) if $scfg->{krbd};
 
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
 my $path = "rbd:$pool/$name";
-my $pveceph_managed = !defined($scfg->{monhost});
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 
-if ($pveceph_managed) {

[pve-devel] [PATCH v2 storage 0/2] show storage-utilization per pool

2018-04-09 Thread Alwin Antreich
My goal behind changing the storage utilization for ceph is that it currently
shows the globally available/used storage space and calculates the usage
percentage based on that.
This is suboptimal in some cases, eg. multiple pools with different
size/min_size on the same root, during recovery, or when the ceph pool is not
residing on all OSDs in the cluster.

To get the usage on a per-pool basis, I use the librados2-perl bindings
(hence the update to v1.0-5), as the ceph command line tool doesn't work w/o
a config file and rados doesn't provide the necessary information on its
command line.

The storage status calculates a percent_used for ceph clusters prior to Kraken;
in releases >= Kraken the percent_used is provided by ceph. The GUI, while not
using the percent_used, will show correct values, except when a recovery is
running. The CLI uses the percent_used.
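
As a rough sketch (not taken verbatim from the patch; field names as returned
by the 'df' mon command), the pre-Kraken fallback boils down to:

    my $stats = $d->{stats};    # per-pool entry from the 'df' mon command
    my $percent_used = $stats->{percent_used};    # >= Kraken: provided by ceph
    if (!defined($percent_used) && ($stats->{bytes_used} + $stats->{max_avail}) > 0) {
        # pre-Kraken fallback; the exact formula/scaling in the patch may differ
        $percent_used = 100 * $stats->{bytes_used} / ($stats->{bytes_used} + $stats->{max_avail});
    }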

Alwin Antreich (2):
  Fix #1542: show storage utilization per pool
  Refactor of method build_cmd and path

 PVE/CLI/pvesm.pm |   1 +
 PVE/Storage.pm   |   4 +-
 PVE/Storage/RBDPlugin.pm | 149 ++-
 debian/control   |   2 +
 4 files changed, 87 insertions(+), 69 deletions(-)

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 storage 1/2] Fix #1542: show storage utilization per pool

2018-04-09 Thread Alwin Antreich
 - get the percent_used value for a ceph pool and
   calculate it where ceph doesn't supply it (pre kraken)
 - use librados2-perl for pool status
 - add librados2-perl as build-depends and depends in debian/control
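
For reference, the slice of the decoded 'df' mon command output that the new
status code works on looks roughly like this (made-up numbers; newer releases
additionally report a percent_used per pool):

    my $df = {
        pools => [
            {
                name  => 'rbd',
                stats => {
                    max_avail  => 107374182400,   # free space w/o replication, in bytes
                    bytes_used => 32212254720,    # used space w/o replication, in bytes
                },
            },
        ],
    };
    my ($d) = grep { $_->{name} eq $scfg->{pool} } @{$df->{pools}};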

Signed-off-by: Alwin Antreich 
---
 PVE/CLI/pvesm.pm |  1 +
 PVE/Storage.pm   |  4 +-
 PVE/Storage/RBDPlugin.pm | 98 
 debian/control   |  2 +
 4 files changed, 78 insertions(+), 27 deletions(-)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 5774364..98cd9e9 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -149,6 +149,7 @@ my $print_status = sub {
my $active = $res->{active} ? 'active' : 'inactive';
my ($per, $per_fmt) = (0, '% 7.2f%%');
$per = ($res->{used}*100)/$res->{total} if $res->{total} > 0;
+   $per = $res->{percent_used} if defined($res->{percent_used});
 
if (!$res->{enabled}) {
$per = 'N/A';
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4140a99..0d9d7cf 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1065,14 +1065,14 @@ sub storage_info {
next;
}
 
-   my ($total, $avail, $used, $active);
-   eval { ($total, $avail, $used, $active) = $plugin->status($storeid, 
$scfg, $cache); };
+   my ($total, $avail, $used, $active, $percent_used) = eval { 
$plugin->status($storeid, $scfg, $cache); };
warn $@ if $@;
next if !$active;
$info->{$storeid}->{total} = int($total);
$info->{$storeid}->{avail} = int($avail);
$info->{$storeid}->{used} = int($used);
$info->{$storeid}->{active} = $active;
+   $info->{$storeid}->{percent_used} = $percent_used if 
(defined($percent_used));
 }
 
 return $info;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index fd5a2ef..62c1933 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -7,6 +7,7 @@ use Net::IP;
 use PVE::Tools qw(run_command trim);
 use PVE::Storage::Plugin;
 use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
@@ -90,6 +91,50 @@ my $rados_cmd = sub {
 return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
 };
 
+my $ceph_connect_option = sub {
+my ($scfg, $storeid, %options) = @_;
+
+my $cmd_option = {};
+my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
+my $pveceph_managed = !defined($scfg->{monhost});
+
+$cmd_option->{ceph_conf} = $pveceph_config if (-e $pveceph_config);
+
+if (-e $ceph_storeid_conf) {
+   if ($pveceph_managed) {
+   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
+   } else {
+   $cmd_option->{ceph_conf} = $ceph_storeid_conf;
+   }
+}
+
+$cmd_option->{keyring} = $keyring if (-e $keyring);
+$cmd_option->{auth_supported} = (defined $cmd_option->{keyring}) ? 'cephx' 
: 'none';
+$cmd_option->{userid} =  $scfg->{username} ? $scfg->{username} : 'admin';
+$cmd_option->{mon_host} = $hostlist->($scfg->{monhost}, ',') if 
(defined($scfg->{monhost}));
+
+if (%options) {
+   foreach my $k (keys %options) {
+   $cmd_option->{$k} = $options{$k};
+   }
+}
+
+
+return $cmd_option;
+
+};
+
+my $librados_connect = sub {
+my ($scfg, $storeid, $options) = @_;
+
+my $librados_config = $ceph_connect_option->($scfg, $storeid);
+
+my $rados = PVE::RADOS->new(%$librados_config);
+
+return $rados;
+};
+
 # needed for volumes created using ceph jewel (or higher)
 my $krbd_feature_disable = sub {
 my ($scfg, $storeid, $name) = @_;
@@ -160,7 +205,7 @@ sub run_rbd_command {
*STDERR->flush();
};
 }
-
+
 eval { run_command($cmd, %args); };
 if (my $err = $@) {
die $errmsg . $lasterr if length($lasterr);
@@ -200,7 +245,7 @@ sub rbd_ls {
 my $err = $@;
 
 die $err if $err && $err !~ m/doesn't contain rbd images/ ;
-  
+
 return $list;
 }
 
@@ -425,7 +470,7 @@ sub clone_image {
 my ($vtype, $basename, $basevmid, undef, undef, $isBase) =
 $class->parse_volname($volname);
 
-die "$volname is not a base image and snapname is not provided\n" 
+die "$volname is not a base image and snapname is not provided\n"
if !$isBase && !length($snapname);
 
 my $name = &$find_free_diskname($storeid, $scfg, $vmid);
@@ -444,7 +489,7 @@ sub clone_image {
 my $newvol = "$basename/$name";
 $newvol = $name if length($snapname);
 
-my $cmd = &$rbd_cmd($scfg, $storeid, &#

[pve-devel] [PATCH v3 storage 1/2] Fix #1542: show storage utilization per pool

2018-04-11 Thread Alwin Antreich
 - get the percent_used value for a ceph pool and
   calculate it where ceph doesn't supply it (pre kraken)
 - use librados2-perl for pool status
 - add librados2-perl as build-depends and depends in debian/control

Signed-off-by: Alwin Antreich 
---
 PVE/CLI/pvesm.pm |  1 +
 PVE/Storage.pm   |  4 +-
 PVE/Storage/RBDPlugin.pm | 96 ++--
 debian/control   |  2 +
 4 files changed, 81 insertions(+), 22 deletions(-)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 5774364..98cd9e9 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -149,6 +149,7 @@ my $print_status = sub {
my $active = $res->{active} ? 'active' : 'inactive';
my ($per, $per_fmt) = (0, '% 7.2f%%');
$per = ($res->{used}*100)/$res->{total} if $res->{total} > 0;
+   $per = $res->{percent_used} if defined($res->{percent_used});
 
if (!$res->{enabled}) {
$per = 'N/A';
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4140a99..0d9d7cf 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1065,14 +1065,14 @@ sub storage_info {
next;
}
 
-   my ($total, $avail, $used, $active);
-   eval { ($total, $avail, $used, $active) = $plugin->status($storeid, 
$scfg, $cache); };
+   my ($total, $avail, $used, $active, $percent_used) = eval { 
$plugin->status($storeid, $scfg, $cache); };
warn $@ if $@;
next if !$active;
$info->{$storeid}->{total} = int($total);
$info->{$storeid}->{avail} = int($avail);
$info->{$storeid}->{used} = int($used);
$info->{$storeid}->{active} = $active;
+   $info->{$storeid}->{percent_used} = $percent_used if 
(defined($percent_used));
 }
 
 return $info;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index fd5a2ef..1b09b58 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -7,6 +7,7 @@ use Net::IP;
 use PVE::Tools qw(run_command trim);
 use PVE::Storage::Plugin;
 use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
@@ -90,6 +91,50 @@ my $rados_cmd = sub {
 return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
 };
 
+my $ceph_connect_option = sub {
+my ($scfg, $storeid, %options) = @_;
+
+my $cmd_option = {};
+my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
+my $pveceph_managed = !defined($scfg->{monhost});
+
+$cmd_option->{ceph_conf} = $pveceph_config if (-e $pveceph_config);
+
+if (-e $ceph_storeid_conf) {
+   if ($pveceph_managed) {
+   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
+   } else {
+   $cmd_option->{ceph_conf} = $ceph_storeid_conf;
+   }
+}
+
+$cmd_option->{keyring} = $keyring if (-e $keyring);
+$cmd_option->{auth_supported} = (defined $cmd_option->{keyring}) ? 'cephx' 
: 'none';
+$cmd_option->{userid} =  $scfg->{username} ? $scfg->{username} : 'admin';
+$cmd_option->{mon_host} = $hostlist->($scfg->{monhost}, ',') if 
(defined($scfg->{monhost}));
+
+if (%options) {
+   foreach my $k (keys %options) {
+   $cmd_option->{$k} = $options{$k};
+   }
+}
+
+
+return $cmd_option;
+
+};
+
+my $librados_connect = sub {
+my ($scfg, $storeid, $options) = @_;
+
+my $librados_config = $ceph_connect_option->($scfg, $storeid);
+
+my $rados = PVE::RADOS->new(%$librados_config);
+
+return $rados;
+};
+
 # needed for volumes created using ceph jewel (or higher)
 my $krbd_feature_disable = sub {
 my ($scfg, $storeid, $name) = @_;
@@ -539,31 +584,42 @@ sub list_images {
 sub status {
 my ($class, $storeid, $scfg, $cache) = @_;
 
-my $cmd = &$rados_cmd($scfg, $storeid, 'df');
-
-my $stats = {};
 
-my $parser = sub {
-   my $line = shift;
-   if ($line =~ m/^\s*total(?:\s|_)(\S+)\s+(\d+)(k|M|G|T)?/) {
-   $stats->{$1} = $2;
-   # luminous has units here..
-   if ($3) {
-   $stats->{$1} *= $rbd_unittobytes->{$3}/1024;
-   }
-   }
-};
+my $rados = &$librados_connect($scfg, $storeid);
+my $df = $rados->mon_command({ prefix => 'df', format => 'json' });
 
-eval {
-   run_rbd_command($cmd, errmsg => "rados error", errfunc => sub {}, 
outfunc => $parser);
-};
+my ($d) = grep { $_->{name} eq $scfg->{pool} } @{$df->{pools}};
 
-my $total = $stats->{space} ? $stats->{space}*1024 : 0;
-my $free = $stats->{avail} ? $stats

[pve-devel] [PATCH v3 storage 0/2] show storage-utilization per pool

2018-04-11 Thread Alwin Antreich
Changes from v2 -> v3:
 - added comments to the status method for a better understanding of the calculation
 - had whitespace issues (again :/)

Alwin Antreich (2):
  Fix #1542: show storage utilization per pool
  Refactor of method build_cmd and path

 PVE/CLI/pvesm.pm |   1 +
 PVE/Storage.pm   |   4 +-
 PVE/Storage/RBDPlugin.pm | 143 +++
 debian/control   |   2 +
 4 files changed, 88 insertions(+), 62 deletions(-)

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 storage 2/2] Refactor of method build_cmd and path

2018-04-11 Thread Alwin Antreich
The methods build_cmd and path use similar code to generate the ceph command
line or qemu config parameters. They now use the private method
ceph_connect_option for parameter generation.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 113 +--
 1 file changed, 40 insertions(+), 73 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1b09b58..7eac955 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -11,8 +11,6 @@ use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
-my $pveceph_config = '/etc/pve/ceph.conf';
-
 my $rbd_unittobytes = {
 "k"  => 1024,
 "M"  => 1024*1024,
@@ -40,62 +38,12 @@ my $hostlist = sub {
 } @monhostlist);
 };
 
-my $build_cmd = sub {
-my ($binary, $scfg, $storeid, $op, @options) = @_;
-
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
-my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
-my $cmd = [$binary, '-p', $pool];
-my $pveceph_managed = !defined($scfg->{monhost});
-
-if ($pveceph_managed) {
-   push @$cmd, '-c', $pveceph_config;
-} else {
-   push @$cmd, '-m', $hostlist->($scfg->{monhost}, ',');
-   push @$cmd, '--auth_supported', -e $keyring ? 'cephx' : 'none';
-}
-
-if (-e $keyring) {
-   push @$cmd, '-n', "client.$username";
-   push @$cmd, '--keyring', $keyring;
-}
-
-my $cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
-
-if (-e $cephconfig) {
-   if ($pveceph_managed) {
-   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
-   } else {
-   push @$cmd, '-c', $cephconfig;
-   }
-}
-
-push @$cmd, $op;
-
-push @$cmd, @options if scalar(@options);
-
-return $cmd;
-};
-
-my $rbd_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
-};
-
-my $rados_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
-};
-
 my $ceph_connect_option = sub {
 my ($scfg, $storeid, %options) = @_;
 
 my $cmd_option = {};
 my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $pveceph_config = '/etc/pve/ceph.conf';
 my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 my $pveceph_managed = !defined($scfg->{monhost});
 
@@ -120,11 +68,43 @@ my $ceph_connect_option = sub {
}
 }
 
-
 return $cmd_option;
 
 };
 
+my $build_cmd = sub {
+my ($binary, $scfg, $storeid, $op, @options) = @_;
+
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
+my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
+
+my $cmd = [$binary, '-p', $pool];
+
+push @$cmd, '-c', $cmd_option->{ceph_conf} if ($cmd_option->{ceph_conf});
+push @$cmd, '-m', $cmd_option->{mon_host} if ($cmd_option->{mon_host});
+push @$cmd, '--auth_supported', $cmd_option->{auth_supported} if 
($cmd_option->{auth_supported});
+push @$cmd, '-n', "client.$cmd_option->{userid}" if 
($cmd_option->{userid});
+push @$cmd, '--keyring', $cmd_option->{keyring} if 
($cmd_option->{keyring});
+
+push @$cmd, $op;
+
+push @$cmd, @options if scalar(@options);
+
+return $cmd;
+};
+
+my $rbd_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
+};
+
+my $rados_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
+};
+
 my $librados_connect = sub {
 my ($scfg, $storeid, $options) = @_;
 
@@ -353,38 +333,25 @@ sub parse_volname {
 sub path {
 my ($class, $scfg, $volname, $storeid, $snapname) = @_;
 
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
 my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 $name .= '@'.$snapname if $snapname;
 
 my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
 return ("/dev/rbd/$pool/$name", $vmid, $vtype) if $scfg->{krbd};
 
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
 my $path = "rbd:$pool/$name";
-my $pveceph_managed = !defined($scfg->{monhost});
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 
-if ($pveceph_managed) {

[pve-devel] [PATCH v4 storage 2/2] Refactor of method build_cmd and path

2018-04-13 Thread Alwin Antreich
The methods build_cmd and path use similar code to generate the ceph command
line or qemu config parameters. They now use the private method
ceph_connect_option for parameter generation.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 113 +--
 1 file changed, 40 insertions(+), 73 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index e71494d..109ed93 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -11,8 +11,6 @@ use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
-my $pveceph_config = '/etc/pve/ceph.conf';
-
 my $rbd_unittobytes = {
 "k"  => 1024,
 "M"  => 1024*1024,
@@ -40,62 +38,12 @@ my $hostlist = sub {
 } @monhostlist);
 };
 
-my $build_cmd = sub {
-my ($binary, $scfg, $storeid, $op, @options) = @_;
-
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
-my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
-my $cmd = [$binary, '-p', $pool];
-my $pveceph_managed = !defined($scfg->{monhost});
-
-if ($pveceph_managed) {
-   push @$cmd, '-c', $pveceph_config;
-} else {
-   push @$cmd, '-m', $hostlist->($scfg->{monhost}, ',');
-   push @$cmd, '--auth_supported', -e $keyring ? 'cephx' : 'none';
-}
-
-if (-e $keyring) {
-   push @$cmd, '-n', "client.$username";
-   push @$cmd, '--keyring', $keyring;
-}
-
-my $cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
-
-if (-e $cephconfig) {
-   if ($pveceph_managed) {
-   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
-   } else {
-   push @$cmd, '-c', $cephconfig;
-   }
-}
-
-push @$cmd, $op;
-
-push @$cmd, @options if scalar(@options);
-
-return $cmd;
-};
-
-my $rbd_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
-};
-
-my $rados_cmd = sub {
-my ($scfg, $storeid, $op, @options) = @_;
-
-return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
-};
-
 my $ceph_connect_option = sub {
 my ($scfg, $storeid, %options) = @_;
 
 my $cmd_option = {};
 my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $pveceph_config = '/etc/pve/ceph.conf';
 my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 my $pveceph_managed = !defined($scfg->{monhost});
 
@@ -120,11 +68,43 @@ my $ceph_connect_option = sub {
}
 }
 
-
 return $cmd_option;
 
 };
 
+my $build_cmd = sub {
+my ($binary, $scfg, $storeid, $op, @options) = @_;
+
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
+my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
+
+my $cmd = [$binary, '-p', $pool];
+
+push @$cmd, '-c', $cmd_option->{ceph_conf} if ($cmd_option->{ceph_conf});
+push @$cmd, '-m', $cmd_option->{mon_host} if ($cmd_option->{mon_host});
+push @$cmd, '--auth_supported', $cmd_option->{auth_supported} if 
($cmd_option->{auth_supported});
+push @$cmd, '-n', "client.$cmd_option->{userid}" if 
($cmd_option->{userid});
+push @$cmd, '--keyring', $cmd_option->{keyring} if 
($cmd_option->{keyring});
+
+push @$cmd, $op;
+
+push @$cmd, @options if scalar(@options);
+
+return $cmd;
+};
+
+my $rbd_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rbd', $scfg, $storeid, $op, @options);
+};
+
+my $rados_cmd = sub {
+my ($scfg, $storeid, $op, @options) = @_;
+
+return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
+};
+
 my $librados_connect = sub {
 my ($scfg, $storeid, $options) = @_;
 
@@ -353,38 +333,25 @@ sub parse_volname {
 sub path {
 my ($class, $scfg, $volname, $storeid, $snapname) = @_;
 
+my $cmd_option = $ceph_connect_option->($scfg, $storeid);
 my ($vtype, $name, $vmid) = $class->parse_volname($volname);
 $name .= '@'.$snapname if $snapname;
 
 my $pool =  $scfg->{pool} ? $scfg->{pool} : 'rbd';
 return ("/dev/rbd/$pool/$name", $vmid, $vtype) if $scfg->{krbd};
 
-my $username =  $scfg->{username} ? $scfg->{username} : 'admin';
-
 my $path = "rbd:$pool/$name";
-my $pveceph_managed = !defined($scfg->{monhost});
-my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
 
-if ($pveceph_managed) {

[pve-devel] [PATCH v4 storage 1/2] Fix #1542: show storage utilization per pool

2018-04-13 Thread Alwin Antreich
 - get storage utilization per pool
 - use librados2-perl for pool status
 - add librados2-perl as build-depends and depends in debian/control

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm   |  3 +-
 PVE/Storage/RBDPlugin.pm | 72 +++-
 debian/control   |  2 ++
 3 files changed, 56 insertions(+), 21 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4140a99..d733380 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1065,8 +1065,7 @@ sub storage_info {
next;
}
 
-   my ($total, $avail, $used, $active);
-   eval { ($total, $avail, $used, $active) = $plugin->status($storeid, 
$scfg, $cache); };
+   my ($total, $avail, $used, $active) = eval { $plugin->status($storeid, 
$scfg, $cache); };
warn $@ if $@;
next if !$active;
$info->{$storeid}->{total} = int($total);
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index fd5a2ef..e71494d 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -7,6 +7,7 @@ use Net::IP;
 use PVE::Tools qw(run_command trim);
 use PVE::Storage::Plugin;
 use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
 
 use base qw(PVE::Storage::Plugin);
 
@@ -90,6 +91,50 @@ my $rados_cmd = sub {
 return $build_cmd->('/usr/bin/rados', $scfg, $storeid, $op, @options);
 };
 
+my $ceph_connect_option = sub {
+my ($scfg, $storeid, %options) = @_;
+
+my $cmd_option = {};
+my $ceph_storeid_conf = "/etc/pve/priv/ceph/${storeid}.conf";
+my $keyring = "/etc/pve/priv/ceph/${storeid}.keyring";
+my $pveceph_managed = !defined($scfg->{monhost});
+
+$cmd_option->{ceph_conf} = $pveceph_config if (-e $pveceph_config);
+
+if (-e $ceph_storeid_conf) {
+   if ($pveceph_managed) {
+   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
+   } else {
+   $cmd_option->{ceph_conf} = $ceph_storeid_conf;
+   }
+}
+
+$cmd_option->{keyring} = $keyring if (-e $keyring);
+$cmd_option->{auth_supported} = (defined $cmd_option->{keyring}) ? 'cephx' 
: 'none';
+$cmd_option->{userid} =  $scfg->{username} ? $scfg->{username} : 'admin';
+$cmd_option->{mon_host} = $hostlist->($scfg->{monhost}, ',') if 
(defined($scfg->{monhost}));
+
+if (%options) {
+   foreach my $k (keys %options) {
+   $cmd_option->{$k} = $options{$k};
+   }
+}
+
+
+return $cmd_option;
+
+};
+
+my $librados_connect = sub {
+my ($scfg, $storeid, $options) = @_;
+
+my $librados_config = $ceph_connect_option->($scfg, $storeid);
+
+my $rados = PVE::RADOS->new(%$librados_config);
+
+return $rados;
+};
+
 # needed for volumes created using ceph jewel (or higher)
 my $krbd_feature_disable = sub {
 my ($scfg, $storeid, $name) = @_;
@@ -539,28 +584,17 @@ sub list_images {
 sub status {
 my ($class, $storeid, $scfg, $cache) = @_;
 
-my $cmd = &$rados_cmd($scfg, $storeid, 'df');
 
-my $stats = {};
+my $rados = &$librados_connect($scfg, $storeid);
+my $df = $rados->mon_command({ prefix => 'df', format => 'json' });
 
-my $parser = sub {
-   my $line = shift;
-   if ($line =~ m/^\s*total(?:\s|_)(\S+)\s+(\d+)(k|M|G|T)?/) {
-   $stats->{$1} = $2;
-   # luminous has units here..
-   if ($3) {
-   $stats->{$1} *= $rbd_unittobytes->{$3}/1024;
-   }
-   }
-};
-
-eval {
-   run_rbd_command($cmd, errmsg => "rados error", errfunc => sub {}, 
outfunc => $parser);
-};
+my ($d) = grep { $_->{name} eq $scfg->{pool} } @{$df->{pools}};
 
-my $total = $stats->{space} ? $stats->{space}*1024 : 0;
-my $free = $stats->{avail} ? $stats->{avail}*1024 : 0;
-my $used = $stats->{used} ? $stats->{used}*1024: 0;
+# max_avail -> max available space for data w/o replication in the pool
+# bytes_used -> data w/o replication in the pool
+my $free = $d->{stats}->{max_avail};
+my $used = $d->{stats}->{bytes_used};
+my $total = $used + $free;
 my $active = 1;
 
 return ($total, $free, $used, $active);
diff --git a/debian/control b/debian/control
index 3f39364..2cf585a 100644
--- a/debian/control
+++ b/debian/control
@@ -5,6 +5,7 @@ Maintainer: Proxmox Support Team 
 Build-Depends: debhelper (>= 7.0.50~),
libpve-common-perl (>= 5.0-28),
libtest-mockmodule-perl,
+   librados2-perl,
lintian,
perl (>= 5.10.0-19),
pve-doc-generator,
@@ -18,6 +19,7 @@ Depends: cstream,
  libfile-chdir-perl,
  libnet-dbus-perl,
  

[pve-devel] [PATCH v4 storage 0/2] show storage utilization per pool

2018-04-13 Thread Alwin Antreich
After some off-list discussions, I removed the calculations and use used/free
to stay with the old output. This avoids introducing a new key and keeps the
per-pool values. It has the slight disadvantage of not showing the same values
as ceph during recovery, but otherwise it shows the right %-usage.
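
In other words, the plugin again returns the classic four values, just per pool
now, and the existing callers keep deriving the percentage. Roughly:

    # in the plugin's status(), per pool, values w/o replication:
    my $free  = $d->{stats}->{max_avail};
    my $used  = $d->{stats}->{bytes_used};
    my $total = $used + $free;
    # ... returned as ($total, $free, $used, $active)

    # the callers keep computing the percentage as before:
    my $per = $total > 0 ? ($used * 100) / $total : 0;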

Alwin Antreich (2):
  Fix #1542: show storage utilization per pool
  Refactor of method build_cmd and path

 PVE/Storage.pm   |   3 +-
 PVE/Storage/RBDPlugin.pm | 119 ---
 debian/control   |   2 +
 3 files changed, 63 insertions(+), 61 deletions(-)

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/1] Cephfs storage wizard

2018-04-20 Thread Alwin Antreich
 Add internal and external storage wizard for cephfs

Signed-off-by: Alwin Antreich 
---
 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 68 ++
 3 files changed, 79 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 60e8103e..093c78f5 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -158,6 +158,7 @@ JSSRC=  
\
storage/IScsiEdit.js\
storage/LVMEdit.js  \
storage/LvmThinEdit.js  \
+   storage/CephFSEdit.js   \
storage/RBDEdit.js  \
storage/SheepdogEdit.js \
storage/ZFSEdit.js  \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index af03958c..f9902652 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -424,6 +424,16 @@ Ext.define('PVE.Utils', { utilities: {
hideAdd: true,
faIcon: 'building'
},
+   cephfs: {
+   name: 'CephFS (PVE)',
+   ipanel: 'PVECephFSInputPanel',
+   faIcon: 'building'
+   },
+   cephfs_ext: {
+   name: 'CephFS (external)',
+   ipanel: 'CephFSInputPanel',
+   faIcon: 'building'
+   },
rbd: {
name: 'RBD',
ipanel: 'RBDInputPanel',
diff --git a/www/manager6/storage/CephFSEdit.js 
b/www/manager6/storage/CephFSEdit.js
new file mode 100644
index ..a7aedbbf
--- /dev/null
+++ b/www/manager6/storage/CephFSEdit.js
@@ -0,0 +1,68 @@
+Ext.define('PVE.storage.CephFSInputPanel', {
+extend: 'PVE.panel.StorageBase',
+
+initComponent : function() {
+   var me = this;
+
+   if (!me.nodename) {
+   me.nodename = 'localhost';
+   }
+   me.type = 'cephfs';
+
+   me.column1 = [];
+
+   if (me.pveceph) {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   nodename: me.nodename,
+   name: 'username',
+   value: me.isCreate ? 'admin': '',
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   } else {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'monhost',
+   vtype: 'HostList',
+   value: '',
+   fieldLabel: 'Monitor(s)',
+   allowBlank: false
+   },
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'username',
+   value: me.isCreate ? 'admin': '',
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   }
+
+   // here value is an array,
+   // while before it was a string
+   /*jslint confusion: true*/
+   me.column2 = [
+   {
+   xtype: 'pveContentTypeSelector',
+   fieldLabel: gettext('Content'),
+   name: 'content',
+   value: ['images'],
+   multiSelect: true,
+   allowBlank: false
+   }
+   ];
+   /*jslint confusion: false*/
+
+   me.callParent();
+}
+});
+
+Ext.define('PVE.storage.PVECephFSInputPanel', {
+extend: 'PVE.storage.CephFSInputPanel',
+
+pveceph: 1
+});
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 1/1] Cephfs storage plugin

2018-04-20 Thread Alwin Antreich
 - ability to mount through kernel and fuse client
 - allow mount options
 - get MONs from ceph config if not in storage.cfg
 - allow the use of ceph config with fuse client
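
The mount code itself isn't visible in the quoted part of the diff; conceptually
the two mount paths could look something like this sketch (all variable names
here are placeholders, not taken from the patch):

    my $cmd;
    if ($scfg->{fuse}) {
        # fuse client (assumed invocation)
        $cmd = ['/usr/bin/ceph-fuse', '-m', $monaddrs, '-n', "client.$userid",
                '--keyring', $keyring, $mountpoint];
    } else {
        # kernel client (assumed invocation, options as understood by mount.ceph)
        $cmd = ['/bin/mount', '-t', 'ceph', "$monaddrs:$subdir", $mountpoint,
                '-o', "name=$userid,secretfile=$secretfile"];
    }
    PVE::Tools::run_command($cmd, errmsg => "mount error");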

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 3b38304..368a5c9 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
PVE::Storage::activate_storage($cfg, $baseid);
 
PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
$opts->{vgname}, $opts->{shared});
-   } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
+   } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
!defined($opts->{monhost})) {
my $ceph_admin_keyring = 
'/etc/pve/priv/ceph.client.admin.keyring';
my $ceph_storage_keyring = 
"/etc/pve/priv/ceph/${storeid}.keyring";
 
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index ab07146..2d8d143 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE/API2/Storage/Status.pm
@@ -335,7 +335,7 @@ __PACKAGE__->register_method ({
my $scfg = PVE::Storage::storage_check_enabled($cfg, $param->{storage}, 
$node);
 
die "cant upload to storage type '$scfg->{type}'\n" 
-   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs');
+   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs' || $scfg->{type} eq 'cephfs');
 
my $content = $param->{content};
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d733380..f9732fe 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
 use PVE::Storage::CIFSPlugin;
 use PVE::Storage::ISCSIPlugin;
 use PVE::Storage::RBDPlugin;
+use PVE::Storage::CephFSPlugin;
 use PVE::Storage::SheepdogPlugin;
 use PVE::Storage::ISCSIDirectPlugin;
 use PVE::Storage::GlusterfsPlugin;
@@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
 PVE::Storage::CIFSPlugin->register();
 PVE::Storage::ISCSIPlugin->register();
 PVE::Storage::RBDPlugin->register();
+PVE::Storage::CephFSPlugin->register();
 PVE::Storage::SheepdogPlugin->register();
 PVE::Storage::ISCSIDirectPlugin->register();
 PVE::Storage::GlusterfsPlugin->register();
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
new file mode 100644
index 000..614a88f
--- /dev/null
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -0,0 +1,262 @@
+package PVE::Storage::CephFSPlugin;
+
+use strict;
+use warnings;
+use IO::File;
+use Net::IP;
+use File::Path;
+use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
+use PVE::Storage::Plugin;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::Storage::Plugin);
+
+my $hostlist = sub {
+my ($list_text, $separator) = @_;
+
+my @monhostlist = PVE::Tools::split_list($list_text);
+return join($separator, map {
+   my ($host, $port) = PVE::Tools::parse_host_and_port($_);
+   $port = defined($port) ? ":$port" : '';
+   $host = "[$host]" if Net::IP::ip_is_ipv6($host);
+   "${host}${port}"
+} @monhostlist);
+};
+
+my $parse_ceph_config = sub {
+my ($filename) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $filename;
+
+my $fh = IO::File->new($filename, "r") ||
+   die "unable to open '$filename' - $!\n";
+
+my $section;
+
+while (defined(my $line = <$fh>)) {
+   $line =~ s/[;#].*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $section = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$section) {
+   warn "no section - skip: $line\n";
+   next;
+   }
+
+   if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
+   $cfg->{$section}->{$1} = $2;
+   }
+
+}
+
+return $cfg;
+};
+
+my $get_monaddr_list = sub {
+my ($scfg, $configfile) = @_;
+
+my $server;
+my $no_mon = !defined($scfg->{monhost});
+
+if (($no_mon) && defined($configfile)) {
+   my $config = $parse_ceph_config->($configfile);
+   $server = join(',', sort { $a cmp $b }
+   map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config});

[pve-devel] [RFC storage/manager 0/2] Cephfs storage plugin

2018-04-20 Thread Alwin Antreich
This patch series adds CephFS to our list of storages. You can mount the
storage through the kernel or fuse client. The plugin for now allows all
content formats, but this needs further testing.

Config and keyfile locations are the same as in the RBD plugin.

Example entry:
cephfs: cephfs0
monhost 192.168.1.2:6789
path /mnt/pve/cephfs0
content iso,backup,images,vztmpl,rootdir
subdir /blubb
fuse 0
username admin

Comments and tests are very welcome. ;)

Alwin Antreich (1):
  Cephfs storage plugin

 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

Alwin Antreich (1):
  Cephfs storage wizard

 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 68 ++
 3 files changed, 79 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage 1/1] Cephfs storage plugin

2018-04-20 Thread Alwin Antreich
Hi,

On Fri, Apr 20, 2018 at 03:42:22PM +0200, Alexandre DERUMIER wrote:
> Hi,
> 
> +
> +sub plugindata {
> +return {
> +content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, 
> backup => 1},
> + { images => 1 }],
> +format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
> +};
> +} 
> 
> 
> I think we should forbid images, as I'm pretty sure that users will try it.
> 
OFC, they will try it. ;)

I will try to do some testing next week and if results are bad, I will adapt my
patch accordingly.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage] Fix #1750: set monhost split to old behavior

2018-05-04 Thread Alwin Antreich
The path method of the RBDPlugin got a comma-separated monhost list, but
qemu needs the list separated by semicolons.
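
For illustration, with the fix the generated qemu path string gets its mon
hosts joined by ';' again (made-up addresses):

    my $monhost = $hostlist->($scfg->{monhost}, ';');   # e.g. '10.0.0.1:6789;10.0.0.2:6789'
    $monhost =~ s/:/\\:/g;                              # colons need escaping in the path string
    $path .= ":mon_host=$monhost";
    $path .= ":auth_supported=$cmd_option->{auth_supported}";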

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/RBDPlugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1f54c37..f695548 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -345,7 +345,7 @@ sub path {
 if ($cmd_option->{ceph_conf}) {
$path .= ":conf=$cmd_option->{ceph_conf}";
 } else {
-   my $monhost = $cmd_option->{mon_host};
+   my $monhost = $hostlist->($scfg->{monhost}, ';');
$monhost =~ s/:/\\:/g;
$path .= ":mon_host=$monhost";
$path .= ":auth_supported=$cmd_option->{auth_supported}";
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 manager] Cephfs storage wizard

2018-05-17 Thread Alwin Antreich
 Add internal and external storage wizard for cephfs

Signed-off-by: Alwin Antreich 
---
 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 71 ++
 3 files changed, 82 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 7e9877b2..6f9b40ca 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -161,6 +161,7 @@ JSSRC=  
\
storage/IScsiEdit.js\
storage/LVMEdit.js  \
storage/LvmThinEdit.js  \
+   storage/CephFSEdit.js   \
storage/RBDEdit.js  \
storage/SheepdogEdit.js \
storage/ZFSEdit.js  \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ad5a0a61..f41a9562 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -427,6 +427,16 @@ Ext.define('PVE.Utils', { utilities: {
hideAdd: true,
faIcon: 'building'
},
+   cephfs: {
+   name: 'CephFS (PVE)',
+   ipanel: 'PVECephFSInputPanel',
+   faIcon: 'building'
+   },
+   cephfs_ext: {
+   name: 'CephFS (external)',
+   ipanel: 'CephFSInputPanel',
+   faIcon: 'building'
+   },
rbd: {
name: 'RBD',
ipanel: 'RBDInputPanel',
diff --git a/www/manager6/storage/CephFSEdit.js 
b/www/manager6/storage/CephFSEdit.js
new file mode 100644
index ..8f745b63
--- /dev/null
+++ b/www/manager6/storage/CephFSEdit.js
@@ -0,0 +1,71 @@
+Ext.define('PVE.storage.CephFSInputPanel', {
+extend: 'PVE.panel.StorageBase',
+
+initComponent : function() {
+   var me = this;
+
+   if (!me.nodename) {
+   me.nodename = 'localhost';
+   }
+   me.type = 'cephfs';
+
+   me.column1 = [];
+
+   if (me.pveceph) {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   nodename: me.nodename,
+   name: 'username',
+   value: '',
+   emptyText: gettext('admin'),
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   } else {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'monhost',
+   vtype: 'HostList',
+   value: '',
+   fieldLabel: 'Monitor(s)',
+   allowBlank: false
+   },
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'username',
+   value: '',
+   emptyText: gettext('admin'),
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   }
+
+   // here value is an array,
+   // while before it was a string
+   /*jslint confusion: true*/
+   me.column2 = [
+   {
+   xtype: 'pveContentTypeSelector',
+   cts: ['backup', 'iso', 'vztmpl'],
+   fieldLabel: gettext('Content'),
+   name: 'content',
+   value: ['backup'],
+   multiSelect: true,
+   allowBlank: false
+   }
+   ];
+   /*jslint confusion: false*/
+
+   me.callParent();
+}
+});
+
+Ext.define('PVE.storage.PVECephFSInputPanel', {
+extend: 'PVE.storage.CephFSInputPanel',
+
+pveceph: 1
+});
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC v2 storage/manager 0/2] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
This patch series is an update and adds CephFS to our list of storages.
You can mount the storage through the kernel or fuse client. The plugin for
now allows all content formats, but this needs further testing.

Config and keyfile locations are the same as in the RBD plugin.

Example entry:
cephfs: cephfs0
monhost 192.168.1.2:6789
path /mnt/pve/cephfs0
content iso,backup,images,vztmpl,rootdir
subdir /blubb
fuse 0
username admin

Comments and tests are very welcome. ;)

Changes in V2:
After some testing, I decided to remove the image/rootfs option from the
plugin in this version.
Also, cephfs doesn't report sparse files correctly through the stat() system
call, as cephfs doesn't track which parts are written. This will confuse users
looking at their image files and directories with tools such as du.

My test results:
### directly on cephfs
# fio --filename=/mnt/pve/cephfs0/testfile --size=10G --direct=1 --sync=1 
--rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based 
--group_reporting --name=cephfs-test
  WRITE: io=273200KB, aggrb=4553KB/s, minb=4553KB/s, maxb=4553KB/s, 
mint=60001msec, maxt=60001msec

### /dev/loop0 -> raw image on cephfs
# fio --filename=/dev/loop0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=258644KB, aggrb=4310KB/s, minb=4310KB/s, maxb=4310KB/s, 
mint=60001msec, maxt=60001msec

### /dev/rbd0 -> rbd image mapped 
# fio --filename=/dev/rbd0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=282064KB, aggrb=4700KB/s, minb=4700KB/s, maxb=4700KB/s, 
mint=60001msec, maxt=60001msec

### ext4 on mapped rbd image
# fio --ioengine=libaio --filename=/opt/testfile --size=10G --direct=1 --sync=1 
--rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based 
--group_reporting --name=fio
  WRITE: io=122608KB, aggrb=2043KB/s, minb=2043KB/s, maxb=2043KB/s, 
mint=60002msec, maxt=60002msec

### timed cp -r linux kernel source from tempfs
# -> cephfs
real0m23.522s
user0m0.744s
sys 0m3.292s

# -> /root/ (SSD MX100)
real0m3.318s
user0m0.502s
sys 0m2.770s

# -> rbd mapped ext4 (SM863a)
real0m3.313s
user0m0.441s
sys 0m2.826s


Alwin Antreich (1):
  Cephfs storage plugin

 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

Alwin Antreich (1):
  Cephfs storage wizard

 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 71 ++
 3 files changed, 82 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 storage] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
 - ability to mount through kernel and fuse client
 - allow mount options
 - get MONs from ceph config if not in storage.cfg
 - allow the use of ceph config with fuse client

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 3b38304..368a5c9 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
PVE::Storage::activate_storage($cfg, $baseid);
 
PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
$opts->{vgname}, $opts->{shared});
-   } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
+   } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
!defined($opts->{monhost})) {
my $ceph_admin_keyring = 
'/etc/pve/priv/ceph.client.admin.keyring';
my $ceph_storage_keyring = 
"/etc/pve/priv/ceph/${storeid}.keyring";
 
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index ab07146..2d8d143 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE/API2/Storage/Status.pm
@@ -335,7 +335,7 @@ __PACKAGE__->register_method ({
my $scfg = PVE::Storage::storage_check_enabled($cfg, $param->{storage}, 
$node);
 
die "cant upload to storage type '$scfg->{type}'\n" 
-   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs');
+   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs' || $scfg->{type} eq 'cephfs');
 
my $content = $param->{content};
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d733380..f9732fe 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
 use PVE::Storage::CIFSPlugin;
 use PVE::Storage::ISCSIPlugin;
 use PVE::Storage::RBDPlugin;
+use PVE::Storage::CephFSPlugin;
 use PVE::Storage::SheepdogPlugin;
 use PVE::Storage::ISCSIDirectPlugin;
 use PVE::Storage::GlusterfsPlugin;
@@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
 PVE::Storage::CIFSPlugin->register();
 PVE::Storage::ISCSIPlugin->register();
 PVE::Storage::RBDPlugin->register();
+PVE::Storage::CephFSPlugin->register();
 PVE::Storage::SheepdogPlugin->register();
 PVE::Storage::ISCSIDirectPlugin->register();
 PVE::Storage::GlusterfsPlugin->register();
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
new file mode 100644
index 000..a368c5b
--- /dev/null
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -0,0 +1,262 @@
+package PVE::Storage::CephFSPlugin;
+
+use strict;
+use warnings;
+use IO::File;
+use Net::IP;
+use File::Path;
+use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
+use PVE::Storage::Plugin;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::Storage::Plugin);
+
+my $hostlist = sub {
+my ($list_text, $separator) = @_;
+
+my @monhostlist = PVE::Tools::split_list($list_text);
+return join($separator, map {
+   my ($host, $port) = PVE::Tools::parse_host_and_port($_);
+   $port = defined($port) ? ":$port" : '';
+   $host = "[$host]" if Net::IP::ip_is_ipv6($host);
+   "${host}${port}"
+} @monhostlist);
+};
+
+my $parse_ceph_config = sub {
+my ($filename) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $filename;
+
+my $fh = IO::File->new($filename, "r") ||
+   die "unable to open '$filename' - $!\n";
+
+my $section;
+
+while (defined(my $line = <$fh>)) {
+   $line =~ s/[;#].*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $section = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$section) {
+   warn "no section - skip: $line\n";
+   next;
+   }
+
+   if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
+   $cfg->{$section}->{$1} = $2;
+   }
+
+}
+
+return $cfg;
+};
+
+my $get_monaddr_list = sub {
+my ($scfg, $configfile) = @_;
+
+my $server;
+my $no_mon = !defined($scfg->{monhost});
+
+if (($no_mon) && defined($configfile)) {
+   my $config = $parse_ceph_config->($configfile);
+   $server = join(',', sort { $a cmp $b }
+   map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config});

[pve-devel] [PATCH v2 storage] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
 - ability to mount through kernel and fuse client
 - allow mount options
 - get MONs from ceph config if not in storage.cfg
 - allow the use of ceph config with fuse client

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 6 files changed, 269 insertions(+), 2 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 3b38304..368a5c9 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
PVE::Storage::activate_storage($cfg, $baseid);
 
PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
$opts->{vgname}, $opts->{shared});
-   } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
+   } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
!defined($opts->{monhost})) {
my $ceph_admin_keyring = 
'/etc/pve/priv/ceph.client.admin.keyring';
my $ceph_storage_keyring = 
"/etc/pve/priv/ceph/${storeid}.keyring";
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d733380..f9732fe 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
 use PVE::Storage::CIFSPlugin;
 use PVE::Storage::ISCSIPlugin;
 use PVE::Storage::RBDPlugin;
+use PVE::Storage::CephFSPlugin;
 use PVE::Storage::SheepdogPlugin;
 use PVE::Storage::ISCSIDirectPlugin;
 use PVE::Storage::GlusterfsPlugin;
@@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
 PVE::Storage::CIFSPlugin->register();
 PVE::Storage::ISCSIPlugin->register();
 PVE::Storage::RBDPlugin->register();
+PVE::Storage::CephFSPlugin->register();
 PVE::Storage::SheepdogPlugin->register();
 PVE::Storage::ISCSIDirectPlugin->register();
 PVE::Storage::GlusterfsPlugin->register();
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
new file mode 100644
index 000..a368c5b
--- /dev/null
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -0,0 +1,262 @@
+package PVE::Storage::CephFSPlugin;
+
+use strict;
+use warnings;
+use IO::File;
+use Net::IP;
+use File::Path;
+use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
+use PVE::Storage::Plugin;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::Storage::Plugin);
+
+my $hostlist = sub {
+my ($list_text, $separator) = @_;
+
+my @monhostlist = PVE::Tools::split_list($list_text);
+return join($separator, map {
+   my ($host, $port) = PVE::Tools::parse_host_and_port($_);
+   $port = defined($port) ? ":$port" : '';
+   $host = "[$host]" if Net::IP::ip_is_ipv6($host);
+   "${host}${port}"
+} @monhostlist);
+};
+
+my $parse_ceph_config = sub {
+my ($filename) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $filename;
+
+my $fh = IO::File->new($filename, "r") ||
+   die "unable to open '$filename' - $!\n";
+
+my $section;
+
+while (defined(my $line = <$fh>)) {
+   $line =~ s/[;#].*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $section = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$section) {
+   warn "no section - skip: $line\n";
+   next;
+   }
+
+   if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
+   $cfg->{$section}->{$1} = $2;
+   }
+
+}
+
+return $cfg;
+};
+
+my $get_monaddr_list = sub {
+my ($scfg, $configfile) = @_;
+
+my $server;
+my $no_mon = !defined($scfg->{monhost});
+
+if (($no_mon) && defined($configfile)) {
+   my $config = $parse_ceph_config->($configfile);
+   $server = join(',', sort { $a cmp $b }
+   map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config});
+}else {
+   $server = $hostlist->($scfg->{monhost}, ',');
+}
+
+return $server;
+};
+
+my $get_configfile = sub {
+my ($storeid) = @_;
+
+my $configfile;
+my $pve_cephconfig = '/etc/pve/ceph.conf';
+my $storeid_cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
+
+if (-e $pve_cephconfig) {
+   if (-e $storeid_cephconfig) {
+   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
+   }
+   $configfile = $pve_cephconfig;
+} elsif (-e $storeid_cephconfig) {
+   $configfile = $storeid_cephconfig;
+} else {
+   die "Missing ceph config for ${storeid} storage\n";
+}
+
+return $configfile;

Re: [pve-devel] [PATCH qemu-server] Fix #1242 : clone_disk : call qga fstrim after clone

2018-05-28 Thread Alwin Antreich
On Mon, May 28, 2018 at 05:36:50PM +0200, Alexandre Derumier wrote:
> Some storage like rbd or lvm can't keep thin-provising after a qemu-mirror.
> 
> Call qga guest-fstrim if qga is available
> ---
>  PVE/API2/Qemu.pm   | 8 
>  PVE/QemuMigrate.pm | 5 +
>  2 files changed, 13 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8d4b10d..86fac9d 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2741,6 +2741,10 @@ __PACKAGE__->register_method({
>  
>   PVE::QemuConfig->write_config($newid, $newconf);
>  
> + if ($running && $conf->{agent} && 
> PVE::QemuServer::qga_check_running($vmid)) {
> + eval { PVE::QemuServer::vm_mon_cmd($vmid, 
> "guest-fstrim"); };
> + }
> +
>  if ($target) {
>   # always deactivate volumes - avoid lvm LVs to be 
> active on several nodes
>   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
> $snapname) if !$running;
> @@ -2918,6 +2922,10 @@ __PACKAGE__->register_method({
>  
>   PVE::QemuConfig->write_config($vmid, $conf);
>  
> + if ($running && $conf->{agent} && 
> PVE::QemuServer::qga_check_running($vmid)) {
> + eval { PVE::QemuServer::vm_mon_cmd($vmid, 
> "guest-fstrim"); };
> + }
> +
>   eval {
>   # try to deactivate volumes - avoid lvm LVs to be 
> active on several nodes
>   PVE::Storage::deactivate_volumes($storecfg, [ 
> $newdrive->{file} ])
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 27cf7e3..ab2258d 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -966,6 +966,11 @@ sub phase3_cleanup {
>   $self->{errors} = 1;
>   }
>   }
> +
> + if ($self->{storage_migration} && $conf->{qga} && $self->{running}) {
> + my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent','fstrim'];
> + eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => 
> sub {}) };
> + }
>  }
>  
>  # close tunnel on successful migration, on error phase2_cleanup closed it
> -- 
> 2.11.0
> 
I have some thoughts on your patch.

If I understood it right, fstrim is called on every migration with a running
guest agent. I guess the command is also called if you don't have discard
activated in the VM config, and then it might only produce an error
message.

Some users also like some of their VMs to be thick provisioned.

With multiple simultaneous migrations though, this would multiply the IO load
on the target system, as the fstrim starts while other VMs are still being
migrated. I think that might make users unhappy, especially since the behaviour
would change with your patch.

IMHO, it might be good to have a config option that sets whether a VM should do
an fstrim on migration (eg. qga-fstrim: 0/1). This way users are actively setting
it and know that this also has its drawbacks on their systems.

Please correct me if I'm wrong.
My two cents. ;)
--
Cheers,
Alwin

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] Fix #1242 : clone_disk : call qga fstrim after clone

2018-05-29 Thread Alwin Antreich
On Mon, May 28, 2018 at 08:49:29PM +0200, Alexandre DERUMIER wrote:
> >>If I understood it right, then the fstrim is called on every migrate with a 
> >>running guest agent. While, I guess the command is called also if you don't 
> >>have discard in the vm config activated and might only produce a error 
> >>message. 
> 
> It don't produce an error, but indeed, it does nothing in this case.
> I can add a check for discard option, to avoid the extra qga call.
If there is a config option (see below) then no extra care would be
needed here.

> 
> 
> 
> >>IMHO, it might be good to have a config option that sets if a VM should do a
> >>fstrim (eg. qga-fstrim: 0/1) on migration. This way users, are actively 
> >>setting
> >>it and are knowing that this also has its drawbacks on their systems.
> 
> maybe can we add it in datacenter.cfg ? or storage.cfg option ?
> 
I would think the best place is the vmid.conf itself, as maybe there
is one VM where I want to have flat images (eg. a DB server) and not for the
rest.

Maybe in a fashion like 'qga: fstrim=1,guest-exec=...'; I guess this
makes extending the guest-agent commands more straightforward too.
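
Purely hypothetical sketch of how such a property string could be consumed
(none of this exists yet; $qga_fmt would be the format definition for the
option):

    my $qga = PVE::JSONSchema::parse_property_string($qga_fmt, $conf->{qga});
    if ($running && $qga->{fstrim} && PVE::QemuServer::qga_check_running($vmid)) {
        eval { PVE::QemuServer::vm_mon_cmd($vmid, 'guest-fstrim'); };
        warn $@ if $@;
    }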


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


  1   2   3   4   >