Re: [pve-devel] cfs-locked 'authkey' operation: pve cluster filesystem not online

2021-05-24 Thread Dietmar Maurer
Hi Julien,


> Hello to all.
> 
> I plan to implement SSO authentication with the SAML
> protocol.
> However, I have an error that prevents me from validating the authentication
> process.
> It is about the locks.
> The first step is to store the request_saml_id. If I try to create a file via
> your libraries, I get a 500 error with msg:
> error during cfs-locked \'file-request_tmp\' operation: pve cluster 
> filesystem not online /etc/pve/priv/lock.

Your cluster fs is not working (pmxcfs). Seems you run on a broken installation.

> https://github.com/jbsky/proxmox-saml2-auth/commit/d75dc621aae719c8fdd251859af9641cda0e526b
> Ok, I can make a temp workaround.
> 
> 2nd step :
> When I try to create a ticket with the function create_ticket in package 
> PVE::API2::AccessControl;
> I've got this error :
> authentication failure; rhost=127.0.0.1 user=admin@DOM msg=error during 
> cfs-locked 'authkey' operation: pve cluster filesystem not online 
> /etc/pve/priv/lock

Again, the pmxcfs is not online.

> src : 
> https://github.com/jbsky/proxmox-saml2-auth/commit/93b02727d2e172968c14c4ce3a7c27e8d5c0feb0
> 
> I have really bad luck with these locks!
> Can you help me to understand the prerequisites to make the lock work?

You need a working PVE installation before doing any API calls...


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 pve-container 0/1] add ipam support

2021-05-24 Thread Alexandre Derumier
Changelog v2:

- refactor code
- move code from PVE::LXC::Config to PVE::LXC
- add update_net_ip tests
- fix bugs when changing from a vnet with ipam to a vnet without ipam / without
subnets / a classic vmbr
- add support for snapshot rollback
- add support for backup restore


Changelog v3:

- small fix for a forgotten PVE::LXC change in del_net_ip


Alexandre Derumier (1):
  add ipam support

 src/PVE/LXC.pm| 144 ++
 src/PVE/LXC/Config.pm |  58 +++
 src/PVE/LXC/Create.pm |  33 +++-
 src/test/Makefile |   5 +-
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  17 +++
 .../ipam_config   |   7 +
 .../net   |   7 +
 .../net.expected  |   7 +
 .../oldnet|   7 +
 .../sdn_config|  35 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../net   |   7 +
 .../net.expected  |   7 +
 .../oldnet|   7 +
 .../sdn_config|  35 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  17 +++
 .../ipam_config   |   7 +
 .../net   |   7 +
 .../net.expected  |   7 +
 .../oldnet|   7 +
 .../sdn_config|  35 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../net   |   6 +
 .../net.expected  |   7 +
 .../oldnet|   7 +
 .../sdn_config|  35 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../net   |   6 +
 .../net.expected  |   7 +
 .../oldnet|   7 +
 .../sdn_config|  35 +
 .../ipv4_changeip_samevnet_with_ipam/ipam.db  |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../ipv4_changeip_samevnet_with_ipam/net  |   7 +
 .../net.expected  |   7 +
 .../ipv4_changeip_samevnet_with_ipam/oldnet   |   7 +
 .../sdn_config|  35 +
 .../ipv4_changeip_vmbr0_to_ipamvnet/ipam.db   |  17 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../ipams/ipv4_changeip_vmbr0_to_ipamvnet/net |   7 +
 .../net.expected  |   7 +
 .../ipv4_changeip_vmbr0_to_ipamvnet/oldnet|   7 +
 .../sdn_config|  35 +
 .../ipv4_changeip_vmbr0_to_noipamvnet/ipam.db |  17 +++
 .../ipam.db.expected  |  17 +++
 .../ipam_config   |   7 +
 .../ipv4_changeip_vmbr0_to_noipamvnet/net |   7 +
 .../net.expected  |   7 +
 .../ipv4_changeip_vmbr0_to_noipamvnet/oldnet  |   7 +
 .../sdn_config|  35 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../net   |   7 +
 .../net.expected  |   8 +
 .../oldnet|   7 +
 .../sdn_config|  38 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  17 +++
 .../ipam_config   |   7 +
 .../net   |   8 +
 .../net.expected  |   8 +
 .../oldnet|   7 +
 .../sdn_config|  38 +
 .../ipam.db   |  18 +++
 .../ipam.db.expected  |  18 +++
 .../ipam_config   |   7 +
 .../net   |   7 +
 .../net.expected  |   7 +
 .../oldnet|   8 +
 .../sdn_config|  36 +
 .../ipams/ipv4_updateipam_ipamvnet/ipam.db   

[pve-devel] [PATCH pve-network] vnets: subroutines: return if !$vnetid

2021-05-24 Thread Alexandre Derumier
---
 PVE/Network/SDN/Vnets.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/Network/SDN/Vnets.pm b/PVE/Network/SDN/Vnets.pm
index 8c9629d..86967a3 100644
--- a/PVE/Network/SDN/Vnets.pm
+++ b/PVE/Network/SDN/Vnets.pm
@@ -52,6 +52,8 @@ sub complete_sdn_vnet {
 sub get_vnet {
 my ($vnetid, $running) = @_;
 
+return if !$vnetid;
+
 my $cfg = {};
 if($running) {
my $cfg = PVE::Network::SDN::running_config();
@@ -68,6 +70,8 @@ sub get_vnet {
 sub get_subnets {
 my ($vnetid) = @_;
 
+return if !$vnetid;
+
 my $subnets = undef;
 my $subnets_cfg = PVE::Network::SDN::Subnets::config();
 foreach my $subnetid (sort keys %{$subnets_cfg->{ids}}) {
@@ -130,6 +134,8 @@ sub get_next_free_cidr {
 sub add_cidr {
 my ($vnetid, $cidr, $hostname, $mac, $description) = @_;
 
+return if !$vnetid;
+
 my ($zone, $subnetid, $subnet, $ip) = 
PVE::Network::SDN::Vnets::get_subnet_from_vnet_cidr($vnetid, $cidr);
 PVE::Network::SDN::Subnets::add_ip($zone, $subnetid, $subnet, $ip, 
$hostname, $mac, $description);
 }
@@ -137,6 +143,8 @@ sub add_cidr {
 sub update_cidr {
 my ($vnetid, $cidr, $hostname, $oldhostname, $mac, $description) = @_;
 
+return if !$vnetid;
+
 my ($zone, $subnetid, $subnet, $ip) = 
PVE::Network::SDN::Vnets::get_subnet_from_vnet_cidr($vnetid, $cidr);
 PVE::Network::SDN::Subnets::update_ip($zone, $subnetid, $subnet, $ip, 
$hostname, $oldhostname, $mac, $description);
 }
@@ -144,6 +152,8 @@ sub update_cidr {
 sub del_cidr {
 my ($vnetid, $cidr, $hostname) = @_;
 
+return if !$vnetid;
+
 my ($zone, $subnetid, $subnet, $ip) = 
PVE::Network::SDN::Vnets::get_subnet_from_vnet_cidr($vnetid, $cidr);
 PVE::Network::SDN::Subnets::del_ip($zone, $subnetid, $subnet, $ip, 
$hostname);
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH zsync 6/6] fix #3351: allow keeping a different number of snapshots on source and destination

2021-05-24 Thread Bruce Wainer
Hello Fabian,
Since this is a series of patches, could you provide the full pve-zsync
file with all the patches? It would be easier for me to test it this way.
Thank you,
Bruce

On Tue, May 11, 2021 at 9:00 AM Fabian Ebner  wrote:

> by introducing a new dest-maxsnap parameter which can be used to override
> maxsnap for the destination side.
>
> This is useful for backups, as one can potentially save a lot of space on
> the
> source side (or the destination side if one can come up with a use case for
> that) by keeping fewer snapshots around.
>
> Signed-off-by: Fabian Ebner 
> ---
>  pve-zsync | 25 +++--
>  1 file changed, 23 insertions(+), 2 deletions(-)
>
> diff --git a/pve-zsync b/pve-zsync
> index 1213361..39ead0d 100755
> --- a/pve-zsync
> +++ b/pve-zsync
> @@ -244,6 +244,7 @@ sub parse_argv {
> verbose => undef,
> limit => undef,
> maxsnap => undef,
> +   dest_maxsnap => undef,
> name => undef,
> skip => undef,
> method => undef,
> @@ -261,6 +262,7 @@ sub parse_argv {
> 'verbose' => \$param->{verbose},
> 'limit=i' => \$param->{limit},
> 'maxsnap=i' => \$param->{maxsnap},
> +   'dest-maxsnap=i' => \$param->{dest_maxsnap},
> 'name=s' => \$param->{name},
> 'skip' => \$param->{skip},
> 'method=s' => \$param->{method},
> @@ -336,6 +338,7 @@ sub param_to_job {
>  $job->{method} = "ssh" if !$job->{method};
>  $job->{limit} = $param->{limit};
>  $job->{maxsnap} = $param->{maxsnap};
> +$job->{dest_maxsnap} = $param->{dest_maxsnap};
>  $job->{source} = $param->{source};
>  $job->{source_user} = $param->{source_user};
>  $job->{dest_user} = $param->{dest_user};
> @@ -460,6 +463,7 @@ sub format_job {
>  $text .= " root";
>  $text .= " $PROGNAME sync --source $job->{source} --dest
> $job->{dest}";
>  $text .= " --name $job->{name} --maxsnap $job->{maxsnap}";
> +$text .= " --dest-maxsnap $job->{dest_maxsnap}" if
> defined($job->{dest_maxsnap});
>  $text .= " --limit $job->{limit}" if $job->{limit};
>  $text .= " --method $job->{method}";
>  $text .= " --verbose" if $job->{verbose};
> @@ -681,20 +685,31 @@ sub sync {
>
> ($dest->{old_snap}, $dest->{last_snap}) = snapshot_get(
> $dest_dataset,
> -   $param->{maxsnap},
> +   $param->{dest_maxsnap} // $param->{maxsnap},
> $param->{name},
> $dest->{ip},
> $param->{dest_user},
> );
>
> +   ($source->{old_snap}) = snapshot_get(
> +   $source->{all},
> +   $param->{maxsnap},
> +   $param->{name},
> +   $source->{ip},
> +   $param->{source_user},
> +   );
> +
> prepare_prepended_target($source, $dest, $param->{dest_user})
> if defined($dest->{prepend});
>
> snapshot_add($source, $dest, $param->{name}, $date,
> $param->{source_user}, $param->{dest_user});
>
> send_image($source, $dest, $param);
>
> -   for my $old_snap (@{$dest->{old_snap}}) {
> +   for my $old_snap (@{$source->{old_snap}}) {
> snapshot_destroy($source->{all}, $old_snap, $source->{ip},
> $param->{source_user});
> +   }
> +
> +   for my $old_snap (@{$dest->{old_snap}}) {
> snapshot_destroy($dest_dataset, $old_snap, $dest->{ip},
> $param->{dest_user});
> }
> };
> @@ -1157,6 +1172,9 @@ $PROGNAME create --dest  --source 
> [OPTIONS]
> The number of snapshots to keep until older ones are
> erased.
> The default is 1, use 0 for unlimited.
>
> +   --dest-maxsnap   integer
> +   Override maxsnap for the destination dataset.
> +
> --name  string
> The name of the sync job, if not set it is default
>
> @@ -1197,6 +1215,9 @@ $PROGNAME sync --dest  --source 
> [OPTIONS]\n
> The number of snapshots to keep until older ones are
> erased.
> The default is 1, use 0 for unlimited.
>
> +   --dest-maxsnap   integer
> +   Override maxsnap for the destination dataset.
> +
> --name  string
> The name of the sync job, if not set it is 'default'.
> It is only necessary if scheduler allready contains this
> source.
> --
> 2.20.1
>
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] RE : pve-devel Digest, Vol 132, Issue 53

2021-05-24 Thread wb
Hi Dietmar,

Thank you for your feedback.

However, since I am starting on a new installation, I am surprised to get this 
kind of answer.
« Your cluster fs is not working (pmxcfs). Seems you run on a broken
installation. »
Or 
« You need a working PVE installation before doing any API calls... »

With the following command, I can see the process is up:

ps aux | grep pmxcfs


I think I have enough knowledge about SAML and Perl to do it; however, the
support of a dev would be ideal, at least on the lock part.

I'm trying to implement a new API so that Proxmox authentication works with
SAMLv2.

I would have preferred to have more info on the following part:
# this is just a readonly copy, the relevant one is in status.c from pmxcfs
# observed files are the one we can get directly through IPCC, they are cached
# using a computed version and only those can be used by the cfs_*_file methods

To bring a little more detail, I added a file to the following list in
the PVE::Cluster module:
my $observed = {
'request.tmp' => 1,

Still in the PVE::Cluster module, it is in the following part that it
blocks:


If I take the error message from the first email,
«  error during cfs-locked \'file-request_tmp\' operation: pve cluster 
filesystem not online /etc/pve/priv/lock. »
If I test the dir /etc/pve/priv/lock, it exists!

Do the files we add in the PVE::Cluster module need to be listed in
/var/lib/pve-cluster/config.db? If so, is there any spec, please?

Thanking you in advance, 

Sincerely,

Julien BLAIS


From: pve-devel-requ...@lists.proxmox.com
Sent: Monday, 24 May 2021 12:00
To: pve-devel@lists.proxmox.com
Subject: pve-devel Digest, Vol 132, Issue 53



Today's Topics:

   1. cfs-locked 'authkey' operation: pve cluster filesystem not
  online (wb)
   2. Re: cfs-locked 'authkey' operation: pve cluster filesystem
  not online (Dietmar Maurer)


--

Message: 1
Date: Sun, 23 May 2021 23:23:23 +0200
From: wb 
To: "pve-devel@lists.proxmox.com" 
Subject: [pve-devel] cfs-locked 'authkey' operation: pve cluster
filesystem not online
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hello to all.

I plan to implement SSO authentication with the SAML
protocol.
However, I have an error that prevents me from validating the authentication
process.
It is about the locks.
The first step is to store the request_saml_id. If I try to create a file via
your libraries, I get a 500 error with msg:
error during cfs-locked \'file-request_tmp\' operation: pve cluster filesystem 
not online /etc/pve/priv/lock.
https://github.com/jbsky/proxmox-saml2-auth/commit/d75dc621aae719c8fdd251859af9641cda0e526b
Ok, I can make a temp workaround.

2nd step:
When I try to create a ticket with the function create_ticket in package 
PVE::API2::AccessControl;
I've got this error :
authentication failure; rhost=127.0.0.1 user=admin@DOM msg=error during 
cfs-locked 'authkey' operation: pve cluster filesystem not online 
/etc/pve/priv/lock
src : 
https://github.com/jbsky/proxmox-saml2-auth/commit/93b02727d2e172968c14c4ce3a7c27e8d5c0feb0

I have really bad luck with these locks!
Can you help me to understand the prerequisites to make the lock work?


If you want to initiate a redirect to an identity provider (IdP, e.g. Keycloak), use
this URL:
https://pve/api2/html/access/saml?realm=DOM

After authentication on the IdP side, the IdP posts back to pve at
https://pve/api2/html/access/saml.


I'm sorry to work in a separate repository; it's because I don't know your
components very well.

I would be grateful if you could tell me how to debug these locks.

Thanking you in advance, 

Sincerely,

Julien BLAIS



Re: [pve-devel] [PATCH zsync 6/6] fix #3351: allow keeping a different number of snapshots on source and destination

2021-05-24 Thread Thomas Lamprecht
On 24.05.21 18:00, Bruce Wainer wrote:
> Hello Fabian,
> Since this is a series of patches, could you provide the full pve-zsync
> file with all the patches? It would be easier for me to test it this way.

FYI: If you have git ready (`apt install git` otherwise), it wouldn't be too hard to
apply multiple patches from a mail:

1. git clone git://git.proxmox.com/git/pve-zsync.git

2. save all patch-mails from 1/6 to 6/6 in a folder, e.g. /tmp/patches

3. git am /tmp/patches/*



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] RE : pve-devel Digest, Vol 132, Issue 53

2021-05-24 Thread Thomas Lamprecht
On 24.05.21 23:45, wb wrote:
> However, since I am starting on a new installation, I am surprised to get 
> this kind of answer.
> « Your cluster fs is not working (pmxcfs). Seems you run on a broken
> installation. »
> Or 
> « You need a working PVE installation before doing any API calls... »
> 
> With the following command, I can see the process is up:
> 
> ps aux | grep pmxcfs
> 

running does not mean working...

What's the output/status of:

# systemctl status pve-cluster 
# touch /etc/pve/foo
# findmnt /etc/pve

> 
> I think I have enough knowledge about SAML and Perl to do it; however, the
> support of a dev would be ideal, at least on the lock part.
> 

Nobody questioned that..

> I'm trying to implement a new API so that Proxmox authentication works with
> SAMLv2.

Yes, as you stated in the initial mail..

> 
> I would have preferred to have more info on the following part:
> # this is just a readonly copy, the relevant one is in status.c from pmxcfs
> # observed files are the one we can get directly through IPCC, they are cached
> # using a computed version and only those can be used by the cfs_*_file 
> methods
> 

I'd suggest ignoring the pmxcfs-internal optimized cache-using part; you do not need
that for a start. Just use the common file_get_contents / file_set_contents helpers from
the PVE::Tools module; you could do everything with those for now and only later
migrate to an optimized cfs_*_{read,write} helper.
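
A minimal sketch of that suggestion, assuming a PVE host where pve-common's PVE::Tools is available (the state-file path and its contents are made up for illustration, not an existing PVE convention):

```perl
use strict;
use warnings;

use PVE::Tools;

# Illustrative only: neither this path nor the file format is an
# existing PVE convention; it just shows the plain helpers in action.
my $statefile = '/etc/pve/priv/saml-request-ids';

# file_set_contents() writes to a temporary file and renames it into
# place, so readers never observe a partially written file.
PVE::Tools::file_set_contents($statefile, "request-id-12345\n");

my $content = PVE::Tools::file_get_contents($statefile);
print $content;
```

The atomic write via temp file plus rename is usually enough for a first prototype; cluster-wide locking via the cfs_* machinery can come later.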

> To bring a little more detail, I added a file to the following list
> in the PVE::Cluster module:
> my $observed = {
> 'request.tmp' => 1,
> 
> Still in the PVE::Cluster module, it is in the following part that it
> blocks:
> 
> 
> If I take the error message from the first email,
> «  error during cfs-locked \'file-request_tmp\' operation: pve cluster 
> filesystem not online /etc/pve/priv/lock. »
> If I test the dir /etc/pve/priv/lock, it exists!

Existence is not a problem. pmxcfs is a clustered realtime configuration filesystem;
it either may not be mounted (and again, running is not always a 100% guarantee that
it is still mounted), or it is in a cluster (or thinks it's in a cluster due to
`/etc/corosync/corosync.conf` and/or `/etc/pve/corosync.conf` existing) but has no
quorum, i.e., is read-only.

> 
> Do the files we add in the PVE::Cluster module need to be listed in
> /var/lib/pve-cluster/config.db? If so, is there any spec, please?

no, that's the backing DB; I'd heavily recommend not modifying that one directly if
unsure. Those files always get created on the FUSE VFS layer (besides the very barebone
initial one we create with a small helper).

Note: you need the correct permissions in your service; it must be in the www-data
group to be able to read/test directory existence, and run as root for writing.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel