Re: [pve-devel] [PATCH zsync 6/6] fix #3351: allow keeping a different number of snapshots on source and destination
Hello Fabian,

Since this is a series of patches, could you provide the full pve-zsync file
with all the patches? It would be easier for me to test it this way.

Thank you,
Bruce

On Tue, May 11, 2021 at 9:00 AM Fabian Ebner wrote:

> by introducing a new dest-maxsnap parameter which can be used to override
> maxsnap for the destination side.
>
> This is useful for backups, as one can potentially save a lot of space on the
> source side (or the destination side if one can come up with a use case for
> that) by keeping fewer snapshots around.
>
> Signed-off-by: Fabian Ebner
> ---
>  pve-zsync | 25 +++--
>  1 file changed, 23 insertions(+), 2 deletions(-)
>
> diff --git a/pve-zsync b/pve-zsync
> index 1213361..39ead0d 100755
> --- a/pve-zsync
> +++ b/pve-zsync
> @@ -244,6 +244,7 @@ sub parse_argv {
>      verbose => undef,
>      limit => undef,
>      maxsnap => undef,
> +    dest_maxsnap => undef,
>      name => undef,
>      skip => undef,
>      method => undef,
> @@ -261,6 +262,7 @@ sub parse_argv {
>      'verbose' => \$param->{verbose},
>      'limit=i' => \$param->{limit},
>      'maxsnap=i' => \$param->{maxsnap},
> +    'dest-maxsnap=i' => \$param->{dest_maxsnap},
>      'name=s' => \$param->{name},
>      'skip' => \$param->{skip},
>      'method=s' => \$param->{method},
> @@ -336,6 +338,7 @@ sub param_to_job {
>      $job->{method} = "ssh" if !$job->{method};
>      $job->{limit} = $param->{limit};
>      $job->{maxsnap} = $param->{maxsnap};
> +    $job->{dest_maxsnap} = $param->{dest_maxsnap};
>      $job->{source} = $param->{source};
>      $job->{source_user} = $param->{source_user};
>      $job->{dest_user} = $param->{dest_user};
> @@ -460,6 +463,7 @@ sub format_job {
>      $text .= " root";
>      $text .= " $PROGNAME sync --source $job->{source} --dest $job->{dest}";
>      $text .= " --name $job->{name} --maxsnap $job->{maxsnap}";
> +    $text .= " --dest-maxsnap $job->{dest_maxsnap}" if defined($job->{dest_maxsnap});
>      $text .= " --limit $job->{limit}" if $job->{limit};
>      $text .= " --method $job->{method}";
>      $text .= " --verbose" if $job->{verbose};
> @@ -681,20 +685,31 @@ sub sync {
>
>      ($dest->{old_snap}, $dest->{last_snap}) = snapshot_get(
>          $dest_dataset,
> -        $param->{maxsnap},
> +        $param->{dest_maxsnap} // $param->{maxsnap},
>          $param->{name},
>          $dest->{ip},
>          $param->{dest_user},
>      );
>
> +    ($source->{old_snap}) = snapshot_get(
> +        $source->{all},
> +        $param->{maxsnap},
> +        $param->{name},
> +        $source->{ip},
> +        $param->{source_user},
> +    );
> +
>      prepare_prepended_target($source, $dest, $param->{dest_user})
>          if defined($dest->{prepend});
>
>      snapshot_add($source, $dest, $param->{name}, $date,
>          $param->{source_user}, $param->{dest_user});
>
>      send_image($source, $dest, $param);
>
> -    for my $old_snap (@{$dest->{old_snap}}) {
> +    for my $old_snap (@{$source->{old_snap}}) {
>          snapshot_destroy($source->{all}, $old_snap, $source->{ip},
>              $param->{source_user});
> +    }
> +
> +    for my $old_snap (@{$dest->{old_snap}}) {
>          snapshot_destroy($dest_dataset, $old_snap, $dest->{ip},
>              $param->{dest_user});
>      }
>  };
> @@ -1157,6 +1172,9 @@ $PROGNAME create --dest --source [OPTIONS]
>      The number of snapshots to keep until older ones are erased.
>      The default is 1, use 0 for unlimited.
>
> +    --dest-maxsnap integer
> +    Override maxsnap for the destination dataset.
> +
>      --name string
>      The name of the sync job, if not set it is default
>
> @@ -1197,6 +1215,9 @@ $PROGNAME sync --dest --source [OPTIONS]\n
>      The number of snapshots to keep until older ones are erased.
>      The default is 1, use 0 for unlimited.
>
> +    --dest-maxsnap integer
> +    Override maxsnap for the destination dataset.
> +
>      --name string
>      The name of the sync job, if not set it is 'default'.
>      It is only necessary if scheduler allready contains this source.
> --
> 2.20.1
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
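For anyone who wants to try the series before it is packaged, here is a
minimal usage sketch of the new option. The dataset, host and job names are
placeholders; the flags are the ones emitted by format_job in the patch above.

    # keep 7 snapshots on the source, but only 2 on the backup target
    pve-zsync sync --source rpool/data/vm-100-disk-0 \
        --dest backup-host:backuppool/zsync \
        --name nightly --maxsnap 7 --dest-maxsnap 2 --method ssh --verbose

With --dest-maxsnap unset, the patch falls back to maxsnap
($param->{dest_maxsnap} // $param->{maxsnap}), so existing jobs should behave
as before.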
Re: [pve-devel] [PATCH pve-zsync 0/1] Allow pve-zsync jobs to share dest
Hello,

It has been 5 months since my patch was applied; however, the version for
pve-zsync has not been incremented and this patch is not in the version
presented by the repo. What needs to be done?

Thank you,
Bruce Wainer

On Tue, Jun 23, 2020 at 4:48 AM Wolfgang Link wrote:

> Hi Bruce,
>
> Once you've submitted the CLA, a member of the release team will take care
> of this patch.
> You will be notified when it is committed or when something is missing.
>
> Regards
>
> Wolfgang
>
> > On 06/22/2020 2:38 PM Bruce Wainer wrote:
> >
> >
> > Wolfgang,
> > Thanks for the confirmation. What is the next step?
> > Thank you,
> > Bruce
> >
> > > On Jun 22, 2020, at 7:43 AM, Wolfgang Link wrote:
> > >
> > > Look good to me
> > > I tested it and it works. There are no upgrade problems.
> > > Even if jobs already exist.
> > >
> > > Regards
> > >
> > > Wolfgang
> > >
> > >> On 06/17/2020 6:44 AM Wolfgang Link wrote:
> > >>
> > >>
> > >> Hi,
> > >>
> > >> thank you for this patch and the work.
> > >> I will look at this patch and give you feedback.
> > >>
> > >> Regards
> > >> Wolfgang
> > >>
> > >>>> On 06/16/2020 8:53 PM Bruce Wainer wrote:
> > >>>
> > >>>
> > >>> By flipping Source and Dest in snapshot_get and snapshot_exist, we can allow
> > >>> multiple sync jobs to share the same source.
> > >>> snapshot_get now checks the destination instead of source, and sets last_sync to
> > >>> the last snapshot regardless of name. old_sync and whether to delete it is still
> > >>> based on the job/name.
> > >>> snapshot_exist now checks the source instead of the destination.
> > >>> Other functions and/or their calls are changed to match the new situation.
> > >>>
> > >>> Bruce Wainer (1):
> > >>>   pve-zsync: Flip Source and Dest in functions so jobs can share Dest
> > >>>
> > >>>  pve-zsync | 42 +-
> > >>>  1 file changed, 25 insertions(+), 17 deletions(-)
> > >>>
> > >>> --
> > >>> 2.20.1
> > >>>
> > >>> ___
> > >>> pve-devel mailing list
> > >>> pve-de...@pve.proxmox.com
> > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > >>
> > >> ___
> > >> pve-devel mailing list
> > >> pve-de...@pve.proxmox.com
> > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> > >
> > > Best Regards,
> > > Wolfgang Link
> > > w.l...@proxmox.com
> > > http://www.proxmox.com
> > >
> > > Proxmox Server Solutions GmbH
> > > Bräuhausgasse 37, 1050 Vienna, Austria
> > > Commercial register no.: FN 258879 f
> > > Registration office: Handelsgericht Wien
> > >

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
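For context, the configuration the series is meant to enable looks roughly
like the sketch below: two jobs with different names and different sources
replicating into the same destination dataset. Host addresses and dataset
names are placeholders, and the exact snapshot semantics are defined by the
patch itself, not by this sketch.

    pve-zsync create --source 192.168.1.10:rpool/data/vm-100-disk-0 \
        --dest tank/zsync --name pve1-vm100 --maxsnap 7
    pve-zsync create --source 192.168.1.11:rpool/data/vm-200-disk-0 \
        --dest tank/zsync --name pve2-vm200 --maxsnap 7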
Re: [pve-devel] [PATCH pve-zsync 0/1] Allow pve-zsync jobs to share dest
Awesome, thank you very much!

On Thu, Nov 26, 2020 at 12:06 AM Dietmar Maurer wrote:

> > It has been 5 months since my patch has been applied, however the version
> > for pve-zsync has not been incremented and this patch is not in the version
> > presented by the repo. What needs to be done?
>
> Ok, just bumped the version and created a new package. So this will be part
> of the next release (soon) ...
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
[pve-devel] [pve-manager] sdn: Adding phpIPAM as IPAM provider
Hello,

In your new SDN environment, are there plans for a plugin for phpIPAM? If
not, can you provide any starting points for developing one?

Thank you,
Bruce Wainer

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [pve-manager] sdn: Adding phpIPAM as IPAM provider
Great thanks, I thought I had seen it in one of the patches but when I
looked back at some of the more recent submissions it looked like only the
internal IPAM was being implemented at first.

Bruce

On Thu, Dec 10, 2020 at 3:33 AM alexandre derumier wrote:

> Hi,
> phpIPAM is already implemented :)
>
> (currently phpIPAM && netbox)
>
> On 09/12/2020 20:01, Bruce Wainer wrote:
> > Hello,
> > In your new SDN environment, are there plans for a plugin for phpIPAM? If
> > not, can you provide any starting points for developing one?
> > Thank you,
> > Bruce Wainer
> > ___
> > pve-devel mailing list
> > pve-devel@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [pve-manager] sdn: Adding phpIPAM as IPAM provider
Thank you, Alexandre. I will definitely be testing SDN and the integration
with phpIPAM once released. Looking at the code, it doesn't appear that the
MAC of the interface is being passed into phpIPAM; if the MAC is known, I'd
prefer that it be passed via add_ip and add_next_freeip.

Bruce

On Fri, Dec 11, 2020 at 10:18 AM alexandre derumier wrote:

> The code is already commited :)
>
> https://git.proxmox.com/?p=pve-network.git;a=blob;f=PVE/Network/SDN/Ipams/PhpIpamPlugin.pm;h=6261764ffb5cb9d7b2bc865704ac74663790d860;hb=HEAD
>
> It's not yet released, but I'll keep you in touch for testing if you want.
>
> (I have implement basic things (add/del subnet, add/del ip, find next
> free ip), but if you need more options, I could implement them too)
>
>
> On 10/12/2020 22:41, Bruce Wainer wrote:
> > Great thanks, I thought I had seen it in one of the patches but when I
> > looked back at some of the more recent submissions it looked like only the
> > internal IPAM was being implemented at first.
> > Bruce
> >
> > On Thu, Dec 10, 2020 at 3:33 AM alexandre derumier wrote:
> >
> >> Hi,
> >> phpIPAM is already implemented :)
> >>
> >> (currently phpIPAM && netbox)
> >>
> >> On 09/12/2020 20:01, Bruce Wainer wrote:
> >>> Hello,
> >>> In your new SDN environment, are there plans for a plugin for phpIPAM? If
> >>> not, can you provide any starting points for developing one?
> >>> Thank you,
> >>> Bruce Wainer
> >>> ___
> >>> pve-devel mailing list
> >>> pve-devel@lists.proxmox.com
> >>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >>>
> >>
> >> ___
> >> pve-devel mailing list
> >> pve-devel@lists.proxmox.com
> >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >>
> >
> > ___
> > pve-devel mailing list
> > pve-devel@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
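To make the request concrete: the MAC would simply travel along with the
address object that the plugin creates. The sketch below is purely
illustrative - the endpoint and field names follow the public phpIPAM REST
API (whose addresses controller accepts an optional mac field, if its
documentation is read correctly here), not the PhpIpamPlugin.pm code linked
above, and the URL, app id and token are placeholders.

    curl -s -X POST "https://phpipam.example.com/api/pve/addresses/" \
        -H "token: $TOKEN" -H "Content-Type: application/json" \
        -d '{"subnetId":"7","ip":"10.0.0.25","hostname":"vm100","mac":"aa:bb:cc:dd:ee:ff"}'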
Re: [pve-devel] [PATCH] ZFS ARC size not taken into account by pvestatd or ksmtuned
Hello Stephane,

I believe this change is very important and would fix an issue I'm having on
some of my servers. However, the Proxmox team seems to be particular about
how they accept patches (which I don't blame them for; it just is what it
is). The details are here: https://pve.proxmox.com/wiki/Developer_Documentation
- but in general you need to submit a Contributor Agreement and submit the
patch in a way that makes it easy for them to apply it. Alternatively, if you
are not willing to go through the steps of submitting this yourself, we might
be able to work things out so I can submit it. Please let me know.

Sincerely,
Bruce Wainer

On Wed, Jan 20, 2021 at 8:44 AM Stephane Chazelas wrote:

> [note that I'm not subscribed to the list]
>
> Hello,
>
> I've been meaning to send this years ago. Sorry for the delay.
>
> We've been maintaining the patch below on our servers for years
> now (since 2015), even before ZFS was officially supported by
> PVE.
>
> We had experienced VM balloons swelling and processes in VMs
> running out of memory even though the host had tons of RAM.
>
> We had tracked that down to pvestatd reclaiming memory from the
> VMs. pvestatd targets 80% memory utilisation (in terms of memory
> that is not free and not in buffers or caches: (memtotal -
> memfree - buffers - cached) / memtotal).
>
> The problem is that the ZFS ARC is tracked independently (not as
> part of "buffers" or "cached" above).
>
> The size of that ARC cache also adapts with memory pressure. But
> here, since the autoballooning frees memory as soon as it's used
> up by the ARC, the ARC size grows and grows while VMs access
> their disk, and we've got plenty of wasted free memory that is
> never used.
>
> So in the end, with an ARC allowed to grow up to half the RAM,
> we end up in a situation where pvestatd in effect targets 30%
> max memory utilisation (with 20% free or in buffers and 50% in
> ARC).
>
> Something similar happens for KSM (memory page deduplication).
> /usr/sbin/ksmtuned monitors memory utilisation (again
> total-cached-buffers-free) against kvm process memory
> allocation, and tells the ksm daemon to scan more and more
> pages, more and more aggressively as long as the "used" memory
> is above 80%.
>
> That probably explains why performances decrease significantly
> after a while and why doing a "echo 3 >
> /proc/sys/vm/drop_caches" (which clears buffers, caches *AND*
> the ZFS arc cache) gives a second life to the system.
>
> (by the way, a recent version of ProcFSTools.pm added a
> read_pressure function, but it doesn't look like it's used
> anywhere).
>
> --- /usr/share/perl5/PVE/ProcFSTools.pm.distrib 2020-12-03 15:53:17.0 +
> +++ /usr/share/perl5/PVE/ProcFSTools.pm 2021-01-19 13:44:42.480272044 +
> @@ -268,6 +268,19 @@ sub read_meminfo {
>
>      $res->{memtotal} = $d->{memtotal};
>      $res->{memfree} = $d->{memfree} + $d->{buffers} + $d->{cached};
> +
> +    # Add the ZFS ARC if any
> +    if (my $fh_arc = IO::File->new("/proc/spl/kstat/zfs/arcstats", "r")) {
> +        while (my $line = <$fh_arc>) {
> +            if ($line =~ m/^size .* (\d+)/) {
> +                # "size" already in bytes
> +                $res->{memfree} += $1;
> +                last;
> +            }
> +        }
> +        close($fh_arc);
> +    }
> +
>      $res->{memused} = $res->{memtotal} - $res->{memfree};
>
>      $res->{swaptotal} = $d->{swaptotal};
> --- /usr/sbin/ksmtuned.distrib 2020-07-24 10:04:45.827828719 +0100
> +++ /usr/sbin/ksmtuned 2021-01-19 14:37:43.416360037 +
> @@ -75,10 +75,17 @@ committed_memory () {
>      ps -C "$progname" -o vsz= | awk '{ sum += $1 }; END { print sum }'
>  }
>
> -free_memory () {
> -    awk '/^(MemFree|Buffers|Cached):/ {free += $2}; END {print free}' \
> -        /proc/meminfo
> -}
> +free_memory () (
> +    shopt -s nullglob
> +    exec awk '
> +        NR == FNR {
> +            if (/^(MemFree|Buffers|Cached):/) free += $2
> +            next
> +        }
> +        $1 == "size" {free += int($3/1024)}
> +        END {print free}
> +    ' /proc/meminfo /proc/spl/kstat/zfs/[a]rcstats
> +)
>
>  increase_npages() {
>      local delta
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
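The "size" row that both hunks parse can be checked by hand on any host with
the ZFS module loaded; the value is in bytes (third column of
/proc/spl/kstat/zfs/arcstats), which is why the ksmtuned hunk divides by 1024
to stay in kilobytes like the /proc/meminfo fields:

    awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats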
[pve-devel] [pve-manager] Adding real disk usage information (discussion)
Hello,

I am interested in seeing real disk usage information for VM Disks and CT
Volumes, on storage types that have thin provisioning and/or snapshots.
Specifically I would like to see "Current Disk Usage (Thin)" and either
"Snapshot Usage" or "Total Disk Usage". I only use local ZFS on servers at
this time, but I'm sure the GUI side would be best made flexible.

Is someone interested in helping with this? Where would I start, especially
on the GUI part, if I were to develop this myself?

Thank you,
Bruce Wainer

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [pve-manager] Adding real disk usage information (discussion)
Dominik,

Thank you for the insight. There is certainly complexity I did not consider,
even if I were to look only at the narrow case of local ZFS storage.
Regardless, this would be helpful to me, and if I make anything I will submit
it. I have already signed the CLA and have code accepted in pve-zsync.

Thank you,
Bruce

On Mon, Apr 19, 2021 at 2:55 AM Dominik Csapak wrote:

> On 4/16/21 22:18, Bruce Wainer wrote:
> > Hello,
> >
>
> Hi,
>
> > I am interested in seeing real disk usage information for VM Disks and CT
> > Volumes, on storage types that have thin provisioning and/or snapshots.
> > Specifically I would like to see "Current Disk Usage (Thin)" and either
> > "Snapshot Usage" or "Total Disk Usage". I only use local ZFS on servers at
> > this time, but I'm sure the GUI side would be best made flexible.
>
> while this sounds sensible, this will get hard very fast.
> For example, take a LVM-Thin storage.
>
> I have a template which has an LV which uses some space.
> This can have X linked clones, where each clone can have Y snapshots.
>
> since lvmthin lvs/snapshots/etc. are only very loosely coupled,
> it is very hard to attribute the correct number to any
> of those vms/templates. (e.g. do you want to calculate the
> template storage again for each vm? only once? what if
> you cloned a vm from a snapshot?)
>
> It gets even harder on storage that can deduplicate (e.g. ZFS) or
> where the 'real' usage is dynamically inflated by some form of replica
> (e.g. Ceph).
>
> So, while this sounds nice, and we would probably not oppose a clean
> solution, this is not a trivial problem to solve.
>
> > Is someone interested in helping with this? Where would I start, especially
> > on the GUI part, if I were to develop this myself?
>
> anyway, to answer this question, the storage plugins in the backend can
> be found in the pve-storage git repo [0]
>
> the point where the status api calls of the vms/cts are called live
> in qemu-server [1] and pve-container [2] respectively
> (the api part is in PVE/API2/)
>
> you can find the gui part in pve-manager [3] in www/manager6
>
> also if you want to send patches, please read the developer
> documentation [4], especially the bit about the CLA
>
> if you have any more questions, please ask :)
>
> hope this helps
> kind regards
>
> 0: https://git.proxmox.com/?p=pve-storage.git;a=tree;f=PVE/Storage;h=fd53af5e74407deda65785b164fb61a4f644a6e0;hb=refs/heads/master
> 1: https://git.proxmox.com/?p=qemu-server.git;a=summary
> 2: https://git.proxmox.com/?p=pve-container.git;a=summary
> 3: https://git.proxmox.com/?p=pve-manager.git;a=tree;f=www/manager6;hb=refs/heads/master
> 4: https://pve.proxmox.com/wiki/Developer_Documentation
>
> > Thank you,
> > Bruce Wainer
> > ___
> > pve-devel mailing list
> > pve-devel@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >
>

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
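For the narrow local-ZFS case mentioned above, ZFS itself already exposes the
raw numbers; a rough sketch of where per-volume data could come from (the
dataset name is a placeholder):

    # allocated space, space referenced by the live data, and space held
    # only by snapshots, for a single VM disk
    zfs get -o property,value used,referenced,usedbysnapshots,volsize \
        rpool/data/vm-100-disk-1

Attributing such numbers across linked clones, deduplication or Ceph replicas
is exactly the problem Dominik describes, so this only covers the simple case.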
Re: [pve-devel] [PATCH] ZFS ARC size not taken into account by pvestatd or ksmtuned
--- Begin Message ---
Resending this to the list; I accidentally sent it to Dominique directly.

On Tue, Mar 15, 2022 at 10:21 AM Dominique Martinet wrote:
...
> I wasn't able to find any follow-up after the "leaving it to you"
> message here, was there something I missed?
>

I got scared off by the extra effort required in submitting "someone else's
work" - it seems to me that Stephane would have to do about the same amount
of work in allowing me to submit the change as in submitting the changes
themself. I have needed to use these changes on several new servers brought
up, including two just last week, so they are definitely useful.

Bruce
--- End Message ---

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] [PATCH] ZFS ARC size not taken into account by pvestatd or ksmtuned
--- Begin Message ---
Roland,

I'm not sure what happened with my previous email. I'll copy the contents
again. Sorry, all, for any inconvenience.

On Tue, Mar 15, 2022 at 10:21 AM Dominique Martinet wrote:
> I wasn't able to find any follow-up after the "leaving it to you"
> message here, was there something I missed?
>

I got scared off by the extra effort required in submitting "someone else's
work" - it seems to me that Stephane would have to do about the same amount
of work in allowing me to submit the change as in submitting the changes
themself. I have needed to use these changes on several new servers brought
up, including two just last week, so they are definitely useful.

Bruce
--- End Message ---

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel