On 03.03.21 10:56, Stefan Reiter wrote:
> Values chosen by fair dice roll, seems to be a good sweet spot on my
> machine where any less causes performance degradation but any more
> doesn't really make it go any faster.
> 
> Keep in mind that those values are per drive in an actual restore.
> 
> Signed-off-by: Stefan Reiter <s.rei...@proxmox.com>
> ---
> 
> Depends on new proxmox-backup.
> 
> v2:
> * unchanged
> 
>  src/restore.rs | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/src/restore.rs b/src/restore.rs
> index 0790d7f..a1acce4 100644
> --- a/src/restore.rs
> +++ b/src/restore.rs
> @@ -218,15 +218,16 @@ impl RestoreTask {
> 
>          let index = client.download_fixed_index(&manifest, &archive_name).await?;
>          let archive_size = index.index_bytes();
> -        let most_used = index.find_most_used_chunks(8);
> +        let most_used = index.find_most_used_chunks(16); // 64 MB most used cache
> 
>          let file_info = manifest.lookup_file_info(&archive_name)?;
> 
> -        let chunk_reader = RemoteChunkReader::new(
> +        let chunk_reader = RemoteChunkReader::new_lru_cached(
>              Arc::clone(&client),
>              self.crypt_config.clone(),
>              file_info.chunk_crypt_mode(),
>              most_used,
> +            64, // 256 MB LRU cache

How does this work in low(er) memory situations? Lots of people do not
over-dimension their memory that much, and the need for mass recovery
tends to correlate with reduced resource availability (a node failed,
now I need to restore X backups on my <test/old/other-already-in-use>
node). Multiple restore jobs may then run in parallel, each possibly
with several disks, so tens of GiB of memory just for the cache are not
that unlikely. What is the behavior then, a hard failure if the memory
is not available?

Also, some archives may be smaller than 256 MiB (EFI disk?), so there
it would be odd to set up a 256 MiB cache and prefetch 64 MiB of
most-used chunks when that is as much as, or more than, the whole
archive.

There may be the reverse situation too: a beefy, fast node with lots of
memory, where restore is used for recovery or migration but network
bandwidth/latency to the PBS is not that good, so a bigger cache could
be wanted.

Maybe we could query the available memory and use that as a hint. As
memory usage can be highly dynamic it will never be perfect, but it
would still be better than just ignoring it.

>          );
> 
>          let reader = AsyncIndexReader::new(index, chunk_reader);
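
Rough sketch of what I mean, just to illustrate. The 4 MiB chunk size
matches the "16 -> 64 MB" / "64 -> 256 MB" comments above, but the ~1%
budget, the floor of 4 chunks and the lru_cache_chunks() helper are
made-up numbers and names for the sake of the example, not anything
existing in proxmox-backup:

use std::fs;

// Fixed-index chunks are 4 MiB, matching the "16 chunks = 64 MB"
// comment in the patch above.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024;

// Read MemAvailable from /proc/meminfo (the value is reported in kB).
fn available_memory_bytes() -> Option<u64> {
    let meminfo = fs::read_to_string("/proc/meminfo").ok()?;
    for line in meminfo.lines() {
        if let Some(rest) = line.strip_prefix("MemAvailable:") {
            let kib: u64 = rest.trim().trim_end_matches("kB").trim().parse().ok()?;
            return Some(kib * 1024);
        }
    }
    None
}

// Pick the per-drive LRU cache size in chunks: roughly 1% of currently
// available memory, with a small floor, and never more than the archive
// itself needs. A beefy box then gets a bigger cache and a memory-starved
// one a smaller cache, instead of a fixed 256 MB.
fn lru_cache_chunks(archive_size: u64) -> u64 {
    let archive_chunks = ((archive_size + CHUNK_SIZE - 1) / CHUNK_SIZE).max(1);
    let budget_chunks = available_memory_bytes()
        .map(|avail| (avail / 100) / CHUNK_SIZE) // ~1% of MemAvailable
        .unwrap_or(64); // fall back to the fixed value from this patch
    budget_chunks.max(4).min(archive_chunks)
}

Something along those lines could then be passed instead of the
hard-coded 64 (and a correspondingly smaller value for
find_most_used_chunks()), so that small archives like the EFI disk never
get a cache bigger than themselves.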