Hi Ryan,

Just to let you know that guix deploy was indeed the key :)
Here is what I came up with:
https://the-dam.org/docs/explanations/GarbageCollection.html

Using guix deploy, in conjunction with:

    (modify-services %base-services
      (guix-service-type
       config => (guix-configuration
                  (inherit config)
                  (channels %beaverlabs-channels)
                  (guix (guix-for-channels %beaverlabs-channels)))))

is the key: the "Computing Guix derivation" step is done on my desktop,
and the server ends up with an up-to-date guix that knows about my
custom channels. I can then update all the profiles and run guix gc
--delete-generations.

Thanks for the pointer :)

Cheers,

Edouard.

Ryan Sundberg <r...@arctype.co> writes:

> Hi Edouard, I have a couple of ideas for you, which may help with your
> deployment process:
>
> 1) If possible, mount /gnu on a btrfs filesystem with zstd compression
> enabled. This will let you fit at least 3x your physical disk space,
> because the store compresses very well.
>
> 2) For production servers, you should try using `guix deploy` to push
> rather than pull profiles. This will offload most of the process to
> the machine pushing the deployment.
>
> With regards to multi-user profiles, if this is a typical server (not
> a terminal/shell server), I wonder why you need per-user profiles at
> all. I only say this because I am not sure about running `guix deploy`
> on an individual user basis.
>
> Sincerely,
> Ryan Sundberg
>
> May 13, 2024 8:08:35 AM Edouard Klein <e...@rdklein.fr>:
>
>> Hi Guix,
>>
>> First, I'd like to apologize for not having taken the time to answer
>> those who helped me with a previous Guix performance issue (with
>> containers). The reason is tied to the topic of this email: the store
>> has eaten all the space on my server, and solving that takes
>> precedence over everything else, because no space == no services.
>>
>> So, I need to clear some space. To do that, I need to have every user
>> run guix pull (by which I mean root will run sudo -u $user guix
>> pull), then update all of their profiles, and then run guix gc
>> --delete-generations.
>>
>> This ought to turn deduplication up to 11, and yield a reduced store
>> size.
>>
>> I've already solved the cache size problem:
>>
>>     mount -t overlay overlay -o \
>>       lowerdir="/root/.cache/guix",upperdir="/home/$user/.cache/guix-overlay",workdir="/home/$user/.cache/guix-workdir" \
>>       "/home/$user/.cache/guix"
>>
>> Then:
>>
>>     bindfs --mirror=$user /home/$user/.cache/guix /home/$user/.cache/guix
>>
>> This lets root (who just ran guix pull) share its cache with every
>> user, and avoids spending 700MB of disk space in every $HOME to
>> reproduce the cache.
>>
>> However, now I'm facing the previously discussed problem of guix pull
>> being slow and hungry:
>>
>> https://www.mail-archive.com/guix-devel@gnu.org/msg66442.html (Guix
>> pull speed)
>> https://yhetil.org/guix/87h6mwf4u3....@lapenas.dev/T/ (guix pull
>> performance)
>>
>> On my server, in order to run guix pull, I have to stop all other
>> services, otherwise I run out of RAM.
>>
>> Then, once root has pulled, I have to wait 4 minutes /per user/ for
>> guix pull to finish its "Computing Guix derivation" step.
>>
>> I would like to know two things, one for the sake of knowledge, and
>> the other to solve the problem at hand:
>>
>> - Why is this step not substitutable? The inputs are known, a hash
>> can be derived, and a substitute server could be queried for an
>> output with that hash. What am I missing? Does the Guix derivation
>> not end up in the store? What makes it so special that it can't be
>> served by a substitute server?
>>
>> - Is there a way (even a very dirty one, like hand-copying stuff
>> across /var/guix/profiles/per-user/*/current-guix) to stop paying
>> this 4-minute-per-user price? As I said, this is downtime on my
>> server, as I need to stop all other services to let guix pull finish.
>>
>> Thanks in advance.
>>
>> Sorry for beating a dead horse; it's just that I can't scale anything
>> up until I solve these performance issues.
>> Sure, I could rent a bigger server, but that's just kicking the can
>> down the road.
>>
>> Cheers,
>>
>> Edouard.
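P.S. In case it helps anyone following along: here is roughly the shape of the deploy.scm this ends up as. The channel name, URL, and host name below are placeholders (not my real ones), and most operating-system fields are elided for brevity, so treat this as a sketch rather than a working file:

```scheme
;; deploy.scm — sketch only; names below are placeholders.
(use-modules (gnu)
             (gnu machine)
             (gnu machine ssh)
             (gnu packages package-management) ; guix-for-channels
             (guix channels))

;; Placeholder for a custom channel list (my real one is
;; %beaverlabs-channels).
(define %my-channels
  (cons (channel
         (name 'my-channel)
         (url "https://example.org/my-channel.git"))
        %default-channels))

(define %server-os
  (operating-system
    ;; host-name, bootloader, file-systems, etc. elided for brevity.
    (services
     (modify-services %base-services
       (guix-service-type
        config => (guix-configuration
                   (inherit config)
                   ;; Provision the channel list and a guix built for
                   ;; those channels, so the server itself never has to
                   ;; run the "Computing Guix derivation" step.
                   (channels %my-channels)
                   (guix (guix-for-channels %my-channels))))))))

;; Run `guix deploy deploy.scm` from the desktop: the heavy lifting
;; happens locally, and only store items are pushed over SSH.
(list (machine
       (operating-system %server-os)
       (environment managed-host-environment-type)
       (configuration (machine-ssh-configuration
                       (host-name "server.example.org")
                       (system "x86_64-linux")
                       (user "root")))))
```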