bug#73289: ceph 17.2.5 no longer installable

2024-09-16 Thread Yann Dupont

Hello everyone,
ceph is no longer installable, probably since the core update.

see https://ci.guix.gnu.org/build/5775507/log

The main cause seems to be the Boost update: the string views used by Boost.Beast apparently no longer provide a .to_string() member, so the values have to be converted with an explicit std::string(...) construction instead. A relatively simple fix exists for recent versions of ceph (see 
https://github.com/ceph/ceph/commit/244c5ebbd4d5683da7f57612cc02e946aae7fd73), 
but it has not been backported to older versions of ceph.


Adding this patch to ceph 17.2.5 makes it build again. We have several options:

-> stay on 17.2.5 and add this patch, which is little work and low risk;
-> take the opportunity to upgrade to 17.2.7 (currently the latest stable 
release of the 17.2 series); I've tested it and it *seems* to work;
-> take the opportunity to upgrade to a more recent version of ceph (18 
or 19), but that's a whole other job.


What do you think?

I'll post a patch that at least fixes the compilation and upgrades to 
17.2.7.







bug#73289: ceph 17.2.5 no longer installable

2024-09-16 Thread Yann Dupont

Here's the patch; I hope I didn't screw up, being notoriously good at that :-)

Please note: there's still a problem with the python-wcwidth module 
not being found, but that problem existed before. My first fix didn't 
work, so I'll try again later.



diff --git a/gnu/packages/patches/ceph-fix-for-newer-boost.patch b/gnu/packages/patches/ceph-fix-for-newer-boost.patch
new file mode 100644
index 00..9f133fcba5
--- /dev/null
+++ b/gnu/packages/patches/ceph-fix-for-newer-boost.patch
@@ -0,0 +1,48 @@
+--- a/src/rgw/rgw_asio_client.cc	1970-01-01 01:00:01.0 +0100
++++ b/src/rgw/rgw_asio_client.cc	2024-09-11 08:33:21.723548804 +0200
+@@ -39,11 +39,11 @@
+ const auto& value = header->value();
+ 
+ if (field == beast::http::field::content_length) {
+-  env.set("CONTENT_LENGTH", value.to_string());
++  env.set("CONTENT_LENGTH", std::string(value));
+   continue;
+ }
+ if (field == beast::http::field::content_type) {
+-  env.set("CONTENT_TYPE", value.to_string());
++  env.set("CONTENT_TYPE", std::string(value));
+   continue;
+ }
+ 
+@@ -62,26 +62,26 @@
+ }
+ *dest = '\0';
+ 
+-env.set(buf, value.to_string());
++env.set(buf, std::string(value));
+   }
+ 
+   int major = request.version() / 10;
+   int minor = request.version() % 10;
+   env.set("HTTP_VERSION", std::to_string(major) + '.' + std::to_string(minor));
+ 
+-  env.set("REQUEST_METHOD", request.method_string().to_string());
++  env.set("REQUEST_METHOD", std::string(request.method_string()));
+ 
+   // split uri from query
+   auto uri = request.target();
+   auto pos = uri.find('?');
+   if (pos != uri.npos) {
+ auto query = uri.substr(pos + 1);
+-env.set("QUERY_STRING", query.to_string());
++env.set("QUERY_STRING", std::string(query));
+ uri = uri.substr(0, pos);
+   }
+-  env.set("SCRIPT_URI", uri.to_string());
++  env.set("SCRIPT_URI", std::string(uri));
+ 
+-  env.set("REQUEST_URI", request.target().to_string());
++  env.set("REQUEST_URI", std::string(request.target()));
+ 
+   char port_buf[16];
+   snprintf(port_buf, sizeof(port_buf), "%d", local_endpoint.port());
diff --git a/gnu/packages/storage.scm b/gnu/packages/storage.scm
index ab7eb6102c..919b72736b 100644
--- a/gnu/packages/storage.scm
+++ b/gnu/packages/storage.scm
@@ -63,17 +63,18 @@ (define-module (gnu packages storage)
 (define-public ceph
   (package
 (name "ceph")
-(version "17.2.5")
+(version "17.2.7")
 (source (origin
   (method url-fetch)
  (uri (string-append "https://download.ceph.com/tarballs/ceph-"
   version ".tar.gz"))
   (sha256
(base32
-"16mjj6cyrpdn49ig82mmrv984vqfdf24d6i4n9sghfli8z0nj8in"))
+"1612424yrf39dz010ygz8k5x1vc8731549ckfj1r39dg00m62klp"))
   (patches
(search-patches
-"ceph-disable-cpu-optimizations.patch"))
+"ceph-disable-cpu-optimizations.patch"
+"ceph-fix-for-newer-boost.patch" ))
   (modules '((guix build utils)))
   (snippet
'(for-each delete-file-recursively


bug#64057: qemu-guest-agent-shepherd-service probably lacks (requirement '(udev))

2023-06-13 Thread Yann Dupont
Hi all, we've noticed that qemu-guest-agent doesn't start reliably on 
virtual machines generated by guix system.

the log file shows the following:

2023-06-12 14:36:14 1686573373.873765: critical: error opening channel '/dev/virtio-ports/org.qemu.guest_agent.0': No such file or directory
2023-06-12 14:36:14 1686573373.873779: critical: failed to create guest agent channel
2023-06-12 14:36:14 1686573373.873782: critical: failed to initialize guest agent channel


I guess the udev dependency is missing. The following patch seems to do 
the trick here:


diff --git a/gnu/services/virtualization.scm b/gnu/services/virtualization.scm
index 2e311e3813..b1b7eafd75 100644
--- a/gnu/services/virtualization.scm
+++ b/gnu/services/virtualization.scm
@@ -962,6 +962,7 @@ (define (qemu-guest-agent-shepherd-service config)
   (list (shepherd-service
          (provision '(qemu-guest-agent))
+         (requirement '(udev))
          (documentation "Run the QEMU guest agent.")
          (start #~(make-forkexec-constructor
                    `(,(string-append #$qemu "/bin/qemu-ga")


Cheers,


bug#64593: ‘guix system image’ fails to create image while invoking ‘grub-bios-setup’

2023-07-21 Thread Yann Dupont
Hello. Since it was after a discussion with Ludovic that he posted this bug 
report, let me give my opinion as a simple user.


It's basically a matter of consistency: the examples mention 
grub-bootloader, and with the default image type (efi-raw) that 
combination has worked perfectly for years (maybe by chance, but in any 
case it seemed compatible). The recent change means it no longer 
works :-). What's more, the error message isn't very explicit: it 
doesn't point to a configuration error but to what looks like a 
recently introduced bug. Switching to grub-efi-bootloader allows the 
image to be built (and booted).
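
For reference, switching means using something like the sketch below in the 
operating-system declaration. This is only an illustrative fragment, not our 
actual configuration; the "/boot/efi" mount point in particular is just the 
usual convention, an assumption on my part:

;; Sketch of the bootloader field with grub-efi-bootloader.
;; "/boot/efi" is assumed to be where the ESP is mounted; adjust as needed.
(operating-system
  ;; ... host-name, file-systems and other fields elided ...
  (bootloader (bootloader-configuration
               (bootloader grub-efi-bootloader)
               (targets '("/boot/efi")))))

With plain grub-bootloader, the target would instead be a device such as 
"/dev/vda", which is what the current examples show.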


I don't know whether it's feasible to have a consistency check between 
the image type and the bootloader used, something like: "Bootloader 
probably not compatible with image type, please use grub-efi-bootloader".


Maybe just change the documentation to use grub-efi-bootloader in the 
examples. Or, indeed, provide an image format that remains compatible 
with MBR partitioning and the matching GRUB.


--

Yann





bug#65177: udevd error with lvm-raid array leading to race condition with luks

2023-09-14 Thread Yann Dupont

Hello everyone, we're also affected by this bug, in an even simpler use case.

[…]
(file-system
    (device "/dev/mapper/VG0-DATA")
    (mount-point "/VG0-DATA")
    (type "ext4"))
[…]

The culprit seems to be 69-dm-lvm.rules

[ 18.226226] udevd[115]: failed to execute '/usr/bin/systemd-run' '/usr/bin/systemd-run --no-block --property DefaultDependencies=no --unit lvm-activate-VG0 /gnu/store/0hndg947ywdl5izvy63ny38hyywci66k-lvm2-2.03.22/sbin/lvm vy


I can confirm that when using guix time-machine to go back to lvm2 
2.03.11, the VM boots.

cheers,






bug#65177: udevd error with lvm-raid array leading to race condition with luks

2023-09-14 Thread Yann Dupont
Hi, as suggested by Josselin, I tested the following patch and it seems 
to do the job here.



Be careful: I'm not a udev or lvm2 specialist at all, and I don't 
really know whether what I did is the right way to do it.



All I can say is that the VMs now boot.


Cheers,



diff --git a/gnu/packages/linux.scm b/gnu/packages/linux.scm
index 91109c41d9..28b3c1e0bf 100644
--- a/gnu/packages/linux.scm
+++ b/gnu/packages/linux.scm
@@ -4421,6 +4421,7 @@ (define-public lvm2
   (sha256
    (base32
"0z6w6bknhwh1n3qfkb5ij6x57q3wjf28lq3l8kh7rkhsplinjnjc"))
+  (patches (search-patches "lvm2-no-systemd.patch"))
   (modules '((guix build utils)))
   (snippet
    '(begin
diff --git a/gnu/packages/patches/lvm2-no-systemd.patch b/gnu/packages/patches/lvm2-no-systemd.patch
new file mode 100644
index 00..7e8a37abcc
--- /dev/null
+++ b/gnu/packages/patches/lvm2-no-systemd.patch
@@ -0,0 +1,13 @@
+diff --git a/udev/69-dm-lvm.rules.in b/udev/69-dm-lvm.rules.in
+index ff1568145..8879a2ef9 100644
+--- a/udev/69-dm-lvm.rules.in
++++ b/udev/69-dm-lvm.rules.in
+@@ -76,7 +76,7 @@ LABEL="lvm_scan"
+ # it's better suited to appearing in the journal.
+
+ IMPORT{program}="(LVM_EXEC)/lvm pvscan --cache --listvg --checkcomplete --vgonline --autoactivation event --udevoutput --journal=output $env{DEVNAME}"
+-ENV{LVM_VG_NAME_COMPLETE}=="?*", RUN+="(SYSTEMDRUN) --no-block --property DefaultDependencies=no --unit lvm-activate-$env{LVM_VG_NAME_COMPLETE} (LVM_EXEC)/lvm vgchange -aay --autoactivation event $env{LVM_VG_NAME_COMPLETE}"
++ENV{LVM_VG_NAME_COMPLETE}=="?*", RUN+="(LVM_EXEC)/lvm vgchange -aay --autoactivation event $env{LVM_VG_NAME_COMPLETE}"

+ GOTO="lvm_end"
+
+ LABEL="lvm_end"



bug#78836: /var/empty permissions problems between sshd and nslcd

2025-06-19 Thread Yann Dupont
Hi everyone, the patch eab097c682ed31efd8668f46fce8de8f73b92849 causes 
sshd to now use /var/empty as its chroot directory. sshd expects 
/var/empty to be owned by root and not writable by other users.


Unfortunately, when the nslcd service is also present on the system, it 
creates a user whose home directory is also /var/empty, and the 
directory then ends up belonging to the nslcd user.


In this case, sshd refuses to start.

I think the patch eab097c682ed31efd8668f46fce8de8f73b92849 is correct, 
and that nslcd should instead be changed so that /var/empty is created 
owned by root. But I don't know whether there are side effects to worry 
about for nslcd.


I think the relevant code is in services/authentication.scm, in 
(define %nslcd-accounts), which contains:

    ...
    (home-directory "/var/empty")
    ...
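
For illustration only, the kind of change I have in mind would look like the 
sketch below. This is not the actual upstream definition of %nslcd-accounts; 
the surrounding fields are guesses on my part, and I haven't checked whether 
nslcd needs anything more than a non-writable home:

;; Hypothetical sketch, NOT the real %nslcd-accounts entry: keep nslcd's
;; account from creating (and thus owning) /var/empty, leaving the
;; directory to be created root-owned elsewhere.  In real code, shadow
;; and file-append come from (gnu packages admin) and (guix gexp).
(user-account
 (name "nslcd")
 (group "nslcd")
 (system? #t)
 (comment "NSLCD service account")
 (home-directory "/var/empty")
 (create-home-directory? #f)
 (shell (file-append shadow "/sbin/nologin")))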


bug#78836: /var/empty permissions problems between sshd and nslcd

2025-06-20 Thread Yann Dupont



On 19/06/2025 13:19, Sergey Trofimov wrote:

Hi

Yann Dupont writes:


I don't know if this is relevant information, but we encounter this problem on 
disposable virtual machines, freshly generated by guix system image for 
one-time use; we don't reconfigure these machines. Maybe this function is not 
called in this specific case?

I'll see if a reconfigure changes things, but it's going to take some time, as 
our templates are a bit complex and split across several files, so the 
configuration can't simply be found in /run/current-system/configuration.scm.
several files that can't be found in /running/current-system/configuration.scm.

You could simply run /run/current-system/activate and check if it fixes 
permissions.

Hi Sergey, launching /run/current-system/activate does not change the 
directory's ownership.


However, I'm afraid this could be a problem on our side: when I simplify 
a VM definition as much as possible in order to reproduce, the nslcd 
service creates /var/empty with root as owner... so something unexpected 
is happening on our side. I'll look into it.


Thanks for your help,

--
Yann Dupont - GLiCID / HPC Pays de la Loire
Tel : 02.53.48.49.39 - yann.dup...@univ-nantes.fr






bug#78836: /var/empty permissions problems between sshd and nslcd

2025-06-20 Thread Yann Dupont


On 20/06/2025 17:57, Sergey Trofimov wrote:

If the OS is stripped to the bare minimum, I assume that it doesn't have
all the system users usually present in Guix system (daemon and
builders). It could happen that nslcd is the only user with the home dir
set to /var/empty (check /etc/passwd). In that case
activate-users+groups won't be changing the permissions because it only
does that on directories that are shared between multiple accounts.


Yes, I was debugging this afternoon and just came to the same conclusion: 
the culprit is this line:

    (modify-services %base-services (delete guix-service-type))


We delete it because our store is shared and GUIX_DAEMON_SOCKET is set.
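
For context, the services part of our templates boils down to something like 
this (a simplified sketch; my-extra-services is a hypothetical placeholder 
for our own service list):

;; Simplified sketch of our services field.  Removing guix-service-type
;; also removes the guixbuilder accounts, whose home directory is
;; /var/empty as well; nslcd then becomes the only account using
;; /var/empty, so activate-users+groups leaves it owned by nslcd.
(services
 (append my-extra-services               ;placeholder for our services
         (modify-services %base-services
           (delete guix-service-type))))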

I think we can close this bug report, as I imagine there can't be many 
of us with this problem.


Thanks a lot for the explanation,