Hello
I am having trouble building the v9 module, although I have tried various
releases against many kernel versions.
With branch drbd-9.0 at tag drbd-9.0.26-0rc2 against the stock Ubuntu
5.8.0-31-generic kernel:
In file included from /root/tp/drbd/drbd/drbd_main.c:19:
./include/linux/vermagic.h:6:2: error: [...]
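For reference, this is roughly how I drive the build; a sketch of my
invocation, with paths and versions as above (adjust to your setup):

    # fetch the sources (the repo pulls in drbd-headers as a submodule)
    git clone --recursive --branch drbd-9.0 https://github.com/LINBIT/drbd.git
    cd drbd
    git checkout drbd-9.0.26-0rc2
    # build out-of-tree against a specific installed kernel
    make KDIR=/lib/modules/5.8.0-31-generic/build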
Hey. Here is an update on my attempts to build the v9 module on Ubuntu. I
managed to build it on Slackware Linux before, but for some reason all my
attempts are failing on Ubuntu/focal.
Ubuntu 20.04.1 LTS
5.8.0-31-generic vs. 5.9.11.xenreiser4
/usr/local/bin/spatch --version
spatch version
> I tested building on 5.8.0-31-generic (Ubuntu Focal) and I'm getting the
> same error as you do, so I'd assume that drbd cannot build against that
> kernel at the moment.
> I have reverted back to 5.4.0-48-generic which seems to be ok.
Thank you! That's a workaround of sorts, but it worked.
> ubuntu-focal-amd64 | ✗ 5.4.0-51; ✗ 5.4.0-52; 5.4.0-48; 5.4.0-53;
> 5.4.0-54; 5.4.0-56; 5.4.0-26
[...]
> Using one of these kernels will give you the smoothest experience when
> building DRBD. We actually pre-compute all compat patches for these
> kernels and put them in our release tarballs.
Hope this helps...
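(For anyone finding this thread later: building from a release tarball,
where those pre-computed compat patches are already bundled, should look
roughly like this; the tarball name matches the release discussed above:)

    # no spatch/Coccinelle should be needed for the listed kernels
    tar xf drbd-9.0.26-0rc2.tar.gz
    cd drbd-9.0.26-0rc2
    make KDIR=/lib/modules/$(uname -r)/build
    sudo make install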
My build issues were already solved once I reverted to the longterm 5.4
kernel, as I stated in my answer to Yannis in this very thread.
In my answer to Christoph, I was rather disputing the claim that
Coccinelle can be avoided on supported kernels. And yes, I also [...]
Hello
Imagine a three-node cluster, with a resource `r1` configured across
node1 and node2. How do I reach the very same resource, but diskless, on
node3? I suppose I should look into NVMe-over-TCP or iSCSI, but it should
be possible with DRBD itself, right? If so, would that be with v8 and/or
v9?
So did you define an address? :)
Sorry, I was baffled by the way DRBD nodes communicate, and mistook a
listening port for an SCSI target. It works! It's pretty cool :)
I've tried with Protocol A and I could indeed also mount the resource on
node3. Hmm, but what's the logic behind such a setup?
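For the record, the kind of configuration that ended up working looks
roughly like this; a sketch only, with made-up hostnames, devices and
addresses:

    # /etc/drbd.d/r1.res -- illustrative three-node resource, node3 diskless
    resource r1 {
        device    /dev/drbd1;
        meta-disk internal;

        on node1 {
            node-id 0;
            disk    /dev/vg0/r1;
            address 192.168.0.1:7789;
        }
        on node2 {
            node-id 1;
            disk    /dev/vg0/r1;
            address 192.168.0.2:7789;
        }
        on node3 {
            node-id 2;
            disk    none;               # intentionally diskless
            address 192.168.0.3:7789;   # still required, as noted above
        }

        connection-mesh {
            hosts node1 node2 node3;
        }
    }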
[...] and rely on the DRBD
diskless feature instead. It is possible to build a fully converged farm
that way, without the need for iSCSI or Linstor.
Just in case you share my doubts and frustration with Linstor -- I found
my way around it.
HTH
--
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>
[...] already, be it a diskful or
diskless DRBD device.
--
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>
Is there a way to keep the initial replication thin (or sparse, in
file-system terminology, when dealing with zeroes)?
Thank you!
Best regards,
--
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>
Check the section in the DRBD user guide, "4.1.8. Skipping initial
resynchronization". I think that's what you are looking for, assuming
there's no need to preserve the data on the existing volumes (i.e. you
start with blank LVM volumes).
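For the archives, the command from that section boils down to the
following (resource/volume names are examples; as the guide warns, only
do this on genuinely blank devices):

    # declare the brand-new, all-zero backing devices as already in sync,
    # so DRBD skips the full initial resynchronization (run on one node)
    drbdadm new-current-uuid --clear-bitmap r1/0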
Thanks, that helped a lot.
--
Pierre-Philipp Braun
make: *** [Makefile:132: kbuild] Error 2
Should I attempt to disable -Werror=implicit-function-declaration or not?
I tried with CFLAGS="-Wno-error" for testing, but I still got the same
build error.
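(In case the details matter: as far as I understand, kbuild ignores
CFLAGS from the environment for kernel code, and KCFLAGS is the variable
that actually gets appended to the kernel's compiler flags, so something
like this would be the way to test:)

    # assumption: the DRBD out-of-tree build passes KCFLAGS through kbuild
    make KCFLAGS=-Wno-error KDIR=/lib/modules/$(uname -r)/build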
What to do?
Thank you
BR
--
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>
[...] node1 is both up-to-date and secondary, and even diskless; why
can't I take that resource out of the farm?
running on Linux 5.15.38
version: 9.1.7 (api:2/proto:110-121)
GIT-hash: bfd2450739e3e27cfd0a2eece2cde3d94ad993ae build by root@NODE3,
2022-05-11 22:21:08
Transports (api:18): tcp (9.1.7)
Thank you
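In case it helps anyone, this is the sequence I would expect for pulling
a node out, sketched with the resource and node names from this thread (I
have not verified whether anything more is needed to free the peer slot):

    # on node1: stop the resource locally
    drbdadm down r1
    # on node2 and node3: remove node1's "on" section from r1's
    # configuration, then let DRBD apply the new settings
    drbdadm adjust r1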
On 12/03/2022 10:28, Pierre-Philipp Braun wrote:
> Should I attempt to disable -Werror=implicit-function-declaration or not?
> I tried with CFLAGS="-Wno-error" for testing, but I still got the same
> build error.
As nobody is answering, I'll answer myself. So I learned the hard way
that [...]
[...] understand why DRBD v9 with the diskless feature is not competing
with Ceph RBD in the emerging software-defined storage market. Maybe it's
because some people got stuck with Linstor, just like I did a few years
ago.
BR
--
Pierre-Philipp Braun