On 16/10/2023 22.19, Philipp Reisner wrote:

Hello,

This time, it took a bit longer, around ten weeks. For me, this was a
fascinating development cycle.

We got important fixes to the RDMA transport, which now works with the
more recent Mellanox cards and drivers. However, there is still room for
improvement in the performance of DRBD's RDMA transport module.

From now on, the TCP transport can use TLS encryption. The kernel does
the data encryption and decryption with kTLS, while an additional
daemon in userspace performs the TLS handshakes.
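For reference, enabling this might look something like the following in
a resource configuration. This is only a sketch: the net option name
"tls" and the use of the tlshd handshake daemon from ktls-utils are
assumptions based on the description above, not taken from the
announcement itself.

    resource r0 {
        net {
            transport "tcp";
            tls yes;   # assumed option name; kTLS encrypts the data
                       # path, a userspace daemon (e.g. tlshd) would
                       # perform the TLS handshakes
        }
        # ... volumes, connections, etc.
    }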

Last but not least, we got a completely new, additional TCP transport
implementation named 'lb-tcp'. It can establish DRBD connections over
multiple paths in parallel and distribute the load between them. Note
that 'lb-tcp' is not wire-protocol compatible with the traditional TCP
transport.
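A connection using two parallel paths might be declared like this
(again only a sketch; the multiple "path" sections are standard DRBD 9
multi-path syntax, while the host names and addresses here are made
up):

    resource r0 {
        net {
            transport "lb-tcp";   # load is distributed across all
                                  # paths; not wire-compatible with
                                  # the plain "tcp" transport
        }
        connection {
            path {
                host alpha address 192.168.10.1:7000;
                host bravo address 192.168.10.2:7000;
            }
            path {
                host alpha address 192.168.20.1:7000;
                host bravo address 192.168.20.2:7000;
            }
        }
    }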

This is very interesting, actually.

Do you also fail traffic over in case one link/path goes down? If so, perhaps we should consider sharing some of the path-management code with kronosnet, to avoid reinventing the wheel N times.

I am not talking about sending the traffic via userland; that would not work for DRBD due to the potentially higher latency and scheduling overhead. But knet has semi-advanced path management to decide which links to use based on network status, and the decision core will improve drastically in 2.0: it will be based on latency, packet loss, and more factors than just "yes, it pings, so let's use the links with priority X/Y".
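To illustrate the kind of decision core I mean, here is a rough sketch
in C of scoring links by measured latency and packet loss instead of by
ping-plus-priority alone. This is not knet code, and the weights are
arbitrary:

    #include <stdio.h>

    /* Toy link-scoring heuristic, not knet code: rank links by
     * measured latency and packet loss instead of only "it pings,
     * so use the link with priority X/Y". Weights are arbitrary. */
    struct link_stats {
        const char *name;
        double rtt_ms;   /* measured round-trip time in ms */
        double loss;     /* packet loss ratio, 0.0 .. 1.0 */
        int    up;       /* basic reachability check */
    };

    static double link_score(const struct link_stats *l)
    {
        if (!l->up || l->loss >= 1.0)
            return -1.0;                    /* unusable link */
        /* Lower RTT and lower loss both raise the score. */
        return (1.0 - l->loss) * (100.0 / (1.0 + l->rtt_ms));
    }

    int main(void)
    {
        struct link_stats links[] = {
            { "link0", 0.3, 0.00, 1 },
            { "link1", 1.2, 0.05, 1 },
        };
        int n = sizeof(links) / sizeof(links[0]);
        int best = 0;

        for (int i = 1; i < n; i++)
            if (link_score(&links[i]) > link_score(&links[best]))
                best = i;
        printf("preferred link: %s\n", links[best].name);
        return 0;
    }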

Cheers
Fabio

