Package: release.debian.org
Severity: normal
Tags: bookworm
X-Debbugs-Cc: ltt-cont...@packages.debian.org
Control: affects -1 + src:ltt-control
User: release.debian....@packages.debian.org
Usertags: pu

[ Reason ]
A bug introduced in the stable-2.13 branch of lttng-tools results
in crashes of the consumerd daemon on traced application teardown.
These crashes are uncommon, as they require a very specific wakeup
order of the threads, but they have been reported by users.

[ Impact ]
Without this update, users will continue to see random crashes
on application teardown.

[ Tests ]
The upstream test suite was used to test this updated package.

[ Risks ]
The patch is not trivial, but it has been in the upstream stable
branch for a while and has been tested there.

[ Checklist ]
  [x] *all* changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in (old)stable
  [x] the issue is verified as fixed in unstable

[ Changes ]
The patch adds reference counting to the registered apps representation
and reworks how shared buffers are reclaimed on teardown. A very
detailed explanation is included in the patch commit message in the
debdiff.
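
For convenience, here is the shape of the fix in isolation: the
standard liburcu reference-counting pattern applied to `struct
ust_app`. This is a condensed sketch of the helpers the patch adds
(the full version, with RCU locking, hash-table removal, and the
no-UST stubs, is in the debdiff below; the unregister/destroy
internals are stubbed here so the sketch stands alone):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <urcu/ref.h>      /* urcu_ref_init, urcu_ref_get_unless_zero, urcu_ref_put */
    #include <urcu/compiler.h> /* caa_container_of */

    struct ust_app {
            struct urcu_ref ref; /* the app's command socket holds one reference */
            /* sock, sessions, channels, ... */
    };

    /* The real functions live in ust-app.c: they remove the app from the
     * sessiond hash tables and reclaim its tracing buffers. Stubbed here
     * so the sketch compiles on its own. */
    static void ust_app_unregister(struct ust_app *app) { (void) app; }
    static void ust_app_destroy(struct ust_app *app) { free(app); }

    /* Invoked only when the last reference is dropped: teardown of the
     * buffers is thereby deferred past any rotation setup that still
     * holds a reference. */
    static void ust_app_release(struct urcu_ref *ref)
    {
            struct ust_app *app = caa_container_of(ref, struct ust_app, ref);

            ust_app_unregister(app);
            ust_app_destroy(app);
    }

    /* Fails (returns false) if the app is already being torn down. */
    static bool ust_app_get(struct ust_app *app)
    {
            return urcu_ref_get_unless_zero(&app->ref);
    }

    static void ust_app_put(struct ust_app *app)
    {
            if (app) {
                    urcu_ref_put(&app->ref, ust_app_release);
            }
    }

The rotation setup path then brackets its per-application work with
ust_app_get()/ust_app_put(), so an application that unregisters
mid-setup keeps its buffers alive until the setup phase completes,
instead of having them reclaimed from under the consumer.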

[ Other info ]
The content of the debdiff is also available on salsa:
https://salsa.debian.org/debian/ltt-control/-/tree/debian/bookworm
diff -Nru ltt-control-2.13.9/debian/changelog ltt-control-2.13.9/debian/changelog
--- ltt-control-2.13.9/debian/changelog 2023-01-09 16:05:32.000000000 -0500
+++ ltt-control-2.13.9/debian/changelog 2025-02-06 13:14:39.000000000 -0500
@@ -1,3 +1,10 @@
+ltt-control (2.13.9-1+deb12u1) bookworm; urgency=medium
+
+  * Bookworm gbp branch config
+  * Fix consumer crash on shutdown (Closes: #1090850)
+
+ -- Michael Jeanson <mjean...@debian.org>  Thu, 06 Feb 2025 13:14:39 -0500
+
 ltt-control (2.13.9-1) unstable; urgency=medium
 
   * [861d46f] New upstream version 2.13.9
diff -Nru ltt-control-2.13.9/debian/gbp.conf ltt-control-2.13.9/debian/gbp.conf
--- ltt-control-2.13.9/debian/gbp.conf  2022-10-19 15:17:34.000000000 -0400
+++ ltt-control-2.13.9/debian/gbp.conf  2025-02-06 13:12:36.000000000 -0500
@@ -1,3 +1,3 @@
 [DEFAULT]
-upstream-branch=upstream/latest
-debian-branch=debian/sid
+upstream-branch=upstream/bookworm
+debian-branch=debian/bookworm
diff -Nru ltt-control-2.13.9/debian/patches/fix-consumer-crash.patch ltt-control-2.13.9/debian/patches/fix-consumer-crash.patch
--- ltt-control-2.13.9/debian/patches/fix-consumer-crash.patch  1969-12-31 19:00:00.000000000 -0500
+++ ltt-control-2.13.9/debian/patches/fix-consumer-crash.patch  2025-02-06 13:12:48.000000000 -0500
@@ -0,0 +1,587 @@
+From 3faa1e3d9bb6b6cd1fa370c206614af9061815be Mon Sep 17 00:00:00 2001
+From: Jérémie Galarneau <jeremie.galarn...@efficios.com>
+Date: Wed, 17 May 2023 13:41:03 -0400
+Subject: [PATCH] Fix: sessiond: bad fd used while rotating exiting app's buffers
+
+Issue observed
+--------------
+
+From bug #1372:
+
+We are observing seemingly random crashes in the LTTng consumer daemon
+when tracing a C++ application with LTTng-UST. Our workload has a single
+printf-like tracepoint, where each string is in the order of 1kb and the
+total output is around 30MB/s.
+
+LTTng is set up with a single session and channel enabling this
+tracepoint, and we enabled rotation with a maximum size of 100MB or
+every 30 seconds. We are periodically starting new traced processes and
+the system runs close to 100% CPU load. This ran on an AWS
+Graviton2 (ARM) instance with CentOS 7 and a 5.4 kernel, using LTTng-UST
+2.13.5 and LTTng-tools 2.13.8.
+
+The first reported error is a write to a bad file descriptor (-1),
+apparently when waking up the metadata poll thread during a rotation.
+
+Cause
+-----
+
+Inspecting the logs, we see that the metadata channel with key 574 has a
+negative poll fd write end which causes the write in
+consumer_metadata_wakeup_pipe to fail because of an invalid file
+descriptor:
+
+  DBG1 - 15:12:13.271001175 [6593/6605]: Waking up metadata poll thread (writing to pipe): channel name = 'metadata', channel key = 574 (in consumer_metadata_wakeup_pipe() at consumer.c:888)
+  DBG3 - 15:12:13.271010093 [6593/6605]: write() fd = -1 (in consumer_metadata_wakeup_pipe() at consumer.c:892)
+  PERROR - 15:12:13.271014655 [6593/6605]: Failed to write to UST metadata pipe while attempting to wake-up the metadata poll thread: Bad file descriptor (in consumer_metadata_wakeup_pipe() at consumer.c:907)
+  Error: Failed to dump the metadata cache
+  Error: Rotate channel failed
+
+Meanwhile, a lot of applications seem to be unregistering. Notably, the
+application associated with that metadata channel is being torn down.
+
+Leading up to the use of a bad file descriptor, the chain of events is:
+
+1) The "rotation" thread starts to issue "Consumer rotate channel" on
+   key 574 (@ `15:12:12.865621802`), but blocks on the consumer socket
+   lock. We can deduce this from the fact that thread "6605" in the
+   consumer wakes up to process an unrelated command originating from the
+   same socket.
+
+   We don't see that command being issued by the session daemon, most
+   likely because it occurs just before the captured logs start. All
+   call sites that use this socket take the socket lock, issue their
+   command, wait for a reply, and release the socket lock.
+
+2) The application unregisters (@ `15:12:13.269722736`). The
+   `registry_session`, which owns the metadata contents, is destroyed
+   during `delete_ust_app_session` which is done directly as a consequence
+   of the app unregistration (through a deferred RCU call), see
+   `ust_app_unregister`.
+
+   This is problematic since the consumer will request the metadata during
+   the rotation of the metadata channel. In the logs, we can see that
+   the "close_metadata" command blocks on the consumer socket lock.
+   However, the problem occurs when the `manage-apps` acquires the lock
+   before the "rotation" thread. In this instance, the "close-metadata"
+   command is performed by the consumer daemon, closing the metadata
+   poll file descriptor.
+
+3) As the "close_metadata" command completes, the rotation thread
+   successfully acquires the socket lock. It is not aware of the
+   unregistration of the application and of the subsequent tear-down of the
+   application, registry, and channels since it was already iterating on
+   the application's channels.
+
+   The consumer starts to process the channel rotation command (@
+   `15:12:13.270633213`) which fails on the metadata poll fd.
+
+Essentially, we must ensure that the lifetime of metadata
+channel/streams exceeds any ongoing rotation, and prevent a rotation
+from being launched when an application is being torn-down in per-PID
+buffering mode.
+
+The problem is fairly hard to reproduce as it requires threads to
+wake-up in the problematic order described above. I don't have a
+straight-forward reproducer for the moment.
+
+Solution
+--------
+
+During the execution of a rotation on a per-pid session, the session
+daemon iterates on all applications to rotate their data and metadata
+channels.
+
+The `ust_app` itself is correctly protected: it is owned by an RCU HT
+(`ust_app_ht`) and the RCU read lock is acquired as required to protect
+the lifetime of the storage of `ust_app`. However, there is no way to
+lock an `ust_app` instance itself.
+
+The rotation command assumes that if it finds the `ust_app`, it will be
+able to rotate all of its channels. This isn't true: the `ust_app` can
+be unregistered by the `manage-applications` thread which monitors the
+application sockets for their deaths in order to teardown the
+applications.
+
+The `ust_app` doesn't directly own its channels; they are owned by an
+`ust_app_session` which, itself, has a `lock` mutex. Also, the metadata
+of the application is owned by the "session registry", which itself can
+also be locked.
+
+At a high-level, we want to ensure that the metadata isn't closed while
+a rotation is being setup. The registry lock could provide this
+guarantee. However, it currently needs to remain unlocked during the
+setup of the rotation as it is used when providing the metadata to the
+consumer daemon.
+
+Taking the registry lock over the duration of the setup would result in
+a deadlock like so:
+
+- the consumer buffer consumption thread consumed a data buffer and attempts
+  a metadata sync,
+- the command handling thread of the consumer daemon attempts to rotate
+  any stream that is already at its rotation position and locks on the
+  channel lock held by the consumption thread,
+- the metadata sync launches a metadata request against the session
+  daemon which attempts to refresh the metadata contents through the
+  command socket,
+- the command handling thread never services the metadata "refresh" sent
+  by the session daemon since it is locked against the same channel as
+  the buffer consumption thread, resulting in a deadlock.
+
+Instead, a different approach is required: extending the lifetime of the
+application's channels over the duration of the setup of a rotation.
+
+To do so, the `ust_app` structure (which represents a registered
+application) is now reference-counted. A reference is acquired over the
+duration of the rotation's setup phase. This reference transitively
+holds a reference to the application's tracing buffers.
+
+Note that taking a reference doesn't prevent applications from
+unregistering; it simply defers the reclamation of their buffers to the
+end of the rotation setup.
+
+As the rotation completes its setup phase, the references to the
+application (and thus, its tracing buffers) are released, allowing the
+reclamation of all buffering resources.
+
+Note that the setup phase of the rotation doesn't last long so it
+shouldn't significantly change the observable behaviour in terms of
+memory usage. The setup phase mostly consists in sampling the
+consumption/production positions of all buffers in order to establish a
+switch-over point between the old and new files.
+
+Signed-off-by: Jérémie Galarneau <jeremie.galarn...@efficios.com>
+Change-Id: I8dc1ee45dd00c85556dd70d34a3af4f3a4d4e7cb
+---
+
+--- a/src/bin/lttng-sessiond/dispatch.c
++++ b/src/bin/lttng-sessiond/dispatch.c
+@@ -145,7 +145,7 @@
+                                       (revents & (LPOLLHUP | LPOLLERR))) {
+                               cds_list_del(&wait_node->head);
+                               wait_queue->count--;
+-                              ust_app_destroy(wait_node->app);
++                              ust_app_put(wait_node->app);
+                               free(wait_node);
+                               /*
+                                * Silence warning of use-after-free in
+--- a/src/bin/lttng-sessiond/manage-apps.c
++++ b/src/bin/lttng-sessiond/manage-apps.c
+@@ -163,7 +163,7 @@
+                                       }
+ 
+                                       /* Socket closed on remote end. */
+-                                      ust_app_unregister(pollfd);
++                                      ust_app_unregister_by_socket(pollfd);
+                               } else {
+                                       ERR("Unexpected poll events %u for sock %d", revents, pollfd);
+                                       goto error;
+--- a/src/bin/lttng-sessiond/session.c
++++ b/src/bin/lttng-sessiond/session.c
+@@ -1416,6 +1416,7 @@
+                */
+               session_notify_clear(session);
+       }
++
+       return ret;
+ }
+ 
+--- a/src/bin/lttng-sessiond/session.h
++++ b/src/bin/lttng-sessiond/session.h
+@@ -184,6 +184,7 @@
+        * Trigger for size-based rotations.
+        */
+       struct lttng_trigger *rotate_trigger;
++
+       LTTNG_OPTIONAL(uint64_t) most_recent_chunk_id;
+       struct lttng_trace_chunk *current_trace_chunk;
+       struct lttng_trace_chunk *chunk_being_archived;
+@@ -192,6 +193,7 @@
+       bool quiet_rotation;
+       char *last_archived_chunk_name;
+       LTTNG_OPTIONAL(uint64_t) last_archived_chunk_id;
++
+       struct lttng_dynamic_array destroy_notifiers;
+       struct lttng_dynamic_array clear_notifiers;
+       /* Session base path override. Set non-null. */
+--- a/src/bin/lttng-sessiond/ust-app.c
++++ b/src/bin/lttng-sessiond/ust-app.c
+@@ -3989,6 +3989,8 @@
+               goto error_free_pipe;
+       }
+ 
++      urcu_ref_init(&lta->ref);
++
+       lta->event_notifier_group.event_pipe = event_notifier_event_source_pipe;
+ 
+       lta->ppid = msg->ppid;
+@@ -4227,49 +4229,32 @@
+       return ret;
+ }
+ 
+-/*
+- * Unregister app by removing it from the global traceable app list and freeing
+- * the data struct.
+- *
+- * The socket is already closed at this point so no close to sock.
+- */
+-void ust_app_unregister(int sock)
++static void ust_app_unregister(struct ust_app *app)
+ {
+-      struct ust_app *lta;
+-      struct lttng_ht_node_ulong *node;
+-      struct lttng_ht_iter ust_app_sock_iter;
++      int ret;
+       struct lttng_ht_iter iter;
+       struct ust_app_session *ua_sess;
+-      int ret;
+ 
+       rcu_read_lock();
+ 
+-      /* Get the node reference for a call_rcu */
+-      lttng_ht_lookup(ust_app_ht_by_sock, (void *)((unsigned long) sock), &ust_app_sock_iter);
+-      node = lttng_ht_iter_get_node_ulong(&ust_app_sock_iter);
+-      assert(node);
+-
+-      lta = caa_container_of(node, struct ust_app, sock_n);
+-      DBG("PID %d unregistering with sock %d", lta->pid, sock);
+-
+       /*
+        * For per-PID buffers, perform "push metadata" and flush all
+        * application streams before removing app from hash tables,
+        * ensuring proper behavior of data_pending check.
+        * Remove sessions so they are not visible during deletion.
+        */
+-      cds_lfht_for_each_entry(lta->sessions->ht, &iter.iter, ua_sess,
++      cds_lfht_for_each_entry(app->sessions->ht, &iter.iter, ua_sess,
+                       node.node) {
+               struct ust_registry_session *registry;
+ 
+-              ret = lttng_ht_del(lta->sessions, &iter);
++              ret = lttng_ht_del(app->sessions, &iter);
+               if (ret) {
+                       /* The session was already removed so scheduled for teardown. */
+                       continue;
+               }
+ 
+               if (ua_sess->buffer_type == LTTNG_BUFFER_PER_PID) {
+-                      (void) ust_app_flush_app_session(lta, ua_sess);
++                      (void) ust_app_flush_app_session(app, ua_sess);
+               }
+ 
+               /*
+@@ -4310,41 +4295,63 @@
+                               (void) close_metadata(registry, 
ua_sess->consumer);
+                       }
+               }
+-              cds_list_add(&ua_sess->teardown_node, &lta->teardown_head);
+ 
++              cds_list_add(&ua_sess->teardown_node, &app->teardown_head);
+               pthread_mutex_unlock(&ua_sess->lock);
+       }
+ 
+-      /* Remove application from PID hash table */
+-      ret = lttng_ht_del(ust_app_ht_by_sock, &ust_app_sock_iter);
+-      assert(!ret);
+-
+       /*
+        * Remove application from notify hash table. The thread handling the
+        * notify socket could have deleted the node so ignore on error because
+        * either way it's valid. The close of that socket is handled by the
+        * apps_notify_thread.
+        */
+-      iter.iter.node = &lta->notify_sock_n.node;
++      iter.iter.node = &app->notify_sock_n.node;
+       (void) lttng_ht_del(ust_app_ht_by_notify_sock, &iter);
+ 
+-      /*
+-       * Ignore return value since the node might have been removed before by an
+-       * add replace during app registration because the PID can be reassigned by
+-       * the OS.
+-       */
+-      iter.iter.node = &lta->pid_n.node;
++      iter.iter.node = &app->pid_n.node;
+       ret = lttng_ht_del(ust_app_ht, &iter);
+       if (ret) {
+-              DBG3("Unregister app by PID %d failed. This can happen on pid reuse",
+-                              lta->pid);
++              WARN("Unregister app by PID %d failed", app->pid);
+       }
+ 
+-      /* Free memory */
+-      call_rcu(&lta->pid_n.head, delete_ust_app_rcu);
++      rcu_read_unlock();
++}
++
++/*
++ * Unregister app by removing it from the global traceable app list and freeing
++ * the data struct.
++ *
++ * The socket is already closed at this point, so there is no need to close it.
++ */
++void ust_app_unregister_by_socket(int sock)
++{
++      struct ust_app *app;
++      struct lttng_ht_node_ulong *node;
++      struct lttng_ht_iter ust_app_sock_iter;
++      int ret;
++
++      rcu_read_lock();
++
++      /* Get the node reference for a call_rcu */
++      lttng_ht_lookup(ust_app_ht_by_sock, (void *)((unsigned long) sock), &ust_app_sock_iter);
++      node = lttng_ht_iter_get_node_ulong(&ust_app_sock_iter);
++      assert(node);
++
++      app = caa_container_of(node, struct ust_app, sock_n);
++
++      DBG("PID %d unregistering with sock %d", app->pid, sock);
+ 
++      /* Remove application from socket hash table */
++      ret = lttng_ht_del(ust_app_ht_by_sock, &ust_app_sock_iter);
++      assert(!ret);
++
++      /*
++       * The socket is closed: release its reference to the application
++       * to trigger its eventual teardown.
++       */
++      ust_app_put(app);
+       rcu_read_unlock();
+-      return;
+ }
+ 
+ /*
+@@ -4640,25 +4647,18 @@
+                        * are unregistered prior to this clean-up.
+                        */
+                       assert(lttng_ht_get_count(app->token_to_event_notifier_rule_ht) == 0);
+-
+                       ust_app_notify_sock_unregister(app->notify_sock);
+               }
+       }
+ 
+-      if (ust_app_ht) {
+-              cds_lfht_for_each_entry(ust_app_ht->ht, &iter.iter, app, pid_n.node) {
+-                      ret = lttng_ht_del(ust_app_ht, &iter);
+-                      assert(!ret);
+-                      call_rcu(&app->pid_n.head, delete_ust_app_rcu);
+-              }
+-      }
+-
+       /* Cleanup socket hash table */
+       if (ust_app_ht_by_sock) {
+               cds_lfht_for_each_entry(ust_app_ht_by_sock->ht, &iter.iter, app,
+                               sock_n.node) {
+                       ret = lttng_ht_del(ust_app_ht_by_sock, &iter);
+                       assert(!ret);
++
++                      ust_app_put(app);
+               }
+       }
+ 
+@@ -6988,7 +6988,7 @@
+ /*
+  * Destroy a ust app data structure and free its memory.
+  */
+-void ust_app_destroy(struct ust_app *app)
++static void ust_app_destroy(struct ust_app *app)
+ {
+       if (!app) {
+               return;
+@@ -7415,7 +7415,7 @@
+       int ret;
+       enum lttng_error_code cmd_ret = LTTNG_OK;
+       struct lttng_ht_iter iter;
+-      struct ust_app *app;
++      struct ust_app *app = NULL;
+       struct ltt_ust_session *usess = session->ust_session;
+ 
+       assert(usess);
+@@ -7488,10 +7488,20 @@
+                       struct ust_app_channel *ua_chan;
+                       struct ust_app_session *ua_sess;
+                       struct ust_registry_session *registry;
++                      bool app_reference_taken;
++
++                      app_reference_taken = ust_app_get(app);
++                      if (!app_reference_taken) {
++                              /* Application unregistered concurrently, skip it. */
++                              DBG("Could not get application reference as it is being torn down; skipping application");
++                              continue;
++                      }
+ 
+                       ua_sess = lookup_session_by_app(usess, app);
+                       if (!ua_sess) {
+                               /* Session not associated with this app. */
++                              ust_app_put(app);
++                              app = NULL;
+                               continue;
+                       }
+ 
+@@ -7503,11 +7513,9 @@
+                               goto error;
+                       }
+ 
++
+                       registry = get_session_registry(ua_sess);
+-                      if (!registry) {
+-                              DBG("Application session is being torn down. Skip application.");
+-                              continue;
+-                      }
++                      assert(registry);
+ 
+                       /* Rotate the data channels. */
+                       cds_lfht_for_each_entry(ua_sess->channels->ht, &chan_iter.iter,
+@@ -7519,9 +7527,6 @@
+                                               ua_sess->consumer,
+                                               /* is_metadata_channel */ false);
+                               if (ret < 0) {
+-                                      /* Per-PID buffer and application going away. */
+-                                      if (ret == -LTTNG_ERR_CHAN_NOT_FOUND)
+-                                              continue;
+                                       cmd_ret = 
LTTNG_ERR_ROTATION_FAIL_CONSUMER;
+                                       goto error;
+                               }
+@@ -7536,13 +7541,15 @@
+                                       ua_sess->consumer,
+                                       /* is_metadata_channel */ true);
+                       if (ret < 0) {
+-                              /* Per-PID buffer and application going away. */
+-                              if (ret == -LTTNG_ERR_CHAN_NOT_FOUND)
+-                                      continue;
+                               cmd_ret = LTTNG_ERR_ROTATION_FAIL_CONSUMER;
+                               goto error;
+                       }
++
++                      ust_app_put(app);
++                      app = NULL;
+               }
++
++              app = NULL;
+               break;
+       }
+       default:
+@@ -7553,6 +7560,7 @@
+       cmd_ret = LTTNG_OK;
+ 
+ error:
++      ust_app_put(app);
+       rcu_read_unlock();
+       return cmd_ret;
+ }
+@@ -7934,3 +7942,26 @@
+       rcu_read_unlock();
+       return ret;
+ }
++
++static void ust_app_release(struct urcu_ref *ref)
++{
++      struct ust_app *app = container_of(ref, struct ust_app, ref);
++
++      ust_app_unregister(app);
++      ust_app_destroy(app);
++}
++
++bool ust_app_get(struct ust_app *app)
++{
++      assert(app);
++      return urcu_ref_get_unless_zero(&app->ref);
++}
++
++void ust_app_put(struct ust_app *app)
++{
++      if (!app) {
++              return;
++      }
++
++      urcu_ref_put(&app->ref, ust_app_release);
++}
+--- a/src/bin/lttng-sessiond/ust-app.h
++++ b/src/bin/lttng-sessiond/ust-app.h
+@@ -10,6 +10,7 @@
+ #define _LTT_UST_APP_H
+ 
+ #include <stdint.h>
++#include <urcu/ref.h>
+ 
+ #include <common/index-allocator.h>
+ #include <common/uuid.h>
+@@ -248,6 +249,12 @@
+  * and a linked list is kept of all running traceable app.
+  */
+ struct ust_app {
++      /*
++       * The lifetime of 'sock' holds a reference to the application; the
++       * application management thread will release a reference to the
++       * application if the application dies.
++       */
++      struct urcu_ref ref;
+       int sock;
+       pthread_mutex_t sock_lock;      /* Protects sock protocol. */
+ 
+@@ -337,7 +344,7 @@
+ int ust_app_register(struct ust_register_msg *msg, int sock);
+ int ust_app_register_done(struct ust_app *app);
+ int ust_app_version(struct ust_app *app);
+-void ust_app_unregister(int sock);
++void ust_app_unregister_by_socket(int sock);
+ int ust_app_start_trace_all(struct ltt_ust_session *usess);
+ int ust_app_stop_trace_all(struct ltt_ust_session *usess);
+ int ust_app_destroy_trace_all(struct ltt_ust_session *usess);
+@@ -371,7 +378,6 @@
+ void ust_app_notify_sock_unregister(int sock);
+ ssize_t ust_app_push_metadata(struct ust_registry_session *registry,
+               struct consumer_socket *socket, int send_zero_data);
+-void ust_app_destroy(struct ust_app *app);
+ enum lttng_error_code ust_app_snapshot_record(
+               const struct ltt_ust_session *usess,
+               const struct consumer_output *output, int wait,
+@@ -407,6 +413,9 @@
+ bool ust_app_supports_notifiers(const struct ust_app *app);
+ bool ust_app_supports_counters(const struct ust_app *app);
+ 
++bool ust_app_get(struct ust_app *app);
++void ust_app_put(struct ust_app *app);
++
+ #else /* HAVE_LIBLTTNG_UST_CTL */
+ 
+ static inline
+@@ -455,7 +464,7 @@
+       return -ENOSYS;
+ }
+ static inline
+-void ust_app_unregister(int sock)
++void ust_app_unregister_by_socket(int sock)
+ {
+ }
+ static inline
+@@ -572,11 +581,6 @@
+       return 0;
+ }
+ static inline
+-void ust_app_destroy(struct ust_app *app)
+-{
+-      return;
+-}
+-static inline
+ enum lttng_error_code ust_app_snapshot_record(struct ltt_ust_session *usess,
+               const struct consumer_output *output, int wait, uint64_t max_stream_size)
+ {
+@@ -678,6 +682,16 @@
+       return 0;
+ }
+ 
++static inline
++void ust_app_get(struct ust_app *app)
++{
++}
++
++static inline
++void ust_app_put(struct ust_app *app)
++{
++}
++
+ #endif /* HAVE_LIBLTTNG_UST_CTL */
+ 
+ #endif /* _LTT_UST_APP_H */
diff -Nru ltt-control-2.13.9/debian/patches/series ltt-control-2.13.9/debian/patches/series
--- ltt-control-2.13.9/debian/patches/series    2022-04-14 11:45:19.000000000 -0400
+++ ltt-control-2.13.9/debian/patches/series    2025-02-06 13:12:48.000000000 -0500
@@ -1 +1,2 @@
 fix-lttng-health-check-manpage.patch
+fix-consumer-crash.patch
