[lttng-dev] [PATCH lttng-modules stable-2.10] Fix: btrfs: Remove fsid/metadata_fsid fields from btrfs_info
Introduced in v5.0. See upstream commit:

  commit de37aa513105f864d3c21105bf5542d498f21ca2
  Author: Nikolay Borisov
  Date:   Tue Oct 30 16:43:24 2018 +0200

    btrfs: Remove fsid/metadata_fsid fields from btrfs_info

    Currently btrfs_fs_info structure contains a copy of the
    fsid/metadata_uuid fields. Same values are also contained in the
    btrfs_fs_devices structure which fs_info has a reference to. Let's
    reduce duplication by removing the fields from fs_info and always
    refer to the ones in fs_devices. No functional changes.

Signed-off-by: Michael Jeanson
---
 instrumentation/events/lttng-module/btrfs.h | 100 +++-
 1 file changed, 53 insertions(+), 47 deletions(-)

diff --git a/instrumentation/events/lttng-module/btrfs.h b/instrumentation/events/lttng-module/btrfs.h
index 4dfbf5b..ec45a1e 100644
--- a/instrumentation/events/lttng-module/btrfs.h
+++ b/instrumentation/events/lttng-module/btrfs.h
@@ -32,6 +32,12 @@ struct extent_state;
 
 #define BTRFS_UUID_SIZE 16
 
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,0,0))
+#define lttng_fs_info_fsid fs_info->fs_devices->fsid
+#else
+#define lttng_fs_info_fsid fs_info->fsid
+#endif
+
 #if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0) || \
 	LTTNG_SLE_KERNEL_RANGE(4,4,73,5,0,0, 4,4,73,6,0,0) || \
 	LTTNG_SLE_KERNEL_RANGE(4,4,82,6,0,0, 4,4,82,7,0,0) || \
@@ -629,7 +635,7 @@ LTTNG_TRACEPOINT_EVENT(btrfs_add_block_group,
 	TP_ARGS(fs_info, block_group, create),
 
 	TP_FIELDS(
-		ctf_array(u8, fsid, fs_info->fsid, BTRFS_UUID_SIZE)
+		ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
 		ctf_integer(u64, offset, block_group->key.objectid)
 		ctf_integer(u64, size, block_group->key.offset)
 		ctf_integer(u64, flags, block_group->flags)
@@ -647,7 +653,7 @@ LTTNG_TRACEPOINT_EVENT(btrfs_add_block_group,
 	TP_ARGS(fs_info, block_group, create),
 
 	TP_FIELDS(
-		ctf_array(u8, fsid, fs_info->fsid, BTRFS_UUID_SIZE)
+		ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
 		ctf_integer(u64, offset, block_group->key.objectid)
 		ctf_integer(u64, size, block_group->key.offset)
 		ctf_integer(u64, flags, block_group->flags)
@@ -1015,18 +1021,18 @@ LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__chunk,
 
 LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__chunk, btrfs_chunk_alloc,
 
-	TP_PROTO(const struct btrfs_fs_info *info, const struct map_lookup *map,
+	TP_PROTO(const struct btrfs_fs_info *fs_info, const struct map_lookup *map,
 		 u64 offset, u64 size),
 
-	TP_ARGS(info, map, offset, size)
+	TP_ARGS(fs_info, map, offset, size)
 )
 
 LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__chunk, btrfs_chunk_free,
 
-	TP_PROTO(const struct btrfs_fs_info *info, const struct map_lookup *map,
+	TP_PROTO(const struct btrfs_fs_info *fs_info, const struct map_lookup *map,
 		 u64 offset, u64 size),
 
-	TP_ARGS(info, map, offset, size)
+	TP_ARGS(fs_info, map, offset, size)
 )
 
 #elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0))
@@ -1050,18 +1056,18 @@ LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__chunk,
 
 LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__chunk, btrfs_chunk_alloc,
 
-	TP_PROTO(struct btrfs_fs_info *info, struct map_lookup *map,
+	TP_PROTO(struct btrfs_fs_info *fs_info, struct map_lookup *map,
 		 u64 offset, u64 size),
 
-	TP_ARGS(info, map, offset, size)
+	TP_ARGS(fs_info, map, offset, size)
 )
 
 LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__chunk, btrfs_chunk_free,
 
-	TP_PROTO(struct btrfs_fs_info *info, struct map_lookup *map,
+	TP_PROTO(struct btrfs_fs_info *fs_info, struct map_lookup *map,
 		 u64 offset, u64 size),
 
-	TP_ARGS(info, map, offset, size)
+	TP_ARGS(fs_info, map, offset, size)
 )
 
 #elif (LTTNG_SLE_KERNEL_RANGE(4,4,73,5,0,0, 4,4,73,6,0,0) || \
@@ -1192,7 +1198,7 @@ LTTNG_TRACEPOINT_EVENT(btrfs_space_reservation,
 	TP_ARGS(fs_info, type, val, bytes, reserve),
 
 	TP_FIELDS(
-		ctf_array(u8, fsid, fs_info->fsid, BTRFS_UUID_SIZE)
+		ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
 		ctf_string(type, type)
 		ctf_integer(u64, val, val)
 		ctf_integer(u64, bytes, bytes)
@@ -1208,7 +1214,7 @@ LTTNG_TRACEPOINT_EVENT(btrfs_space_reservation,
 	TP_ARGS(fs_info, type, val, bytes, reserve),
 
 	TP_FIELDS(
-		ctf_array(u8, fsid, fs_info->fsid, BTRFS_UUID_SIZE)
+		ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
 		ctf_string(type, type)
 		ctf_integer(u64, val, val)
 		ctf_integer(u64, bytes, bytes)
@@ -1221,9 +1227,9 @@ LTTNG_TRACEPOINT_EVENT(btrfs_space_reservation,
 
 LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserved_extent,
 
-	TP_PROTO(const struct btrfs_fs_info *info, u64 start, u64 l
Re: [lttng-dev] tracing multithread user program and API support for enabling/disabling events and for adding/removing context fields
Mathieu,

Thank you for your comments. I have another issue with tracing a multithreaded runtime: I am seeing lost events at the end of a thread's execution. I have been debugging the code for two days and could not find the reason.

The lost events occur at the end of a thread's execution. The thread is put into suspended mode with pthread_cond_wait after the tracepoint of the lost events has been triggered; I know the tracepoint was reached, since I verified it with printf. What could cause these events to be lost? Any details about how a tracepoint is triggered and how the event record is written to the buffer would help me debug this. For example, is a signal handler used to write event records asynchronously? I used babeltrace to list the events.

The real code is too complicated, so I will try to reproduce the problem with a small program, something along the lines of the sketch below.
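A minimal sketch of what I have in mind (I use tracef() here instead of our real tracepoint provider so the example stays self-contained; the names and the sleep-based synchronization are placeholders, not taken from our runtime):

/*
 * repro.c - sketch: one worker thread emits an event, then parks itself
 * on a condition variable, like the threads in our runtime do.
 *
 * Build:  gcc -o repro repro.c -llttng-ust -lpthread
 * Trace:  lttng create; lttng enable-event -u 'lttng_ust_tracef:*'; lttng start
 *         ./repro; lttng stop; lttng view; lttng destroy
 */
#include <pthread.h>
#include <unistd.h>
#include <lttng/tracef.h>	/* tracef(): no tracepoint provider boilerplate */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int resume;

static void *worker(void *arg)
{
	long id = (long) arg;

	/* The event that sometimes does not show up in the trace. */
	tracef("worker %ld about to suspend", id);

	/* Suspend the thread, as the runtime does at the end of its work. */
	pthread_mutex_lock(&lock);
	while (!resume)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);

	tracef("worker %ld resumed", id);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, (void *) 1L);

	sleep(1);	/* crude: let the worker emit its event and suspend */

	pthread_mutex_lock(&lock);
	resume = 1;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);

	pthread_join(tid, NULL);
	return 0;
}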
Thank you,
Yonghong

On Thu, Dec 20, 2018 at 4:23 PM Mathieu Desnoyers <mathieu.desnoy...@efficios.com> wrote:

> It will impact tracing of _all_ threads of _all_ processes tracked by the
> targeted tracing session.
>
> "lttng_enable_event()" is by no means a "fast" operation. It is a tracing
> control operation meant to be performed outside of fast paths.
>
> Changing the design of LTTng from per-cpu to something else would be a
> significant endeavor.
>
> Thanks,
>
> Mathieu
>
> - On Dec 20, 2018, at 3:27 PM, Yonghong Yan wrote:
>
> Apologies for the wrong terms. I will ask in another way: I have
> multithreaded code, and if a thread calls lttng_enable_event(...), will it
> impact only the calling thread, the threads spawned after that call, or
> all the threads of the process?
>
> Got your answer about the vtid context. It is similar to what I am doing: we
> want to analyze the behavior of all user threads. In the current LTTng, we
> need that vtid field in every event even though thread migration is a rare
> situation, and when analyzing the traces we have to check every record and
> sort the traces according to the vtid. This impacts the performance of both
> tracing and analysis. If I wanted to change the way traces are fed into the
> buffers in LTTng, how complicated would that be? I am guessing I would need
> to at least replace sched_getcpu with the vtid (or something similar, so
> that all user threads are numbered from 0), and/or bind the ring buffers to
> the user threads, and more.
>
> Yonghong
>
>
> On Thu, Dec 20, 2018 at 2:49 PM Mathieu Desnoyers <mathieu.desnoy...@efficios.com> wrote:
>
>> Hi,
>>
>> Can you define what you mean by "per-user-thread tracepoint" and
>> "whole-user-process"? AFAIK those concepts don't appear anywhere in the
>> LTTng documentation.
>>
>> Thanks,
>>
>> Mathieu
>>
>> - On Dec 19, 2018, at 6:06 PM, Yonghong Yan wrote:
>>
>> Got another question about lttng_enable_event(): will using this API
>> impact a per-user-thread tracepoint or the whole-user-process? I am
>> thinking it is the whole process, but I want to confirm.
>>
>> Yonghong
>>
>>
>> On Wed, Dec 19, 2018 at 4:20 PM Mathieu Desnoyers <mathieu.desnoy...@efficios.com> wrote:
>>
>>> Hi Yonghong,
>>>
>>> - On Dec 19, 2018, at 1:19 PM, Yonghong Yan wrote:
>>>
>>> We are experimenting with LTTng for tracing a multi-threaded program, and
>>> it works very well for us. Thank you for this great tool. But we have some
>>> concerns about the overhead and scalability of the tracing. Could you
>>> share some insight into the following questions?
>>>
>>> 1. The session daemon communicates with the user application via a Unix
>>> domain socket, according to the LTTng documentation. Is the communication
>>> frequent, e.g. does each event require communication, or does it only
>>> happen at the beginning to configure user-space tracing?
>>>
>>> This Unix socket is only for "control" of tracing (infrequent
>>> communication). The high-throughput tracing data goes through a shared
>>> memory map (per-cpu buffers).
>>>
>>> 2. For the consumer daemon, does it have a thread per CPU/channel to
>>> write the traces to disk or relay them, or is it a single-threaded process
>>> handling all the channels and ring buffers, which could become a
>>> bottleneck if we have a large number of user threads all feeding traces?
>>>
>>> Each consumer daemon is a single thread at the moment. It could be
>>> improved by implementing a multithreaded design in the future. It should
>>> help especially in NUMA setups, where having the consumer daemon on the
>>> same NUMA node as the ring buffer it reads from would minimize the amount
>>> of remote NUMA accesses.
>>>
>>> Another point is cases where I/O is performed to various target locations
>>> (different network interfaces or disks). When all I/O goes through the
>>> same interface, the bottleneck becomes the block device or the network
>>> interface. However, for scenarios involving many network interfaces or
>>> block devices, multithreading the consumer daemon could become useful.
>>>
>>> This has not been a priority for anyone so far though.
>>>
>>> 3. In the one channel/