Jonathan Rajotte [Mon, 10 Jan 2022 22:27:13 +0000 (17:27 -0500)]
Fix: lttng-ctl: erroneous uses of LTTNG_PACKED
The LTTNG_PACKED macro uses gcc attributes to indicate that a structure
should be packed. Hence, this macro obeys the same rules as the gcc
attribute.
Various mis-uses of the LTTNG_PACKED macro may result in structures not
being packed:
- The LTTNG_PACKED macro should always be placed _before_ an identifier
when a structure is declared in-place.
- Adding LTTNG_PACKED at the definition site has no effect if the
structure was declared elsewhere.
Those mis-uses cause issues when mixing the bitness (32/64) of the
session daemon and liblttng-ctl.
Outstanding issues include the following structures that are not
tagged as LTTNG_PACKED:
- struct lttng_event
- struct lttng_channel
- struct lttng_event_context
Unfortunately, those structures are exposed by the public API and
can't be tagged as being "packed". Doing so would break the ABI
of liblttng-ctl.
These structures should be packed/unpacked explicitly.
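For illustration, a minimal C sketch of the placement rules, using the raw
gcc attribute as a stand-in for LTTNG_PACKED (type names below are
hypothetical):
```
#include <stdint.h>

/* In-place declaration: the attribute placed between the closing brace
 * and the identifier applies to the structure type itself. */
struct in_place_msg {
	uint8_t  type;
	uint64_t value;
} __attribute__((__packed__)) ok_instance;

/* The type below was declared elsewhere without the attribute; tagging
 * a later definition does not retroactively pack the type (gcc
 * typically warns that the attribute is ignored here). */
struct declared_elsewhere {
	uint8_t  type;
	uint64_t value;
};
struct declared_elsewhere not_packed __attribute__((__packed__));
/* sizeof(struct declared_elsewhere) keeps its padded size. */
```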
JORAJ CHERRY PICK: Change-Id: I095dc0dffc6bf9e15dc7a7ec797958a5780ef150
Change-Id: I33f026a613813af312de65adee23ac829a071cf8
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Tue, 18 Jan 2022 14:46:48 +0000 (09:46 -0500)]
Backport: lttng_buffer_view_is_valid, lttng_buffer_view_contains_string
Change-Id: I23892e6f20e2847037ba16ad1bba8b2d5a9e0a7c
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Thu, 27 Jan 2022 19:22:22 +0000 (14:22 -0500)]
Fix: conversion from KB to bytes overflow on arm32
Observed issue
==============
On enable-channel, the available memory check fails on arm32 when the
available memory, in bytes, is larger than 2^32.
Cause
=====
`read_proc_meminfo_field` converts the read value (in KB) to bytes and
stores it into a size_t variable.
On the system running the reproducer, the value of the `value_kb`
variable is 4839692, yielding an overflow when multiplied by 1024 since
`size_t` is 32 bits long. `size_t` can be larger in certain situations
(e.g. LARGEFILE), but this is irrelevant to the problem at hand.
Solution
========
Convert all the checks to use uint64_t.
Known drawbacks
===============
None.
References
==========
The multiplication overflow check scheme is borrowed from
`src/common/time.c`
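A minimal sketch of the conversion pattern, assuming a hypothetical helper
name (the overflow-check scheme itself comes from `src/common/time.c`):
```
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: convert a /proc/meminfo value expressed in kB to
 * bytes using a 64-bit type, guarding the multiplication against
 * overflow. */
static bool kb_to_bytes(uint64_t value_kb, uint64_t *bytes)
{
	if (value_kb > UINT64_MAX / 1024) {
		/* The multiplication would overflow. */
		return false;
	}
	*bytes = value_kb * 1024;
	return true;
}
```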
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I76fe7e57da149c1b4c230a411e0346ba6f9ef7b1
Jonathan Rajotte [Tue, 11 Jan 2022 17:28:15 +0000 (12:28 -0500)]
Fix: Unexpected payload size in cmd_recv_stream_2_11
Observed issue
==============
For the following scenario:
lttng-relayd: 64 bit
lttng-sessiond: 64 bit
lttng-consumerd: 32 bit
application: 32 bit
Commands
lttng create --set-url=net://127.0.0.1
lttng enable-event -u -a
lttng start
./application
On application start the lttng-relayd reports this error:
DEBUG1 - 14:16:38.216442600 [2004731/2004735]: Done receiving control command payload: fd = 19, payload size = 4376 bytes (in relay_process_control_receive_payload() at main.c:3456)
DEBUG3 - 14:16:38.216469462 [2004731/2004735]: Processing "RELAYD_ADD_STREAM" command for socket 19 (in relay_process_control_command() at main.c:3327)
Error: Unexpected payload size in "cmd_recv_stream_2_11": expected >= 3519925694 bytes, got 4376 bytes
Cause
=====
In `relayd_add_stream`, instead of taking the > 2.11 protocol path, the
`relayd_add_stream_2_2` function is called.
The values of the rsock version numbers are:
major: 21845
minor: 2
These are simply invalid since we know that the version should be 2.12.
The relayd sock version numbers are set during the
LTTNG_CONSUMER_ADD_RELAYD_SOCKET command between the lttng-sessiond and
the lttng-consumerd process. It is important to note here that both
processes do NOT have the same bitness.
The serialization and deserialization of `struct lttcomm_relayd_sock` is
the culprit.
`struct lttcomm_relayd_sock` contains a `struct lttcomm_sock`:
struct lttcomm_sock {
	int32_t fd;
	enum lttcomm_sock_proto proto;
	struct lttcomm_sockaddr sockaddr;
	const struct lttcomm_proto_ops *ops;
} LTTNG_PACKED;
Note that `ops` is a pointer and its size varies based on the bitness of
the application. Hence the size of the `struct lttcomm_sock` differs
across bitness. Since it is the first member of `struct
lttcomm_relayd_sock`, the memory layout is simply invalid across
bitness (amd64/x86).
This results in invalid parsing for the overall "struct
lttcomm_relayd_sock" when dealing with a lttng-consumerd with a
different bitness than the lttng-sessiond. As far as I know local
tracing scenarios are not affected since this is only relevant when
dealing with a lttng-relayd.
Solution
========
Pass the socket protocol type and the relayd major and minor versions in
`lttcomm_consumer_msg`. On the receiver side, query the network stack to
get the peer information to populate a basic `lttcomm_sock`. Leaving
this work to the OS saves us from having to serialize the `sockaddr_in*`
structs.
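A minimal sketch of the receiver-side idea, using only the standard socket
API (helper name hypothetical):
```
#include <string.h>
#include <sys/socket.h>

/* Hypothetical helper: recover the peer address of an already-connected
 * socket from the OS instead of deserializing a sockaddr sent by a
 * process of possibly different bitness. */
static int get_peer_address(int fd, struct sockaddr_storage *out)
{
	socklen_t len = sizeof(*out);

	memset(out, 0, sizeof(*out));
	return getpeername(fd, (struct sockaddr *) out, &len);
}
```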
Known drawbacks
===============
We rely on `getpeername` for the first time. Compatibility might be a
problem.
This code path assumes a lot of things that cannot be asserted against,
such as the fact that the socket from which we fetch the info must be
`connected`. Still, at this point the socket is completely set up and the
rest of the code already depends on it.
From GETPEERNAME(2):
```
For stream sockets, once a connect(2) has been performed, either
socket can call getpeername() to obtain the address of the peer
socket. On the other hand, datagram sockets are connectionless.
Calling connect(2) on a datagram socket merely sets the peer
address for outgoing datagrams sent with write(2) or recv(2).
The caller of connect(2) can use getpeername() to obtain the
peer address that it earlier set for the socket. However, the
peer socket is unaware of this information, and calling
getpeername() on the peer socket will return no useful
information (unless a connect(2) call was also executed on the
peer). Note also that the receiver of a datagram can obtain the
address of the sender when using recvfrom(2).
```
But here we are always "the caller of connect".
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I930ef9bbbf18fa881222850ba0fbbba026dc0220
Jonathan Rajotte [Mon, 31 Jan 2022 19:47:35 +0000 (14:47 -0500)]
Tests: add kernel test into the `make check` test suite.
The Jenkins CI mostly runs the `make check` suite. Only the Lava-based CI
runs the root_regression test suite. Most of those tests can be run under
`make check` without incurring any major extra time.
Only `regression/tools/streaming/test_high_throughput_limits` is left in
root_regression since it is currently "unreasonable" in terms of the time
it takes to run. This could be tackled another time.
Change-Id: I29d40fa8bec872bf2e22a8bd933f58fa6376ee22
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jérémie Galarneau [Wed, 26 Jan 2022 22:31:19 +0000 (17:31 -0500)]
Update version to v2.12.8
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Mathieu Desnoyers [Wed, 12 Jan 2022 21:46:21 +0000 (16:46 -0500)]
Relicense all source and header files included in LGPL code
All code included in libcommon-lgpl.a should be LGPL. Some were licensed
as GPLv2 by mistake. We need to relicense those.
EfficiOS owns the copyright to all of the affected source files and
agrees to this relicensing from GPLv2 to LGPLv2.1.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ib4aa1a7cc8a6f8e2a9891f1bd05c7ea2f8530e9b
Jérémie Galarneau [Wed, 26 Jan 2022 01:16:59 +0000 (20:16 -0500)]
Fix: sessiond: use of null session when session is not found
A bad backport introduced this use of `session` even when it is
NULL.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: If7d540229ef3a2a50e1e58d4f408cb61252ed45d
Mathieu Desnoyers [Wed, 19 Jan 2022 16:35:20 +0000 (11:35 -0500)]
Move utils_expand_path and utils_expand_path_keep_symlink to libpath.la
Move the GPLv2 helper functions utils_expand_path and
utils_expand_path_keep_symlink to libpath.la. This will allow utils.cpp
to be relicensed to LGPLv2.1 by making sure EfficiOS owns the copyright
for the entire source file.
Statically include libpath.la into libcommon-gpl.la.
The "lttng" executable is GPLv2 and only depends on libcommon-lgpl.la,
so it needs to explicitly list libpath.la as its dependency.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I2108e0bd35ce75797d4a20e65891c820a1dd79f8
Mathieu Desnoyers [Wed, 12 Jan 2022 20:23:09 +0000 (15:23 -0500)]
Link lttng executable on libcommon-lgpl.a
This allows testing whether liblttng-ctl.so has any dependency on
libcommon-gpl.a, and will trigger a link-time failure if that is the case.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ie6f864dccdd61f896a7e238e202dc6188ec4f9c5
Mathieu Desnoyers [Tue, 11 Jan 2022 20:57:37 +0000 (15:57 -0500)]
Introduce libcommon-lgpl for liblttng-ctl
liblttng-ctl is an LGPLv2.1 library and should not use GPLv2 code.
Introduce libcommon-lgpl as a static archive containing only LGPLv2.1
compatible code.
This also removes the dependency from liblttng-ctl to liburcu.
Include some source files in libcommon-lgpl.a which are indirectly needed
by source files required in libcommon-lgpl.a:
- endpoint.cpp,
- lttng-elf.cpp,
- lttng-elf.h.
Include some source files in libcommon-lgpl.a which are only needed to
link the lttng executable:
- domain.cpp,
- spawn-viewer.cpp, spawn-viewer.h.
Introduce the new source file hashtable/seed.cpp to move the
lttng_ht_seed symbol into a source file which does not require
liburcu-cds, so it can be present in libcommon-lgpl. This allows
building compile units which are needed in the LGPL common library and
which also contain functions that directly refer to lttng_ht_seed.
Programs and libraries which use libhashtable.la are changed to use
libcommon-gpl.la instead. libhashtable becomes internal to libcommon.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I27d2acb823a6d951692a5da88ce32bbe6bafb072
Mathieu Desnoyers [Tue, 11 Jan 2022 19:46:37 +0000 (14:46 -0500)]
Rename libcommon.so to libcommon-gpl.so
libcommon is a static library currently used by both liblttng-ctl
(LGPLv2.1) and all lttng-tools executables (GPLv2).
Given that some code in libcommon depends on liburcu, this introduces an
indirect dependency from liblttng-ctl to liburcu, which is unwanted.
This first step renames libcommon.so to libcommon-gpl.so. Following
steps will introduce a more lightweight libcommon-lgpl.so which only
contains LGPLv2.1 code, and removes the dependency on liburcu.
Backport Notes
--------------
ini_config has to link against liblttng-ctl since the internal
configuration library is not split into ini-config and libconfig
(see 3299fd310).
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ia8f37ea229f68550200cbb1528216a505bbbd45f
Mathieu Desnoyers [Thu, 20 Jan 2022 15:30:01 +0000 (10:30 -0500)]
Copyright ownership transfer
Apply copyright ownership transfer from Jonathan Rajotte-Julien and
Olivier Cotte to EfficiOS Inc.
Link: https://lists.lttng.org/pipermail/lttng-dev/2022-January/030111.html
Link: https://lists.lttng.org/pipermail/lttng-dev/2022-January/030113.html
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Olivier Moussavou Cotte <olivier.moussavoucotte@mail.mcgill.ca>
Signed-off-by: Jonathan Rajotte-Julien <jonathan.r.julien@gmail.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I0c2800ec17f5d435c8a18f2dae096809b8037b5d
Jérémie Galarneau [Fri, 14 Jan 2022 20:42:03 +0000 (15:42 -0500)]
Copyright ownership transfer
Apply copyright ownership transfer from David Goulet, Julien Desfossez,
and Simon Marchi to EfficiOS Inc.
Link: https://lists.lttng.org/pipermail/lttng-dev/2022-January/030087.html
Link: https://lists.lttng.org/pipermail/lttng-dev/2022-January/030092.html
Link: https://lists.lttng.org/pipermail/lttng-dev/2022-January/030091.html
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Signed-off-by: David Goulet <dgoulet@ev0ke.net>
Signed-off-by: Julien Desfossez <ju@klipix.org>
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Change-Id: Id13575afd4a2a09bb91a8d2b7a12dc3db8dc329f
Jérémie Galarneau [Fri, 23 Jul 2021 20:56:53 +0000 (16:56 -0400)]
Fix: relayd: erroneous rundir permission logging message
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I2eb0901896feab684d397aa43f8c9036018d9c95
Jérémie Galarneau [Thu, 9 Dec 2021 21:27:29 +0000 (16:27 -0500)]
Fix: sessiond: rotation thread: fatal error when not finding a session
The rotation thread implements scheduled rotations (by size) by
registering a trigger that monitors the session's consumed size and
notifies when the next rotation's size threshold is exceeded.
The notification is delivered asynchronously which doesn't prevent
the session from being destroyed before the rotation thread has
had the time to process the notification (and perform a rotation).
Since it is possible for a session to be destroyed by the time the
notification is processed, the rotation thread shouldn't handle
this eventuality as a fatal error (shutting down the thread).
Note that nobody reported this issue nor did I attempt to reproduce it.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I588054bad3542854851f28d34f2c758bdf420a34
Jérémie Galarneau [Wed, 12 Jan 2022 20:48:00 +0000 (15:48 -0500)]
Fix: relayd: rotation failure for multi-domain session
Observed issue
==============
Rotating a multi-domain streaming session results in the following
error:
$ lttng rotate
Waiting for rotation to complete...
Error: Failed to retrieve rotation state.
Meanwhile, the relay daemon logs indicate the following:
DBG1 - 14:56:04.213163667 [265774/265778]: lttng_trace_chunk_rename_path from .tmp_new_chunk to (null) (in lttng_trace_chunk_rename_path_no_lock() at trace-chunk.cpp:759)
PERROR - 14:56:04.213242941 [265774/265778]: Failed to move trace chunk directory ".tmp_new_chunk" to "20220112T145604-0500-1": No such file or directory (in lttng_trace_chunk_rename_path_no_lock() at trace-chunk.cpp:799)
DBG1 - 14:56:04.213396931 [265774/265778]: aborting session 2 (in session_abort() at session.cpp:588)
DBG1 - 14:56:04.213512198 [265774/265778]: Control connection closed with 22 (in relay_thread_close_connection() at main.cpp:3874)
The 'abort' of session 2 here causes the kernel consumer to fail to
consume subbuffers:
Error: Relayd send index failed. Cleaning up relayd 3.
Error: Error consuming subbuffer: (0)
[...]
Cause
=====
Following the flow of execution in the relay daemon shows that different
trace chunks are used by the two relay sessions that result from the
streaming of a single multi-domain session. Both trace chunks "own" the
same output directory.
When a rotation is performed, the first trace chunk to be closed will
move the directory. Then, the second trace chunk to be closed will
attempt to do the same, failing to do so as seen in the relay daemon
log.
Solution
========
Using different trace chunk instances for relay sessions belonging to a
single sessiond session goes against the intended use of the sessiond
trace chunk registry.
A sessiond trace chunk registry allows the relay daemon to share trace
chunks used by different "relay sessions" when they were created for the
same user-visible session daemon session. Tracing multiple domains (e.g.
ust and kernel) results in per-domain relay sessions being created.
Sharing trace chunks, and their output directory more specifically, is
essential to properly implement session rotations. The sharing of output
directory handles allows directory renames to be performed once and
without races that would stem from multiple renames.
The reason why the sessiond trace chunk registry returns different trace
chunk instances for two relay sessions is that the wrong session `id` is
used to publish trace chunks. The `id` that must be used to share trace
chunks across the relay sessions that belong to the same sessiond
session is `id_sessiond`.
`id_sessiond` is optional as it is only provided by consumers v2.11+.
Otherwise, it is fine to use the relay session `id`: it is a unique id
for a given session daemon instance and those consumers will not issue a
session rotation (or clear) as the feature didn't exist.
A reference counting bug revealed by this change is also fixed in the
implementation of the sessiond trace chunk registry.
When the trace chunk is first published, two references to the published
chunks exist. One is taken by the registry while the other is being
returned to the caller. In the use case of the relay daemon, the
reference held by the registry itself is undesirable.
We want the trace chunk to be removed from the registry as soon as it is
not being used by the relay daemon (through a session or a stream). This
differs from the behaviour of the consumer daemon which relies on an
explicit command from the session daemon to release the registry's
reference.
In cases where the trace chunk had already been published, the reference
belonging to the sessiond trace chunk registry instance has already been
'put' by the first publication. We must simply return the published trace
chunk with a reference taken on behalf of the caller.
Known drawbacks
===============
None.
Change-Id: Ic33443b114a87574a1b26ac5ccb022e47f886ddd
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Mathieu Desnoyers [Tue, 11 Jan 2022 18:59:15 +0000 (13:59 -0500)]
Fix: lttng-ctl: lttng_list_sessions: initialize out_sessions to NULL when returning 0
Observed issue
==============
Users of lttng-ctl API's lttng_list_sessions observe application crash
when freeing the *out_sessions output value when lttng_list_sessions
returns 0.
Cause
=====
The implementation does not set *out_sessions to NULL when
lttng_ctl_ask_sessiond() sets the sessions variable to NULL.
This causes the user application to attempt to free(3) an uninitialized
pointer.
Solution
========
Initialize out_sessions to NULL before invoking
lttng_ctl_ask_sessiond(), so it is initialized when lttng_list_sessions
returns 0, thus allowing *out_sessions to be subsequently freed.
A free(3) on a NULL pointer is a no-op.
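A caller-side usage sketch showing why the initialization matters (error
handling trimmed):
```
#include <stdio.h>
#include <stdlib.h>
#include <lttng/lttng.h>

static int print_session_names(void)
{
	struct lttng_session *sessions = NULL;
	const int count = lttng_list_sessions(&sessions);

	if (count < 0) {
		return count;	/* negative lttng error code */
	}
	for (int i = 0; i < count; i++) {
		printf("%s\n", sessions[i].name);
	}
	/* With the fix, `sessions` is NULL when count == 0, so this
	 * unconditional free(3) is safe. */
	free(sessions);
	return 0;
}
```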
Known drawbacks
===============
None.
History
=======
This was introduced by those two commits:
b178f53e90 ("Generate session name and default output on sessiond's end")
27ea4ba825 ("Fix: error when listing sessions with no session")
This is a regression present in the stable-2.11, stable-2.12,
stable-2.13, and master branches.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I34125d708a32674d79b831e5004c48321ebd711e
Simon Marchi [Tue, 30 Nov 2021 03:15:53 +0000 (22:15 -0500)]
Fix: lttng: initialize variable in run_command_string
I got some crashes when using `lttng track` and hitting some error
paths. The tracker_handle variable in run_command_string is passed to
lttng_process_attr_tracker_handle_destroy uninitialized if
lttng_session_get_tracker_handle fails.
$ valgrind lttng track --kernel --pid 569878
==634572== Memcheck, a memory error detector
==634572== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==634572== Using Valgrind-3.17.0 and LibVEX; rerun with -h for copyright info
==634572== Command: lttng track --kernel --pid 569878
==634572==
Error: Unknown error
==634572== Conditional jump or move depends on uninitialised value(s)
==634572== at 0x4875007: lttng_process_attr_tracker_handle_destroy (tracker.cpp:25)
==634572== by 0x13AC55: run_command_string(cmd_type, char const*, lttng_domain_type, lttng_process_attr, char const*, mi_writer*) (track-untrack.cpp:485)
==634572== by 0x13ADA5: run_command(cmd_type, char const*, process_attr_command_args const*, mi_writer*) (track-untrack.cpp:535)
==634572== by 0x13B472: cmd_track_untrack(cmd_type, int, char const**, char const*) (track-untrack.cpp:740)
==634572== by 0x13B5D9: cmd_track(int, char const**) (track-untrack.cpp:805)
==634572== by 0x14C598: handle_command(int, char**) (lttng.cpp:237)
==634572== by 0x14CCE9: parse_args(int, char**) (lttng.cpp:426)
==634572== by 0x14CE65: main (lttng.cpp:475)
Fix it by initializing it to NULL.
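A usage sketch of the fixed pattern (simplified; it assumes that
destroying a NULL handle is a no-op, which is what the fix relies on):
```
#include <lttng/lttng.h>

static int track_kernel_pids_sketch(const char *session_name)
{
	/* Initialized to NULL so the destroy call below is well-defined
	 * even when lttng_session_get_tracker_handle() fails. */
	struct lttng_process_attr_tracker_handle *tracker_handle = NULL;
	enum lttng_error_code ret_code;
	int ret = -1;

	ret_code = lttng_session_get_tracker_handle(session_name,
			LTTNG_DOMAIN_KERNEL, LTTNG_PROCESS_ATTR_PROCESS_ID,
			&tracker_handle);
	if (ret_code != LTTNG_OK) {
		goto end;
	}
	/* ... manipulate the tracker's inclusion set here ... */
	ret = 0;
end:
	lttng_process_attr_tracker_handle_destroy(tracker_handle);
	return ret;
}
```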
Change-Id: Id2693e75c3c5c83cef58db3109973d7ab679b859
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Mathieu Desnoyers [Mon, 22 Nov 2021 19:38:35 +0000 (14:38 -0500)]
Fix: consumer-stream: live viewers observe timestamps going backwards
Observed issue
==============
When stress-testing over an entire weekend, we caught the following
occurrences of timestamps going backwards with babeltrace in live viewer
mode:
One occurrence:
# Test ust streaming live clear with viewer with new metadata after clear
# Parameters: tracing_active=0, clear_twice=0, buffer_type=pid
6-7 occurrences:
# Test ust basic streaming live with viewer
# Parameters: tracing_active=1, clear_twice=0, buffer_type=uid
Relevant log of the relayd focused on the context of the stream
triggering the issue (stream 994):
506:DBG1 - 17:26:54.966288486 [3866648/3866655]: Relay viewer stream 994 not found (in viewer_stream_get_by_id() at viewer-stream.cpp:268)
603:DBG1 - 17:26:54.970265349 [3866648/3866655]: Sending stream 994 to viewer (in send_viewer_streams() at live.cpp:241)
843:DBG1 - 17:26:55.959835404 [3866648/3866652]: stream_add_index for stream 994 (in stream_add_index() at stream.cpp:1233)
844:DBG3 - 17:26:55.959866303 [3866648/3866652]: Finding index for stream id 994 and seq_num 0 (in relay_index_get_by_id_or_create() at index.cpp:112)
845:DBG2 - 17:26:55.959896611 [3866648/3866652]: Creating relay index for stream id 994 and seqnum 0 (in relay_index_create() at index.cpp:34)
846:DBG2 - 17:26:55.959928097 [3866648/3866652]: Adding relay index with stream id 994 and seqnum 0 (in relay_index_add_unique() at index.cpp:70)
847:DBG2 - 17:26:55.959968163 [3866648/3866652]: Index found or created in HT for stream ID 994 and seqnum 0 (in relay_index_get_by_id_or_create() at index.cpp:144)
884:DBG3 - 17:26:55.961676906 [3866648/3866652]: Receiving data for stream id 994 seqnum 0, 0 bytes received, 84 bytes left to receive (in relay_process_data_receive_payload() at main.cpp:3583)
885:DBG1 - 17:26:55.961783540 [3866648/3866652]: Wrote to stream 994: data_length = 84, padding_length = 0 (in stream_write() at stream.cpp:1113)
886:DBG1 - 17:26:55.961862441 [3866648/3866652]: Wrote to stream 994: data_length = 0, padding_length = 4012 (in stream_write() at stream.cpp:1113)
887:DBG1 - 17:26:55.961895593 [3866648/3866652]: handle_index_data: stream 994 net_seq_num 0 data offset 0 (in stream_update_index() at stream.cpp:1140)
888:DBG3 - 17:26:55.961945901 [3866648/3866652]: Finding index for stream id 994 and seq_num 0 (in relay_index_get_by_id_or_create() at index.cpp:112)
889:DBG2 - 17:26:55.961983056 [3866648/3866652]: Index found or created in HT for stream ID 994 and seqnum 0 (in relay_index_get_by_id_or_create() at index.cpp:144)
894:DBG2 - 17:26:55.962334285 [3866648/3866652]: Writing index for stream ID 994 and seq num 0 (in relay_index_try_flush() at index.cpp:275)
895:DBG2 - 17:26:55.962390756 [3866648/3866652]: index put for stream id 994 and seqnum 0 refcount 1 (in relay_index_put() at index.cpp:237)
1743:DBG1 - 17:26:56.083287172 [3866648/3866655]: Check index status: index_received_seqcount 1 index_sent_seqcount 0 for stream 994 (in check_index_status() at live.cpp:1454)
1746:DBG1 - 17:26:56.083446379 [3866648/3866655]: Sending viewer index for stream 994 offset 0 (in viewer_get_next_index() at live.cpp:1801)
1748:DBG1 - 17:26:56.083544877 [3866648/3866655]: Index 1 for stream 994 sent (in viewer_get_next_index() at live.cpp:1842)
1751:DBG1 - 17:26:56.083778149 [3866648/3866655]: Sent 4108 bytes for stream 994 (in viewer_get_packet() at live.cpp:1968)
2858:DBG1 - 17:26:56.907744916 [3866648/3866652]: stream_add_index for stream 994 (in stream_add_index() at stream.cpp:1233)
2859:DBG1 - 17:26:56.907762798 [3866648/3866652]: Received live beacon for stream 994 (in stream_add_index() at stream.cpp:1237)
2862:DBG3 - 17:26:56.907855597 [3866648/3866652]: Receiving data for stream id 994 seqnum 1, 0 bytes received, 84 bytes left to receive (in relay_process_data_receive_payload() at main.cpp:3583)
2863:DBG1 - 17:26:56.907950195 [3866648/3866652]: Wrote to stream 994: data_length = 84, padding_length = 0 (in stream_write() at stream.cpp:1113)
2864:DBG1 - 17:26:56.908002911 [3866648/3866652]: Wrote to stream 994: data_length = 0, padding_length = 4012 (in stream_write() at stream.cpp:1113)
2865:DBG1 - 17:26:56.908024312 [3866648/3866652]: handle_index_data: stream 994 net_seq_num 1 data offset 4096 (in stream_update_index() at stream.cpp:1140)
2866:DBG3 - 17:26:56.908043082 [3866648/3866652]: Finding index for stream id 994 and seq_num 1 (in relay_index_get_by_id_or_create() at index.cpp:112)
2867:DBG2 - 17:26:56.908061879 [3866648/3866652]: Creating relay index for stream id 994 and seqnum 1 (in relay_index_create() at index.cpp:34)
2868:DBG2 - 17:26:56.908082115 [3866648/3866652]: Adding relay index with stream id 994 and seqnum 1 (in relay_index_add_unique() at index.cpp:70)
2869:DBG2 - 17:26:56.908101275 [3866648/3866652]: Index found or created in HT for stream ID 994 and seqnum 1 (in relay_index_get_by_id_or_create() at index.cpp:144)
3011:DBG1 - 17:26:56.913436908 [3866648/3866655]: Check index status: index_received_seqcount 1 index_sent_seqcount 1 for stream 994 (in check_index_status() at live.cpp:1454)
3012:DBG1 - 17:26:56.913457688 [3866648/3866655]: Check index status: inactive with beacon, for stream 994 (in check_index_status() at live.cpp:1492)
3014:DBG1 - 17:26:56.913507164 [3866648/3866655]: Index 1 for stream 994 sent (in viewer_get_next_index() at live.cpp:1842)
3043:DBG1 - 17:26:56.914167206 [3866648/3866652]: stream_add_index for stream 994 (in stream_add_index() at stream.cpp:1233)
3044:DBG3 - 17:26:56.914186324 [3866648/3866652]: Finding index for stream id 994 and seq_num 1 (in relay_index_get_by_id_or_create() at index.cpp:112)
3045:DBG2 - 17:26:56.914205597 [3866648/3866652]: Index found or created in HT for stream ID 994 and seqnum 1 (in relay_index_get_by_id_or_create() at index.cpp:144)
3046:DBG2 - 17:26:56.914227502 [3866648/3866652]: Writing index for stream ID 994 and seq num 1 (in relay_index_try_flush() at index.cpp:275)
3047:DBG2 - 17:26:56.914299536 [3866648/3866652]: index put for stream id 994 and seqnum 1 refcount 1 (in relay_index_put() at index.cpp:237)
3587:DBG1 - 17:26:57.067977800 [3866648/3866652]: Set begin data pending flag to stream 994 (in relay_begin_data_pending() at main.cpp:2240)
3644:DBG1 - 17:26:57.070915787 [3866648/3866652]: Data pending for stream id 994: prev_data_seq 1, prev_index_seq 1, and last_seq 1 (in relay_data_pending() at main.cpp:2091)
3913:DBG1 - 17:26:57.093492259 [3866648/3866652]: try_rotate_stream_index: Stream 994 (rotate_at_packet_seq_num = 2, received_packet_seq_num = (value = 1, is_set = 1)) (in try_rotate_stream_index() at stream.cpp:482)
3914:DBG1 - 17:26:57.093525950 [3866648/3866652]: Rotating stream 994 index file (in try_rotate_stream_index() at stream.cpp:511)
3915:DBG1 - 17:26:57.093561064 [3866648/3866652]: try_rotate_stream_data: Stream 994 (rotate_at_index_packet_seq_num = 2, rotate_at_prev_data_net_seq = 1, prev_data_seq = 1) (in try_rotate_stream_data() at stream.cpp:357)
3916:DBG1 - 17:26:57.093591085 [3866648/3866652]: Rotating stream 994 data file with size 8192 (in stream_rotate_data_file() at stream.cpp:138)
3917:DBG1 - 17:26:57.093626697 [3866648/3866652]: stream_rotate_data_file: reset tracefile_size_current for stream 994 was 8192 (in stream_rotate_data_file() at stream.cpp:169)
3918:DBG1 - 17:26:57.093656578 [3866648/3866652]: Rotation completed for stream 994 (in stream_complete_rotation() at stream.cpp:66)
4238:DBG1 - 17:26:57.635064782 [3866648/3866652]: Trying to close stream 994 (in try_stream_close() at stream.cpp:883)
4239:DBG1 - 17:26:57.635098224 [3866648/3866652]: Succeeded in closing stream 994 (in try_stream_close() at stream.cpp:983)
4744:DBG1 - 17:26:57.741972785 [3866648/3866655]: Check index status: index_received_seqcount 2 index_sent_seqcount 1 for stream 994 (in check_index_status() at live.cpp:1454)
4745:DBG1 - 17:26:57.742030216 [3866648/3866655]: Sending viewer index for stream 994 offset 4096 (in viewer_get_next_index() at live.cpp:1801)
4747:DBG1 - 17:26:57.742088421 [3866648/3866655]: Index 2 for stream 994 sent (in viewer_get_next_index() at live.cpp:1842)
4750:DBG1 - 17:26:57.742197990 [3866648/3866655]: Sent 4108 bytes for stream 994 (in viewer_get_packet() at live.cpp:1968)
4755:DBG1 - 17:26:57.932525633 [3866648/3866655]: Releasing stream id 994 (in stream_release() at stream.cpp:778)
4756:DBG1 - 17:26:57.932555313 [3866648/3866655]: Rotation completed for stream 994 (in stream_complete_rotation() at stream.cpp:66)
Cause
=====
This log shows the following sequence:
Actors: consumer-data, consumer-live-timer, relayd-main, relayd-live, babeltrace.
1) lttng_consumer_read_subbuffer
2) get next subbuf
3) write data packet to data socket
   (starting here the data packet is somewhere on the network)
4) put next subbuf
5) post_consume()
6) consumer_stream_sync_metadata_index()
7) wait for metadata
8) consumer_stream_sync_metadata()
9) check_stream()
10) set missed_metadata_flush
11) call send_live_beacon(()
12) sample empty ring buffer
13) read current timestamp
14) send inactivity beacon (empty packet)
15) receives a live beacon (@ 17:26:56.907762798)
16) call consumer_stream_send_index()
17) send packet index to relayd
18) receives a data packet (@ 17:26:56.907855597)
    (at this point the data packet is received from the network)
19) ask for next index
20) informs the live viewer of the live beacon (@ 17:26:56.913457688)
21) receives an index packet (@ 17:26:56.914227502)
22) ask for next index
23) sends the packet to the viewer (@ 17:26:57.742197990)
24) observes time going backwards between the previous live beacon and the data packet.
The issue is caused by consumer_stream_sync_metadata_index which is
called after sending a data packet (therefore after having consumed a
data packet from the ring buffer). It invokes the send_live_beacon
callback before sending the index associated with the data packet that
was just sent.
However, this introduces a discrepancy between the live beacon
inactivity guarantees and the yet-to-be-sent packet index: the data
packet sent at [3] can be anywhere on the network, not even received by
the relay daemon, when the live beacon is sampling a now empty ring
buffer at [12], and thus sends a live inactivity beacon to the relay
daemon. Then, when the index is sent by consumer_stream_send_index
[16], its timestamp is in the past compared to the inactivity beacon
sent by send_live_beacon [11].
The purpose of the field "stream->indexes_in_flight" is to prevent
setting the inactivity timestamp in the relay stream when data is
missing for indexes that were received. This works because the indexes
are sent over the control socket, which is where the inactivity beacons
are also sent. It does not however prevent issues the other way around:
data sent prior to the inactivity beacon may or may not have reached
the relay daemon. It is therefore important to make sure that consuming
ring buffer data and sending that data's index vs sampling for an empty
ring buffer and sending an inactivity beacon are correctly ordered.
Solution
========
Send inactivity beacon after packet index.
Also document the purpose of sending an inactivity beacon in this
scenario.
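A sketch of the corrected ordering with stand-in functions (not the actual
consumer code):
```
#include <stdio.h>

/* Stand-ins for the real index/beacon transmission paths. */
static int send_packet_index(void) { puts("index sent"); return 0; }
static int send_live_beacon(void)  { puts("beacon sent"); return 0; }

static int sync_after_consuming_packet(void)
{
	int ret;

	/* 1) The index describing the packet that was just consumed must
	 *    be sent first. */
	ret = send_packet_index();
	if (ret) {
		return ret;
	}
	/* 2) Only then sample the (possibly empty) ring buffer and emit an
	 *    inactivity beacon: its timestamp can no longer precede the
	 *    index of data already on the wire. */
	return send_live_beacon();
}
```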
Note
====
This issue is present since lttng-tools 2.7.0 (backported to 2.6.1),
where lttng_ustconsumer_read_subbuffer() invokes
consumer_flush_ust_index() prior to calling consumer_stream_write_index().
It was introduced by commit 288bdb302a1 ("Fix: sessiond vs consumerd
push/get metadata deadlock").
Known drawbacks
===============
None.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ied43cc8229413b25e69b40ac218adad420f18797
Mathieu Desnoyers [Fri, 19 Nov 2021 20:11:37 +0000 (15:11 -0500)]
Fix: relayd: resource leaks on viewer_stream_create error
Observed issue
==============
When facing failure to open viewer stream chunks in the context of "Fix:
relayd: failure to open chunk files concurrently with session clear",
we observe that the relay daemon triggers an assertion due to a
non-empty session hash table on cleanup.
Cause
=====
viewer_stream_create() does a stream_get(), but without any matching
stream_put() on error. This in turn holds a reference on the ctf_trace,
which holds a reference on the session.
By inspecting the code, we notice that the following resources can be
leaked on error:
- vstream->stream_file.handle,
- vstream->index_file,
- vstream->stream.
In non-error scenarios, viewer_stream_release() is responsible for
releasing references on the composite objects.
The vstream->stream_file.trace_chunk is not an issue because it is put
in the destroy handler (as well as within the release, before having its
vstream->stream_file.trace_chunk pointer set to NULL).
Solution
========
Properly put references on all objects which are contained by the viewer
stream on error by introducing
viewer_stream_release_composite_objects(), which is used both in the
error path of viewer_stream_create() and in viewer_stream_release().
Note
====
Why not move those "put" operations into viewer_stream_destroy?
This is done in the release to ensure we put references on composite
objects immediately when our own reference reaches 0, rather than
waiting for a grace period through call_rcu, which could then cause
chained call_rcu callbacks and require multiple invocations of
rcu_barrier on relayd exit to guarantee that all callbacks have been
executed and all resources properly freed.
Known drawbacks
===============
None.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I864b58eda94ebda5e6faea45c922d4e814a15daa
Mathieu Desnoyers [Fri, 19 Nov 2021 16:12:25 +0000 (11:12 -0500)]
Fix: relayd: live: erroneous message timestamp observed from live viewer
Observed issue
==============
Another situation where an erroneous message timestamp is observed by
the live viewer. This happens rarely (only two occurrences while running
ust_clear in stress tests overnight on a 16-core machine).
Triggered with the following test:
# Test ust streaming live clear with viewer with new metadata after clear
# Parameters: tracing_active=0, clear_twice=1, buffer_type=pid
Babeltrace 2 error:
11-19 06:21:02.571 488497 488497 E PLUGIN/SRC.CTF.LTTNG-LIVE handle_late_message@lttng-live.c:1206 [lttng-live] Invalid live stream state: have a late message that is not a packet discarded or event discarded message: late-msg-type=PACKET_BEGINNING
11-19 06:21:02.578 488497 488497 E PLUGIN/SRC.CTF.LTTNG-LIVE next_stream_iterator_for_trace@lttng-live.c:1360 [lttng-live] Late message could not be handled correctly: lttng-live-msg-iter-addr=0x55b4cdee1590, stream-name="stream-1024", curr-msg-ts=1637302861380968303, last-msg-ts=1637302861472798701
11-19 06:21:02.578 488497 488497 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1802 [lttng-live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
11-19 06:21:02.579 488497 488497 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:866 Component input port message iterator's "next" method failed: iter-addr=0x55b4cdecfe30, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
Cause
=====
viewer_get_next_index() does not protect its use of the session
ongoing_rotation state by any synchronization mechanism. It only takes
the stream lock, which is not used to protect changes to the ongoing
rotation state with respect to the chunk rename operation performed by
chunk creation at the beginning of the clear command.
Solution
========
Protect the use of the session ongoing rotation state and file open
operations by the session lock in viewer_get_next_index().
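A minimal sketch of the locking pattern (field and type names
hypothetical):
```
#include <pthread.h>
#include <stdbool.h>

struct relay_session_sketch {
	pthread_mutex_t lock;
	bool ongoing_rotation;
};

/* Read the ongoing rotation state under the same lock that protects its
 * updates during trace chunk creation. */
static bool session_rotation_ongoing(struct relay_session_sketch *session)
{
	bool ongoing;

	pthread_mutex_lock(&session->lock);
	ongoing = session->ongoing_rotation;
	pthread_mutex_unlock(&session->lock);
	return ongoing;
}
```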
Known drawbacks
===============
I don't expect this to cause any real scalability concern considering
the fact that the relay daemon has only two threads, one to handle
session daemon commands, and the other to handle viewer commands.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I242942ac6bc7a9199a17cd9800fc461a397b8955
Mathieu Desnoyers [Thu, 18 Nov 2021 20:22:39 +0000 (15:22 -0500)]
Fix: relayd: failure to open chunk files concurrently with session clear
Observed issue
==============
When stress-testing ust clear with an active live viewer, we observe a
situation where the live viewer thread fails to open chunk files in
make_viewer_streams() when executed after the creation of the new trace
chunk at the beginning of the clear command:
DBG3 - 16:19:50.923577790 [40834/40838]: Processing "RELAYD_CREATE_TRACE_CHUNK" command for socket 19 (in relay_process_control_command() at main.cpp:3262)
DBG1 - 16:19:50.923577730 [40834/40841]: Relay viewer stream 225 not found (in viewer_stream_get_by_id() at viewer-stream.cpp:265)
DBG1 - 16:19:50.923600762 [40834/40838]: lttng_trace_chunk_rename_path from to .tmp_old_chunk (in lttng_trace_chunk_rename_path_no_lock() at trace-chunk.cpp:759)
DBG1 - 16:19:50.923627202 [40834/40841]: Opening trace chunk file "ust/uid/0/64-bit/my_chan-0_26" (in _lttng_trace_chunk_open_fs_handle_locked() at trace-chunk.cpp:1359)
DBG1 - 16:19:50.923664685 [40834/40841]: Adding new file "ust/uid/0/64-bit/my_chan-0_26" to trace chunk "(unnamed)" (in lttng_trace_chunk_add_file() at trace-chunk.cpp:1309)
DBG1 - 16:19:50.923706441 [40834/40841]: Relay viewer stream 226 not found (in viewer_stream_get_by_id() at viewer-stream.cpp:265)
DBG1 - 16:19:50.923727770 [40834/40841]: Opening trace chunk file "ust/uid/0/64-bit/my_chan-0_27" (in _lttng_trace_chunk_open_fs_handle_locked() at trace-chunk.cpp:1359)
DBG1 - 16:19:50.923744686 [40834/40841]: Adding new file "ust/uid/0/64-bit/my_chan-0_27" to trace chunk "(unnamed)" (in lttng_trace_chunk_add_file() at trace-chunk.cpp:1309)
DBG1 - 16:19:50.923773427 [40834/40841]: Relay viewer stream 227 not found (in viewer_stream_get_by_id() at viewer-stream.cpp:265)
DBG1 - 16:19:50.923803791 [40834/40841]: Opening trace chunk file "ust/uid/0/64-bit/my_chan-0_28" (in _lttng_trace_chunk_open_fs_handle_locked() at trace-chunk.cpp:1359)
DBG1 - 16:19:50.923831589 [40834/40841]: Adding new file "ust/uid/0/64-bit/my_chan-0_28" to trace chunk "(unnamed)" (in lttng_trace_chunk_add_file() at trace-chunk.cpp:1309)
DBG1 - 16:19:50.923865981 [40834/40841]: Relay viewer stream 228 not found (in viewer_stream_get_by_id() at viewer-stream.cpp:265)
DBG1 - 16:19:50.923889329 [40834/40841]: Opening trace chunk file "ust/uid/0/64-bit/index/my_chan-0_29.idx" (in _lttng_trace_chunk_open_fs_handle_locked() at trace-chunk.cpp:1359)
DBG1 - 16:19:50.923905142 [40834/40838]: Creating trace chunk: chunk_id = 1, creation time = 20211118-161950 (in lttng_trace_chunk_create() at trace-chunk.cpp:440)
DBG1 - 16:19:50.923907984 [40834/40841]: Adding new file "ust/uid/0/64-bit/index/my_chan-0_29.idx" to trace chunk "(unnamed)" (in lttng_trace_chunk_add_file() at trace-chunk.cpp:1309)
DBG1 - 16:19:50.923937804 [40834/40838]: Chunk name set to "20211118T161950+0000-1" (in lttng_trace_chunk_create() at trace-chunk.cpp:471)
PERROR - 16:19:50.923984288 [40834/40841]: Failed to open fs handle to ust/uid/0/64-bit/index/my_chan-0_29.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.cpp:548)
DBG1 - 16:19:50.924050763 [40834/40841]: Opening trace chunk file "ust/uid/0/64-bit/my_chan-0_29" (in _lttng_trace_chunk_open_fs_handle_locked() at trace-chunk.cpp:1359)
DBG1 - 16:19:50.924074480 [40834/40841]: Adding new file "ust/uid/0/64-bit/my_chan-0_29" to trace chunk "(unnamed)" (in lttng_trace_chunk_add_file() at trace-chunk.cpp:1309)
PERROR - 16:19:50.924094720 [40834/40841]: Failed to open fs handle to ust/uid/0/64-bit/my_chan-0_29, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.cpp:548)
DBG1 - 16:19:50.924193679 [40834/40841]: Viewer connection closed with 23 (in thread_worker() at live.cpp:2542)
DBG1 - 16:19:50.924227482 [40834/40838]: Attempting to publish trace chunk: sessiond {34038782-6f74-4b2d-801e-752cf3d8885e}, session_id = 7, chunk_id = 1 (in sessiond_trace_chunk_registry_publish_chunk() at sessiond-trace-chunks.cpp:385)
DBG1 - 16:19:50.924312916 [40834/40838]: Reset communication state of relay connection (fd = 19) (in connection_reset_protocol_state() at connection.cpp:82)
DBG3 - 16:19:50.924350200 [40834/40838]: Relayd worker thread polling... (in relay_thread_worker() at main.cpp:3833)
DBG3 - 16:19:50.924365222 [40834/40841]: Relayd live viewer worker thread polling... (in thread_worker() at live.cpp:2456)
DBG1 - 16:19:50.926718319 [40834/40838]: Done receiving control command header: fd = 19, cmd = 18, cmd_version = 0, payload size = 532 bytes (in relay_process_control_receive_header() at main.cpp:3422)
DBG3 - 16:19:50.926755574 [40834/40838]: Relayd worker thread polling... (in relay_thread_worker() at main.cpp:3833)
DBG1 - 16:19:50.926787638 [40834/40838]: Done receiving control command payload: fd = 19, payload size = 532 bytes (in relay_process_control_receive_payload() at main.cpp:3339)
DBG3 - 16:19:50.926811247 [40834/40838]: Processing "RELAYD_ROTATE_STREAMS" command for socket 19 (in relay_process_control_command() at main.cpp:3258)
Cause
=====
This is caused by relay_create_trace_chunk() using
lttng_trace_chunk_rename_path() to move away each trace subdirectory
into the subdirectory .tmp_old_chunk, and making this the new top-level
chunk directory (temporarily). This is a temporary state which will be
resorbed on relay_close_trace_chunk(), moving back the top-level chunk
directory to its original place.
Attempts to open chunk files from the prior chunk may result in failures,
because the chunk lock protecting the chunk rename operation only
protects the chunk owned by the relay thread, not its copy(ies) owned by
the live viewer thread.
This intermediate state should _not_ be observed by the live viewer
thread. The session ongoing rotation state should prevent the live
viewer threads from observing this.
Solution
========
Set the ongoing rotation state in relay_create_trace_chunk() earlier:
before invoking lttng_trace_chunk_rename_path(). Also ensure that the
session ongoing rotation state is protected by the session lock.
On the live thread side, introduce use of the session ongoing rotation
state in viewer_get_new_streams() and viewer_attach_session() to
effectively skip creation of the viewer streams if a session has a
rotation ongoing.
Viewers are expected to deal with the LTTNG_VIEWER_NEW_STREAMS_NO_NEW
reply (or handle the fact that no streams are currently available) and
try again later.
Both Babeltrace 2.0 and Babeltrace 1.5 appear to handle those replies
correctly.
Known drawbacks
===============
None.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I3ad8d59dff2994bfb62efa02a620f3c0ffbb5e3b
Mathieu Desnoyers [Wed, 17 Nov 2021 16:55:36 +0000 (11:55 -0500)]
Fix: relayd: live: metadata stream reference count < 0 assert
Observed issue
==============
While running tests/regression/tools/clear/test_ust test in a loop we
eventually witness the following error:
The symptom on the Babeltrace side is a Connection reset by peer. This
is caused by a relayd abort after an assertion failure due to a
reference count being lower than 0.
Test case:
# Test ust streaming live clear with viewer
# Parameters: tracing_active=0, clear_twice=0, buffer_type=pid
Tools commit: https://review.lttng.org/c/lttng-tools/+/6754/1
Relay daemon assertion failure stack trace:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffa2e850859 in __GI_abort () at abort.c:79
#2 0x00007ffa2e850729 in __assert_fail_base (fmt=0x7ffa2e9e6588 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x56221bdb96f4 "res >= 0",
file=0x56221bdb97a0 "/root/virtenv/usr/include/urcu/ref.h", line=66, function=<optimized out>) at assert.c:92
#3 0x00007ffa2e861f36 in __GI___assert_fail (assertion=assertion@entry=0x56221bdb96f4 "res >= 0", file=file@entry=0x56221bdb97a0 "/root/virtenv/usr/include/urcu/ref.h", line=line@entry=66,
function=function@entry=0x56221bdbd638 <__PRETTY_FUNCTION__.7554> "urcu_ref_put") at assert.c:101
#4 0x000056221bd6a1cc in urcu_ref_put (release=<optimized out>, ref=0x7ffa24008cb0) at /root/virtenv/usr/include/urcu/ref.h:66
#5 viewer_stream_put (vstream=vstream@entry=0x7ffa24008cb0) at viewer-stream.c:279
#6 0x000056221bd5e4c5 in viewer_get_metadata (conn=conn@entry=0x7ffa0c000fc0) at live.c:2211
#7 0x000056221bd63778 in process_control (conn=0x7ffa0c000fc0, recv_hdr=0x7ffa297c5af0) at live.c:2376
#8 thread_worker (data=<optimized out>) at live.c:2541
#9 0x00007ffa2ea28609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#10 0x00007ffa2e94d293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Babeltrace crash:
11-17 05:13:05.197 47007 47007 D PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_get_next_index@viewer-connection.c:1494 [live] Requesting next index for stream: cmd=GET_NEXT_INDEX, viewer-stream-id=3352
11-17 05:13:05.215 47007 47007 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_recv@viewer-connection.c:245 [live] Error receiving from Relay: Connection reset by peer.
11-17 05:13:05.215 47007 47007 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_get_next_index@viewer-connection.c:1522 [live] Error receiving get next index reply
11-17 05:13:05.215 47007 47007 D PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_iterator_next_msg_on_stream@lttng-live.c:1069 [live] Returning from advancing live stream iterator: status=LTTNG_LIVE_ITERATOR_STATUS_ERROR, stream-name="stream-3352", viewer-stream-id=3352
11-17 05:13:05.215 47007 47007 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1802 [live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
11-17 05:13:05.216 47007 47007 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:866 Component input port message iterator's "next" method failed: iter-addr=0x563b7dd07700, iter-upstream-comp-name="live", iter-upstream-comp-log-level=TRACE, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
11-17 05:13:05.216 47007 47007 E PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:446 [mux] Upstream iterator's next method returned an error: status=ERROR
11-17 05:13:05.216 47007 47007 E PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:989 [mux] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x563b7dd04740, muxer-upstream-msg-iter-wrap-addr=0x563b7dd0db30
11-17 05:13:05.216 47007 47007 E PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1417 [mux] Cannot get next message: comp-addr=0x563b7dd1a760, muxer-comp-addr=0x563b7dd1b930, muxer-msg-iter-addr=0x563b7dd04740, msg-iter-addr=0x563b7dd07590, status=ERROR
11-17 05:13:05.216 47007 47007 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:866 Component input port message iterator's "next" method failed: iter-addr=0x563b7dd07590, iter-upstream-comp-name="mux", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
11-17 05:13:05.216 47007 47007 W LIB/GRAPH consume_graph_sink@graph.c:462 Component's "consume" method failed: status=ERROR, comp-addr=0x563b7dd06d20, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=1, comp-class-so-handle-addr=0x563b7dd051b0, comp-class-so-handle-path="/root/virtenv/usr/lib/babeltrace2/plugins/babeltrace-plugin-text.so", comp-input-port-count=1, comp-output-port-count=0
11-17 05:13:05.217 47007 47007 E CLI cmd_run@babeltrace2.c:2537 Graph failed to complete successfully
Cause
=====
Both relay and live threads can put the ownership reference on the
metadata viewer stream concurrently without synchronization, thus
leading to a reference count going lower than 0.
The viewer stream ownership design initially planned for being owned by
the live thread, thus allowing the live thread to put the ownership
reference as soon as the associated relay stream is observed as closed,
and the viewer stream is considered as hung up.
However, in the specific case of the metadata viewer stream, the
responsibility of closing the metadata viewer stream is shared between
the relay and live threads, because the viewers expect to observe a
LTTNG_VIEWER_NO_NEW_METADATA message before the metadata stream
hangs up (see comment in viewer_get_metadata()). Therefore, if
viewer_get_metadata() is done before the metadata stream is closed, the
viewer will receive the LTTNG_VIEWER_NO_NEW_METADATA message, and set
the no_new_metadata_notified state to true. It's then the relay thread's
relay_close_stream() which will invoke the ownership put. However,
the live thread may concurrently try to put the viewer stream ownership
as well from a subsequent viewer_get_metadata(), thus leading to a
reference count < 0.
Solution
========
Fix this by putting the ownership reference from the live viewer thread
rather than the relay thread. This can be done by tracking the state of
no_new_metadata_notified within the live viewer thread.
Known drawbacks
===============
This will postpone reclaim of the metadata viewer stream from the
relay stream close to the following viewer_get_metadata (after a
LTTNG_VIEWER_NO_NEW_METADATA message has been replied), which I don't
think is an issue.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I9783d3a8402dddaa364a7a69c0764eaf30c16fde
Jérémie Galarneau [Fri, 17 Dec 2021 17:00:36 +0000 (12:00 -0500)]
Update version to v2.12.7
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Mathieu Desnoyers [Wed, 24 Nov 2021 20:56:16 +0000 (15:56 -0500)]
Validate channel context mismatch across UST applications
Observed issue
==============
Applications traced with LTTng-UST are expected to all provide the exact
same layout for their channel's context fields, else it leads to
corrupted traces. This is only enforced within LTTng-UST. There is
nothing in the session daemon that prevents this scenario, and it is
only observable when reading the corrupted trace.
This makes the entire trace unreadable from the point where it is
corrupted.
Cause
=====
Even though LTTng-UST sends the entire description of its context fields
along with the channel registration notification, there is no validation
of the context fields' content against the context fields present in the
ust registry.
Solution
========
Validate each registered UST channel context fields against the fields
present in the registry. Reject the application if there is a mismatch.
Known drawbacks
===============
None.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I8b49032bf4f766e549dfccfafdce8cddcbb2873f
Mathieu Desnoyers [Thu, 18 Nov 2021 15:37:29 +0000 (10:37 -0500)]
Fix: relayd: compare viewer chunks by ID rather than address
Observed issue
==============
In "Fix: relayd: live: erroneous message timestamp observed from live
viewer", we observe that sometimes viewer streams unexpectedly end up
performing a viewer stream rotation at session destroy.
Cause
=====
This may happen in the following scenario:
1) Relay stream A is rotated to NULL.
2) viewer_get_next_index for viewer stream A:
2.1) observes a NULL rstream->trace_chunk, updates the viewer session
current trace chunk to NULL (viewer_session_set_trace_chunk_copy).
2.2) "Transition the viewer stream into the latest trace chunk
available." does not issue viewer_stream_rotate_to_trace_chunk, because
the condition (rstream->completed_rotation_count ==
vstream->last_seen_rotation_count + 1 && !rstream->trace_chunk)
evaluates to "true", and thus the entire if () evaluates to false.
3) check_index_status detects rstream->closed and
index_received_seqcount == index_sent_seqcount, thus replying HUP to
viewer, effectively releasing ownership of the viewer stream.
4) viewer_get_next_index for viewer stream B (not rotated to NULL yet):
4.1) observes a non-NULL rstream->trace_chunk, updates the viewer
session current trace chunk to *a new copy* of the non-NULL
rstream->trace_chunk (viewer_session_set_trace_chunk_copy).
4.2) the comparison (conn->viewer_session->current_trace_chunk !=
vstream->stream_file.trace_chunk) done by pointer don't match, because
the viewer session current trace chunk is a new copy.
Therefore, due to those stream close scenarios where the viewer session
can go back and forth between NULL and _different copies_ of the relay
chunk, we cannot use a comparison of chunks by address on the viewer
chunks.
Solution
========
Compare the viewer stream chunks by ID rather than address.
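A sketch of the comparison, assuming the internal trace chunk accessors
(names recalled from the trace-chunk API and possibly inexact):
```
#include <stdbool.h>
#include <stdint.h>
#include <common/trace-chunk.h>	/* internal lttng-tools header */

/* Compare two trace chunks by chunk ID rather than by pointer, since
 * the viewer may hold distinct copies of the same logical chunk. */
static bool chunk_ids_match(struct lttng_trace_chunk *a,
		struct lttng_trace_chunk *b)
{
	uint64_t id_a, id_b;

	if (!a || !b) {
		return a == b;
	}
	if (lttng_trace_chunk_get_id(a, &id_a) != LTTNG_TRACE_CHUNK_STATUS_OK ||
			lttng_trace_chunk_get_id(b, &id_b) !=
					LTTNG_TRACE_CHUNK_STATUS_OK) {
		return false;
	}
	return id_a == id_b;
}
```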
Known drawbacks
===============
The comparison is probably slightly slower, but I don't expect this to
be significant.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I05e4f97f26b2659c007726cc29d3edafa17bdb98
Mathieu Desnoyers [Tue, 16 Nov 2021 20:59:01 +0000 (15:59 -0500)]
Fix: relayd: live: erroneous message timestamp observed from live viewer
Observed issue
==============
While running tests/regression/tools/clear/test_ust test in a loop we
eventually witness the following error:
# Test ust basic streaming live with viewer
# Parameters: tracing_active=0, clear_twice=0, buffer_type=uid
ok 425 - Create session 6jbcTSKUG7s2RIp5 with uri:net://localhost and opts: --live
ok 426 - Enable channel chan for session 6jbcTSKUG7s2RIp5
ok 427 - Enable ust event tp:tptest for session 6jbcTSKUG7s2RIp5
ok 428 - Start tracing for session 6jbcTSKUG7s2RIp5
# Waiting for live trace at url: net://localhost
ok 429 - Waiting for live trace at url: net://localhost
# Waiting for live viewers on url: net://localhost
ok 430 - Waiting for live viewers on url: net://localhost
# Wait until viewer sees all 10 expected events
ok 431 - Live viewer read 10 events, expect 10
ok 432 - Destroy session 6jbcTSKUG7s2RIp5
# Waiting for application to exit
ok 433 - Wait for application to exit
# Wait for viewer to exit
10-28 22:07:37.935 764967 764967 E PLUGIN/SRC.CTF.LTTNG-LIVE next_stream_iterator_for_trace@lttng-live.c:1222 [lttng-live] Message's timestamp is less than lttng-live's message iterator's last returned timestamp: lttng-live-msg-iter-addr=0x55fe45d4977
0, ts=
1635458857116911882, last-msg-ts=
1635458857123908033
10-28 22:07:37.937 764967 764967 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1662 [lttng-live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
10-28 22:07:37.937 764967 764967 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:861 Component input port message iterator's "next" method failed: iter-addr=0x55fe45d38c50, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
10-28 22:07:37.937 764967 764967 E PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:452 [muxer] Upstream iterator's next method returned an error: status=ERROR
10-28 22:07:37.938 764967 764967 E PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:986 [muxer] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x55fe45d33640, muxer-upstream-msg-iter-wrap-addr=0x55fe45d3af20
10-28 22:07:37.938 764967 764967 E PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1411 [muxer] Cannot get next message: comp-addr=0x55fe45d49cb0, muxer-comp-addr=0x55fe45d49d30, muxer-msg-iter-addr=0x55fe45d33640, msg-iter-addr=0x55fe45d38ae0, status=ERROR
10-28 22:07:37.938 764967 764967 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:861 Component input port message iterator's "next" method failed: iter-addr=0x55fe45d38ae0, iter-upstream-comp-name="muxer", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
10-28 22:07:37.938 764967 764967 W LIB/GRAPH consume_graph_sink@graph.c:469 Component's "consume" method failed: status=ERROR, comp-addr=0x55fe45d49550, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=0, comp-class-so-handle-addr=0x55fe45d38460, comp-class-so-handle-path="/root/virtenv/usr/lib/babeltrace2/plugins/babeltrace-plugin-text.la", comp-input-port-count=1, comp-output-port-count=0
10-28 22:07:37.938 764967 764967 E CLI cmd_run@babeltrace2.c:2547 Graph failed to complete successfully
Cause
=====
When doing the rotation associated with a clear,
viewer_stream_sync_tracefile_array_tail aims at pushing forward the
index_sent_seqcount for sessions in tracefile rotation mode to the
oldest available seqcount (tail) for this stream. It does not take into
consideration that the current index_sent_seqcount may already be past
that oldest available seqcount position, thus eventually re-emitting
trace data.
For sessions *not* in tracefile rotation mode (this is known because
`tracefile_array_get_seq_tail()` returns -1ULL), this function
erroneously resets the index_sent_seqcount to 0, thus also causing trace
data to be re-emitted.
Solution
========
Solve this by using the maximum of the current index_sent_seqcount and
the tracefile array tail position as the new position in
viewer_stream_sync_tracefile_array_tail.
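A minimal sketch of the intent (field and helper names approximate, not
the exact patch):

  const uint64_t seq_tail = tracefile_array_get_seq_tail(stream->tfa);

  /*
   * -1ULL means the stream is not in tracefile rotation mode: keep the
   * current position. Otherwise, never move the position backwards.
   */
  if (seq_tail != -1ULL && vstream->index_sent_seqcount < seq_tail) {
          vstream->index_sent_seqcount = seq_tail;
  }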
Notes
=====
This symptom is also observed without using the clear command, simply on
destroy with a live viewer attached. This is caused by another issue
(not addressed by this patch) which causes
viewer_stream_rotate_to_trace_chunk to be sometimes invoked when streams
are closed on destroy.
Known drawbacks
===============
None.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ifb877d6e2bd94831ddf93077eac3cc968945d6f7
Jérémie Galarneau [Tue, 14 Dec 2021 19:52:24 +0000 (14:52 -0500)]
Fix: relayd comm: improperly packed rotate streams command header
Observed issue
==============
The implicit rotation of a session performed during its destruction
fails on LTTng 2.12 (and upwards) with the following errors:
Error: Relayd rotate streams replied error 152
Error: Relayd rotate stream failed. Cleaning up relayd 2
Error: Rotate channel failed
Failed to find relay daemon socket: relayd_id = 2
Error: Failed to perform a quiet rotation as part of the destruction of session "my_session": Rotation failure on consumer
Cause
=====
Error 152 matches the LTTNG_ERR_INVALID_PROTOCOL error, which implies
that the consumer daemon sent an unexpected command to the relay daemon.
It was determined that the RELAYD_ROTATE_STREAMS command header is not
properly packed since the LTTNG_PACKED annotation was omitted from its
`new_chunk_id` optional field. The documentation of LTTNG_OPTIONAL_COMM
duly indicates that this is required.
Without the use of LTTNG_PACKED, various lengths of padding (3 or 7
bytes) are inserted between new_chunk_id's `is_set` and `value` field to
align `value`, which results in an incorrect interpretation of the
command's arguments.
The relay daemon catches the protocol error when it is built in a
configuration that inserts 7 bytes of padding, while the consumer only
inserts three.
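To illustrate the layout problem, a simplified stand-in for the command
header (not the actual protocol declarations):

  struct rotate_streams_comm {
          uint32_t stream_count;
          struct {
                  uint8_t is_set;
                  /*
                   * Without LTTNG_PACKED on the optional field, the
                   * compiler inserts 3 or 7 bytes of padding here so that
                   * `value` lands on the host's uint64_t alignment
                   * boundary.
                   */
                  uint64_t value;
          } new_chunk_id;
  };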
Solution
========
The solution proposed here is not perfect, see "Known drawbacks".
First, if we were to annotate the field, patched consumer daemons would
send unintelligible command headers to unpatched relay daemons. Leaving
it as-is is the least of all evils; see "Known drawbacks" for more
details.
From the relay daemon end, we can easily anticipate the padding problem
by reading the `stream_count` field and using it to determine the expected
size of the payload.
The difference between the actual size of the payload and the expected
size allows us to determine the padding size and use the appropriate
declaration of the structure to interpret the command's arguments.
Known drawbacks
===============
While this fix causes the relay daemon to handle all improperly packed
command headers received from an unpatched consumer daemon, the reverse
is not completely true.
The following tables show which cases are known to work and which are
known to be broken when patched and unpatched versions of the relay
and consumer daemons are mixed, with the various alignment constraints.
Note that here:
- 4 byte alignment implies "daemon running on an architecture on
which uint64_t is aligned on a 4-byte boundary" (e.g. x86),
- 8 byte alignment implies "daemon running on an architecture on
which uint64_t is aligned on an 8-byte boundary" (e.g. x86-64).
Scenario 1 - Unpatched relay daemon and unpatched consumer daemon
-----------------------------------------------------------------------------------
| Architecture alignment | 4 byte alignment consumerd  | 8 byte alignment consumerd |
|------------------------|-----------------------------|----------------------------|
| 4 byte alignment relay | Works | Fails |
| 8 byte alignment relay | Fails | Works |
-----------------------------------------------------------------------------------
Scenario 2 - Patched relay daemon and unpatched consumer daemon
-----------------------------------------------------------------------------------
| Architecture alignment | 4 byte alignment consumerd  | 8 byte alignment consumerd |
|------------------------|-----------------------------|----------------------------|
| 4 byte alignment relay | Works | Works |
| 8 byte alignment relay | Works | Works |
-----------------------------------------------------------------------------------
Scenario 3 - Unpatched relay daemon and patched consumer daemon
-----------------------------------------------------------------------------------
| Architecture alignment | 4 byte alignment consumerd  | 8 byte alignment consumerd |
|------------------------|-----------------------------|----------------------------|
| 4 byte alignment relay | Works | Fails |
| 8 byte alignment relay | Fails | Works |
-----------------------------------------------------------------------------------
Scenario 4 - Patched relay daemon and patched consumer daemon
-----------------------------------------------------------------------------------
| Architecture alignment | 4 byte alignment consumerd  | 8 byte alignment consumerd |
|------------------------|-----------------------------|----------------------------|
| 4 byte alignment relay | Works | Works |
| 8 byte alignment relay | Works | Works |
-----------------------------------------------------------------------------------
Note that Scenarios 1 and 3 are the same since this fix doesn't
change the behaviour of the consumer daemon.
Also note that packing the `new_chunk_id` field would break the two
working cases of scenario 3 which are, in all likelihood, the more
common cases.
A new command using a properly packed version of the command's header
could be implemented in future versions, but this presents no benefit as
part of this fix.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I3b822a9766ea5be51996a771a0d4a90efe29ce0b
Jonathan Rajotte [Thu, 18 Nov 2021 15:47:16 +0000 (10:47 -0500)]
test: snapshot after regenerate metadata
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I48ae0e9d3d4e67172c4a686d42ceadf6bcb81ead
Jonathan Rajotte [Wed, 17 Nov 2021 21:18:59 +0000 (16:18 -0500)]
Fix: ust-consumer: segfault on snapshot after regenerate metadata
Observed issue
==============
lttng-consumerd segfaults for the following scenario:
$ lttng create test --snapshot
$ lttng enable-event -u -an
$ lttng start
# Run an app just to have some events
$ lttng regenerate metadata
$ lttng snapshot record
The following backtrace is obtained:
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x130) at ../nptl/pthread_mutex_lock.c:67
#1 0x000055b383cfaed3 in lttng_ustconsumer_recv_metadata (sock=29, key=4, offset=0, len=12391, version=1, channel=0x7fe90000a760, timer=0, wait=1) at ust-consumer.c:1347
#2 0x000055b383d00197 in lttng_ustconsumer_request_metadata (ctx=0x55b3855a1e50, channel=0x7fe90000a760, timer=0, wait=1) at ust-consumer.c:3336
#3 0x000055b383cf9e76 in snapshot_metadata (metadata_channel=0x7fe90000a760, key=4, path=0x7fe911a09944 "uid/1000/64-bit", relayd_id=18446744073709551615, ctx=0x55b3855a1e50) at ust-consum
#4 0x000055b383cfbe73 in lttng_ustconsumer_recv_cmd (ctx=0x55b3855a1e50, sock=28, consumer_sockpoll=0x7fe911a0dbb0) at ust-consumer.c:1853
#5 0x000055b383ccf9b7 in lttng_consumer_recv_cmd (ctx=0x55b3855a1e50, sock=28, consumer_sockpoll=0x7fe911a0dbb0) at consumer.c:2097
#6 0x000055b383cd3bfd in consumer_thread_sessiond_poll (data=0x55b3855a1e50) at consumer.c:3284
#7 0x00007fe914c22609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#8 0x00007fe914b47293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) up
#1 0x000055b383cfaed3 in lttng_ustconsumer_recv_metadata (sock=29, key=4, offset=0, len=12391, version=1, channel=0x7fe90000a760, timer=0, wait=1) at ust-consumer.c:1347
1347 pthread_mutex_lock(&channel->metadata_stream->lock);
(gdb) print channel->metadata_stream
$1 = (struct lttng_consumer_stream *) 0x0
Note that the following scenario also leads to a similar backtrace:
$ lttng create test --snapshot
$ lttng enable-event -u -an
$ lttng start
# Run an app just to have some events with a short duration
$ lttng regenerate metadata
# Run a second app just to have some events and a metadata flush while
# the metadata cache status is set as `invalidated`.
^ lttng-consumerd segfault on app termination.
The backtrace:
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x130) at ../nptl/pthread_mutex_lock.c:67
#1 0x00005612a5a13ed3 in lttng_ustconsumer_recv_metadata (sock=28, key=2, offset=0, len=12391, version=1, channel=0x7f3978005400, timer=0, wait=1) at ust-consumer.c:1347
#2 0x00005612a5a14d0a in lttng_ustconsumer_recv_cmd (ctx=0x5612a6feee50, sock=28, consumer_sockpoll=0x7f3989494bb0) at ust-consumer.c:1818
#3 0x00005612a59e89b7 in lttng_consumer_recv_cmd (ctx=0x5612a6feee50, sock=28, consumer_sockpoll=0x7f3989494bb0) at consumer.c:2097
#4 0x00005612a59ecbfd in consumer_thread_sessiond_poll (data=0x5612a6feee50) at consumer.c:3284
#5 0x00007f398c6a9609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6 0x00007f398c5ce293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) up
#1 0x00005612a5a13ed3 in lttng_ustconsumer_recv_metadata (sock=28, key=2, offset=0, len=12391, version=1, channel=0x7f3978005400, timer=0, wait=1) at ust-consumer.c:1347
1347 pthread_mutex_lock(&channel->metadata_stream->lock);
(gdb) print channel->metadata_stream
$1 = (struct lttng_consumer_stream *) 0x0
(gdb)
Cause
=====
For sessions configured in snapshot mode, the metadata channel's
metadata_stream field is NULL except for a "short" window during the
actual snapshot record action (snapshot_metadata).
The `regenerate metadata` command effectively flags the metadata cache as
invalid, leading to handling of the cache invalidation state
(`CONSUMER_METADATA_CACHE_WRITE_STATUS_INVALIDATED`) in
`lttng_ustconsumer_recv_metadata`. This was introduced by
b1316da1ffbd276fc8271e7a9438e683ad352781 [1].
At that point the function expects `channel->metadata_stream` to be
non-null. This is simply not true for snapshot session metadata channels.
Note that we cannot simply swap `lttng_ustconsumer_request_metadata` and
`create_ust_streams` in `snapshot_metadata` to ensure that
`channel->metadata_stream` is non-null, since
`lttng_ustconsumer_recv_metadata` ends up being called on metadata flush
when an app quits. This sequence of events corresponds to the second
scenario put forward in the `Observed issue` section.
Solution
========
Null-check `channel->metadata_stream` and only perform the operation
when it is non-null. This partly mirrors what is done in
`consumer_metadata_wakeup_pipe`.
I am not sure if the check on `channel->monitor` is required, but it seems
irrelevant to the notion of resetting the stream's consumed position when
the stream exists.
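A minimal sketch of the guard around the site shown in the backtrace
(simplified):

  if (channel->metadata_stream) {
          pthread_mutex_lock(&channel->metadata_stream->lock);
          /* ... reset the stream's consumed position ... */
          pthread_mutex_unlock(&channel->metadata_stream->lock);
  } else {
          /* Snapshot-mode channel with no stream yet: nothing to reset. */
  }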
With this taken care of, we find ourselves with another
backtrace:
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 [93/122]
#1 0x00007f75cf9b3859 in __GI_abort () at abort.c:79
#2 0x00007f75cf9b3729 in __assert_fail_base (fmt=0x7f75cfb49588 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55ab004e9c68 "pthread_mutex_trylock(&stream->lock)", file=0x55ab004
#3 0x00007f75cf9c4f36 in __GI___assert_fail (assertion=0x55ab004e9c68 "pthread_mutex_trylock(&stream->lock)", file=0x55ab004e8d7a "ust-consumer.c", line=1285, function=0x55ab004eb8a0 <__PR
#4 0x000055ab004b1b9c in metadata_stream_reset_cache_consumed_position (stream=0x7f75b400a850) at ust-consumer.c:1285
#5 0x000055ab004b4fef in commit_one_metadata_packet (stream=0x7f75b400a850) at ust-consumer.c:2551
#6 0x000055ab004b5f46 in get_next_subbuffer_metadata (stream=0x7f75b400a850, subbuffer=0x7f75cc972630) at ust-consumer.c:2927
#7 0x000055ab0048b6a9 in lttng_consumer_read_subbuffer (stream=0x7f75b400a850, ctx=0x55ab01d4ee50, locked_by_caller=true) at consumer.c:3522
#8 0x000055ab004b0f66 in snapshot_metadata (metadata_channel=0x7f75b4005400, key=2, path=0x7f75cc972944 "uid/1000/64-bit", relayd_id=18446744073709551615, ctx=0x55ab01d4ee50) at ust-consum
#9 0x000055ab004b2e86 in lttng_ustconsumer_recv_cmd (ctx=0x55ab01d4ee50, sock=28, consumer_sockpoll=0x7f75cc976bb0) at ust-consumer.c:1861
#10 0x000055ab004869b7 in lttng_consumer_recv_cmd (ctx=0x55ab01d4ee50, sock=28, consumer_sockpoll=0x7f75cc976bb0) at consumer.c:2097
#11 0x000055ab0048abfd in consumer_thread_sessiond_poll (data=0x55ab01d4ee50) at consumer.c:3284
#12 0x00007f75cfb8b609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#13 0x00007f75cfab0293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
This is also caused, in part, by the invalidation of the cache.
`metadata_stream_reset_cache_consumed_position` expects the stream to be
locked at that point, which is not the case despite what the last argument
to `lttng_consumer_read_subbuffer` indicates. To alleviate that, we hold
the lock during the call to `lttng_consumer_read_subbuffer`.
Known drawbacks
=========
None.
References
=========
[1] https://git.lttng.org/?p=lttng-tools.git;a=commit;h=b1316da1ffbd276fc8271e7a9438e683ad352781
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ib09fdc989b1baae26d0f496afeabc3e7cb27228c
Jérémie Galarneau [Fri, 10 Dec 2021 19:10:13 +0000 (14:10 -0500)]
Tests: live kernel: no plan printed when non-root
The live kernel test does not produce a valid TAP output when
skipping due to not running the test as root.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I4a10de494260084216ddb1b8f4ee27a546c4d8ed
Jérémie Galarneau [Wed, 3 Nov 2021 02:31:15 +0000 (22:31 -0400)]
Fix: sessiond: snapshot: leak of trace chunk
Valgrind reports a leak after every snapshot record command:
==827791== 430 (280 direct, 150 indirect) bytes in 1 blocks are definitely lost in loss record 34 of 37
==827791== at 0x48435FF: calloc (vg_replace_malloc.c:1117)
==827791== by 0x223D01: zmalloc (macros.h:45)
==827791== by 0x224B79: lttng_trace_chunk_allocate (trace-chunk.c:387)
==827791== by 0x224E41: lttng_trace_chunk_create (trace-chunk.c:427)
==827791== by 0x150B55: session_create_new_trace_chunk (session.c:656)
==827791== by 0x164A11: snapshot_record (cmd.c:5113)
==827791== by 0x1651EE: cmd_snapshot_record (cmd.c:5302)
==827791== by 0x196E74: process_client_msg (client.c:2166)
==827791== by 0x198AF1: thread_manage_clients (client.c:2742)
==827791== by 0x18E245: launch_thread (thread.c:66)
==827791== by 0x4B9E258: start_thread (in /usr/lib/libpthread-2.33.so)
==827791== by 0x4CB45E2: clone (in /usr/lib/libc-2.33.so)
session_set_trace_chunk() on line 5162 returns a reference to the
current trace chunk which is never released.
This also causes tests/regression/tools/snapshots/test_ust_long to fail
due to a file descriptor exhaustion (presumably from using too many
directory file descriptors) when it is executed by an unprivileged user.
The CI doesn't catch this since the long regression test suite is
executed as root.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I4b6e45df48c3daafa2294c80ccd8a2b4d91401e1
Jonathan Rajotte [Tue, 23 Nov 2021 17:15:42 +0000 (12:15 -0500)]
Fix: test: use BABELTRACE_BIN instead of babeltrace
Observed issue
==============
The "System tests" jobs fail on the multi-session test since the move to bt2.
Cause
=====
The tests use `babeltrace` instead of `BABELTRACE_BIN`.
Solution
========
Use `BABELTRACE_BIN`.
Add a babeltrace bail out.
While there, fix easy shellcheck warnings.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I441d736e85c447c5765bffd520ec2f267c86048f
Mathieu Desnoyers [Thu, 2 Dec 2021 22:33:55 +0000 (17:33 -0500)]
Fix: relayd: `!vsession->current_trace_chunk` assertion failed
Observed issue
==============
When performing:
#!/bin/bash
lttng create py_syscalls --live
lttng enable-event -u -a
lttng enable-event -k -a
lttng start
babeltrace2 -i lttng-live net://localhost/host/raton/py_syscalls
The relay daemon hits this assertion:
Thread 8 (Thread 0x7fffeeffd700 (LWP 167040) "lttng-relayd"):
#0 0x00007ffff7b1618b in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff7af5859 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff7af5729 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007ffff7b06f36 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00005555555889bb in viewer_session_attach (vsession=0x7fffdc001400, session=session@entry=0x7fffe8001180) at viewer-session.c:80
#5 0x000055555557bcff in viewer_attach_session (conn=0x7fffd0001140) at live.c:1275
#6 process_control (conn=0x7fffd0001140, recv_hdr=0x7fffeeffcaf0) at live.c:2341
#7 thread_worker (data=<optimized out>) at live.c:2515
#8 0x00007ffff7ccd609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#9 0x00007ffff7bf2293 in clone () from /lib/x86_64-linux-gnu/libc.so.6
Cause
=====
This assert appears to be entirely wrong.
It checks that the "viewer session" has a NULL current trace chunk when
attaching a session to a viewer session, but in the case where a viewer
session has multiple sessions (e.g. with kernel and ust tracing
combined), we are attaching each session individually to the viewer
session, and we set the current trace chunk of the viewer session when
we attach the first session to it.
So it is expected to be non-NULL when attaching the second session.
Solution
========
Remove the assertion.
Known limitations
=================
None.
Fixes: #1335
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I4d8f5d5347b4588144ddf449976cae5a94b81b3a
Simon Marchi [Wed, 10 Nov 2021 13:42:25 +0000 (08:42 -0500)]
Fix: tests: fix unused-but-set warning in test_fd_tracker.c
When building with clang-14 on Ubuntu 20.04, I get:
CC test_fd_tracker.o
/home/smarchi/src/lttng-tools/tests/unit/test_fd_tracker.c:169:15: error: variable 'fds_set_to_minus_1' set but not used [-Werror,-Wunused-but-set-variable]
unsigned int fds_set_to_minus_1 = 0;
^
The compiler seems right, so remove fds_set_to_minus_1. It might be
that the intention was to assert something using this variable, but I
couldn't figure it out.
Change-Id: I12bfd07bca7829de8d5b85d375d9b52bd84d677a
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Simon Marchi [Wed, 10 Nov 2021 13:39:22 +0000 (08:39 -0500)]
Fix: sessiond: fix possible buffer overflow warning
When compiling with clang-14 on Ubuntu 20.04, I get:
CC lttng-syscall.lo
/home/smarchi/src/lttng-tools/src/bin/lttng-sessiond/lttng-syscall.c:70:13: error: 'fscanf' may overflow; destination buffer in argument 4 has size 255, but the corresponding specifier may require size 256 [-Werror,-Wfortify-source]
&index, name, &bitness) == 3) {
^
I think the compiler is right: we read a string with a length of up to
255 characters into a buffer of size 255. We need one more byte for the
NUL terminator; fix that.
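An illustrative example of the general rule (not the actual format string
used in lttng-syscall.c):

  char name[256];   /* a "%255s"-style conversion stores up to 255 chars plus a NUL byte */

  if (fscanf(fp, "%255s", name) == 1) {
          /* ... */
  }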
Change-Id: I6b2eec401af3ef6230dd4b6c8559032de9b54584
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Tue, 19 Oct 2021 19:22:39 +0000 (15:22 -0400)]
Fix: lttng-ctl: tracing_group memory leaks
Observed issue
==============
liblttng-ctl leaks memory if `lttng_set_tracing_group` is called at least
once by an API client.
joraj@~/lttng/master/lttng-tools-dev [master][]$ valgrind --leak-check=full lttng --group=joraj list
==24823== Memcheck, a memory error detector
==24823== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==24823== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==24823== Command: lttng --group=joraj list
==24823==
Error: No session daemon is available
==24823==
==24823== HEAP SUMMARY:
==24823== in use at exit: 8 bytes in 1 blocks
==24823== total heap usage: 55 allocs, 54 frees, 87,023 bytes allocated
==24823==
==24823== 8 bytes in 1 blocks are definitely lost in loss record 1 of 1
==24823== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==24823== by 0x4BA7DC7: __vasprintf_internal (vasprintf.c:71)
==24823== by 0x4C4B742: __asprintf_chk (asprintf_chk.c:34)
==24823== by 0x48687D9: asprintf (stdio2.h:181)
==24823== by 0x48687D9: lttng_set_tracing_group (lttng-ctl.c:2620)
==24823== by 0x4011B89: call_init.part.0 (dl-init.c:72)
==24823== by 0x4011C90: call_init (dl-init.c:30)
==24823== by 0x4011C90: _dl_init (dl-init.c:119)
==24823== by 0x4001139: ??? (in /usr/lib/x86_64-linux-gnu/ld-2.31.so)
==24823== by 0x2: ???
==24823== by 0x1FFEFFFCFE: ???
==24823== by 0x1FFEFFFD04: ???
==24823== by 0x1FFEFFFD12: ???
==24823==
==24823== LEAK SUMMARY:
==24823== definitely lost: 8 bytes in 1 blocks
==24823== indirectly lost: 0 bytes in 0 blocks
==24823== possibly lost: 0 bytes in 0 blocks
==24823== still reachable: 0 bytes in 0 blocks
==24823== suppressed: 0 bytes in 0 blocks
==24823==
==24823== For lists of detected and suppressed errors, rerun with: -s
==24823== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Cause
=====
The pointer allocated in the library constructor is not freed on
subsequent assignment.
Solution
========
Free the pointer.
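A minimal sketch of the idea (variable name approximate, not the actual
implementation):

  char *new_group = NULL;

  if (asprintf(&new_group, "%s", name) < 0) {
          return -LTTNG_ERR_FATAL;
  }
  /* Release the value allocated by the constructor or a previous call. */
  free(tracing_group);
  tracing_group = new_group;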
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ie1d4c45df2764a88c74d56de691783df9215633c
Francis Deslauriers [Mon, 1 Nov 2021 19:31:26 +0000 (15:31 -0400)]
Fix: Tests: unchecked `close()` return value
CID 1465101 (#1 of 1): Unchecked return value (CHECKED_RETURN)
9. check_return: Calling close without checking return value (as is done
elsewhere 177 out of 185 times).
CID 1465100 (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling close without checking return value (as is done
elsewhere 177 out of 185 times).
CID 1465099 (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling close without checking return value (as is done
elsewhere 177 out of 185 times).
CID 1465098 (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling close without checking return value (as is done
elsewhere 177 out of 185 times).
CID 1465097 (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling close without checking return value (as is done
elsewhere 177 out of 185 times).
Reported-by: Coverity Scan
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I8e2552c75ab7cec5aa3707e2c1c4d9f2484b501a
Jérémie Galarneau [Mon, 1 Nov 2021 19:43:55 +0000 (15:43 -0400)]
Fix: relayd: live: mishandled initial null trace chunk
Observed issue
==============
As reported in #1323 (https://bugs.lttng.org/issues/1323), crashes of
the relay daemon are observed when running the user space clear tests.
The crash occurs with the following stack trace:
#0 0x000055fbb861d6ae in urcu_ref_get_unless_zero (ref=0x28) at /usr/local/include/urcu/ref.h:85
#1 lttng_trace_chunk_get (chunk=0x0) at trace-chunk.c:1836
#2 0x000055fbb86051e2 in make_viewer_streams (relay_session=relay_session@entry=0x7f6ea002d540, viewer_session=<optimized out>, seek_t=seek_t@entry=LTTNG_VIEWER_SEEK_BEGINNING, nb_total=nb_total@entry=0x7f6ea9607b00, nb_unsent=nb_unsent@entry=0x7f6ea9607aec, nb_created=nb_created@entry=0x7f6ea9607ae8, closed=<optimized out>) at live.c:405
#3 0x000055fbb86061d9 in viewer_get_new_streams (conn=0x7f6e94000fc0) at live.c:1155
#4 process_control (conn=0x7f6e94000fc0, recv_hdr=0x7f6ea9607af0) at live.c:2353
#5 thread_worker (data=<optimized out>) at live.c:2515
#6 0x00007f6eae86a609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7 0x00007f6eae78f293 in clone () from /lib/x86_64-linux-gnu/libc.so.6
The race window during which this occurs seems very small as it can take
hours to reproduce this crash. However, a minimal reproducer could be
identified, as stated in the bug report.
Essentially, the same crash can be reproduced by attaching a live viewer
to a session that has seen events being produced, been stopped and been
cleared.
Cause
=====
The crash occurs as an attempt is made to take a reference to a viewer
session’s trace chunk as viewer streams are created. The crux of the
problem is that the code doesn’t expect a viewer session’s trace chunk
to be NULL.
The viewer session’s current trace chunk is initially set, when a viewer
attaches to the viewer session, to a copy of the corresponding
relay_session’s current trace chunk.
A live session always attempts to "catch-up" to the newest available
trace chunk. This means that when a viewer reaches the end of a trace
chunk, the viewer session may not transition to the "next" one: it jumps
to the most recent trace chunk available (the one being produced by the
relay_session). Hence, if the producer performs multiple rotations
before a viewer completes the consumption of a trace chunk, it will skip
over those "intermediary" trace chunks.
A viewer session updates its current trace chunk when:
1) new viewer streams are created,
2) a new index is requested,
3) metadata is requested.
Hence, as a general principle, the viewer session will reference the
most recent trace chunk available _even if its streams do not point to
it_. It indicates which trace chunk viewer streams should transition to
when the end of their current trace chunk is reached.
The live code properly handles transitions to a null chunk. This can be
verified by attaching a viewer to a live session, stopping the session,
clearing it (thus entering a null trace chunk), and resuming tracing.
The only issue is that the case where the first trace chunk of a viewer
session is "null" (no active trace chunk) is mishandled in two places:
1) in make_viewer_streams(), where the crash is observed,
2) in viewer_get_metadata().
Solution
========
In make_viewer_streams(), it is assumed that a viewer session will have
a non-null trace chunk whenever a rotation is not ongoing. This is
reflected by the fact that a reference is always acquired on the viewer
session’s trace chunk.
That code is one of the three places that can cause a viewer session’s
trace chunk to be updated. We still want to update the viewer session to
the most recently seen trace chunk (null, in this case). However, there
is no reference to acquire and the trace chunk to use for the creation
of the viewer stream is NULL. This is properly handled by
viewer_stream_create().
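A sketch of the adjusted logic in make_viewer_streams() (simplified; the
boolean return of lttng_trace_chunk_get() is assumed):

  if (viewer_session->current_trace_chunk) {
          /* Only acquire a reference when there is an active trace chunk. */
          const bool acquired = lttng_trace_chunk_get(
                          viewer_session->current_trace_chunk);

          assert(acquired);
  }
  /* A NULL chunk is handled by viewer_stream_create(). */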
The second site to change is viewer_get_metadata() which doesn’t handle
a viewer metadata stream not having an active trace chunk at all.
Thankfully, the protocol allows us to express this condition by
returning the LTTNG_VIEWER_NO_NEW_METADATA status code when a viewer
metadata stream doesn’t have an open file and doesn’t have a current
trace chunk.
Surprisingly, this bug didn’t trigger in the case where a transition to
a null chunk occurred _after_ attaching to a viewer session.
This is because viewers will typically ask for metadata as a result of an
LTTNG_VIEWER_FLAG_NEW_METADATA reply to the GET_NEXT_INDEX command. When
a session is stopped and all data was consumed, this command returns
that no new data is available, causing the viewers to wait and ask again
later.
However, when attaching, babeltrace2 (at least, and probably babeltrace 1.x)
always asks for an initial segment of metadata before asking for an
index.
Known drawbacks
===============
None.
Fixes: #1323
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I516fca60755e6897f6b7170c12d706ef57ad61a5
Francis Deslauriers [Thu, 30 Sep 2021 18:43:11 +0000 (14:43 -0400)]
Fix: configure.ac: reporting SDT uprobe as a UST feature
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I86638d6a148b04e7131e4af7ec830c5e56817fdc
Francis Deslauriers [Thu, 7 Oct 2021 18:52:27 +0000 (14:52 -0400)]
Fix: Tests: leaking epoll fd
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I5ec4fcdb87159f35932c20e7314cda764d14967c
Jérémie Galarneau [Mon, 18 Oct 2021 21:10:08 +0000 (17:10 -0400)]
Update version to v2.12.6
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Thu, 7 Oct 2021 20:19:41 +0000 (16:19 -0400)]
Fix: sessiond: previously created channel cannot be enabled
Observed issue
==============
A previously created channel cannot be enabled back once a session is
started.
Cause
=====
The check validating that the session was started is performed too early
in the `cmd_enable_channel` function.
Solution
========
Move the check to the creation code path, which is taken when the channel
is not found.
Known drawbacks
=========
None.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Change-Id: I8e7d62b7e97246e65f1cf9022270293a6dd34cc9
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Thu, 8 Jul 2021 18:17:51 +0000 (14:17 -0400)]
Fix: ust: app stuck on recv message during UST comm timeout scenario
Observed issue
==============
The following scenario leads to the UST thread being "stuck" on recvmsg
on the notify socket.
The problem manifests itself when an application is unresponsive during
the ustctl_start_session call. Note that the default timeout for UST
communication is 5 seconds.
# Start an instrumented app
./app
gdb lttng-sessiond
# put a breakpoint on ustctl_start_session
lttng create my_session
lttng enable-event -u -a
lttng start
# The tracepoint should hit. Do not continue.
kill -s SIGSTOP $(pgrep app)
# Continue lttng-sessiond.
sleep 5 # This makes sure lttng-sessiond unregisters the app from its point of view
kill -s SIGCONT $(pgrep app)
gdb -p $(pgrep app)
thread apply all bt
App stack trace:
Thread 3 (Thread 0x7fe2c6f58700 (LWP 48172)):
#0 __libc_recvmsg (flags=0, msg=0x7fe2c6f56ac0, fd=4) at ../sysdeps/unix/sysv/linux/recvmsg.c:28
#1 __libc_recvmsg (fd=fd@entry=4, msg=msg@entry=0x7fe2c6f56ac0, flags=flags@entry=0) at ../sysdeps/unix/sysv/linux/recvmsg.c:25
#2 0x00007fe2c7a010ba in ustcomm_recv_unix_sock (sock=sock@entry=4, buf=buf@entry=0x7fe2c6f56ea0, len=len@entry=48) at lttng-ust-comm.c:308
#3 0x00007fe2c7a037c3 in ustcomm_register_channel (sock=4, session=session@entry=0x7fe2c0000ba0, session_objd=<optimized out>, channel_objd=<optimized out>, nr_ctx_fields=nr_ctx_fields@entry=0, ctx_fields=<optimized out>, chan_id=0x7fe2c6f5716c, header_type=0x7fe2c0012b18) at lttng-ust-comm.c:1544
#4 0x00007fe2c7a10787 in lttng_session_enable (session=0x7fe2c0000ba0) at lttng-events.c:444
#5 0x00007fe2c7a0b785 in lttng_session_cmd (objd=1, cmd=128, arg=140611977311672, uargs=0x7fe2c6f57800, owner=0x7fe2c7a5da00 <local_apps>) at lttng-ust-abi.c:576
#6 0x00007fe2c7a07d6d in handle_message (lum=0x7fe2c6f57590, sock=3, sock_info=0x7fe2c7a5da00 <local_apps>) at lttng-ust-comm.c:1003
#7 ust_listener_thread (arg=0x7fe2c7a5da00 <local_apps>) at lttng-ust-comm.c:1712
#8 0x00007fe2c7993609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#9 0x00007fe2c78ba293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
...
Cause
=====
When the app continues after the timeout on the lttng-sessiond side, the
actual start_session message is received on the application side;
lttng-ust, on the app side, then sends commands on the notify socket. On
the lttng-sessiond side, the command is received but no reply is sent.
This is due to the fact that the lookup against the
ust_app_ht_by_notify_sock hash table (find_app_by_notify_sock)
returns nothing, since the app is unregistered at this point and the hash
table node was removed on unregistration.
Solution
========
When the app lookup fails, return an error that will trigger the cleanup
of the notify socket.
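A minimal sketch of the idea (error handling approximate):

  app = find_app_by_notify_sock(sock);
  if (!app) {
          DBG("Application not found for notify socket %d, cleaning it up", sock);
          ret = -1;       /* Tells the caller to tear down the notify socket. */
          goto error;
  }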
Known drawbacks
=========
None
Note
=========
Subsequent error paths in reply_ust_register_channel,
add_event_ust_registry, and add_enum_ust_registry might lead to the same
type of problem since no reply is sent to the app. Still, for those
cases the complete application/notify socket should not be destroyed,
since those error paths relate to either a session or a sub-object of a
session.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Change-Id: Iea0dc027ca1ee772e84c7e545114f1be69fd1f63
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Wed, 23 Jun 2021 02:17:03 +0000 (22:17 -0400)]
Fix: ust: UST communication can return -EAGAIN
Observed issue
==============
The following scenario leads to an abort on event creation. The
problem manifests itself when an application is unresponsive. Note that
the default timeout for UST communication is 5 seconds.
# Start an instrumented app
./app
gdb lttng-sessiond
# put a breakpoint on ustctl_create_event.
lttng create my_session
lttng enable-event -u -a
lttng start
# The tracepoint should hit. Do not continue.
kill -s SIGSTOP $(pgrep app)
# Continue lttng-sessiond.
# lttng-sessiond will abort.
Note that for UST this is not an expected behaviour. An expected
communication failure with a single app should not invalidate the
complete channel, compromise its setup, or result in an abort.
Note that a similar scenario for the following ustctl call sites also
leads to situations where the failure of a single app leads to error
reporting and/or error propagation to upper-level objects.
Problematic callsites:
ustctl_set_exclusion
ustctl_set_filter
ustctl_disable_channel
These callsites are also fixed by this patch.
Cause
=====
For an unresponsive application, EAGAIN is returned and is treated as an
"unknown" hard error.
In this particular case the abort() call was introduced by commit:
88e3c2f5610b9ac89b0923d448fee34140fc46fb [1]. It is not clear if this is
a leftover from a debugging session, since this is the only call site
where an abort is issued on communication failure via ustctl.
Solution
========
Handle EAGAIN coming from ustctl_* and treat it the same way a
dying application is handled. The only minor difference is that we WARN
on a communication timeout. Albeit not the most useful thing for a CLI
client, it could help users of lttng-sessiond in timeout situations.
Most call sites already handled the "unknown" error correctly. For those
call sites we simply end up providing more information regarding the
timeout instead of mentioning that "-11" was returned.
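A sketch of the resulting error-handling pattern around a ustctl_* call
(simplified; the exact error constants checked in the tree are assumed):

  if (ret < 0) {
          if (ret == -EPIPE || ret == -LTTNG_UST_ERR_EXITING) {
                  /* The application is dying or has exited: not an error. */
                  ret = 0;
          } else if (ret == -EAGAIN) {
                  WARN("Communication with the application timed out");
                  ret = 0;
          } else {
                  ERR("UST app command failed with ret %d", ret);
          }
  }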
Note that the reclamation of "app" is handled by the poll loop and
ust_app_unregister, since the socket is shut down internally by lttng-ust
on error, including EAGAIN.
Note that the application will try to register itself back to
lttng-sessiond based on its configuration.
Known drawbacks
=========
None
References
==========
[1] https://github.com/lttng/lttng-tools/commit/88e3c2f5610b9ac89b0923d448fee34140fc46fb
Fixes: #1384
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Change-Id: If364b5d48e7fd2b664276a0fb1b7eec2c45ed683
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Mon, 12 Jul 2021 20:44:38 +0000 (16:44 -0400)]
Fix: ust: segfault on lttng start on filter bytecode copy
Observed issue
==============
A segmentation fault is observed for multiple UST timeout scenarios.
Backtrace:
#0 __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:384
#1 0x0000557fe0395df9 in copy_filter_bytecode (orig_f=0x7f9c5802b790) at ust-app.c:1196
#2 0x0000557fe0397702 in shadow_copy_event (ua_event=0x7f9c58025ff0, uevent=0x7f9c58033560) at ust-app.c:1824
#3 0x0000557fe039ac46 in create_ust_app_event (ua_sess=0x7f9c5802ec20, ua_chan=0x7f9c58025cc0, uevent=0x7f9c58033560, app=0x7f9c5c001da0) at ust-app.c:3192
#4 0x0000557fe03a054d in ust_app_channel_synchronize_event (ua_chan=0x7f9c58025cc0, uevent=0x7f9c58033560, ua_sess=0x7f9c5802ec20, app=0x7f9c5c001da0) at ust-app.c:5096
#5 0x0000557fe03a0772 in ust_app_synchronize (usess=0x7f9c580074a0, app=0x7f9c5c001da0) at ust-app.c:5173
#6 0x0000557fe03a0a70 in ust_app_global_update (usess=0x7f9c580074a0, app=0x7f9c5c001da0) at ust-app.c:5255
#7 0x0000557fe03a00e0 in ust_app_start_trace_all (usess=0x7f9c580074a0) at ust-app.c:4987
#8 0x0000557fe0355c6a in cmd_start_trace (session=0x7f9c5800a190) at cmd.c:2668
#9 0x0000557fe0382e70 in process_client_msg (cmd_ctx=0x7f9c58003d70, sock=0x7f9c74bf44e0, sock_error=0x7f9c74bf44e4) at client.c:1527
#10 0x0000557fe03848a2 in thread_manage_clients (data=0x557fe06d9440) at client.c:2200
#11 0x0000557fe037d1cb in launch_thread (data=0x557fe06d94b0) at thread.c:75
#12 0x00007f9c796af609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#13 0x00007f9c795b6293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
The scenario:
# Start an instrumented app
./app
gdb lttng-sessiond
# put a breakpoint on ustctl_set_filter
lttng create my_session
lttng enable-event -u tp:tp_test
lttng start
lttng enable-event -u __dummy --filter 'my_field == "user34"'
# The tracepoint should hit. Do not continue.
kill -s SIGSTOP $(pgrep app)
# Continue lttng-sessiond.
# enable-event will return an error. This is a bug in itself; still, let's
# continue with the current bug.
lttng stop
# Start a new app that will register.
./app &
sleep 1
lttng start
# lttng-sessiond should segfault.
Cause
=====
During the "lttng enable-event" command, the timeout error bubbles up
all the way to event_ust_enable_tracepoint and is different from
LTTNG_UST_ERR_EXIST. `trace_ust_destroy_event` is called and frees the
`uevent` object. Note that, contrary to the comment, `uevent` is added to
the channel's event hash table at this point.
On the next `lttng start` command, the event node is still present in
the hash table and is iterated on. lttng-sessiond segfaults on the first
data access of the previously freed memory.
The problem was introduced by commit
88e3c2f5610b9ac89b0923d448fee34140fc46fb [1], which essentially moved the
call site of `add_unique_ust_event` before the `ust_app_*_event_glb` calls.
Solution
========
Go to the `end` label to prevent the freeing of the uevent object.
Note that app synchronization should not force an error at the channel
level, since a single app can fail but the whole channel should not.
The `error` label is now obsolete.
Known drawbacks
=========
None.
References
==========
[1] https://github.com/lttng/lttng-tools/commit/88e3c2f5610b9ac89b0923d448fee34140fc46fb
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Change-Id: Ifaf3f4c71bb2da869c7b441aaa4b367f8f7cbdd6
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jérémie Galarneau [Fri, 15 Oct 2021 21:03:38 +0000 (17:03 -0400)]
Build fix: Missing message in LTTNG_DEPRECATED invocation
Coverity scan build jobs fail since LTTNG_DEPRECATED expects a string
and none is provided at the lttng_metadata_regenerate use site.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I7e6701abd24c679f578b0adead771ac93b6566cd
Simon Marchi [Wed, 6 Oct 2021 14:57:24 +0000 (10:57 -0400)]
include: remove spurious spaces in condition/session-rotation.h
Change-Id: Ia525d24c3b4098dff5c50fb2c5d93c16f6e08f5c
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Simon Marchi [Tue, 5 Oct 2021 20:10:18 +0000 (16:10 -0400)]
tests: fix header of regression/ust/getcpu-override/run-getcpu-override
The "SPDX-License-Identifier:" header is not in a comment, so is
interpreted as a bash command. This is harmless, but it appears in the
test output:
ok 13 - Start tracing for session sequence-cpu
# Launching app with getcpu-plugin wrapper
./tests/regression/ust/getcpu-override//run-getcpu-override: 2: SPDX-License-Identifier:: not found
ok 14 - Application with wrapper done
Fix that, and add a proper copyright notice, based on the other files
that were added at the same time as this one.
Change-Id: Icdf5e2fd5aec4080b2e5cad10cca4813bad26394
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Michael Jeanson [Thu, 5 Aug 2021 20:48:51 +0000 (16:48 -0400)]
fix: wrong define used for GCC version check
As far as I can tell, the __GNUC_MAJOR__ define has never existed, the
proper define for the major version is __GNUC__. See
https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html for
more details.
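For reference, a check of the correct form (the 5.1 version tested here is
only an example):

  #if defined(__GNUC__) && \
          (__GNUC__ > 5 || (__GNUC__ == 5 && __GNUC_MINOR__ >= 1))
  /* ... */
  #endif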
Change-Id: I0d47d524e7efd204fd2f8976311c62e872eb6170
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jérémie Galarneau [Mon, 4 Oct 2021 16:41:51 +0000 (12:41 -0400)]
Fix: userspace-probe: unreported error on string copy error
Issue
=====
String copy errors, either due to the length or an allocation failure,
are not reported by
lttng_userspace_probe_location_tracepoint_create_from_payload
and don't log a clear error message.
This allowed truncation bugs like the one fixed in b45a296 to go
unnoticed.
Fix
===
Return an "invalid" status code and log a more descriptive error
message.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ia07cac7cba315ea79337262e9082dd06eb60950f
Francis Deslauriers [Fri, 1 Oct 2021 20:10:24 +0000 (16:10 -0400)]
Fix: userspace-probe: truncating binary path for SDT
Issue
=====
This issue was uncovered when we enabled the testing of the SDT
userspace probe instrumentation on the CI, where the paths to files are
especially long.
The reported error is:
- rule: ma-probe-sdt (type: kernel:uprobe, location type: SDT, location: /root/workspace/dev_gerrit_lttng-tools_rootbuild/arch/amd64/babeltrace_version/stable-2.0/build/std/conf/agents/liburcu_version/master/node/amd64-rootnode/test_type/base/src/lttng-tools/tests/utils/testapp/userspace-probe-sdt-binary/.libs/userspace-probe-sdt-binary:foobar:tp1)
+ rule: ma-probe-sdt (type: kernel:uprobe, location type: SDT, location: /root/workspace/dev_gerrit_lttng-tools_rootbuild/arch/amd64/babeltrace_version/stable-2.0/build/std/conf/agents/liburcu_version/master/node/amd64-rootnode/test_type/base/src/lttng-tools/tests/utils/testapp/userspace-probe-sdt-binary/.libs/userspace-probe-s:foobar:tp1)
The important part to notice is that the path to the binary is truncated
compared to what is expected by the test case.
The problem is caused by the
`lttng_userspace_probe_location_tracepoint_create_from_payload()`
function, which strdup()s the path string using the wrong length define.
Fix
===
Use LTTNG_PATH_MAX rather than LTTNG_SYMBOL_NAME_LEN to copy the binary
path.
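An illustrative sketch of the bounded copy (hypothetical variable names):

  /* Bound the copy by the path limit, not the symbol-name limit. */
  binary_path = strndup(payload_path, LTTNG_PATH_MAX - 1);
  if (!binary_path) {
          goto error;
  }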
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I24cbf413baba405bf4c4b534ccbc2b18f8d5d43f
Francis Deslauriers [Thu, 19 Aug 2021 21:14:46 +0000 (17:14 -0400)]
Fix: statements with side-effects in assert statements
Background
==========
When building with the NDEBUG definition the `assert()` statements are
removed.
Issue
=====
Currently, a few `assert()` statements in the code base contain
statements that have side effects and removing them changes the
behavior of the program.
Fix
===
Extract the statements with side effects out of the `assert()`
statements.
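A generic illustration of the pattern (not lifted from the tree):

  /* Before: the close() call disappears entirely when NDEBUG is defined. */
  assert(close(fd) == 0);

  /* After: the side effect always runs; only the check is compiled out. */
  ret = close(fd);
  assert(ret == 0);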
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I0b11c8e25c3380563332b4c0fad15f70b09a7335
Jonathan Rajotte [Thu, 16 Sep 2021 15:20:07 +0000 (11:20 -0400)]
Fix: lttng_trace_archive_location_serialize is called on freed memory
Observed issue
==============
The following backtrace have been reported [1].
#0 __GI_raise (sig=sig@entry=6) at /usr/src/debug/glibc/2.31+gitAUTOINC+f84949f1c4-r0/git/sysdeps/unix/sysv/linux/raise.c:50
#1 0x0000003123025528 in __GI_abort () at /usr/src/debug/glibc/2.31+gitAUTOINC+f84949f1c4-r0/git/stdlib/abort.c:79
#2 0x0000000000419884 in lttng_trace_archive_location_serialize (location=0x7f1c9c001160, buffer=0x7f1cb961c320) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/common/location.c:230
#3 0x00000000004c8f06 in lttng_evaluation_session_rotation_serialize (evaluation=0x7f1cb000a7f0, payload=0x7f1cb961c320) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/common/conditions/session-rotation.c:539
#4 0x00000000004a80fa in lttng_evaluation_serialize (evaluation=0x7f1cb000a7f0, payload=0x7f1cb961c320) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/common/evaluation.c:42
#5 0x00000000004bc24f in lttng_notification_serialize (notification=0x7f1cb961c310, payload=0x7f1cb961c320) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/common/notification.c:63
#6 0x0000000000458b7d in notification_client_list_send_evaluation (client_list=0x7f1cb0008f90, trigger=0x7f1ca40113d0, evaluation=<optimized out>, source_object_creds=0x7f1cb000a874, client_report=0x475840 <client_handle_transmission_status>, user_data=0x7f1cb0006010) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/bin/lttng-sessiond/notification-thread-events.c:4379
#7 0x0000000000476586 in action_executor_generic_handler (item=0x7f1cb0009600, work_item=0x7f1cb000a820, executor=0x7f1cb0006010) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/bin/lttng-sessiond/action-executor.c:696
#8 action_work_item_execute (work_item=0x7f1cb000a820, executor=0x7f1cb0006010) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/bin/lttng-sessiond/action-executor.c:715
#9 action_executor_thread (_data=0x7f1cb0006010) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/bin/lttng-sessiond/action-executor.c:797
#10 0x0000000000462327 in launch_thread (data=0x7f1cb00060b0) at /usr/src/debug/lttng-tools/2.13.0-r0/lttng-tools-2.13.0/src/bin/lttng-sessiond/thread.c:66
#11 0x0000003123408ea4 in start_thread (arg=<optimized out>) at /usr/src/debug/glibc/2.31+gitAUTOINC+f84949f1c4-r0/git/nptl/pthread_create.c:477
#12 0x00000031230f8dcf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
This can be easily reproduced with the following session and trigger
configuration:
lttng create test
lttng enable-event -u -a
lttng start
# Register two similar triggers via a dummy C program since rotation
# completed condition is not exposed on the CLI for now. Yielding the
# following triggers:
lttng list-triggers
- name: trigger0
owner uid: 1000
condition: session rotation completed
session name: test
errors: none
action:notify
errors: none
- name: trigger1
owner uid: 1000
condition: session rotation completed
session name: test
errors: none
action:notify
errors: none
lttng rotate <- abort happens here.
Cause
=====
The problem lies in how the location (`lttng_trace_archive_location`)
object is assigned to the `lttng_evaluation` objects. A single location
object can end up being shared between multiple `lttng_evaluation` objects
since we iterate over all triggers and create an `lttng_evaluation` object
with the location each time as needed.
See `src/bin/lttng-sessiond/notification-thread-events.c:1956`.
The location object is then freed when the first notification is
completely serialized. The second serialization ends up having a
reference to a freed `lttng_trace_archive_location` object.
Solution
========
Implement ref counting for the lttng_trace_archive_location object.
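A minimal sketch of the refcounting, assuming the urcu_ref helpers already
used elsewhere in the tree (the release callback name is hypothetical):

  struct lttng_trace_archive_location {
          struct urcu_ref ref;
          /* ... existing fields ... */
  };

  void lttng_trace_archive_location_get(struct lttng_trace_archive_location *location)
  {
          urcu_ref_get(&location->ref);
  }

  void lttng_trace_archive_location_put(struct lttng_trace_archive_location *location)
  {
          if (!location) {
                  return;
          }
          urcu_ref_put(&location->ref, trace_archive_location_release);
  }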
Note
=======
This also fixes a leak that was present in `cmd_destroy_session_reply`.
The location is created by `session_get_trace_archive_location` and is
never `destroyed`/`put`.
Known drawbacks
=========
None.
References
==========
[1] https://bugs.lttng.org/issues/1325
Fixes: #1325
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Change-Id: I99dc595ee5b0288c727b193ed061f5273752bd24
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Mon, 13 Sep 2021 20:49:48 +0000 (16:49 -0400)]
Fix: sessiond: ust session is inactive during ust_app_global_update
Observed issue
==============
The following scenario leads to an abort of lttng-sessiond.
lttng-sessiond (with kernel tracing available)
lttng create system-trace --snapshot -U /tmp/snapshot
lttng enable-channel -k system-trace --subbuf-size=4k --num-subbuf=256
lttng enable-event -c system-trace -k 'sched_wak*' -s system-trace
lttng start system-trace
lttng enable-event -u -a
Fails as expected with:
Error: Events: The command tried to enable an event in a new domain for
a session that has already been started once. (channel channel0,
session system-trace)
Launch any ust app such as easy_ust from the lttng-ust repository.
The following backtrace is generated:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7af0859 in __GI_abort () at abort.c:79
#2 0x00007ffff7af0729 in __assert_fail_base (fmt=0x7ffff7c86588 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555564b765 "usess->active", file=0x555555649a60 "ust-app.c", line
#3 0x00007ffff7b01f36 in __GI___assert_fail (assertion=0x55555564b765 "usess->active", file=0x555555649a60 "ust-app.c", line=5123, function=0x55555564ecf0 <__PRETTY_FUNCTION__.14199> "ust_
#4 0x00005555555d1f5e in ust_app_global_update (usess=0x7fffe001fb90, app=0x7fffac000b80) at ust-app.c:5123
#5 0x00005555555b60d4 in update_ust_app (app_sock=82) at dispatch.c:71
#6 0x00005555555b7025 in thread_dispatch_ust_registration (data=0x5555556a07f0) at dispatch.c:409
#7 0x00005555555ad5ab in launch_thread (data=0x5555556a0810) at thread.c:65
#8 0x00007ffff7ce6609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#9 0x00007ffff7bed293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
This also happens for the track command. You can replace the `lttng
enable-event -u -a` with `lttng track --userspace --vuid=0` then launch
an app and the same backtrace gets generated.
Cause
=====
During `process_client_msg` the `create_ust_session` function is called
and a ust session is assigned to the "system_trace" session with a
state of `active` set to 0 (false). This is not a problem.
The problem seems to lie with a single call site for
`ust_app_global_update` in `update_ust_app`. The status of the ust
session is not checked before calling the `ust_app_global_update`. It is
important to note that all `ust_app_global_update_all` callsites guard
the call with a check against the status of the session.
Solution
========
Guard the call to `ust_app_global_update` with a check of the ust
session active state.
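A minimal sketch of the guard (simplified):

  if (usess && usess->active) {
          ust_app_global_update(usess, app);
  }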
Known drawbacks
=========
None.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I14d25d99d0609689247cdfa86130bd0219613581
Francis Deslauriers [Wed, 8 Sep 2021 14:16:23 +0000 (10:16 -0400)]
Fix: Tests: race condition in test_ns_contexts_change
Issue
=====
The test script doesn't wait for the test application to complete before
stopping the tracing session. The race is that, depending on the
scheduling, the application is not always done generating events when the
session is stopped.
Fix
===
Make the test script wait for the termination of the test app before
stopping the session.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I29d9b41d2a2ed60a6c42020509c2067442ae332c
Jérémie Galarneau [Wed, 8 Sep 2021 19:54:32 +0000 (15:54 -0400)]
Fix: Tests: race condition in test_event_tracker
Background
==========
The `test_event_tracker` file contains test cases where the
event-generating app is executed in two distinct steps. Those two steps
are preparation and execution.
1. the preparation is launching the app in the background, and
2. the execution is actually generating the event that should or
should not be traced depending on the test case.
This is useful to test the tracker feature since we want to ensure that
already running apps are notified properly when changing their tracking
status.
Issue
=====
The `test_event_vpid_track_untrack` test case suffers from a race
condition that is easy to reproduce on Yocto.
The issue is that events sometimes end up in the trace when none are
expected.
This is due to the absence of a synchronization point at the launch of the
app, which leads to the app being scheduled in between the track/untrack
calls, resulting in events being recorded to the trace.
It's easy to reproduce this issue on my machine by adding a `sleep 5`
between the track and untrack calls and setting the `NR_USEC_WAIT`
variable to 1.
Fix
===
Use the testapp `--sync-before-last-event-touch` flag to make the app
create a file when all but the last event are executed. We then have the
app wait until we create a file (`--sync-before-last-event`) to generate
that last event. This way, we are sure no event will be generated when
running the track and untrack commands.
Notes
=====
- This issue affects other test cases in this file.
- This commit fixes a typo in the test header.
- This commit adds `diag` calls to help tracking to what test the output
relates to when reading the log.
Backport
========
The changes applying to `test_event_tracker` introduced by 33e5571 were
squashed into this commit as the clean-up affected a lot of areas
impacted by this change.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I3c7dcb8e3c3427944fb9c07d2d0b9b6213b6ab32
Jérémie Galarneau [Fri, 20 Aug 2021 19:12:20 +0000 (15:12 -0400)]
Fix: man: lttng-rotate: trace file count/size limitation does not apply
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I21f4809215df1b8e9b1cf24edabd478e9b530969
Jérémie Galarneau [Fri, 6 Aug 2021 16:50:37 +0000 (12:50 -0400)]
Update version to v2.12.5
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Francis Deslauriers [Fri, 6 Aug 2021 13:40:20 +0000 (09:40 -0400)]
Fix: runas: less-than-zero comparison of an unsigned value
Fixes two defects found by Coverity related to unsigned integers being
treated as signed.
Reported by Coverity:
CID 1461333: Control flow issues (NO_EFFECT)
This less-than-zero comparison of an unsigned value is never true. "buf_size < 0UL".
CID 1461332: Integer handling issues (NEGATIVE_RETURNS)
"buf_size" is passed to a parameter that cannot be negative.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Id6d4a71960f2ef34f14c05e66ef5d934b7a3e524
Francis Deslauriers [Fri, 23 Jul 2021 20:27:00 +0000 (16:27 -0400)]
Fix: runas: supplementary groups are ignored on lttng save
Observed issue
==============
On `lttng save` the following is reported to the user:
$ sudo -u my_user lttng save -o /tmp/my_dir my_session_name
Error: Permission denied
Note that:
* the running lttng-sessiond is root,
* "my_user" is part of the tracing group,
* "my_user" primary group is "my_user" and is part of group "my_dummy_group"
* The "/tmp/my_dir" has the following permissions:
drwxrwx--- 2 root my_dummy_group 4096 Jul 26 16:39 /tmp/my_dir/
Cause
=====
The supplementary groups are not initialized when the run-as process
demotes itself to the user "my_user" to perform the recursive mkdir
required by the `lttng save` command.
From the point of view of the kernel, at the moment of performing the
mkdir call, the permissions look like this:
euid: uid of "my_user"
egid: primary gid of "my_user"
supplementary group list: "root"
Note that the kernel does not treat the presence of the root group in
the supplementary group list in any special way. Since "root gid" !=
"my_dummy_group gid" the directory creation is refused.
Solution
========
Use initgroups(3) to initialize the supplementary group list.
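For illustration, a minimal privilege-drop sequence with the supplementary
groups initialized could look like the sketch below; names and error handling
are simplified and this is not the exact run-as code.
```
#include <grp.h>
#include <pwd.h>
#include <unistd.h>

/* Minimal sketch: demote to "user" with its supplementary groups loaded. */
static int demote_to_user(const char *user)
{
	const struct passwd *pw = getpwnam(user);

	if (!pw) {
		return -1;
	}
	/* Replace root's supplementary group list with the user's. */
	if (initgroups(user, pw->pw_gid) < 0) {
		return -1;
	}
	if (setegid(pw->pw_gid) < 0 || seteuid(pw->pw_uid) < 0) {
		return -1;
	}
	return 0;
}
```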
Known drawbacks
=========
None.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I58656a3107e4f7b59a2391a4759988401cad7a2b
Simon Marchi [Mon, 2 Aug 2021 01:02:39 +0000 (21:02 -0400)]
Fix: lttng: free sessions in cmd_destroy
When doing `lttng destroy`, I get:
Direct leak of 4385 byte(s) in 1 object(s) allocated from:
#0 0x7f74ae025459 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7f74add4129a in zmalloc /home/simark/src/lttng-tools/src/common/macros.h:45
#2 0x7f74add42b9d in recv_sessiond_optional_data /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:494
#3 0x7f74add42f9a in lttng_ctl_ask_sessiond_fds_varlen /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:596
#4 0x7f74add41714 in lttng_ctl_ask_sessiond_varlen_no_cmd_header /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:58
#5 0x7f74add41747 in lttng_ctl_ask_sessiond /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:78
#6 0x7f74add4a922 in lttng_list_sessions /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:2105
#7 0x56472bcbdf80 in cmd_destroy /home/simark/src/lttng-tools/src/bin/lttng/commands/destroy.c:330
#8 0x56472bd00764 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#9 0x56472bd01218 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:426
#10 0x56472bd0151a in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:475
#11 0x7f74ad963b24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
This is due to cmd_destroy not free'ing the result of
lttng_list_sessions. Fix that.
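For reference, the ownership contract being honoured here is that the caller
of lttng_list_sessions() frees the returned array; a minimal sketch:
```
#include <stdlib.h>
#include <lttng/lttng.h>

/* Sketch: lttng_list_sessions() returns a count (or a negative error)
 * and hands ownership of the allocated array to the caller. */
static int count_sessions(void)
{
	struct lttng_session *sessions = NULL;
	const int count = lttng_list_sessions(&sessions);

	if (count < 0) {
		return count;
	}
	/* ... use sessions[0 .. count - 1] ... */
	free(sessions);
	return count;
}
```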
Change-Id: Iff2e75e6ec1cdcd0bdfdbbc3d5099422e592905b
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Simon Marchi [Mon, 2 Aug 2021 00:33:23 +0000 (20:33 -0400)]
Fix: lttng: free domains and channels in get_session_stats_str
When doing `lttng stop`, I get:
Direct leak of 656 byte(s) in 1 object(s) allocated from:
#0 0x7f970719e459 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7f9706eba29a in zmalloc /home/simark/src/lttng-tools/src/common/macros.h:45
#2 0x7f9706ebbb9d in recv_sessiond_optional_data /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:494
#3 0x7f9706ebbf9a in lttng_ctl_ask_sessiond_fds_varlen /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:596
#4 0x7f9706eba714 in lttng_ctl_ask_sessiond_varlen_no_cmd_header /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:58
#5 0x7f9706eba747 in lttng_ctl_ask_sessiond /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:78
#6 0x7f9706ec4604 in lttng_list_channels /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:2262
#7 0x55837235c4e7 in get_session_stats_str /home/simark/src/lttng-tools/src/bin/lttng/utils.c:499
#8 0x55837235bf73 in print_session_stats /home/simark/src/lttng-tools/src/bin/lttng/utils.c:445
#9 0x55837231cc12 in stop_tracing /home/simark/src/lttng-tools/src/bin/lttng/commands/stop.c:138
#10 0x55837231d062 in cmd_stop /home/simark/src/lttng-tools/src/bin/lttng/commands/stop.c:229
#11 0x55837235e63e in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#12 0x55837235f0f2 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:426
#13 0x55837235f3f4 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:475
#14 0x7f9706adcb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Direct leak of 308 byte(s) in 1 object(s) allocated from:
#0 0x7f970719e459 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7f9706eba29a in zmalloc /home/simark/src/lttng-tools/src/common/macros.h:45
#2 0x7f9706ebbb9d in recv_sessiond_optional_data /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:494
#3 0x7f9706ebbf9a in lttng_ctl_ask_sessiond_fds_varlen /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:596
#4 0x7f9706eba714 in lttng_ctl_ask_sessiond_varlen_no_cmd_header /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:58
#5 0x7f9706eba747 in lttng_ctl_ask_sessiond /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl-helper.h:78
#6 0x7f9706ec421c in lttng_list_domains /home/simark/src/lttng-tools/src/lib/lttng-ctl/lttng-ctl.c:2220
#7 0x55837235c3d3 in get_session_stats_str /home/simark/src/lttng-tools/src/bin/lttng/utils.c:484
#8 0x55837235bf73 in print_session_stats /home/simark/src/lttng-tools/src/bin/lttng/utils.c:445
#9 0x55837231cc12 in stop_tracing /home/simark/src/lttng-tools/src/bin/lttng/commands/stop.c:138
#10 0x55837231d062 in cmd_stop /home/simark/src/lttng-tools/src/bin/lttng/commands/stop.c:229
#11 0x55837235e63e in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#12 0x55837235f0f2 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:426
#13 0x55837235f3f4 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:475
#14 0x7f9706adcb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
This is due to the get_session_stats_str function not free'ing the
results of lttng_list_channels and lttng_list_domains. Fix that.
Change-Id: I4c200d3df41bf09bdce8eadb000abbff7fe5a751
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jérémie Galarneau [Wed, 16 Jun 2021 19:08:21 +0000 (15:08 -0400)]
Fix: rotation client example: leak of handle on error
1452927 Resource leak
The system resource will not be reclaimed and reused, reducing the
future availability of the resource.
In setup_session: Leak of memory or pointers to system
resources (CWE-404)
CID 1452927 (#1 of 1): Resource leak (RESOURCE_LEAK) 8. leaked_storage:
Variable chan_handle going out of scope leaks the storage it points to
Reported-by: Coverity Scan
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I4c215ac4a86f9f70fd5c9d3aa13f944d3d7a2cc7
Michael Jeanson [Tue, 15 Jun 2021 18:59:21 +0000 (14:59 -0400)]
.gitreview: Set default branch to 'stable-2.12'
Change-Id: I5ba90465e49e01203172d3db7d252cfda2e4fcf8
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Michael Jeanson [Tue, 11 May 2021 14:29:50 +0000 (10:29 -0400)]
tests: Move tap-driver.sh out of the autotools aux directory
We have made local modifications to this script, move it to the test
suite directory so it doesn't get overwritten by an updated version from
autotools.
Change-Id: I22823176cc5b98901865c9415250ce8e49384cbd
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Francis Deslauriers [Wed, 16 Jun 2021 16:10:42 +0000 (12:10 -0400)]
Fix: use of uninitialised bytes valgrind warning
Issue
=====
Valgrind reports usage of uninitialised stack allocated memory:
==2961363== Thread 9 Client manageme:
==2961363== Syscall param sendmsg(msg.msg_iov[0]) points to uninitialised byte(s)
==2961363== at 0x521418D: __libc_sendmsg (sendmsg.c:28)
==2961363== by 0x521418D: sendmsg (sendmsg.c:25)
==2961363== by 0x53411B: lttcomm_send_unix_sock (unix.c:294)
==2961363== by 0x48AA8C: send_unix_sock (client.c:896)
==2961363== by 0x484F45: thread_manage_clients (client.c:2865)
==2961363== by 0x480FB4: launch_thread (thread.c:66)
==2961363== by 0x5208608: start_thread (pthread_create.c:477)
==2961363== by 0x5346292: clone (clone.S:95)
==2961363== Address 0x7575389 is 25 bytes inside a block of size 16,384 alloc'd
==2961363== at 0x483DFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==2961363== by 0x4EB618: lttng_dynamic_buffer_set_capacity (dynamic-buffer.c:166)
==2961363== by 0x4EB52C: lttng_dynamic_buffer_append (dynamic-buffer.c:55)
==2961363== by 0x48CBA1: setup_lttng_msg (client.c:125)
==2961363== by 0x48AD70: setup_lttng_msg_no_cmd_header (client.c:860)
==2961363== by 0x489825: process_client_msg (client.c:2253)
==2961363== by 0x484A97: thread_manage_clients (client.c:2807)
==2961363== by 0x480FB4: launch_thread (thread.c:66)
==2961363== by 0x5208608: start_thread (pthread_create.c:477)
==2961363== by 0x5346292: clone (clone.S:95)
==2961363== Uninitialised value was created by a stack allocation
==2961363== at 0x485FE4: process_client_msg (client.c:928)
After some digging, I found that this warning was caused by the padding
of the `struct lttng_session_list_schedules_return` during the
`LTTNG_SESSION_LIST_ROTATION_SCHEDULES` command.
All the fields of the stack-allocated struct are initialised by the
designated initializer but the padding is not.
These padding bytes are reported by Valgrind as being used
uninitialised.
Fix
===
Remove the padding by adding the LTTNG_PACKED attribute to the nested
structs in `struct lttng_session_list_schedules_return`.
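The effect can be illustrated with a stand-alone sketch (the structs below are
illustrative, not the real lttng-tools definitions): designated initializers
set the named members, but any compiler-inserted padding keeps stale stack
bytes, which is what gets flagged once the struct is sent over the socket.
```
#include <stdint.h>

struct reply_unpacked {
	uint8_t set;    /* Padding is typically inserted after this member... */
	uint64_t value; /* ...to align this one. */
};

struct reply_packed {
	uint8_t set;
	uint64_t value;
} __attribute__((packed)); /* Essentially what LTTNG_PACKED provides. */

static void fill(struct reply_unpacked *u, struct reply_packed *p)
{
	/* Named members are set, but the padding bytes of *u are not. */
	*u = (struct reply_unpacked) { .set = 1, .value = 42 };
	/* No padding exists here, so every byte of *p is initialized. */
	*p = (struct reply_packed) { .set = 1, .value = 42 };
}
```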
Notes
=====
In light of the actual root cause, this stacktrace is not really
useful.
The realloc call to grow the buffer makes it hard to find the actual
uninitialised stack allocation because Valgrind reports the realloc call
as the problematic site.
I was able to track this issue by adding a "consuming" step in the
`lttng_dynamic_buffer_append()` function. This consuming step would sum
all the bytes of the `buf` parameter so as to force Valgrind to check
each byte and not wait until the `sendmsg()` call. This way, I was able
to get a more precise location of the root cause of the issue.
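A sketch of that debugging aid (an instrumentation hack, not something that
was merged): summing the bytes at the append site forces Valgrind to evaluate
each byte right away instead of at the eventual sendmsg().
```
#include <stddef.h>

/* The volatile sink keeps the compiler from optimizing the loop away. */
static volatile unsigned char valgrind_sink;

static void consume_bytes_for_valgrind(const void *buf, size_t len)
{
	const unsigned char *bytes = buf;
	unsigned char sum = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		sum += bytes[i];
	}
	valgrind_sink = sum;
}
```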
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ib4a729575e9117cf95716ad25e1417c833f4232b
Jérémie Galarneau [Mon, 14 Jun 2021 19:21:23 +0000 (15:21 -0400)]
Fix: bump minimal urcu dependency to 0.11
cds_lfht_destroy is used from within RCU read-side critical sections
which is only allowed for urcu >= 0.10 (see userspace-rcu commit
d0ec0ed2f).
Such uses were introduced as part of the 2.11 release (contemporary
to urcu 0.10).
This version of the fix differs from 879000628 since the configure
script has changed significantly for the 2.13 release (use of PKG_CONFIG
rather than looking for a version-specific symbol).
The configure script looks for `urcu_memb_barrier` which was introduced
in liburcu 0.11.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I7d8daba877cbdcfaf93cdc4c9b16a3b3df974782
Mathieu Desnoyers [Thu, 27 May 2021 20:45:55 +0000 (16:45 -0400)]
Fix: consumer: unbalanced RCU read-side lock on error
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I22ddc8585af84d07e8f3d19e9df582599352a1b8
Jonathan Rajotte [Mon, 15 Mar 2021 17:13:37 +0000 (13:13 -0400)]
Fix: tests: missing LOG_DRIVER and LOG_DRIVER_FLAGS
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I808bb2bb80824a574850cc3015cb24eec7b3ff81
Mathieu Desnoyers [Thu, 27 May 2021 21:11:22 +0000 (17:11 -0400)]
Fix: list_lttng_agent_events: unbalanced RCU read-side lock on error
The error label jumps to the end label which releases the RCU read-side
lock. There are many error paths in this function which goto error
without holding the RCU read-side lock, thus causing unbalanced RCU
read-side lock.
There is no point in keeping so short RCU read-side critical sections,
so cover the entire function with a single read-side critical section.
[ Applies to stable-2.12 and possibly prior versions. Does _not_ apply
to stable-2.13+. ]
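The shape of the fix, as a hedged sketch (placeholder function, not the real
list_lttng_agent_events() signature): take the read-side lock once so every
error path funnels through the same unlock.
```
#include <urcu.h>

static int list_events_sketch(void)
{
	int ret = 0;

	rcu_read_lock();
	/* ... hash table lookups and event listing ... */
	if (0 /* any failure along the way */) {
		ret = -1;
		goto end;
	}
end:
	rcu_read_unlock();
	return ret;
}
```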
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I5b20c229a5df22d22ecfdc64dbbb87ee118649d2
Jérémie Galarneau [Tue, 18 May 2021 21:53:45 +0000 (17:53 -0400)]
Update version to v2.12.4
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Jonathan Rajotte [Tue, 1 Dec 2020 17:19:53 +0000 (12:19 -0500)]
Fix: snapshot path have domain subdir duplicate "ust/ust" or "kernel/kernel"
This is a combination of 2 commits.
This is the 1st commit message:
Observed issue
==============
lttng-ivc observed unexpected path generated for streamed snapshot:
joraj-alpa/
    test-20190319-120000-20210113-110101/
        snapshot-1-20210113-110102-0/
            * ust/
                * ust/
                    pid/
                        app-ust-2362198-20210113-110101/
                            channel_0
                            channel_1
                            metadata
                            channel_2
                            channel_3
"ust" the domain subdir is present two-time instead off only one time.
The same problem is seen for kernel snapshots.
Cause
=====
Based on dissection, the problem was introduced by commit
5da88b0f58d7f838068037ea449ddfb25d3e85ad [1].
For snapshots, the consumer output object of the snapshot output and
information is fetched from the *original* consumer output.
(code block around src/bin/lttng-sessiond/cmd.c:4748)
The snapshot consumer output does not contain the necessary information
to populate domain_subdir on copy (domain_subdir is '\0').
This would lead to a len evaluation of 1 for the consumer_path_offset in
setup_channel_trace_path:70. This would end up not "skipping" the "ust"
and "kernel" part of the path.
Solution
========
Part of the solution is to copy the domain_subdir from the original
session consumer output to the snapshot output.
Still, the problem remained: now that the domain_subdir was not "\0",
the value was suffixed to the passed session_path. In the snapshot code
path, "ust/" and "kernel/" were
already present in the session_path passed to setup_channel_trace_path.
A quick modification at the caller level in the snapshot code path fixes
the issues once and for all.
Tests
=======
The test suite is augmented for certain key tests to validate the
complete path of a trace.
Path validation is based on crude bash pattern matching.
Known drawbacks
=========
None
References
==========
[1] https://github.com/lttng/lttng-tools/commit/5da88b0f58d7f838068037ea449ddfb25d3e85ad
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
This is the commit message #2:
Fix: expected procname should not have -ust suffix
commit ("Fix: ustcomm: application name uses the '-ust'-suffixed thread
name") in LTTng-UST removes "-ust" suffix from procname, which appears
in the output trace directory hierarchy.
Adapt tests to not expect this quirk.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ie7cd2d5471ee3a942fa511e2f4cab09e3aa499e4
Jérémie Galarneau [Wed, 7 Oct 2020 18:10:35 +0000 (14:10 -0400)]
Fix: relayd: failure to read index entry or stream packet after clear
Observed issue
==============
The clear tests occasionally fail with the following babeltrace error
when a live session is stopped following a "clear". Unfortunately, this
problem only seems to occur on certain machines. In my case, I only
managed to reproduce this on the CI's workers.
10-07 12:39:48.333 7679 7679 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_get_stream_bytes@viewer-connection.c:1610 [lttng-live] Received get_data_packet response: error
10-07 12:39:48.333 7679 7679 E PLUGIN/CTF/MSG-ITER request_medium_bytes@msg-iter.c:563 [lttng-live] User function failed: status=ERROR
10-07 12:39:48.333 7679 7679 E PLUGIN/CTF/MSG-ITER ctf_msg_iter_get_next_message@msg-iter.c:2899 [lttng-live] Cannot handle state: msg-it-addr=0x5603c28e2830, state=DSCOPE_TRACE_PACKET_HEADER_BEGIN
10-07 12:39:48.333 7679 7679 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_iterator_next_handle_one_active_data_stream@lttng-live.c:845 [lttng-live] CTF message iterator failed to get next message: msg-iter=0x5603c28e2830, msg-iter-status=ERROR
10-07 12:39:48.333 7679 7679 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_msg_iter_next@lttng-live.c:1665 [lttng-live] Error preparing the next batch of messages: live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
10-07 12:39:48.333 7679 7679 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:864 Component input port message iterator's "next" method failed: iter-addr=0x5603c28cb0f0, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
10-07 12:39:48.333 7679 7679 E PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:454 [muxer] Upstream iterator's next method returned an error: status=ERROR
10-07 12:39:48.333 7679 7679 E PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:991 [muxer] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x5603c28dbe70, muxer-upstream-msg-iter-wrap-addr=0x5603c28cd0f0
10-07 12:39:48.333 7679 7679 E PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1415 [muxer] Cannot get next message: comp-addr=0x5603c28dc960, muxer-comp-addr=0x5603c28db0a0, muxer-msg-iter-addr=0x5603c28dbe70, msg-iter-addr=0x5603c28caf80, status=ERROR
10-07 12:39:48.333 7679 7679 W LIB/MSG-ITER bt_message_iterator_next@iterator.c:864 Component input port message iterator's "next" method failed: iter-addr=0x5603c28caf80, iter-upstream-comp-name="muxer", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
10-07 12:39:48.333 7679 7679 W LIB/GRAPH consume_graph_sink@graph.c:473 Component's "consume" method failed: status=ERROR, comp-addr=0x5603c28dcb60, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=0, comp-class-so-handle-addr=0x5603c28c8140, comp-class-so-handle-path="/home/jenkins/jgalar-debug/build/usr/lib/babeltrace2/plugins/babeltrace-plugin-text.so", comp-input-port-count=1, comp-output-port-count=0
10-07 12:39:48.333 7679 7679 E CLI cmd_run@babeltrace2.c:2548 Graph failed to complete successfully
10-07 12:39:48.333 7679 7679 E PLUGIN/SRC.CTF.LTTNG-LIVE/VIEWER lttng_live_session_detach@viewer-connection.c:1227 [lttng-live] Unknown detach return code 0
ERROR: [Babeltrace CLI] (babeltrace2.c:2548)
Graph failed to complete successfully
CAUSED BY [libbabeltrace2] (graph.c:473)
Component's "consume" method failed: status=ERROR, comp-addr=0x5603c28dcb60,
comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK,
comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages
(`text` fo", comp-class-is-frozen=0, comp-class-so-handle-addr=0x5603c28c8140,
comp-class-so-handle-path="/home/jenkins/jgalar-debug/build/usr/lib/babeltrace2/plugins/babeltrace-plugin-text.so",
comp-input-port-count=1, comp-output-port-count=0
CAUSED BY [libbabeltrace2] (iterator.c:864)
Component input port message iterator's "next" method failed:
iter-addr=0x5603c28caf80, iter-upstream-comp-name="muxer",
iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER,
iter-upstream-comp-class-name="muxer",
iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu",
iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
CAUSED BY [muxer: 'filter.utils.muxer'] (muxer.c:991)
Cannot validate muxer's upstream message iterator wrapper:
muxer-msg-iter-addr=0x5603c28dbe70,
muxer-upstream-msg-iter-wrap-addr=0x5603c28cd0f0
CAUSED BY [muxer: 'filter.utils.muxer'] (muxer.c:454)
Upstream iterator's next method returned an error: status=ERROR
CAUSED BY [libbabeltrace2] (iterator.c:864)
Component input port message iterator's "next" method failed:
iter-addr=0x5603c28cb0f0, iter-upstream-comp-name="lttng-live",
iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE,
iter-upstream-comp-class-name="lttng-live",
iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon",
iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (lttng-live.c:1665)
Error preparing the next batch of messages:
live-iter-status=LTTNG_LIVE_ITERATOR_STATUS_ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (lttng-live.c:845)
CTF message iterator failed to get next message: msg-iter=0x5603c28e2830,
msg-iter-status=ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (msg-iter.c:2899)
Cannot handle state: msg-it-addr=0x5603c28e2830,
state=DSCOPE_TRACE_PACKET_HEADER_BEGIN
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (msg-iter.c:563)
User function failed: status=ERROR
CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (viewer-connection.c:1610)
Received get_data_packet response: error
This occurs immediately following a 'stop' on the session. As the error
indicates, a request to obtain a data packet fails with a generic
error reply.
Moreover, the following LTTNG_VIEWER_DETACH_SESSION appears to fail
with an invalid status code. This is addressed in a different commit.
Reproducing the test's failure without redirecting the relay daemon's
output allows us to see the following errors after the first stop:
PERROR - 14:33:44.929675253 [25108/25115]: Failed to open fs handle to ust/uid/1001/64-bit/index/chan_0.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
PERROR - 14:33:45.030037417 [25108/25115]: Failed to open fs handle to ust/uid/1001/64-bit/index/chan_0.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
PERROR - 14:33:45.130429370 [25108/25115]: Failed to open fs handle to ust/uid/1001/64-bit/index/chan_0.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
PERROR - 14:33:45.230829447 [25108/25115]: Failed to open fs handle to ust/uid/1001/64-bit/index/chan_0.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
PERROR - 14:33:45.331223320 [25108/25115]: Failed to open fs handle to ust/uid/1001/64-bit/index/chan_0.idx, open() returned: No such file or directory (in fd_tracker_open_fs_handle() at fd-tracker.c:550)
This is produced with the following back-trace:
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff69648b1 in __GI_abort () at abort.c:79
#2 0x00005555555b4f1f in fd_tracker_open_fs_handle (tracker=0x55555582c620, directory=0x7fffe8006680,
path=0x7ffff0a25870 "ust/uid/1001/64-bit/index/chan_1.idx", flags=0, mode=0x7ffff0a24508) at fd-tracker.c:550
#3 0x0000555555595c34 in _lttng_trace_chunk_open_fs_handle_locked (chunk=0x7fffe0002130, file_path=0x7ffff0a25870 "ust/uid/1001/64-bit/index/chan_1.idx",
flags=0, mode=432, out_handle=0x7ffff0a24710, expect_no_file=true) at trace-chunk.c:1388
#4 0x0000555555595eef in lttng_trace_chunk_open_fs_handle (chunk=0x7fffe0002130, file_path=0x7ffff0a25870 "ust/uid/1001/64-bit/index/chan_1.idx", flags=0,
mode=432, out_handle=0x7ffff0a24710, expect_no_file=true) at trace-chunk.c:1433
#5 0x00005555555da6c2 in _lttng_index_file_create_from_trace_chunk (chunk=0x7fffe0002130, channel_path=0x7fffe8018c30 "ust/uid/1001/64-bit",
stream_name=0x7fffe8018c10 "chan_1", stream_file_size=0, stream_file_index=0, index_major=1, index_minor=1, unlink_existing_file=false, flags=0,
expect_no_file=true, file=0x7fffe0002270) at index.c:97
#6 0x00005555555dad8a in lttng_index_file_create_from_trace_chunk_read_only (chunk=0x7fffe0002130, channel_path=0x7fffe8018c30 "ust/uid/1001/64-bit",
stream_name=0x7fffe8018c10 "chan_1", stream_file_size=0, stream_file_index=0, index_major=1, index_minor=1, expect_no_file=true, file=0x7fffe0002270)
at index.c:186
#7 0x000055555557640f in try_open_index (vstream=0x7fffe0002250, rstream=0x7fffe8018c50) at live.c:1378
#8 0x0000555555577155 in viewer_get_next_index (conn=0x7fffd4001440) at live.c:1643
#9 0x0000555555579a01 in process_control (recv_hdr=0x7ffff0a27c30, conn=0x7fffd4001440) at live.c:2311
#10 0x000055555557a1db in thread_worker (data=0x0) at live.c:2482
#11 0x00007ffff6d1c6db in start_thread (arg=0x7ffff0a28700) at pthread_create.c:463
#12 0x00007ffff6a45a3f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
That problem is mostly cosmetic in nature (the open can fail
"legitimately") as the PERROR should simply not be printed and is
addressed in a different commit.
This error is also produced after a 'clear' is issued:
PERROR - 14:33:45.532782268 [25108/25115]: Failed to read from file system handle of viewer stream id 1, offset: 4096: No such file or directory (in viewer_get_packet() at live.c:1849)
Which is produced with the following back-trace:
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007f53e297c8b1 in __GI_abort () at abort.c:79
#2 0x000055dd77ccef2c in viewer_get_packet (conn=0x7f53c4001100) at live.c:1850
#3 0x000055dd77cd0a15 in process_control (recv_hdr=0x7f53dca3fc30, conn=0x7f53c4001100) at live.c:2315
#4 0x000055dd77cd11db in thread_worker (data=0x0) at live.c:2483
#5 0x00007f53e2d346db in start_thread (arg=0x7f53dca40700) at pthread_create.c:463
#6 0x00007f53e2a5da3f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
A similar problem occurs, although more rarely, when reading an
index entry in viewer_get_next_index().
Cause
=====
The following situation leads to both failures to get a
packet and failures to get the next index:
- Viewer connects to an existing session,
- Viewer consumes a number of packets, alternating the
GET_NEXT_INDEX and GET_PACKET command,
- The session's streams are rotated to a new trace chunk
(as part of a clear),
- The session is started and stopped, causing new packets
to be produced and received,
- The session is stopped and destroyed, causing the session's
streams to rotate into a "null" trace chunk (no active
trace files),
- Viewer issues GET_NEXT_INDEX or GET_PACKET, but the fact
that a rotation occurred on the receiving end is not detected
as the relay streams' trace chunk are "null".
The crux of the problem is that lttng_trace_chunk_ids_equal() is
bypassed when the current trace chunk of a relay stream is "null".
The rationale for skipping this check is that it is assumed that the
files currently opened by the live server can still be used even
if the consumer has rotated the corresponding streams into a 'null'
trace chunk, meaning no trace chunk is 'set' for those streams.
This makes sense in one scenario: the session was destroyed and we wish
to allow a connected live client to finish consuming the trace packets
up to the end of the session's lifetime.
Here, the situation is different. The viewer is reading chunk 'A'.
Meanwhile, a rotation occurs into chunk 'B' and packets are received for
chunk 'B'. Then, a rotation to a 'null' chunk (no active chunk) occurs.
In essence, the live server never sees the rotation between chunk 'A'
and 'B', and simply assumes that a rotation from 'A' to 'null' occurred,
as would happen at the end of a session.
In terms of the code, in viewer_get_next_index(), a call to
check_index_status() is performed to determine if an index is available.
The function checks that `index_received_seqcount` is greater than
`index_sent_seqcount`. In that case, it determines that an index must be
available.
Unfortunately, there is no way for the live server to determine that the
remaining indexes are in a chunk that doesn't exist anymore (chunk 'B').
Thus, viewer_get_next_index() attempts to read an index entry from the
current index file and fails.
Solution
========
1) lttng_trace_chunk_ids_equal() is modified to properly handle
'null' trace chunks:
- A null and a non-null trace chunk are not equal,
- Two null trace chunks are equal.
2) Rotation count
A rotation counter is introduced to track the number of rotations
that occurred during a relay stream's lifetime. This counter is
sampled by the matching viewer streams on creation and on rotation
and is used to determine if all rotations were "seen" by the viewer
stream.
Hence, this allows us to handle the special case where a viewer
is consuming the contents of a relay stream that just transitioned
into a 'null' trace chunk (see comments in patch).
The rest of the modifications simply allow the live server to handle
null trace chunks in viewer streams. This fixes another unrelated bug
that I observed while investigating this: sessions that don't have an
active trace chunk are not shown when listing sessions with babeltrace.
To reproduce, simply stop, clear a session, and attempt to list the
sessions of the associated relay daemon.
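As a rough sketch of the comparison rule from point 1) above (the real
lttng_trace_chunk_ids_equal() operates on chunk objects; optional IDs are used
here only to show the null handling):
```
#include <stdbool.h>
#include <stdint.h>

static bool chunk_ids_equal_sketch(const uint64_t *id_a, const uint64_t *id_b)
{
	if (!id_a || !id_b) {
		/*
		 * A null and a non-null chunk are never equal; two null
		 * chunks are considered equal.
		 */
		return id_a == id_b;
	}
	return *id_a == *id_b;
}
```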
Known drawbacks
===============
None.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ibb3116990e34b7ec3b477f3482d0c0ff1e848d09
Jérémie Galarneau [Thu, 22 Apr 2021 18:47:14 +0000 (14:47 -0400)]
Fix: sessiond: leak of config_path on duplicate --config option
1452207 Resource leak
The system resource will not be reclaimed and reused, reducing the
future availability of the resource.
CID 1452373 (#3 of 3): Resource leak (RESOURCE_LEAK)
26. overwrite_var: Overwriting config_path in config_path =
utils_expand_path(optarg) leaks the storage that config_path points to.
Reported-by: Coverity Scan
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I70eff0f4de5ac9b84b699e88232669259eab54db
Jonathan Rajotte [Thu, 6 May 2021 15:14:52 +0000 (11:14 -0400)]
Fix: relayd: live: data is missing between viewer attach and retry
Observed issue
==============
Data produced between the time a live viewer attaches and the viewer's
subsequent "polling" is missing from the point of view of the viewer.
Using the following reproducer:
# A lttng-sessiond is already running
lttng-relayd -b -vvv > relayd.log 2>&1
lttng create live --live
lttng enable-event -u "hello_world:*"
lttng start
date
# Set the "polling" interval at ~30 seconds
babeltrace2 --retry-duration=30000000 -i lttng-live net://localhost/host/$HOSTNAME/live &
pid=$!
# Make sure we are in the retry phase of bt2
date
sleep 2
# Produce events
# Note here that hello performs a sleep of 1 second before generating
# 10 events to make sure lttng-ust had time to register. A total of
# 100 events is generated.
date
for i in `seq 1 10`; do
./hello
done
# Here we expect to see 100 events outputted at some point (~18
# seconds (30 - 2 - 10) from that time).
# No events are received by babeltrace2.
date
sleep 40
date
# We retry the same exercise a second time.
echo "Moving to phase 2"
date
for i in `seq 1 10`; do
./hello
done
# Here data is received correctly.
date
sleep 40
date
kill $pid
wait
lttng destroy live
pkill lttng-relayd
Cause
=====
At the moment the viewer attach is done, no streams exist. On the
following viewer_get_new_streams (~30 seconds later), streams now exist
since they were allocated lazily following the first application
registration, but LTTNG_VIEWER_SEEK_LAST is used. This essentially
"discards" any events in the stream that happened between the attach
time and the viewer_get_new_stream execution.
Note that the same problem would happen if a new UID (in per-uid mode)
would happen to begin tracing in between viewer retry.
Solution
========
Always use `LTTNG_VIEWER_SEEK_BEGINNING` during
`viewer_get_new_streams`.
Known drawbacks
=========
This does not fix the per-pid problem where an app runs and dies in between
the viewer retries.
References
==========
https://lists.lttng.org/pipermail/lttng-dev/2021-May/029953.html
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: If0e2df639572e4bce91bac03994494ddbe138315
Jérémie Galarneau [Mon, 3 May 2021 16:50:25 +0000 (12:50 -0400)]
Fix: lttng-ctl: erroneous check if user is part of the tracing group
in_tgroup is set to `-1` whenever the current user is not part of the
tracing group _or_ if an error occurred while looking up if the user
is part of the tracing group. In other words, the value '0' is unused.
in_tgroup must be explicitly checked against '1' and can't be assumed
to behave as a boolean value.
This is _not_ a security issue: if the user is not part of the tracing
group, she will fail to open the root session daemon's socket because
of the kernel-side permission checking. However, the behaviour of the
lttng client (and error reporting) will be confusing.
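A minimal sketch of the corrected check (illustrative names, not the exact
lttng-ctl code):
```
#include <stdbool.h>
#include <sys/types.h>

/*
 * in_tgroup is a tri-state: 1 = user is in the tracing group,
 * -1 = not in the group or the lookup failed, 0 = unused.
 */
static bool can_use_root_sessiond_socket(uid_t uid, int in_tgroup)
{
	return uid == 0 || in_tgroup == 1;
}
```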
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I1c12285fd0f42bb73c560a32a56cab03f1a5a6e1
Mathieu Desnoyers [Fri, 30 Apr 2021 16:02:47 +0000 (12:02 -0400)]
Fix: kernel consumer: get next subbuffer EAGAIN handling
The caller of get next subbuffer (data and metadata) callbacks only
expects -ENODATA when there is no data to read. However, the kernel
tracer distinguishes between no data for a finalized stream (-ENODATA)
and no data for a non-finalized stream (-EAGAIN).
Given that the consumer daemon uses the POLLHUP returned by epoll to
detect stream end of life, it does not care about the distinction
between -EAGAIN and -ENODATA when streaming.
However, taking a snapshot of a metadata stream uses the distinction
between -ENODATA and -EAGAIN. Change this so it considers a return value of
0 from lttng_consumer_read_subbuffer to mean there is no more data to
read, so we can combine -EAGAIN and -ENODATA within get next subbuffer
callbacks and return -ENODATA for both.
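In other words, within the get-next-subbuffer callbacks the status is
normalized along these lines (sketch, not the literal patch):
```
#include <errno.h>

/*
 * The kernel tracer returns -EAGAIN (no data, stream still active) or
 * -ENODATA (no data, finalized stream); the callers only care about
 * "no data", so fold the two together.
 */
static int normalize_get_subbuf_status(int ret)
{
	return ret == -EAGAIN ? -ENODATA : ret;
}
```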
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ifb157dbe28498279dce30e9efa3b5aedcf3f9b1d
Mathieu Desnoyers [Fri, 30 Apr 2021 14:44:44 +0000 (10:44 -0400)]
Fix: kernel consumer: signal metadata ready condition variable
The behaviour of the kernel consumer does not match that of the user
space consumer. When a live metadata stream goes back to "sleep", it
must broadcast on the metadata_rdv to wake-up any thread that is
performing a metadata sync.
This mismatch causes a hang in the kernel consumer during the kernel
clear tests.
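The wake-up amounts to the following pattern (sketch; the metadata_rdv and
metadata_rdv_lock names mirror the description above rather than the exact
fields touched by the patch):
```
#include <pthread.h>

static void signal_metadata_ready(pthread_mutex_t *metadata_rdv_lock,
		pthread_cond_t *metadata_rdv)
{
	pthread_mutex_lock(metadata_rdv_lock);
	/* Wake up any thread blocked in a metadata sync. */
	pthread_cond_broadcast(metadata_rdv);
	pthread_mutex_unlock(metadata_rdv_lock);
}
```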
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I7c93007b6d08f340881c08f23931582443bfa397
Jérémie Galarneau [Thu, 29 Apr 2021 20:17:45 +0000 (16:17 -0400)]
Fix: consumerd: unbalanced subbuffer 'get' when checking operation availability
A WARN_ON in lttng-modules is hit in the get_subbuf ioctl, which
pointed to an unbalanced get/put pair when accessing the subbuffers
of a channel.
[517716.168856] ------------[ cut here ]------------
[517716.171559] WARNING: CPU: 1 PID: 19313 at /home/efficios/git/lttng-modules/src/lib/ringbuffer/ring_buffer_frontend.c:1263 lib_ring_buffer_get_subbuf+0x24f/0x260 [lttng_lib_ring_buffer]
[517716.180096] Modules linked in: lttng_test(O) lttng_probe_x86_exceptions(O) lttng_probe_x86_irq_vectors(O) lttng_probe_writeback(O) lttng_probe_workqueue(O) lttng_probe_vmscan(O) lttng_probe_udp(O) lttng_probe_timer(O) lttng_probe_sunrpc(O) lttng_probe_statedump(O) lttng_probe_sock(O) lttng_probe_skb(O) lttng_probe_signal(O) lttng_probe_scsi(O) lttng_probe_sched(O) lttng_probe_regulator(O) lttng_probe_regmap(O) lttng_probe_rcu(O) lttng_probe_random(O) lttng_probe_printk(O) lttng_probe_power(O) lttng_probe_net(O) lttng_probe_napi(O) lttng_probe_module(O) lttng_probe_kmem(O) lttng_probe_jbd2(O) lttng_probe_irq(O) lttng_probe_i2c(O) lttng_probe_gpio(O) lttng_probe_ext4(O) lttng_probe_compaction(O) lttng_probe_btrfs(O) lttng_probe_block(O) lttng_counter_client_percpu_32_modular(O) lttng_counter_client_percpu_64_modular(O) lttng_counter(O) lttng_ring_buffer_event_notifier_client(O) lttng_ring_buffer_metadata_mmap_client(O) lttng_ring_buffer_client_mmap_overwrite(O)
[517716.180815] lttng_ring_buffer_client_mmap_discard(O) lttng_ring_buffer_metadata_client(O) lttng_ring_buffer_client_overwrite(O) lttng_ring_buffer_client_discard(O) lttng_tracer(O) lttng_statedump(O) lttng_wrapper(O) lttng_uprobes(O) lttng_clock(O) lttng_kprobes(O) lttng_lib_ring_buffer(O) lttng_kretprobes(O) [last unloaded: lttng_wrapper]
[517716.213228] CPU: 1 PID: 19313 Comm: lttng-consumerd Tainted: G O 5.11.2 #80
[517716.215573] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[517716.220341] RIP: 0010:lib_ring_buffer_get_subbuf+0x24f/0x260 [lttng_lib_ring_buffer]
[517716.222579] Code: 50 f0 ff 00 0f 0b 49 03 5f 28 44 8b 85 80 00 00 00 49 8b 77 30 45 85 c0 48 89 d9 0f 85 4f ff ff ff e9 25 ff ff ff f0 ff 45 00 <0f> 0b b8 f0 ff ff ff e9 a6 fe ff ff 0f 1f 44 00 00 0f 1f 44 00 00
[517716.227610] RSP: 0018:ffffbafd09023e88 EFLAGS: 00010202
[517716.229147] RAX: 0000000000000000 RBX: ffff9906ed069a00 RCX: ffff9905c4e4a400
[517716.231186] RDX: ffffdafcffc4ec90 RSI: 0000000000200000 RDI: ffffdafcffc4e9f0
[517716.233265] RBP: ffff9905c4e4a400 R08: 0000000000300000 R09: 0000000000200000
[517716.235284] R10: 0000000000200000 R11: 0000000000000000 R12: 0000000000000000
[517716.237333] R13: 0000000000000000 R14: 000000000000005e R15: 0000000000000000
[517716.239360] FS: 00007ff327fff700(0000) GS:ffff9905a7a40000(0000) knlGS:0000000000000000
[517716.241634] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[517716.243262] CR2: 000055b91cf07730 CR3: 000000076f45e003 CR4: 00000000001706e0
[517716.245316] Call Trace:
[517716.246281] lib_ring_buffer_ioctl+0x181/0x300 [lttng_lib_ring_buffer]
[517716.248301] lttng_stream_ring_buffer_ioctl+0x1a3/0x200 [lttng_tracer]
[517716.252621] __x64_sys_ioctl+0x8e/0xd0
[517716.253931] do_syscall_64+0x33/0x80
[517716.255016] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[517716.256448] RIP: 0033:0x7ff3372f46d7
[517716.257586] Code: b3 66 90 48 8b 05 b1 47 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 47 2d 00 f7 d8 64 89 01 48
[517716.262472] RSP: 002b:00007ff327ffe2c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[517716.264634] RAX: ffffffffffffffda RBX: 00007ff310002740 RCX: 00007ff3372f46d7
[517716.266674] RDX: 0000000000000000 RSI: 000000000000f605 RDI: 000000000000005e
[517716.268713] RBP: 00007ff327ffe310 R08: 00007ff310002870 R09: a002000000000000
[517716.270732] R10: 000055992b0a6530 R11: 0000000000000246 R12: 000055992c47da70
[517716.272768] R13: 00007ff318005e80 R14: 00007ff310002740 R15: 000055992b0a6528
[517716.274800] irq event stamp: 4526705
[517716.275897] hardirqs last enabled at (4526713): [<ffffffff9016a474>] console_unlock+0x4b4/0x5b0
[517716.278320] hardirqs last disabled at (4526722): [<ffffffff9016a3d0>] console_unlock+0x410/0x5b0
[517716.280759] softirqs last enabled at (4526658): [<ffffffff9120030f>] __do_softirq+0x30f/0x432
[517716.285125] softirqs last disabled at (4526653): [<ffffffff91001052>] asm_call_irq_on_stack+0x12/0x20
[517716.287648] ---[ end trace 506e55b312b731bf ]---
The check for the availability of the 'get_next_check_metadata'
operation attempts to use the operation at the creation of the metadata
stream. Most of the time this occurs before any metadata could be
generated.
However, the check will sometimes (very rarely) occur after the
generation of some metadata causing the 'get' to succeed and,
consequently, a subbuffer to be acquired. In those cases, the subbuffer
must be released immediately.
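The balanced probe looks roughly like the sketch below; the two wrapper
prototypes are hypothetical stand-ins for the consumerd's kernel-ctl get/put
helpers, not real functions.
```
#include <errno.h>
#include <stdbool.h>

/* Hypothetical wrappers standing in for the kernel-ctl get/put helpers. */
int probe_get_next_subbuf_metadata_check(int stream_fd);
int put_subbuf(int stream_fd);

static bool next_check_metadata_available(int stream_fd)
{
	const int ret = probe_get_next_subbuf_metadata_check(stream_fd);

	if (!ret) {
		/*
		 * The probe actually acquired a subbuffer (some metadata was
		 * already generated); release it immediately to keep the
		 * get/put pair balanced.
		 */
		put_subbuf(stream_fd);
	}
	/* Only "operation not supported" means the check is unavailable. */
	return ret != -ENOTTY;
}
```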
Fixes #1313
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ic4f3deb6fa665a13e3673fd859b23d7ef803f963
Francis Deslauriers [Thu, 22 Apr 2021 21:24:57 +0000 (17:24 -0400)]
Fix: tests: gen-ust-events-ns: Uninitialized argument value
If both `if (snprintf(...` checks of the `get_ns_inum()` function fail, the
function will not initialize the `ns_inum` output parameter and will still
return 0, leading to the argument `ns1` of debug_printf() being used
uninitialized.
Reported-by: scan-build.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I090e57d5c896fbac7381989e887ae7281697b4de
Jérémie Galarneau [Thu, 22 Apr 2021 23:54:47 +0000 (19:54 -0400)]
Fix: consumerd: strlen called on uninitialized path
closed_trace_chunk_path is uninitialized when relayd_id != 0.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I71564bdce0617de7749fc5848764a8cda4d122ea
Jérémie Galarneau [Thu, 22 Apr 2021 16:32:26 +0000 (12:32 -0400)]
Fix: tests: health thread stall: only stop consumerd when required
Since a0f8e3109, stop_lttng_consumerd will report a failure when
there is no consumer daemon to kill. This fix ensures it is only
invoked for tests that launch a consumer daemon.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ifba6c0a87158d5cf20eeacb2b65fe049197d9b55
Francis Deslauriers [Mon, 19 Apr 2021 18:58:31 +0000 (14:58 -0400)]
Fix: tests: quote variable in case it's not set
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I34d39498a271e3a1644f49ac7f86f3e7f6d37a0f
Jonathan Rajotte [Mon, 15 Mar 2021 21:52:24 +0000 (17:52 -0400)]
Fix: sessiond: session destroy hang in per-uid when context cannot be added
Observed issue
==============
The system_test CI jobs hang on the perf test suite during the destroy
command steps of the ust perf raw subtest.
Cause
=====
The system_test jobs are running inside a KVM guest as root. It turns out that
the PMU (UNHALTED_REFERENCE_CYCLES) the test suite is trying to add is
unavailable on the qemu host.
ustctl_add_context returns -1024 since it fails to add the context.
This leads us down the error path for the callstack leading to the
ustctl_add_context call.
1) `ust_app_channel_create` returns `ret` != 0;
2) `find_or_create_ust_app_channel` returns `ret != 0`;
3) Based on the `ret` value, `ust_app_synchronize` goes directly to the
error path at the end of the function, without reaching the
`create_ust_app_metadata` call and the clean-up of structures related to
the app.
Note that being in per-uid mode, data and metadata
channel/streams/buffer allocation is done on the fly for the first app
during `ust_app_synchronize` and its callee. For the current problematic
scenario, only the data channels have been allocated on the consumer for
the uid at that point. The metadata for that uid is not yet created.
Now that we know more about what is going on during an "add context",
let's take a look at the actual hang.
The client never completes the destroy command since the consumerd
indicates that the trace chunk for the session is not closed. The trace
chunk still exists despite the fact that a close-chunk command has been
issued. This is the case since its refcount never reaches zero and thus
the release does not complete.
In a normal execution without the use of contexts, the release of the
trace chunk (refcount == 0) occurs during the final rotation on destroy.
Upon further comparison between a working execution and a non-working
execution, in a non-working execution the `cmd_rotate_session` does not
issue the rotation for the data channels since the loop detects that no
metadata is present, which, as we discussed earlier, can happen if we
fail to add the context to the app channel.
[1]
```
cds_list_for_each_entry(reg, &usess->buffer_reg_uid_list, lnode) {
	struct buffer_reg_channel *reg_chan;
	struct consumer_socket *socket;

	if (!reg->registry->reg.ust->metadata_key) {
		/* Skip since no metadata is present */
		continue;
	}
	....
	/* Rotate the data channels. */
	cds_lfht_for_each_entry(reg->registry->channels->ht, &iter.iter,
			reg_chan, node.node) {
		ret = consumer_rotate_channel(socket,
				reg_chan->consumer_key,
				usess->uid, usess->gid,
				usess->consumer,
				/* is_metadata_channel */ false);
		if (ret < 0) {
			cmd_ret = LTTNG_ERR_ROTATION_FAIL_CONSUMER;
			goto error;
		}
	}
	....
}
```
Solution
========
Move the metadata check after the data channel rotation since it is
possible to have data channels but no metadata channel, although it is a
corner case.
Note that per-pid mode and kernel are not affected by the current bug
since a complete teardown of all objects is done. This only affects
per-uid mode due to its "on the fly" allocation nature, since we need to
share the channel/stream/buffers across apps.
Known drawbacks
=========
None.
References
==========
[1] https://github.com/lttng/lttng-tools/blob/3d1384a4add389c38f8554130e8dec2e2d06009d/src/bin/lttng-sessiond/ust-app.c#L7057
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I611761ac4051c9a80158705a87e59541fb37660c
Jonathan Rajotte [Wed, 13 Jan 2021 15:09:06 +0000 (10:09 -0500)]
Fix: backward relayd: path contains a leading "ust" folder
Observed issue
==============
test_output_path_relayd[lttng-tools-2.13-lttng-tools-2.10-uid] from
the lttng-ivc fails on path validation for the resulting trace.
Here lttng-sessiond is 2.13 and lttng-relayd is 2.10
Traces are generated and the following hierarchy is found in the
lttng-relayd trace folder.
lttng_home
└── lttng-traces
  * └── ust
        └── joraj-alpa
            ├── auto-20210113-165054
            │   └── ust
            │       └── uid
Note the extra "ust" (*) in the hierarchy.
The test expects:
lttng_home
└── lttng-traces
    └── joraj-alpa
        ├── auto-20210113-165054
        │   └── ust
        │       └── uid
Cause
=====
Introduced by: 5da88b0f58d7f838068037ea449ddfb25d3e85ad [1]
relayd_add_stream now suffixes the domain_name to the pathname.
This is only necessary for cases where the corresponding
lttng-relayd version is greater than 2.10. In other cases, modification
of pathname is not necessary.
Solution
========
Perform domain suffixing only for relayd > 2.10.
Known drawbacks
=========
None.
References
==========
[1] https://github.com/lttng/lttng-tools/commit/5da88b0f58d7f838068037ea449ddfb25d3e85ad
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: If43719c0cf8285d3242d846316d7277d4f5519f1
Christophe Bedard [Fri, 28 Aug 2020 15:00:30 +0000 (11:00 -0400)]
Fix: lttng_destroy_session_no_wait: return 0 on success
lttng_destroy_session_no_wait() is supposed to behave like
lttng_destroy_session():
> Return 0 on success else a negative LTTng error code.
However, it returns LTTNG_OK on success. Make it return 0 instead.
Signed-off-by: Christophe Bedard <christophe.bedard@apex.ai>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I787307d4291babfb9ddcb12937efdbfbb5fd7009
Simon Marchi [Fri, 2 Apr 2021 19:52:27 +0000 (15:52 -0400)]
config: fix typo in error message
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Iba45933cf70452a8c4e89cb471a11cd42bcd08b8
Mathieu Desnoyers [Mon, 25 May 2020 20:19:18 +0000 (16:19 -0400)]
Tests: array expressions without constant index are invalid
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Iaff8d199295359aae9d82d645a30f6b3bf81212b
Mathieu Desnoyers [Mon, 25 May 2020 20:15:40 +0000 (16:15 -0400)]
Fix: validate that array expression contains constant
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ic4f9b178d918b2c7c9dd27bc6a30c1798a171727
Jonathan Rajotte [Wed, 2 Dec 2020 22:01:35 +0000 (17:01 -0500)]
Fix: test: base-path tests are not run
Observed issue
==============
The base-path tests are never run during `make check`.
Cause
=====
Although 2a1668643ca94195d5c3889d0337e19165805a42 [1] introduces the tests,
the test file is not added to the test list.
Solution
========
Add the test file to the test list.
One of the subtests is failing.
not ok 20 - Snapshot recorded
# Failed test 'Snapshot recorded'
# in .//../../../utils/utils.sh:lttng_snapshot_record() at line 1468.
ok 21 - Destroy session ust_app_snapshot_base_path
not ok 22 - Validate trace for event tp:tptest
This is caused by the removal of `trace_path=$2` in
c28fcefd993b7539716bb5cd9557a08a217ec463 [2]. The removal is most
probably a merge error or simply a mistake on my end. This commit reverts
that change.
Also add load-stream-extra-path.lttng in EXTRA_DIST.
Known drawbacks
=========
None
References
==========
[1] https://github.com/lttng/lttng-tools/commit/2a1668643ca94195d5c3889d0337e19165805a42
[2] https://github.com/lttng/lttng-tools/commit/c28fcefd993b7539716bb5cd9557a08a217ec463
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I255908165d59e02e0e11b123d3538a74f53fb584
Simon Marchi [Fri, 9 Apr 2021 15:19:17 +0000 (11:19 -0400)]
Fix: filter: memory leak in filter_parser_ctx
When running
$ lttng add-trigger --condition on-event -u ust_tests_demo2:loop --capture intfield --action notify
I get the leaks pasted below. It seems like filter_parser_ctx_free
doesn't free everything in filter_parser_ctx. Add what's missing.
Re-order the frees so that they are in the same order as the members of
the struct, just because it's easier to follow and make sure we didn't
forget anything.
=================================================================
==1073803==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7ffff767783a in __interceptor_realloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:164
#1 0x5555556833be in bytecode_reserve /home/simark/src/lttng-tools/src/common/bytecode/bytecode.c:59
#2 0x55555568360f in bytecode_push /home/simark/src/lttng-tools/src/common/bytecode/bytecode.c:79
#3 0x5555556a3d61 in filter_visitor_bytecode_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:667
#4 0x55555569c9b1 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:394
#5 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#6 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#7 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#8 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#9 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#10 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#11 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#12 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Direct leak of 112 byte(s) in 1 object(s) allocated from:
#0 0x7ffff767783a in __interceptor_realloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:164
#1 0x5555556833be in bytecode_reserve /home/simark/src/lttng-tools/src/common/bytecode/bytecode.c:59
#2 0x55555568360f in bytecode_push /home/simark/src/lttng-tools/src/common/bytecode/bytecode.c:79
#3 0x5555556a1b94 in visit_node_load_expression_legacy /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:198
#4 0x5555556a1d18 in visit_node_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:231
#5 0x5555556a2540 in visit_node_load /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:399
#6 0x5555556a3a8b in recursive_visit_gen_bytecode /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:622
#7 0x5555556a12fa in visit_node_root /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:53
#8 0x5555556a3a76 in recursive_visit_gen_bytecode /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:620
#9 0x5555556a3c55 in filter_visitor_bytecode_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-bytecode.c:661
#10 0x55555569c9b1 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:394
#11 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#12 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#13 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#14 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#15 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#16 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#17 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#18 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Direct leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a3dd2 in make_op_root /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:35
#2 0x5555556a73a5 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:874
#3 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#4 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#5 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#6 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#7 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#8 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#9 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#10 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#11 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#12 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a4f1d in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:280
#2 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#3 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#4 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#5 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#6 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#7 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#8 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#9 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#10 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#11 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#12 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#13 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#14 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a484d in create_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:201
#2 0x5555556a5040 in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:287
#3 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#4 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#5 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#6 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#7 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#8 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#9 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#10 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#11 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#12 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#13 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#14 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#15 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a4e64 in create_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:262
#2 0x5555556a5040 in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:287
#3 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#4 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#5 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#6 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#7 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#8 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#9 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#10 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#11 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#12 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#13 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#14 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#15 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a4bbc in create_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:233
#2 0x5555556a5040 in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:287
#3 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#4 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#5 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#6 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#7 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#8 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#9 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#10 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#11 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#12 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#13 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#14 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#15 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 9 byte(s) in 1 object(s) allocated from:
#0 0x7ffff761fa69 in __interceptor_strdup /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:452
#1 0x5555556a4c41 in create_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:238
#2 0x5555556a5040 in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:287
#3 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#4 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#5 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#6 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#7 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#8 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#9 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#10 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#11 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#12 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#13 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#14 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#15 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
Indirect leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x5555556a4829 in create_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:196
#2 0x5555556a5040 in make_op_load_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:287
#3 0x5555556a696f in make_expression /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:637
#4 0x5555556a73df in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:882
#5 0x5555556a7382 in generate_ir_recursive /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:870
#6 0x5555556a74d6 in filter_visitor_ir_generate /home/simark/src/lttng-tools/src/common/filter/filter-visitor-generate-ir.c:903
#7 0x55555569c859 in filter_parser_ctx_create_from_filter_expression /home/simark/src/lttng-tools/src/common/filter/filter-parser.y:353
#8 0x55555560542e in parse_event_rule /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:704
#9 0x555555607429 in handle_condition_event /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1088
#10 0x555555608760 in parse_condition /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1326
#11 0x55555560bca0 in cmd_add_trigger /home/simark/src/lttng-tools/src/bin/lttng/commands/add_trigger.c:1925
#12 0x555555616b55 in handle_command /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:237
#13 0x555555617516 in parse_args /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:421
#14 0x555555617812 in main /home/simark/src/lttng-tools/src/bin/lttng/lttng.c:470
#15 0x7ffff700bb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
SUMMARY: AddressSanitizer: 409 byte(s) leaked in 9 allocation(s).
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ic338ea1689d3f002bf9cade6d4f23e62d935968b
Simon Marchi [Wed, 31 Mar 2021 19:06:34 +0000 (15:06 -0400)]
Fix: sessiond: fix -Wshadow error in save.c
I think this actually fixes a bug: because of the second, shadowing `ret`
variable, the return value of init_ust_event_from_agent_event or
save_ust_event would not be forwarded correctly in case of error.
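A minimal sketch of the shadowing pattern the warning points at (the
helper and function names below are made up for illustration; this is
not the actual save.c code):

    #include <stdio.h>

    /* Hypothetical helper that fails; stands in for save_ust_event(). */
    static int do_step(void)
    {
            return -1;
    }

    /* The inner `ret` shadows the outer one (-Wshadow warns at its
     * declaration), so the error code from do_step() is never
     * forwarded to the caller. */
    static int save_something(void)
    {
            int ret = 0;

            {
                    int ret = do_step();

                    if (ret < 0) {
                            goto end; /* intends to propagate the error */
                    }
            }
    end:
            return ret; /* still 0: the failure is silently dropped */
    }

    int main(void)
    {
            printf("save_something() = %d\n", save_something()); /* prints 0 */
            return 0;
    }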
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ieb61d9b267ab90e18cc349d86754d9b89b5703f1
Simon Marchi [Mon, 12 Apr 2021 17:23:39 +0000 (13:23 -0400)]
Fix: utils: avoid strncpy overlap in utils_partial_realpath
When running the test_utils_expand_path test with ASan enabled, I get:
➜ lttng-tools ./tests/unit/test_utils_expand_path
1..29
INPUT: /a/b/c/d/e
=================================================================
==1485873==ERROR: AddressSanitizer: strncpy-param-overlap: memory ranges [0x621000021d00,0x621000021d0b) and [0x621000021d00, 0x621000021d0b) overlap
#0 0x7ffff761fd97 in __interceptor_strncpy /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:481
#1 0x555555573834 in utils_partial_realpath /home/simark/src/lttng-tools/src/common/utils.c:195
#2 0x55555557410b in _utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:374
#3 0x555555574340 in utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:420
#4 0x555555570b28 in test_utils_expand_path /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:274
#5 0x55555557119e in main /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:345
#6 0x7ffff725fb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
#7 0x55555556fa3d in _start (/home/simark/build/lttng-tools/tests/unit/test_utils_expand_path+0x1ba3d)
0x621000021d00 is located 0 bytes inside of 4096-byte region [0x621000021d00,0x621000022d00)
allocated by thread T0 here:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x55555557269d in zmalloc /home/simark/src/lttng-tools/src/common/macros.h:45
#2 0x555555573d34 in _utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:335
#3 0x555555574340 in utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:420
#4 0x555555570b28 in test_utils_expand_path /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:274
#5 0x55555557119e in main /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:345
#6 0x7ffff725fb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
0x621000021d00 is located 0 bytes inside of 4096-byte region [0x621000021d00,0x621000022d00)
allocated by thread T0 here:
#0 0x7ffff7677639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x55555557269d in zmalloc /home/simark/src/lttng-tools/src/common/macros.h:45
#2 0x555555573d34 in _utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:335
#3 0x555555574340 in utils_expand_path /home/simark/src/lttng-tools/src/common/utils.c:420
#4 0x555555570b28 in test_utils_expand_path /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:274
#5 0x55555557119e in main /home/simark/src/lttng-tools/tests/unit/test_utils_expand_path.c:345
#6 0x7ffff725fb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
The sole caller of utils_partial_realpath, _utils_expand_path, passes
the same buffer (resolved_path) for the input and output. This causes
utils_partial_realpath to call strncpy with overlapping strings.
Fix it by making utils_partial_realpath allocate new memory for the
returned string itself. This causes one more allocation than the
current code, because we don't re-use the existing buffer, but this
should be fine since this isn't exactly performance-critical code.
I think the code is easier to follow as a result.
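A heavily simplified sketch of the shape of the change (the real
utils_partial_realpath() also walks the path components and resolves
symlinks; strdup() merely stands in for that logic here):

    #include <stdlib.h>
    #include <string.h>

    /*
     * Old shape: the resolved prefix was copied back into the caller's
     * buffer with strncpy().  Because _utils_expand_path() passed the
     * same buffer as both input and output, source and destination
     * overlapped, which strncpy() does not allow.
     *
     * New shape: build the result in freshly allocated memory and
     * return it, so it can never alias the input.  The caller frees
     * the returned string.
     */
    static char *partial_realpath_sketch(const char *path)
    {
            if (!path) {
                    return NULL;
            }

            return strdup(path);
    }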
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Iab983e2f44fa57563b11ac6e9c03a41150669d9e
Simon Marchi [Mon, 12 Apr 2021 17:18:15 +0000 (13:18 -0400)]
Clean-up: utils: make utils_partial_realpath static
It is only used within its file.
Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I1467f5eeb11c66ecbe358290a172c37db34d881a
Anders Wallin via lttng-dev [Wed, 31 Mar 2021 18:54:39 +0000 (20:54 +0200)]
Fix: added missing test_channel and removed doubly-defined test_syscall
Signed-off-by: Anders Wallin <wallinux@gmail.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Ia65951dffc641bffa41793a5c52474b8db90eb4d
Jérémie Galarneau [Wed, 7 Apr 2021 15:55:26 +0000 (11:55 -0400)]
Fix: sessiond: kernel: invalid error code check
The `EEXIST` error code does not make sense for a kernctl_disable()
operation. In fact, not existing would be an error.
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I8f66c6d6769a4a10e8ff446736c6a7d55a9c32be
Jérémie Galarneau [Wed, 7 Apr 2021 15:49:39 +0000 (11:49 -0400)]
Tests: fix: unchecked sscanf return value
Coverity report 1407934: Unchecked return value
If the function returns an error value, the error value may be mistaken
for a normal value.
In parse_arguments: the value returned from a function is not checked
for errors before being used (CWE-252).
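The usual way to address this with sscanf() is to compare its return
value against the number of conversions expected. A generic sketch of
the pattern (not the actual parse_arguments() code from the test):

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse "<width>x<height>", e.g. "1920x1080"; returns 0 on success. */
    static int parse_resolution(const char *arg, int *width, int *height)
    {
            /* sscanf() returns the number of successful conversions;
             * anything other than 2 means the argument is malformed. */
            if (sscanf(arg, "%dx%d", width, height) != 2) {
                    fprintf(stderr, "Invalid argument: %s\n", arg);
                    return -1;
            }

            return 0;
    }

    int main(int argc, char **argv)
    {
            int w, h;

            if (argc < 2 || parse_resolution(argv[1], &w, &h)) {
                    return EXIT_FAILURE;
            }

            printf("width=%d height=%d\n", w, h);
            return EXIT_SUCCESS;
    }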
Reported-by: Coverity Scan
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: I26e4d324c97833acedab4ebd030d412848dcbfe4
Mathieu Desnoyers [Thu, 25 Mar 2021 19:18:13 +0000 (15:18 -0400)]
compiler warning cleanup: is_signed_type: compare -1 to 1
Comparing -1 to 0 triggers compiler warnings (gcc -Wtype-limits and
-Wbool-compare) and Coverity warning "Macro compares unsigned to 0".
Comparing -1 to 1 instead takes care of silencing those warnings while
keeping the same behavior.
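The change boils down to the following kind of macro (a sketch of the
pattern; not necessarily the exact lttng-tools definition):

    #include <stdio.h>

    /* Before: ((type) -1 < (type) 0) -- for unsigned types the
     * comparison is always false, which is what -Wtype-limits,
     * -Wbool-compare and Coverity complain about. */

    /* After: comparing against 1 gives the same result -- (type) -1 is
     * the maximum value (>= 1) for an unsigned type and negative (< 1)
     * for a signed type -- without triggering the warnings. */
    #define is_signed_type(type)    ((type) -1 < (type) 1)

    int main(void)
    {
            printf("int:          %d\n", is_signed_type(int));          /* 1 */
            printf("unsigned int: %d\n", is_signed_type(unsigned int)); /* 0 */
            return 0;
    }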
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Jérémie Galarneau <jeremie.galarneau@efficios.com>
Change-Id: Id4aebce6eed214f0ccba2c0289fb3b034808cb64