Introduce a new lttng_perf_lock to protect the lttng perf context
data structures from concurrent modifications and from fork. This
lock can be nested within the ust_lock, but never the opposite.
This removes the circular locking dependency involving urcu bp.
Fix: fd tracker: do not allow signal handlers to close lttng-ust FDs
Split the thread_fd_tracking state from the ust_fd_mutex_nest used to
track whether a signal handler is nested over an fd tracker lock.
lttng-ust listener threads need to invoke
lttng_ust_fd_tracker_register_thread() so the fd tracker can
distinguish them from application threads.
Otherwise, using ust_fd_mutex_nest to try to distinguish between
ust and application threads makes it possible for signal handlers
to appear as if they are ust listener threads, and thus attempt to
close UST file descriptors.
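A minimal sketch of what a listener thread's registration looks like
(the thread function and its body are illustrative; only
lttng_ust_fd_tracker_register_thread() comes from the fix itself):

    #include <pthread.h>

    void lttng_ust_fd_tracker_register_thread(void);

    /* Illustrative listener thread: registering with the fd tracker marks
     * this thread as a ust thread, so it is never mistaken for an
     * application thread (or a signal handler running on one). */
    static void *ust_listener_thread(void *arg)
    {
        lttng_ust_fd_tracker_register_thread();
        /* ... wait for and service session daemon commands ... */
        return NULL;
    }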
Fix: fd tracker: provide async-signal-safety for close wrapper
close(3) is part of the async-signal-safe functions. Therefore, it is
expected that the close wrapper provided by liblttng-ust-fd-tracker
behaves in an async-signal-safe way.
Use a similar strategy as ust_lock() does: disable signals when taking
and releasing the lock, and keep track of nesting with a TLS variable.
This ensures signals are restored to their original state when close(3)
ends up being invoked.
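A minimal sketch of that strategy, under illustrative names (the actual
fd tracker code differs in details):

    #include <pthread.h>
    #include <signal.h>

    static pthread_mutex_t fd_tracker_mutex = PTHREAD_MUTEX_INITIALIZER;
    static __thread int fd_tracker_nest;        /* TLS nesting counter */
    static __thread sigset_t fd_tracker_oldset;

    static void fd_tracker_lock(void)
    {
        sigset_t allset, oldset;

        /* Block all signals before taking the lock, so no signal
         * handler can run while it is held. */
        sigfillset(&allset);
        pthread_sigmask(SIG_BLOCK, &allset, &oldset);
        if (!fd_tracker_nest++) {
            fd_tracker_oldset = oldset;  /* save the original mask */
            pthread_mutex_lock(&fd_tracker_mutex);
        }
    }

    static void fd_tracker_unlock(void)
    {
        if (!--fd_tracker_nest) {
            pthread_mutex_unlock(&fd_tracker_mutex);
            /* Restore signals to their original state. */
            pthread_sigmask(SIG_SETMASK, &fd_tracker_oldset, NULL);
        }
    }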
If fork() is performed while other threads are holding the fd tracker
lock, it will stay in locked state in the child process and eventually
cause a deadlock.
One way to solve this is to hold the fd tracker lock across fork(), in
the same way we do for the ust_lock. This ensures no other threads are
holding that lock in the parent, and therefore provides a consistent
lock state in the child.
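One way to wire this up is with pthread_atfork(3); note that lttng-ust
actually wraps fork() in liblttng-ust-fork, so the sketch below only
illustrates the locking discipline:

    #include <pthread.h>

    /* Take the fd tracker lock before fork(), release it in both parent
     * and child: the child starts with a consistent, unlocked tracker. */
    static void fd_tracker_atfork_prepare(void) { fd_tracker_lock(); }
    static void fd_tracker_atfork_parent(void) { fd_tracker_unlock(); }
    static void fd_tracker_atfork_child(void) { fd_tracker_unlock(); }

    static void fd_tracker_init_atfork(void)
    {
        pthread_atfork(fd_tracker_atfork_prepare,
                fd_tracker_atfork_parent,
                fd_tracker_atfork_child);
    }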
Fix: wait for initial statedump before proceeding to the main program
In the case of short-lived applications, the application may exit before
the initial statedump has completed.
Higher-level trace analysis features, such as translating addresses to
symbols, rely on the statedump. That information is required for those
analyses to work on such short-lived applications.
Force the statedump to occur before handing the control to the
application.
Jonathan Rajotte [Mon, 29 Jul 2019 18:49:59 +0000 (14:49 -0400)]
Use MAP_POPULATE to reduce pagefault when available
Any ring buffer configuration larger than PAGE_SIZE results in
increased latency (~1200 ns) for the first tracepoint hit landing on each
new PAGE_SIZE-sized chunk of the mapped memory. This happens at least
for the first ring buffer traversal.
To alleviate this, we can use MAP_POPULATE, which will "prefault" the
page tables.
A similar flag seems to exist on FreeBSD (MAP_PREFAULT_READ), but I do
not have access to a system to test it and ensure it does indeed result
in the same effect. Its name indicates that it only prefaults for the
read case, so I doubt it does.
Default to using MAP_POPULATE on Linux only for now. Support of
prefaulting on other platforms will be added as needed.
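A sketch of the mapping call under that policy (the macro name is
illustrative, not the exact libringbuffer code):

    #include <sys/mman.h>

    #ifdef __linux__
    #define LTTNG_MAP_POPULATE_FLAG MAP_POPULATE
    #else
    #define LTTNG_MAP_POPULATE_FLAG 0   /* no prefault on other platforms */
    #endif

    static void *map_ring_buffer(int shmfd, size_t len)
    {
        /* MAP_POPULATE prefaults the page tables, so the first write
         * to each page does not take a minor page fault. */
        return mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_SHARED | LTTNG_MAP_POPULATE_FLAG, shmfd, 0);
    }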
Commit 973eac638e4fd introduces an uninitialised value that may prevent
shared memory from being allocated. The compiler didn't give any warning
because the pointer to the value is passed to a function that doesn't do
anything with it. We simply pass NULL to that function instead.
-Waddress-of-packed-member, enabled by default, warns about an
unaligned pointer value from the address of a packed member of a
struct or union.
The warning is triggered in some places in LTTng-UST where we
pass a pointer to receive a result. Rather than passing a pointer to
the struct member directly, we read the result into local storage, then
write it into the struct member.
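For instance (hypothetical struct and helper), the result goes through a
local variable instead of a pointer into the packed struct:

    #include <stdint.h>

    struct record {
        uint8_t tag;
        uint64_t value;
    } __attribute__((packed));

    static void get_value(uint64_t *out) { *out = 42; }

    static void fill_record(struct record *r)
    {
        uint64_t tmp;

        /* get_value(&r->value) would trigger -Waddress-of-packed-member,
         * since &r->value may be unaligned. */
        get_value(&tmp);
        r->value = tmp; /* compiler emits a safe unaligned store */
    }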
Michael Jeanson [Mon, 3 Jun 2019 19:25:32 +0000 (15:25 -0400)]
Fix: namespace our gettid wrapper
Since glibc 2.30, a gettid wrapper was added that conflicts with our
static declaration. Namespace our wrapper so there is no conflict,
we'll add support for the glibc provided wrapper in a further commit.
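A sketch of such a namespaced wrapper (the exact name used in the tree
may differ):

    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Does not clash with the gettid() added in glibc 2.30. */
    static inline pid_t lttng_gettid(void)
    {
        return (pid_t) syscall(SYS_gettid);
    }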
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: bitfield: shift undefined/implementation defined behaviors
bitfield.h uses the left shift operator with a left operand which
may be negative. The C99 standard states that shifting a negative
value is undefined.
When building with -Wshift-negative-value, we get this gcc warning:
In file included from /home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h:44:0,
from /home/smarchi/src/babeltrace/ctfser/ctfser.c:42:
/home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h: In function ‘bt_ctfser_write_unsigned_int’:
/home/smarchi/src/babeltrace/include/babeltrace/bitfield-internal.h:116:24: error: left shift of negative value [-Werror=shift-negative-value]
mask = ~((~(type) 0) << (__start % ts)); \
^
/home/smarchi/src/babeltrace/include/babeltrace/bitfield-internal.h:222:2: note: in expansion of macro ‘_bt_bitfield_write_le’
_bt_bitfield_write_le(ptr, type, _start, _length, _v)
^~~~~~~~~~~~~~~~~~~~~
/home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h:418:3: note: in expansion of macro ‘bt_bitfield_write_le’
bt_bitfield_write_le(mmap_align_addr(ctfser->base_mma) +
^~~~~~~~~~~~~~~~~~~~
This boils down to the fact that the expression ~((uint8_t)0) has type
"signed int", which is used as an operand of the left shift. This is due
to the integer promotion rules of C99 (6.3.3.1):
If an int can represent all values of the original type, the value is
converted to an int; otherwise, it is converted to an unsigned int.
These are called the integer promotions. All other types are unchanged
by the integer promotions.
We also need to cast the result explicitly into the left hand
side type to deal with:
warning: large integer implicitly truncated to unsigned type [-Woverflow]
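A minimal illustration of both problems and the fix:

    #include <stdint.h>

    static uint8_t make_mask(unsigned int start)
    {
        /* Broken: ~(uint8_t) 0 is promoted to (signed) int -1, so the
         * left shift operates on a negative value (undefined in C99):
         *
         *    mask = ~((~(uint8_t) 0) << start);
         *
         * Fixed: cast back to the intended unsigned type around each
         * step, which also silences -Woverflow on the truncation. */
        return (uint8_t) ~((uint8_t) (((uint8_t) ~(uint8_t) 0) << start));
    }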
The C99 standard states that a right shift has implementation-defined
behavior when shifting a signed negative value. Add a preprocessor check
that the compiler provides the expected behavior, else provide an
alternative implementation which guarantees the intended behavior.
A preprocessor check is also added to ensure that the compiler
representation for signed values is two's complement, which is expected
by this header.
Document that this header strictly respects the C99 standard, with
the exception of its use of __typeof__.
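A sketch of what such preprocessor checks can look like (the actual
macro names in bitfield.h differ):

    /* Two's complement representation of signed integers. */
    #if (-1 != ~0)
    # error "This header requires two's complement signed integers."
    #endif

    /* Does the compiler implement arithmetic right shift on signed
     * values? If not, fall back to an alternative implementation. */
    #if ((-1 >> 1) == -1)
    # define BITFIELD_ARITHMETIC_RSHIFT 1
    #else
    # define BITFIELD_ARITHMETIC_RSHIFT 0
    #endif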
Fix: alignment of ring buffer shm space reservation
commit a9ff648cc "Implement file-backed ring buffer" changes the order
of backend fields with respect to the frontend per-subbuffer
commit_counters_hot and commit_counters_cold arrays, but does not change
that order when calculating the space needed in the initial pass.
This discrepancy can be an issue for field alignment calculation.
Let's analyse the situation. If the incorrect position of alignment
calculation leads to a larger space reserved than the actual
allocations, no ill effect will be perceived by the user. However,
if space calculation is less than the allocations, it will cause the
ring buffer (and thus channel) creation to fail.
The fields that are misplaced in the size calculation (in
officially released versions) are:
* struct commit_counters_hot, aligned on CAA_CACHE_LINE_SIZE,
* struct commit_counters_cold, aligned on CAA_CACHE_LINE_SIZE.
Those are placed after (they should be before) the backend fields:
* struct lttng_ust_lib_ring_buffer_backend_pages_shmp aligned on the
natural alignment of ssize_t,
* alignment on page size,
* struct lttng_ust_lib_ring_buffer_backend_pages, aligned on the natural
alignment of ssize_t,
* struct lttng_ust_lib_ring_buffer_backend_subbuffer, aligned on natural
alignment of unsigned long,
* struct lttng_ust_lib_ring_buffer_backend_counts, aligned on natural
alignment of uint64_t.
The largest alignment is the page size alignment in the backend
fields. With a channel configured within specific ranges of
sub-buffer counts, we can reach commit counter array dimensions
which cause the page size alignment to be lower than it should be in
the space calculation, leading to a problematic scenario
where space allocation fails, and thus to channel creation
failures.
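The effect can be demonstrated with a toy size calculation using
illustrative sizes (not the actual ring buffer numbers): aligning in a
different order than the allocation under-estimates the required space.

    #include <stdio.h>

    static size_t align_up(size_t off, size_t align)
    {
        return (off + align - 1) & ~(align - 1);
    }

    int main(void)
    {
        size_t cacheline = 64, page = 4096;
        size_t counters_sz = 24 * 100; /* hypothetical counter arrays */
        size_t backend_sz = 100;       /* hypothetical backend fields */

        /* Allocation order: counters (cache-line aligned) first, then
         * the page-aligned backend fields. */
        size_t alloc = align_up(0, cacheline) + counters_sz;
        alloc = align_up(alloc, page) + backend_sz;

        /* Buggy size calculation: same fields, opposite order. */
        size_t calc = align_up(0, page) + backend_sz;
        calc = align_up(calc, cacheline) + counters_sz;

        /* Prints alloc=4196 calc=2528: reserved space is too small. */
        printf("alloc=%zu calc=%zu\n", alloc, calc);
        return 0;
    }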
Allocate the memory used by the ts_end field added by commit 6c737d05.
When allocating many sub-buffers for a channel (512 or more),
zalloc_shm() will fail to allocate all the objects because the allocated
memory map didn't take the newly added field into account.
Fix: timestamp_end field should include all events within sub-buffer
Fix for timestamp_end not including all events within sub-buffer. This
happens if a thread is preempted/interrupted for a long time between
reserve and commit (e.g. in the middle of a packet), which causes the
timestamp used for timestamp_end field of the packet header to be lower
than the timestamp of the last events in the buffer (those following the
event that was preempted/interrupted between reserve and commit).
The fix involves sampling the timestamp when doing the last space
reservation in a sub-buffer (which necessarily happens before doing the
delivery after its last commit). Save this timestamp temporarily in a
per-sub-buffer control area (we have exclusive access to that area until
we increment the commit counter).
Then, that timestamp value will be read when delivering the sub-buffer,
whichever event or switch happens to be the last to increment the commit
counter to perform delivery. The timestamp value can be read without
worrying about concurrent access, because at that point sub-buffer
delivery has exclusive access to the sub-buffer.
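A simplified sketch of the scheme (names illustrative, not the actual
libringbuffer code):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Per-sub-buffer hot commit area. */
    struct commit_hot {
        atomic_ulong cc;  /* commit counter for this sub-buffer */
        uint64_t ts_end;  /* timestamp sampled at last reserve */
    };

    /* Producer: the reservation that fills the sub-buffer. Until cc is
     * incremented, this thread has exclusive access to ts_end. */
    static void commit_last_slot(struct commit_hot *hot, uint64_t tsc,
            unsigned long slot_size)
    {
        hot->ts_end = tsc;
        /* Publish ts_end before the commit counter. */
        atomic_fetch_add_explicit(&hot->cc, slot_size,
                memory_order_release);
    }

    /* Delivery: whichever committer completes the sub-buffer reads
     * ts_end with exclusive access. */
    static uint64_t read_timestamp_end(const struct commit_hot *hot)
    {
        return hot->ts_end;
    }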
This ensures the timestamp_end value is always greater than or equal to
the timestamp of the last event, always less than or equal to the
timestamp_begin of the following packet, and always less than or equal
to the timestamp of the first event in the following packet.
This changes the layout of the ring buffer shared memory area, so we
need to bump the LTTNG_UST_ABI version from 7.2 to 8.0, thus requiring
lock-step upgrade between liblttng-ust in applications, session
daemon, and consumer daemon. This fix therefore cannot be backported
to existing stable releases.
Fix: don't access packet header for stream_id and stream_instance_id getters
The stream ID and stream instance ID are invariant for a stream, so
there is no point reading them from the packet header currently owned by
the consumer (between get/put subbuf).
Actually, the consumer tries to access the stream_id from the live timer
when sending a live beacon, without getting the reader sub-buffer first.
Doing so is racy against producers. In typical live scenarios
(non-overwrite channels), the producers will always write the same
stream id and stream instance id values at the same header offsets,
which will "work", except for the initial state of an empty buffer:
the value "0" will be returned (erroneously).
For the less frequently used scenario of a live session with "overwrite"
channels, this is handled by issuing a CHAN_WARN_ON, which disables
tracing for the channel and prints a warning to the consumerd console
when running consumerd with LTTNG_UST_DEBUG=1.
In the case where a ring buffer does not have any data ready, it makes
no sense to try to get a subbuffer for reading anyway, so the approach
was broken.
So return the stream id and stream instance id from the internal
data structures rather than reading them from the ring buffer.
Michael Jeanson [Wed, 20 Mar 2019 15:07:35 +0000 (11:07 -0400)]
compat: work around broken _SC_NPROCESSORS_CONF on MUSL libc
On MUSL libc the _SC_NPROCESSORS_CONF sysconf will report the number of
CPUs allocated to the task based on the affinity mask instead of the
total number of CPUs configured on the system.
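One possible workaround (assumed here, not necessarily the exact
implementation) is to count the cpuN entries under sysfs instead of
trusting sysconf():

    #include <ctype.h>
    #include <dirent.h>
    #include <string.h>

    static int get_num_possible_cpus(void)
    {
        DIR *d = opendir("/sys/devices/system/cpu");
        struct dirent *e;
        int count = 0;

        if (!d)
            return -1;
        while ((e = readdir(d)) != NULL) {
            /* Count entries named "cpu0", "cpu1", ... */
            if (!strncmp(e->d_name, "cpu", 3) &&
                    isdigit((unsigned char) e->d_name[3]))
                count++;
        }
        closedir(d);
        return count;
    }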
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Move wait_shm_mmap initialization to library constructor
Prevent us from deadlocking ourselves if some glibc implementation
decides to hold the dl_load_* locks on fork operations.
This happens on Yocto Rocko and up when performing python tracing (import
lttngust). Why Yocto decided to patch glibc this way is a mystery
(ongoing effort) [1][2][3].
Anyhow, we can prevent this by moving the initialization of the
wait_shm_mmap to the library constructor, since the dl_load_* locks are
nestable mutexes.
Nothing in the git log for the wait_shm_mmap indicates a specific reason
why it was done inside the listener thread. Doing it inside
wait_for_sessiond can help in some corner cases where the /dev/shm
(or the shm path) files are unlinked, but this is not much of an advantage.
Omair Majid [Thu, 11 Oct 2018 18:28:49 +0000 (14:28 -0400)]
Fix: address shellcheck warnings/errors in example scripts
ShellCheck points out a number of warnings in the example scripts. In
particular, a number of normal and special shell variables are not
quoted correctly.
Fix: check for event class/instance prototype mismatch
The TP_ARGS() for an event instance belonging to an event class
must have compatible types with the event class TP_ARGS().
Failure to follow this rule leads to a prototype mismatch between the
tracepoint call site and the probe function. A common effect perceived
is that events with prototype mismatch between call site and probe
function are never traced.
Fix this by enforcing a compile-time check of the event instance and
class prototypes, similarly to what is done in LTTng modules.
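For example, with a hypothetical provider (the usual provider header
boilerplate is omitted), the instance TP_ARGS() must match the class:

    TRACEPOINT_EVENT_CLASS(my_provider, my_class,
        TP_ARGS(int, request_id, const char *, name),
        TP_FIELDS(
            ctf_integer(int, request_id, request_id)
            ctf_string(name, name)
        )
    )

    TRACEPOINT_EVENT_INSTANCE(my_provider, my_class, my_event,
        /* Must be compatible with the class prototype; swapping the
         * argument types here now fails at compile time. */
        TP_ARGS(int, request_id, const char *, name)
    )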
Fix: race between statedump and library destructor
The locking scheme for ust_lock() returns a teardown state (variable
lttng_ust_comm_should_quit) which is set by the library destructor with
the lock held.
It requires that when ust listener threads use this lock to protect
against concurrent accesses to a data structure, in addition to taking
the lock, they check the return value of ust_lock() and
skip their critical section entirely if the return value indicates
that teardown is ongoing.
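The calling convention looks roughly like this (simplified sketch):

    static void do_protected_work(void)
    {
        if (ust_lock())     /* nonzero: teardown ongoing, skip work */
            goto end;
        /* ... critical section over ust-protected data ... */
    end:
        ust_unlock();       /* always release, even when skipping */
    }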
Iteration over all loaded libraries by lttng_ust_dl_update() starts with
iter_begin(), which grabs the lock and sets the data->cancel state
appropriately if teardown is ongoing. Then extract_bin_info_events()
uses the data->cancel state to skip over use of the protected structures
as needed, but iter_end() fails to take this data->cancel state into
account. Therefore, it can access data structures concurrently while
their teardown is ongoing, which leads to crashes.
procname
Thread name, as set by exec(3) or prctl(2). It is recommended
that programs set their thread name with prctl(2) before
hitting the first tracepoint for that thread.
We can rightfully expect that this applies to the first thread created
within a child process upon fork. Reset the procname cache in the child
on fork.
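For example, an application thread can name itself before tracing
anything (illustrative snippet):

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        /* Set the thread name before the first tracepoint so the
         * procname context records the intended name. */
        if (prctl(PR_SET_NAME, "worker-0", 0, 0, 0))
            perror("prctl");
        /* tracepoint(...); */
        return 0;
    }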
Move symbol preventing unloading of probe providers
Issue
=====
Calling dlclose() on the probe provider library whose definition of
__tracepoints__disable_destructors was first bound in the symbol table
does not unregister the probes from the call sites, as the destructors
are not executed.
The __tracepoints__disable_destructors weak symbol is exposed by probe
providers, liblttng-ust.so and liblttng-ust-tracepoint.so libraries. If
a probe provider is loaded first into the address space, its definition
is bound to the symbol. All the subsequent loaded libraries using the
symbol will use the existing definition of the symbol, thus creating a
situation where liblttng-ust.so or liblttng-ust-tracepoint.so depend on
the probe provider library.
This prevents the dynamic loader from unloading the library as it is
still in use by other libraries. Because of this, the execution of its
destructors and the unregistration of the probes are postponed. Since the
unregistration of the probes is postponed, events will be generated when
the callsites are executed, even though the probes should no longer be
loaded.
Solution
========
To overcome this issue, we no longer expose this symbol in the
tracepoint.h file to remove the explicit dependency of the probe
provider on the symbol. We instead use the existing dlopen handle on
liblttng-ust-tracepoint.so and use dlsym to get handles on functions
that disable and get the state of the destructors.
Version compatibility
=====================
- This change is backward compatible with UST applications and libraries
built on lttng-ust version before 2.11. Those applications will use
the __tracepoints__disable_destructors symbol that is now exposed
as a weak symbol in the liblttng-ust-tracepoint.so library. This
symbol is always checked in 2.11 in case an old app is running.
- Applications built with this change will also work in older versions
of lttng-ust as there is a check to see if the new destructor state
checking method should be used; if it is not, we fall back to a
compatibility method. To ensure compatibility in this case, we also
look up and keep up-to-date the __tracepoints__disable_destructors
value using the dlopen-dlsym combo.
- A mix of applications/probes built in part against 2.10 and in part
against 2.11 also works. When setting the destructor state from a binary
built against 2.11 headers, both old and new states are set, so a binary
built against 2.10 will correctly see the old state. When querying the
state from a binary built against 2.11 headers, both old and new states
are queried, so if the state has been set from a binary built against
2.10 headers, the old state will be seen.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 2 Mar 2018 22:36:26 +0000 (17:36 -0500)]
Fix: cache the result of getpid() internally
On Linux we called getpid() directly on each tracepoint and relied on
the glibc pid cache. However, in glibc 2.25, released on 2017-02-05, the
pid cache was removed which results in a getpid syscall on each event
when the vpid context is enabled.
Remove the Linux specific case and use our internal cache all the time.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 2 Mar 2018 22:36:25 +0000 (17:36 -0500)]
Fix: reset cached vpid context on fork
We currently reset the cached vtid on fork but not the vpid. This is not
a problem on Linux because we don't cache the vpid internally but call
getpid() directly and rely on the glibc pid cache.
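A sketch of the caching scheme with the fork reset (illustrative names):

    #include <sys/types.h>
    #include <unistd.h>

    static pid_t cached_vpid;

    static pid_t wrapper_getvpid(void)
    {
        if (!cached_vpid)
            cached_vpid = getpid();
        return cached_vpid;
    }

    /* Called from the fork child handler so the child re-reads its
     * own pid on the next event. */
    static void reset_vpid_cache(void)
    {
        cached_vpid = 0;
    }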
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Wed, 25 Oct 2017 18:28:04 +0000 (14:28 -0400)]
Fix: build example SO when PIE is enabled
In the example Makefiles, when building shared object libraries, make
sure we set the custom linker options after the CFLAGS/LDFLAGS so that
they override them. This is useful when the build system sets hardening
features like PIE in the CFLAGS.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
With this commit, it's now possible to dlclose() a library containing an
actively used probe provider.
The destructor of such a library will now iterate over all the sessions
and over all probe definitions to unregister them from the respective
callsites in the process.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
dlopen() liblttng-ust.so from constructor to prevent unloading
The support of probe provider dlclose() allows for the following
problematic scenario:
- The application is not linked against liblttng-ust.so
- The application dlopen()s a probe provider library that is linked
against liblttng-ust.so
- The application dlclose()s the probe provider
In this scenario, the probe provider has a dependency on
liblttng-ust.so, so when it's loaded by the application, liblttng-ust.so
is loaded too. The probe provider library now has the only reference to
the liblttng-ust.so library. When the application calls dlclose() on
it, all its references are dropped, thus triggering the unloading of
both the probe provider library and liblttng-ust.so.
This scenario is problematic because lttng ust_listener_threads are in
DETACHED state. We cannot join them and therefore we cannot unload the
library containing the code they run. Only the operating system can free
those resources.
The reason why those threads are in DETACHED state is to quickly
teardown applications on process exit.
A possible solution to investigate: if we can determine whether
liblttng-ust.so is being dlopen()ed (directly or indirectly) or is linked
against the application, we could set the detached state accordingly.
To prevent that unloading, we pin it in memory by grabbing an extra
reference on the library with the RTLD_NODELETE flag. This will prevent
the dynamic loader from ever removing the liblttng-ust.so library from
the process' address space.
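A sketch of grabbing that extra reference (the SONAME and error handling
are illustrative):

    #include <dlfcn.h>
    #include <stdio.h>

    static void pin_lttng_ust_in_memory(void)
    {
        /* RTLD_NODELETE: never unload the library on dlclose(). */
        void *handle = dlopen("liblttng-ust.so.0",
                RTLD_LAZY | RTLD_NODELETE);
        if (!handle)
            fprintf(stderr, "dlopen: %s\n", dlerror());
    }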
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
It's now possible to register a probe provider with a name that has
already been registered. This is useful when wanting to load a new
version of a shared library in an already running process.
Changes are necessary in the LTTng session daemon to support cases where
the newly registered event has a different probe payload.
Taking a simple case where a probe provider is registered twice, the
tracepoint call site will have two probes registered to it and thus will
generate two events in the trace.
08:51:35 lttng-ust-fd-tracker.c: In function 'dup_std_fd':
08:51:35 lttng-ust-fd-tracker.c:174:2: error: 'for' loop initial
declarations are only allowed in C99 mode
08:51:35 for (int i = 0; i < STDERR_FILENO + 1; i++) {
08:51:35 ^
08:51:35 lttng-ust-fd-tracker.c:174:2: note: use option -std=c99 or
-std=gnu99 to compile your code
08:51:35 lttng-ust-fd-tracker.c:195:11: error: redefinition of 'i'
08:51:35 for (int i = 0; i < fd_to_close_count; i++) {
08:51:35 ^
08:51:35 lttng-ust-fd-tracker.c:174:11: note: previous definition of 'i'
was here
08:51:35 for (int i = 0; i < STDERR_FILENO + 1; i++) {
08:51:35 ^
08:51:35 lttng-ust-fd-tracker.c:195:2: error: 'for' loop initial
declarations are only allowed in C99 mode
08:51:35 for (int i = 0; i < fd_to_close_count; i++) {
08:51:35 ^
08:51:35 Makefile:412: recipe for target 'lttng-ust-fd-tracker.lo'
failed
08:51:35 make[2]: *** [lttng-ust-fd-tracker.lo] Error 1
08:51:35 make[2]: *** Waiting for unfinished jobs....
Michael Jeanson [Tue, 21 Nov 2017 16:11:15 +0000 (11:11 -0500)]
Fix: specify SONAME in python-lttngust LoadLibrary
When loading the python agent library with ctypes in the python
bindings, specify the SONAME. This will make sure we load the proper
library in the event of a SONAME bump and the bindings will work without
having to install the "dev" package, which in most distros contains the
non-versioned ".so".
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Jonathan Rajotte [Fri, 10 Nov 2017 16:06:41 +0000 (11:06 -0500)]
Fix: fd of an elf object must be registered to the fd tracker
The open() call takes place inside UST; the resulting fd must be tracked
to prevent external closing.
The bug can be hit when tracing an application whose probe
provider is loaded using LD_PRELOAD in combination with the fd utility
shared object, and where the application closes all possible fds.
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The initial-exec TLS model seems to behave differently than
global-dynamic with respect to lazy initialization, causing locks to be
taken the first time each thread touches the TLS. This introduces
deadlocks with library constructors waiting on other threads.
Philippe Proulx [Mon, 6 Nov 2017 20:46:03 +0000 (15:46 -0500)]
configure.ac: add --disable-examples option to not build/install examples
Some environments and distributions do not need the LTTng-UST examples
to be built because they remove them anyway. Continue to build them by
default, but add --disable-examples to explicitly disable them.
Signed-off-by: Philippe Proulx <eeppeliteloop@gmail.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 6 Nov 2017 19:09:30 +0000 (14:09 -0500)]
Disable NUMA by default on 32bit arm
There is currently no NUMA support on 32-bit ARM; disable the dependency
on libnuma by default on this architecture. It can still be forced with
--enable-numa.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: sync buffer file metadata on buffer allocation
Synchronizing the file metadata on disk after zeroing the whole file (on
buffer allocation) will make the crash extraction feature (--shm-path
create option) more robust. It ensures the content of the file metadata
backing the buffers does not have to be updated while tracing into the
memory map. Therefore, the on-disk metadata will never be out of sync at
the point where a system crash occurs.
Philippe Proulx [Thu, 27 Jul 2017 23:28:40 +0000 (19:28 -0400)]
Fix: doc/man: use a single XSL file and match local names
Matching the local name instead of the full name, that is:
*[local-name() = 'co']
instead of just `co` matches both the non-namespaced element and the
DocBook-namespaced element whether we're using the DocBook 4.5 or
DocBook 5.0 stylesheets.
Signed-off-by: Philippe Proulx <eeppeliteloop@gmail.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Introduce the LTTNG_UST_ALLOW_BLOCKING env. var. to control whether
applications are allowed to block when a buffer is full. If set, it
allows the tracer to block the application when buffers are full.
The blocking is now controlled by a per-channel configuration option in
the LTTng control interface for channels with the "--blocking-timeout"
parameter, which is specified in usec (or -1 to block forever).
This replaces the LTTNG_UST_BLOCKING_RETRY_TIMEOUT env. var., which
never made it into a stable release (we therefore remove this
env. var.).
Allow context length calculation to have side-effects which trigger
event tracing by moving the calculation outside of the buffer space
reservation retry loop.
This also paves the way for dynamically sized contexts in lttng-ust,
which would be expected to put their size on the internal stack. Note
that the context length calculation is performed *after* the event
payload field length calculation, so the stack needs to be used
accordingly.
Currently, the only dynamically sized contexts we have are provided by
Java integration, which keeps its own stack.