Create the tracepoint provider object file:
[role="term"]
---------------
-cc -c -I. tp.c
---------------
+----
+$ cc -c -I. tp.c
+----
NOTE: Although an application instrumented with LTTng-UST tracepoints
can be compiled with a C++ compiler, tracepoint probes should be
compiled with a C compiler.

You can also archive tracepoint provider object files, as a static library:
[role="term"]
----------------
-ar rc tp.a tp.o
----------------
+----
+$ ar rc tp.a tp.o
+----
Using a static library does have the advantage of centralising the
tracepoint provider objects so they can be shared between multiple
applications. The application is then linked with `liblttng-ust` and
with `libdl` (`libc` on a BSD system):
[role="term"]
--------------------------------------
-cc -o app tp.o app.o -llttng-ust -ldl
--------------------------------------
+----
+$ cc -o app tp.o app.o -llttng-ust -ldl
+----
[[build-dynamic]]
To build the tracepoint provider as a shared object that can be loaded
at run time, compile it as position-independent code with the
nloption:-fpic option:
[role="term"]
---------------------
-cc -c -fpic -I. tp.c
---------------------
+----
+$ cc -c -fpic -I. tp.c
+----
It is then linked as a shared library like this:
[role="term"]
--------------------------------------------------------
-cc -shared -Wl,--no-as-needed -o tp.so tp.o -llttng-ust
--------------------------------------------------------
+----
+$ cc -shared -Wl,--no-as-needed -o tp.so tp.o -llttng-ust
+----
This tracepoint provider shared object isn't linked with the user
application: it must be loaded manually. This is why the application is
linked with `libdl`:
[role="term"]
---------------------------------
-cc -o app app.o tp-define.o -ldl
---------------------------------
+----
+$ cc -o app app.o tp-define.o -ldl
+----
There are two ways to dynamically load the tracepoint provider shared
object:
The following context fields are supported by LTTng-UST:
-`cpu_id`::
+General context fields::
++
+`cpu_id`:::
CPU ID.
+
NOTE: This context field is always enabled, and it cannot be used in
dynamic event filtering. See man:lttng-enable-event(1) for more
information about event filtering.
-`ip`::
+`ip`:::
Instruction pointer: enables recording the exact address from which
an event was emitted. This context field can be used to
reverse-lookup the source location that caused the event
to be emitted.
-`perf:thread:COUNTER`::
+`pthread_id`:::
+ POSIX thread identifier.
++
+Can be used on architectures where `pthread_t` maps nicely to an
+`unsigned long` type.
+
+Process context fields::
++
+`procname`:::
+ Thread name, as set by man:exec(3) or man:prctl(2). It is
+ recommended that programs set their thread name with man:prctl(2)
+ before hitting the first tracepoint for that thread.
+
+`vpid`:::
+ Virtual process ID: process ID as seen from the point of view of the
+ current process ID namespace (see man:pid_namespaces(7)).
+
+`vtid`:::
+ Virtual thread ID: thread ID as seen from the point of view of the
+ current process ID namespace (see man:pid_namespaces(7)).
+
+perf context fields::
++
+`perf:thread:COUNTER`:::
perf counter named 'COUNTER'. Use `lttng add-context --list` to
list the available perf counters.
+
Only available on IA-32 and x86-64 architectures.
-`perf:thread:raw:rN:NAME`::
+`perf:thread:raw:rN:NAME`:::
perf counter with raw ID 'N' and custom name 'NAME'. See
man:lttng-add-context(1) for more details.
-`pthread_id`::
- POSIX thread identifier. Can be used on architectures where
- `pthread_t` maps nicely to an `unsigned long` type.
+Namespace context fields (see man:namespaces(7))::
++
+`cgroup_ns`:::
+ Inode number of the current control group namespace (see
+ man:cgroup_namespaces(7)) in the proc file system.
-`procname`::
- Thread name, as set by man:exec(3) or man:prctl(2). It is
- recommended that programs set their thread name with man:prctl(2)
- before hitting the first tracepoint for that thread.
+`ipc_ns`:::
+ Inode number of the current IPC namespace (see
+ man:ipc_namespaces(7)) in the proc file system.
+
+`mnt_ns`:::
+ Inode number of the current mount point namespace (see
+ man:mount_namespaces(7)) in the proc file system.
-`vpid`::
- Virtual process ID: process ID as seen from the point of view of
- the process namespace.
+`net_ns`:::
+ Inode number of the current network namespace (see
+ man:network_namespaces(7)) in the proc file system.
-`vtid`::
- Virtual thread ID: thread ID as seen from the point of view of
- the process namespace.
+`pid_ns`:::
+ Inode number of the current process ID namespace (see
+ man:pid_namespaces(7)) in the proc file system.
+
+`time_ns`:::
+ Inode number of the current clock namespace (see
+ man:time_namespaces(7)) in the proc file system.
+
+`user_ns`:::
+ Inode number of the current user namespace (see
+ man:user_namespaces(7)) in the proc file system.
+
+`uts_ns`:::
+ Inode number of the current UTS namespace (see
+ man:uts_namespaces(7)) in the proc file system.
+
+Credential context fields (see man:credentials(7))::
++
+`vuid`:::
+ Virtual real user ID: real user ID as seen from the point of view of
+ the current user namespace (see man:user_namespaces(7)).
+
+`vgid`:::
+ Virtual real group ID: real group ID as seen from the point of view
+ of the current user namespace (see man:user_namespaces(7)).
+
+`veuid`:::
+ Virtual effective user ID: effective user ID as seen from the point
+ of view of the current user namespace (see man:user_namespaces(7)).
+
+`vegid`:::
+ Virtual effective group ID: effective group ID as seen from the
+ point of view of the current user namespace (see
+ man:user_namespaces(7)).
+
+`vsuid`:::
+ Virtual saved set-user ID: saved set-user ID as seen from the point
+ of view of the current user namespace (see man:user_namespaces(7)).
+
+`vsgid`:::
+ Virtual saved set-group ID: saved set-group ID as seen from the
+ point of view of the current user namespace (see
+ man:user_namespaces(7)).
[[state-dump]]
|Debug link file name.
|===
+`lttng_ust_statedump:procname`::
+ The process procname at process start.
++
+Fields:
++
+[options="header"]
+|===
+|Field name |Description
+
+|`procname`
+|The process name.
+
+|===
+
[[ust-lib]]
Shared library load/unload tracking
EXAMPLE
-------
NOTE: A few examples are available in the
-https://github.com/lttng/lttng-ust/tree/master/doc/examples[`doc/examples`]
+https://github.com/lttng/lttng-ust/tree/v{lttng_version}/doc/examples[`doc/examples`]
directory of LTTng-UST's source tree.
This example shows all the features documented in the previous
sections. Compile and link the application like this:
[role="term"]
--------------------------------------
-cc -c -I. tp.c
-cc -c app.c
-cc -o app tp.o app.o -llttng-ust -ldl
--------------------------------------
+----
+$ cc -c -I. tp.c
+$ cc -c app.c
+$ cc -o app tp.o app.o -llttng-ust -ldl
+----
Using the man:lttng(1) tool, create an LTTng tracing session, enable
all the events of this tracepoint provider, and start tracing:
[role="term"]
-----------------------------------------------
-lttng create my-session
-lttng enable-event --userspace 'my_provider:*'
-lttng start
-----------------------------------------------
+----
+$ lttng create my-session
+$ lttng enable-event --userspace 'my_provider:*'
+$ lttng start
+----
You may also enable specific events:
[role="term"]
-----------------------------------------------------------
-lttng enable-event --userspace my_provider:big_event
-lttng enable-event --userspace my_provider:event_instance2
-----------------------------------------------------------
+----
+$ lttng enable-event --userspace my_provider:big_event
+$ lttng enable-event --userspace my_provider:event_instance2
+----
Run the application:
[role="term"]
---------------------
-./app some arguments
---------------------
+----
+$ ./app some arguments
+----
Stop the current tracing session and inspect the recorded events:
[role="term"]
-----------
-lttng stop
-lttng view
-----------
+----
+$ lttng stop
+$ lttng view
+----
Tracepoint provider header file
are located in a specific directory under `$LTTNG_HOME` (or `$HOME` if
`$LTTNG_HOME` is not set).
-`LTTNG_UST_BLOCKING_RETRY_TIMEOUT`::
- Maximum duration (milliseconds) to retry event tracing when
- there's no space left for the event record in the sub-buffer.
+`LTTNG_UST_ALLOW_BLOCKING`::
+ If set, allow the application to retry event tracing when there's
+ no space left for the event record in the sub-buffer, therefore
+ effectively blocking the application until space is made available
+ or the configured timeout is reached.
+
---
-`0` (default)::
- Never block the application.
-
-Positive value::
- Block the application for the specified number of milliseconds. If
- there's no space left after this duration, discard the event
- record.
-
-Negative value::
- Block the application until there's space left for the event record.
---
+To allow an application to block during tracing, you also need to
+specify a blocking timeout when you create a channel with the
+nloption:--blocking-timeout option of the man:lttng-enable-channel(1)
+command.
+
This option can be useful in workloads generating very large trace data
throughput, where blocking the application is an acceptable trade-off to
prevent discarding event records.
+
-WARNING: Setting this environment variable to a non-zero value may
-significantly affect application timings.
+WARNING: Setting this environment variable may significantly
+affect application timings.
`LTTNG_UST_CLOCK_PLUGIN`::
Path to the shared object which acts as the clock override plugin.
An example of such a plugin can be found in the LTTng-UST
documentation under
- https://github.com/lttng/lttng-ust/tree/master/doc/examples/clock-override[`examples/clock-override`].
+ https://github.com/lttng/lttng-ust/tree/v{lttng_version}/doc/examples/clock-override[`examples/clock-override`].
`LTTNG_UST_DEBUG`::
- Activates `liblttng-ust`'s debug and error output if set to `1`.
+ If set, enable `liblttng-ust`'s debug and error output.
`LTTNG_UST_GETCPU_PLUGIN`::
Path to the shared object which acts as the `getcpu()` override
plugin. An example of such a plugin can be found in the LTTng-UST
documentation under
- https://github.com/lttng/lttng-ust/tree/master/doc/examples/getcpu-override[`examples/getcpu-override`].
+ https://github.com/lttng/lttng-ust/tree/v{lttng_version}/doc/examples/getcpu-override[`examples/getcpu-override`].
`LTTNG_UST_REGISTER_TIMEOUT`::
  Waiting time for the _registration done_ session daemon command
  before proceeding to execute the main program (milliseconds).
Setting this environment variable to `0` is recommended for applications
with time constraints on the process startup time.
+
-Default: {lttng_ust_register_timeout}.
-
-`LTTNG_UST_BLOCKING_RETRY_TIMEOUT`::
- Maximum time during which event tracing retry is attempted on buffer
- full condition (millliseconds). Setting this environment to non-zero
- value effectively blocks the application on buffer full condition.
- Setting this environment variable to non-zero values may
- significantly affect application timings. Setting this to a negative
- value may block the application indefinitely if there is no consumer
- emptying the ring buffer. The delay between retry attempts is the
- minimum between the specified timeout value and 100ms. This option
- can be useful in workloads generating very large trace data
- throughput, where blocking the application is an acceptable
- trade-off to not discard events. _Use with caution_.
-+
-The value `0` means _do not retry_. The value `-1` means _retry forever_.
-Value > `0` means a maximum timeout of the given value.
-+
-Default: {lttng_ust_blocking_retry_timeout}.
+Default: 3000.
`LTTNG_UST_WITHOUT_BADDR_STATEDUMP`::
- Prevents `liblttng-ust` from performing a base address state dump
- (see the <<state-dump,LTTng-UST state dump>> section above) if
- set to `1`.
+ If set, prevents `liblttng-ust` from performing a base address state
+ dump (see the <<state-dump,LTTng-UST state dump>> section above).
+
+`LTTNG_UST_WITHOUT_PROCNAME_STATEDUMP`::
+ If set, prevents `liblttng-ust` from performing a procname state
+ dump (see the <<state-dump,LTTng-UST state dump>> section above).
include::common-footer.txt[]