1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
7 include::../common/copyright.txt[]
10 include::../common/warning-not-maintained.txt[]
13 include::../common/welcome.txt[]
16 include::../common/audience.txt[]
20 === What's in this documentation?
22 The LTTng Documentation is divided into the following sections:
* **<<nuts-and-bolts,Nuts and bolts>>** explains the
rudiments of software tracing and the rationale behind the
LTTng project.
28 You can skip this section if you’re familiar with software tracing and
29 with the LTTng project.
* **<<installing-lttng,Installation>>** describes the steps to
install the LTTng packages on common Linux distributions and from
source.
You can skip this section if you already properly installed LTTng on
your target system.
38 * **<<getting-started,Quick start>>** is a concise guide to
39 getting started quickly with LTTng kernel and user space tracing.
We recommend this section if you're new to LTTng or to software tracing
in general.

You can skip this section if you're not new to LTTng.
* **<<core-concepts,Core concepts>>** explains the concepts at
the heart of LTTng.
49 It's a good idea to become familiar with the core concepts
50 before attempting to use the toolkit.
52 * **<<plumbing,Components of LTTng>>** describes the various components
53 of the LTTng machinery, like the daemons, the libraries, and the
54 command-line interface.
55 * **<<instrumenting,Instrumentation>>** shows different ways to
56 instrument user applications and the Linux kernel.
Instrumenting source code is essential to provide a meaningful
source of events.

You can skip this section if you do not have a programming background.
63 * **<<controlling-tracing,Tracing control>>** is divided into topics
64 which demonstrate how to use the vast array of features that
65 LTTng{nbsp}{revision} offers.
66 * **<<reference,Reference>>** contains reference tables.
67 * **<<glossary,Glossary>>** is a specialized dictionary of terms related
68 to LTTng or to the field of software tracing.
71 include::../common/convention.txt[]
74 include::../common/acknowledgements.txt[]
78 == What's new in LTTng {revision}?
80 LTTng{nbsp}{revision} bears the name _Joannès_. A Berliner Weisse style
81 beer from the http://letreflenoir.com/[Trèfle Noir] microbrewery in
82 https://en.wikipedia.org/wiki/Rouyn-Noranda[Rouyn-Noranda], the
83 https://www.beeradvocate.com/beer/profile/20537/238967/[_**Joannès**_]
is a tangy beer with a distinct pink hue and intense fruit flavor,
thanks to the presence of fresh blackcurrants grown in Témiscamingue.
87 New features and changes in LTTng{nbsp}{revision}:
89 * **Tracing control**:
90 ** You can override the name or the URL of a tracing session
91 configuration when you use man:lttng-load(1) thanks to the new
92 opt:lttng-load(1):--override-name and
93 opt:lttng-load(1):--override-url options.
94 ** The new `lttng regenerate` command replaces the now deprecated
95 `lttng metadata` command of LTTng 2.8. man:lttng-regenerate(1) can
96 also <<regenerate-statedump,generate the state dump event records>>
97 of a given tracing session on demand, a handy feature when
98 <<taking-a-snapshot,taking a snapshot>>.
99 ** You can add PMU counters by raw ID with man:lttng-add-context(1):
104 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
The format of the raw ID is the same as the one used with
man:perf-record(1). See <<adding-context,Add context fields to a
channel>> for more information.
112 ** The LTTng <<lttng-relayd,relay daemon>> is now supported on
113 OS{nbsp}X and macOS for a smoother integration within a trace
114 analysis workflow, regardless of the platform used.
116 * **User space tracing**:
117 ** Improved performance (tested on x86-64 and ARMv7-A
(https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])).
120 ** New helper library (`liblttng-ust-fd`) to help with
121 <<liblttng-ust-fd,applications which close file descriptors that
122 don't belong to them>>, for example, in a loop which closes file
123 descriptors after man:fork(2), or BSD's `closeall()`.
124 ** More accurate <<liblttng-ust-dl,dynamic linker instrumentation>> and
125 state dump event records, especially when a dynamically loaded
126 library manually loads its own dependencies.
127 ** New `ctf_*()` field definition macros (see man:lttng-ust(3)):
128 *** `ctf_array_hex()`
129 *** `ctf_array_network()`
130 *** `ctf_array_network_hex()`
131 *** `ctf_sequence_hex()`
132 *** `ctf_sequence_network()`
133 *** `ctf_sequence_network_hex()`
** New `lttng_ust_loaded` weak symbol defined by `liblttng-ust` for
an application to know if the LTTng-UST shared library is loaded:
143 int lttng_ust_loaded __attribute__((weak));
    if (lttng_ust_loaded) {
        puts("LTTng-UST is loaded!");
    } else {
        puts("LTTng-UST is not loaded!");
    }
158 ** LTTng-UST thread names have the `-ust` suffix.
160 * **Linux kernel tracing**:
161 ** Improved performance (tested on x86-64 and ARMv7-A
(https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])).
164 ** New enumeration <<lttng-modules-tp-fields,field definition macros>>:
165 `ctf_enum()` and `ctf_user_enum()`.
166 ** IPv4, IPv6, and TCP header data is recorded in the event records
167 produced by tracepoints starting with `net_`.
168 ** Detailed system call event records: `select`, `pselect6`, `poll`,
169 `ppoll`, `epoll_wait`, `epoll_pwait`, and `epoll_ctl` on all
170 architectures supported by LTTng-modules, and `accept4` on x86-64.
171 ** New I²C instrumentation: the `extract_sensitive_payload` parameter
172 of the new `lttng-probe-i2c` LTTng module controls whether or not
173 the payloads of I²C messages are recorded in I²C event records, since
174 they may contain sensitive data (for example, keystrokes).
** When the LTTng kernel modules are built into the Linux kernel image,
the `CONFIG_TRACEPOINTS` configuration option is automatically
selected.


[[nuts-and-bolts]]
== Nuts and bolts
183 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
184 generation_ is a modern toolkit for tracing Linux systems and
applications. So your first question might be: **what is tracing?**
192 As the history of software engineering progressed and led to what
193 we now take for granted--complex, numerous and
194 interdependent software applications running in parallel on
195 sophisticated operating systems like Linux--the authors of such
196 components, software developers, began feeling a natural
197 urge to have tools that would ensure the robustness and good performance
198 of their masterpieces.
200 One major achievement in this field is, inarguably, the
201 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
202 an essential tool for developers to find and fix bugs. But even the best
203 debugger won't help make your software run faster, and nowadays, faster
204 software means either more work done by the same hardware, or cheaper
205 hardware for the same work.
207 A _profiler_ is often the tool of choice to identify performance
bottlenecks. Profiling is suitable for identifying _where_ performance is
lost in a given piece of software. The profiler outputs a profile, a statistical
210 summary of observed events, which you may use to discover which
211 functions took the most time to execute. However, a profiler won't
212 report _why_ some identified functions are the bottleneck. Bottlenecks
213 might only occur when specific conditions are met, conditions that are
214 sometimes impossible to capture by a statistical profiler, or impossible
215 to reproduce with an application altered by the overhead of an
216 event-based profiler. For a thorough investigation of software
217 performance issues, a history of execution is essential, with the
218 recorded values of variables and context fields you choose, and
219 with as little influence as possible on the instrumented software. This
220 is where tracing comes in handy.
222 _Tracing_ is a technique used to understand what goes on in a running
223 software system. The software used for tracing is called a _tracer_,
224 which is conceptually similar to a tape recorder. When recording,
225 specific instrumentation points placed in the software source code
226 generate events that are saved on a giant tape: a _trace_ file. You
227 can trace user applications and the operating system at the same time,
228 opening the possibility of resolving a wide range of problems that would
229 otherwise be extremely challenging.
231 Tracing is often compared to _logging_. However, tracers and loggers are
232 two different tools, serving two different purposes. Tracers are
233 designed to record much lower-level events that occur much more
234 frequently than log messages, often in the range of thousands per
235 second, with very little execution overhead. Logging is more appropriate
236 for a very high-level analysis of less frequent events: user accesses,
237 exceptional conditions (errors and warnings, for example), database
238 transactions, instant messaging communications, and such. Simply put,
239 logging is one of the many use cases that can be satisfied with tracing.
241 The list of recorded events inside a trace file can be read manually
242 like a log file for the maximum level of detail, but it is generally
243 much more interesting to perform application-specific analyses to
244 produce reduced statistics and graphs that are useful to resolve a
given problem. Trace viewers and analyzers are specialized tools
designed to do exactly that.
248 In the end, this is what LTTng is: a powerful, open source set of
249 tools to trace the Linux kernel and user applications at the same time.
250 LTTng is composed of several components actively maintained and
251 developed by its link:/community/#where[community].
254 [[lttng-alternatives]]
255 === Alternatives to noch:{LTTng}
Excluding proprietary solutions, a few competing software tracers
exist for Linux:
* https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
user scripts and is responsible for loading code into the
Linux kernel for further execution and for collecting the output data.
264 * https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
265 subsystem in the Linux kernel in which a virtual machine can execute
266 programs passed from the user space to the kernel. You can attach
267 such programs to tracepoints and KProbes thanks to a system call, and
268 they can output data to the user space when executed thanks to
different mechanisms (pipe, VM register values, and eBPF maps, to name
a few).
271 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
272 is the de facto function tracer of the Linux kernel. Its user
273 interface is a set of special files in sysfs.
274 * https://perf.wiki.kernel.org/[perf] is
275 a performance analyzing tool for Linux which supports hardware
276 performance counters, tracepoints, as well as other counters and
types of probes. perf's controlling utility is the cmd:perf command-line
tool.
279 * http://linux.die.net/man/1/strace[strace]
280 is a command-line utility which records system calls made by a
281 user process, as well as signal deliveries and changes of process
282 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
283 to fulfill its function.
284 * http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
285 analyze Linux kernel events. You write scripts, or _chisels_ in
286 sysdig's jargon, in Lua and sysdig executes them while the system is
287 being traced or afterwards. sysdig's interface is the cmd:sysdig
288 command-line tool as well as the curses-based cmd:csysdig tool.
289 * https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
290 user space tracer which uses custom user scripts to produce plain text
291 traces. SystemTap converts the scripts to the C language, and then
292 compiles them as Linux kernel modules which are loaded to produce
trace data. SystemTap's primary user interface is the cmd:stap
command-line tool.
The main distinctive feature of LTTng is that it produces correlated
kernel and user space traces, and that it does so with the lowest
overhead among comparable solutions. It produces trace files in the
http://diamon.org/ctf[CTF] format, a file format optimized
for the production and analysis of multi-gigabyte data.
302 LTTng is the result of more than 10 years of active open source
303 development by a community of passionate developers.
LTTng{nbsp}{revision} is currently available on major desktop and server
Linux distributions.
The main interface for tracing control is a single command-line tool
named cmd:lttng. This tool can create several tracing sessions, enable
and disable events on the fly, filter events efficiently with custom
user expressions, start and stop tracing, and much more. LTTng can
record the traces on the file system or send them over the network, and
keep all of the trace data or only a part of it. You can view the traces
once tracing stops, or even in real time.
315 <<installing-lttng,Install LTTng now>> and
316 <<getting-started,start tracing>>!
322 **LTTng** is a set of software <<plumbing,components>> which interact to
323 <<instrumenting,instrument>> the Linux kernel and user applications, and
324 to <<controlling-tracing,control tracing>> (start and stop
325 tracing, enable and disable event rules, and the rest). Those
326 components are bundled into the following packages:
* **LTTng-tools**: Libraries and command-line interface to
control tracing sessions.
* **LTTng-modules**: Linux kernel modules to instrument and
trace the kernel.
* **LTTng-UST**: Libraries and Java/Python packages to instrument and
trace user applications.
Most distributions mark the LTTng-modules and LTTng-UST packages as
optional when installing LTTng-tools (which is always required). In the
following sections, we always provide the steps to install all three,
but note that:

* You only need to install LTTng-modules if you intend to trace the
Linux kernel.
* You only need to install LTTng-UST if you intend to trace user
applications.
346 .Availability of LTTng{nbsp}{revision} for major Linux distributions as of 22 January 2018.
348 |Distribution |Available in releases |Alternatives
350 |https://www.ubuntu.com/[Ubuntu]
351 |<<ubuntu,Ubuntu{nbsp}17.04 _Zesty Zapus_ and Ubuntu{nbsp}17.10 _Artful Aardvark_>>.
353 Ubuntu{nbsp}14.04 _Trusty Tahr_ and Ubuntu{nbsp}16.04 _Xenial Xerus_:
354 <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
355 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
356 other Ubuntu releases.
358 |https://getfedora.org/[Fedora]
359 |<<fedora,Fedora{nbsp}26>>.
360 |link:/docs/v2.10#doc-fedora[LTTng{nbsp}2.10 for Fedora{nbsp}27].
362 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
363 other Fedora releases.
365 |https://www.debian.org/[Debian]
366 |<<debian,Debian "stretch" (stable)>>.
367 |link:/docs/v2.10#doc-debian[LTTng{nbsp}2.10 for Debian "buster" (testing)
368 and Debian "sid" (unstable)].
371 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
372 other Debian releases.
374 |https://www.archlinux.org/[Arch Linux]
376 |link:/docs/v2.10#doc-arch-linux[LTTng{nbsp}2.10 for the current Arch Linux build].
378 <<building-from-source,Build LTTng{nbsp}{revision} from source>>.
380 |https://alpinelinux.org/[Alpine Linux]
382 |link:/docs/v2.10#doc-alpine-linux[LTTng{nbsp}2.10 for Alpine Linux{nbsp}3.7
383 and Alpine Linux{nbsp}"edge"].
385 <<building-from-source,Build LTTng{nbsp}{revision} from source>>.
387 |https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
388 |See http://packages.efficios.com/[EfficiOS Enterprise Packages].
391 |https://buildroot.org/[Buildroot]
392 |<<"buildroot", "Buildroot{nbsp}2017.02, Buildroot{nbsp}2017.05, Buildroot{nbsp}2017.08, and Buildroot{nbsp}2017.11">>.
393 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
394 other Buildroot releases.
396 |http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
397 https://www.yoctoproject.org/[Yocto]
398 |<<oe-yocto,Yocto Project{nbsp}2.3 _Pyro_ and Yocto Project{nbsp}2.4 _Rocko_>>
399 (`openembedded-core` layer).
400 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
401 other Yocto/OpenEmbedded releases.
406 === [[ubuntu-official-repositories]]Ubuntu
408 LTTng{nbsp}{revision} is available on Ubuntu{nbsp}17.04 _Zesty Zapus_
409 and Ubuntu{nbsp}17.10 _Artful Aardvark_. For previous releases of
410 Ubuntu, <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
412 To install LTTng{nbsp}{revision} on Ubuntu{nbsp}17.04 _Zesty Zapus_:
414 . Install the main LTTng{nbsp}{revision} packages:
419 # apt-get install lttng-tools
420 # apt-get install lttng-modules-dkms
421 # apt-get install liblttng-ust-dev
425 . **If you need to instrument and trace
426 <<java-application,Java applications>>**, install the LTTng-UST
432 # apt-get install liblttng-ust-agent-java
436 . **If you need to instrument and trace
437 <<python-application,Python{nbsp}3 applications>>**, install the
438 LTTng-UST Python agent:
443 # apt-get install python3-lttngust
449 ==== noch:{LTTng} Stable {revision} PPA
451 The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
452 Stable{nbsp}{revision} PPA] offers the latest stable
453 LTTng{nbsp}{revision} packages for:
455 * Ubuntu{nbsp}14.04 _Trusty Tahr_
456 * Ubuntu{nbsp}16.04 _Xenial Xerus_
458 To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:
. Add the LTTng Stable{nbsp}{revision} PPA repository and update the
list of packages:
# apt-add-repository ppa:lttng/stable-2.9
# apt-get update
471 . Install the main LTTng{nbsp}{revision} packages:
476 # apt-get install lttng-tools
477 # apt-get install lttng-modules-dkms
478 # apt-get install liblttng-ust-dev
482 . **If you need to instrument and trace
483 <<java-application,Java applications>>**, install the LTTng-UST
489 # apt-get install liblttng-ust-agent-java
493 . **If you need to instrument and trace
494 <<python-application,Python{nbsp}3 applications>>**, install the
495 LTTng-UST Python agent:
500 # apt-get install python3-lttngust
508 To install LTTng{nbsp}{revision} on Fedora{nbsp}26:
. Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
packages:
516 # yum install lttng-tools
517 # yum install lttng-ust
521 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
527 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
528 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
537 .Java and Python application instrumentation and tracing
539 If you need to instrument and trace <<java-application,Java
540 applications>> on Fedora, you need to build and install
541 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
542 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
543 `--enable-java-agent-all` options to the `configure` script, depending
544 on which Java logging framework you use.
546 If you need to instrument and trace <<python-application,Python
547 applications>> on Fedora, you need to build and install
548 LTTng-UST{nbsp}{revision} from source and pass the
549 `--enable-python-agent` option to the `configure` script.
556 To install LTTng{nbsp}{revision} on Debian "stretch" (stable):
558 . Install the main LTTng{nbsp}{revision} packages:
563 # apt-get install lttng-modules-dkms
564 # apt-get install liblttng-ust-dev
565 # apt-get install lttng-tools
569 . **If you need to instrument and trace <<java-application,Java
570 applications>>**, install the LTTng-UST Java agent:
575 # apt-get install liblttng-ust-agent-java
579 . **If you need to instrument and trace <<python-application,Python
580 applications>>**, install the LTTng-UST Python agent:
585 # apt-get install python3-lttngust
590 [[enterprise-distributions]]
591 === RHEL, SUSE, and other enterprise distributions
593 To install LTTng on enterprise Linux distributions, such as Red Hat
594 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
595 see http://packages.efficios.com/[EfficiOS Enterprise Packages].
601 To install LTTng{nbsp}{revision} on Buildroot{nbsp}2017.02,
602 Buildroot{nbsp}2017.05, Buildroot{nbsp}2017.08, or
603 Buildroot{nbsp}2017.11:
605 . Launch the Buildroot configuration tool:
614 . In **Kernel**, check **Linux kernel**.
615 . In **Toolchain**, check **Enable WCHAR support**.
616 . In **Target packages**{nbsp}→ **Debugging, profiling and benchmark**,
617 check **lttng-modules** and **lttng-tools**.
618 . In **Target packages**{nbsp}→ **Libraries**{nbsp}→
619 **Other**, check **lttng-libust**.
623 === OpenEmbedded and Yocto
625 LTTng{nbsp}{revision} recipes are available in the
626 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
627 layer for Yocto Project{nbsp}2.3 _Pyro_ and Yocto Project{nbsp}2.4 _Rocko_
under the following names:

* `lttng-tools`
* `lttng-modules`
* `lttng-ust`
634 With BitBake, the simplest way to include LTTng recipes in your target
635 image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:
638 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
643 . Select a machine and an image recipe.
644 . Click **Edit image recipe**.
645 . Under the **All recipes** tab, search for **lttng**.
646 . Check the desired LTTng recipes.
649 .Java and Python application instrumentation and tracing
651 If you need to instrument and trace <<java-application,Java
652 applications>> on Yocto/OpenEmbedded, you need to build and install
653 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
654 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
655 `--enable-java-agent-all` options to the `configure` script, depending
656 on which Java logging framework you use.
658 If you need to instrument and trace <<python-application,Python
659 applications>> on Yocto/OpenEmbedded, you need to build and install
660 LTTng-UST{nbsp}{revision} from source and pass the
661 `--enable-python-agent` option to the `configure` script.
665 [[building-from-source]]
666 === Build from source
668 To build and install LTTng{nbsp}{revision} from source:
670 . Using your distribution's package manager, or from source, install
671 the following dependencies of LTTng-tools and LTTng-UST:
674 * https://sourceforge.net/projects/libuuid/[libuuid]
675 * http://directory.fsf.org/wiki/Popt[popt]
676 * http://liburcu.org/[Userspace RCU]
677 * http://www.xmlsoft.org/[libxml2]
680 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
686 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
687 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.* &&
make &&
sudo make modules_install &&
695 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
701 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
702 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
703 cd lttng-ust-2.9.* &&
713 .Java and Python application tracing
715 If you need to instrument and trace <<java-application,Java
716 applications>>, pass the `--enable-java-agent-jul`,
717 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
718 `configure` script, depending on which Java logging framework you use.
If you need to instrument and trace <<python-application,Python
applications>>, pass the `--enable-python-agent` option to the
`configure` script. You can set the `PYTHON` environment variable to the
path to the Python interpreter for which to install the LTTng-UST Python
agent.
731 By default, LTTng-UST libraries are installed to
732 dir:{/usr/local/lib}, which is the de facto directory in which to
733 keep self-compiled and third-party libraries.
735 When <<building-tracepoint-providers-and-user-application,linking an
736 instrumented user application with `liblttng-ust`>>:
* Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
variable.
740 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
741 man:gcc(1), man:g++(1), or man:clang(1).
745 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
751 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
752 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
753 cd lttng-tools-2.9.* &&
761 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
762 previous steps automatically for a given version of LTTng and confine
763 the installed files in a specific directory. This can be useful to test
764 LTTng without installing it on your system.
This is a short guide to get started quickly with LTTng kernel and user
space tracing.
Before you follow this guide, make sure to <<installing-lttng,install>>
LTTng.
776 This tutorial walks you through the steps to:
778 . <<tracing-the-linux-kernel,Trace the Linux kernel>>.
. <<tracing-your-own-user-application,Trace a user application>> written
in C.
. <<viewing-and-analyzing-your-traces,View and analyze the
recorded events>>.
785 [[tracing-the-linux-kernel]]
786 === Trace the Linux kernel
788 The following command lines start with the `#` prompt because you need
789 root privileges to trace the Linux kernel. You can also trace the kernel
790 as a regular user if your Unix user is a member of the
791 <<tracing-group,tracing group>>.
793 . Create a <<tracing-session,tracing session>> which writes its traces
794 to dir:{/tmp/my-kernel-trace}:
799 # lttng create my-kernel-session --output=/tmp/my-kernel-trace
803 . List the available kernel tracepoints and system calls:
808 # lttng list --kernel
809 # lttng list --kernel --syscall
813 . Create <<event,event rules>> which match the desired instrumentation
814 point names, for example the `sched_switch` and `sched_process_fork`
815 tracepoints, and the man:open(2) and man:close(2) system calls:
820 # lttng enable-event --kernel sched_switch,sched_process_fork
821 # lttng enable-event --kernel --syscall open,close
825 You can also create an event rule which matches _all_ the Linux kernel
826 tracepoints (this will generate a lot of data when tracing):
831 # lttng enable-event --kernel --all
835 . <<basic-tracing-session-control,Start tracing>>:
844 . Do some operation on your system for a few seconds. For example,
845 load a website, or list the files of a directory.
. <<basic-tracing-session-control,Stop tracing>> and destroy the
tracing session:
857 The man:lttng-destroy(1) command does not destroy the trace data; it
858 only destroys the state of the tracing session.
. For the sake of this example, make the recorded trace accessible to
the current Unix user:
870 See <<viewing-and-analyzing-your-traces,View and analyze the
871 recorded events>> to view the recorded events.
874 [[tracing-your-own-user-application]]
875 === Trace a user application
877 This section steps you through a simple example to trace a
878 _Hello world_ program written in C.
880 To create the traceable user application:
882 . Create the tracepoint provider header file, which defines the
883 tracepoints and the events they can generate:
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(
        int, my_integer_arg,
        char *, my_string_arg
    ),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
919 . Create the tracepoint provider package source file:
925 #define TRACEPOINT_CREATE_PROBES
926 #define TRACEPOINT_DEFINE
928 #include "hello-tp.h"
932 . Build the tracepoint provider package:
937 $ gcc -c -I. hello-tp.c
941 . Create the _Hello World_ application source file:
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    int x;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call is only placed here for the purpose
     * of this demonstration, to pause the application in order for
     * you to have time to list its tracepoints. It is not needed
     * otherwise.
     */
    getchar();

    /*
     * A tracepoint() call.
     *
     * Arguments, as defined in hello-tp.h:
     *
     * 1. Tracepoint provider name (required)
     * 2. Tracepoint name (required)
     * 3. my_integer_arg (first user-defined argument)
     * 4. my_string_arg (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * NOT strings: they are in fact parts of variables that the
     * macros in hello-tp.h create.
     */
    tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");

    for (x = 0; x < argc; ++x) {
        tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
    }

    puts("Quitting now!");
    tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");

    return 0;
}
992 . Build the application:
1001 . Link the application with the tracepoint provider package,
1002 `liblttng-ust`, and `libdl`:
1007 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1011 Here's the whole build process:
1014 .User space tracing tutorial's build steps.
1015 image::ust-flow.png[]
1017 To trace the user application:
1019 . Run the application with a few arguments:
1024 $ ./hello world and beyond
1033 Press Enter to continue...
1037 . Start an LTTng <<lttng-sessiond,session daemon>>:
1042 $ lttng-sessiond --daemonize
1046 Note that a session daemon might already be running, for example as
1047 a service that the distribution's service manager started.
1049 . List the available user space tracepoints:
1054 $ lttng list --userspace
1058 You see the `hello_world:my_first_tracepoint` tracepoint listed
1059 under the `./hello` process.
1061 . Create a <<tracing-session,tracing session>>:
1066 $ lttng create my-user-space-session
1070 . Create an <<event,event rule>> which matches the
1071 `hello_world:my_first_tracepoint` event name:
1076 $ lttng enable-event --userspace hello_world:my_first_tracepoint
1080 . <<basic-tracing-session-control,Start tracing>>:
1089 . Go back to the running `hello` application and press Enter. The
1090 program executes all `tracepoint()` instrumentation points and exits.
1091 . <<basic-tracing-session-control,Stop tracing>> and destroy the
1102 The man:lttng-destroy(1) command does not destroy the trace data; it
1103 only destroys the state of the tracing session.
1105 By default, LTTng saves the traces in
1106 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
1107 where +__name__+ is the tracing session name. The
1108 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
1110 See <<viewing-and-analyzing-your-traces,View and analyze the
1111 recorded events>> to view the recorded events.
1114 [[viewing-and-analyzing-your-traces]]
1115 === View and analyze the recorded events
1117 Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
1118 kernel>> and <<tracing-your-own-user-application,Trace a user
1119 application>> tutorials, you can inspect the recorded events.
1121 Many tools are available to read LTTng traces:
1123 * **cmd:babeltrace** is a command-line utility which converts trace
1124 formats; it supports the format that LTTng produces, CTF, as well as a
1125 basic text output which can be ++grep++ed. The cmd:babeltrace command
1126 is part of the http://diamon.org/babeltrace[Babeltrace] project.
1127 * Babeltrace also includes
1128 **https://www.python.org/[Python] bindings** so
1129 that you can easily open and read an LTTng trace with your own script,
1130 benefiting from the power of Python.
1131 * http://tracecompass.org/[**Trace Compass**]
1132 is a graphical user interface for viewing and analyzing any type of
1133 logs or traces, including LTTng's.
1134 * https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
1135 project which includes many high-level analyses of LTTng kernel
1136 traces, like scheduling statistics, interrupt frequency distribution,
1137 top CPU usage, and more.
1139 NOTE: This section assumes that the traces recorded during the previous
1140 tutorials were saved to their default location, in the
1141 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
1142 environment variable defaults to `$HOME` if not set.
1145 [[viewing-and-analyzing-your-traces-bt]]
1146 ==== Use the cmd:babeltrace command-line tool
1148 The simplest way to list all the recorded events of a trace is to pass
1149 its path to cmd:babeltrace with no options:
----
$ babeltrace ~/lttng-traces/my-user-space-session*
----
1156 cmd:babeltrace finds all traces recursively within the given path and
1157 prints all their events, merging them in chronological order.
You can pipe the output of cmd:babeltrace into a tool like man:grep(1)
for further filtering:

----
$ babeltrace /tmp/my-kernel-trace | grep _switch
----
1167 You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
1168 count the recorded events:
----
$ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
----
1176 [[viewing-and-analyzing-your-traces-bt-python]]
1177 ==== Use the Babeltrace Python bindings
1179 The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
1180 is useful to isolate events by simple matching using man:grep(1) and
1181 similar utilities. However, more elaborate filters, such as keeping only
1182 event records with a field value falling within a specific range, are
1183 not trivial to write using a shell. Moreover, reductions and even the
1184 most basic computations involving multiple event records are virtually
1185 impossible to implement.
Fortunately, Babeltrace ships with Python 3 bindings which make it easy
1188 to read the event records of an LTTng trace sequentially and compute the
1189 desired information.
1191 The following script accepts an LTTng Linux kernel trace path as its
1192 first argument and prints the short names of the top 5 running processes
1193 on CPU 0 during the whole trace:
[source,python]
----
from collections import Counter
import babeltrace
import sys


def top5proc():
    if len(sys.argv) != 2:
        msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
        print(msg, file=sys.stderr)
        return False

    # A trace collection contains one or more traces
    col = babeltrace.TraceCollection()

    # Add the trace provided by the user (LTTng traces always have
    # the 'ctf' format)
    if col.add_trace(sys.argv[1], 'ctf') is None:
        raise RuntimeError('Cannot add trace')

    # This counter dict contains execution times:
    #   task command name -> total execution time (ns)
    exec_times = Counter()

    # This contains the last `sched_switch` timestamp
    last_ts = None

    for event in col.events:
        # Keep only `sched_switch` events
        if event.name != 'sched_switch':
            continue

        # Keep only events which happened on CPU 0
        if event['cpu_id'] != 0:
            continue

        # Event timestamp
        cur_ts = event.timestamp

        if last_ts is None:
            last_ts = cur_ts

        # Previous task command (short) name
        prev_comm = event['prev_comm']

        # Initialize entry in our dict if not yet done
        if prev_comm not in exec_times:
            exec_times[prev_comm] = 0

        # Compute previous command execution time
        diff = cur_ts - last_ts

        # Update execution time of this command
        exec_times[prev_comm] += diff

        # Update last timestamp
        last_ts = cur_ts

    # Print the top 5 processes, converting nanoseconds to seconds
    for name, ns in exec_times.most_common(5):
        s = ns / 1000000000
        print('{:20}{} s'.format(name, s))

    return True


if __name__ == '__main__':
    sys.exit(0 if top5proc() else 1)
----
----
$ python3 top5proc.py /tmp/my-kernel-trace/kernel
----
----
swapper/0           48.607245889 s
chromium            7.192738188 s
pavucontrol         0.709894415 s
Compositor          0.660867933 s
Xorg.bin            0.616753786 s
----
Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
weren't using the CPU that much when tracing, its first position in the
list makes sense.
1293 == [[understanding-lttng]]Core concepts
From a user's perspective, the LTTng system is built on a few concepts,
or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
operates by sending commands to the <<lttng-sessiond,session daemon>>.
Understanding how those objects relate to each other is key to mastering
LTTng.
1301 The core concepts are:
1303 * <<tracing-session,Tracing session>>
1304 * <<domain,Tracing domain>>
1305 * <<channel,Channel and ring buffer>>
1306 * <<"event","Instrumentation point, event rule, event, and event record">>
[[tracing-session]]
=== Tracing session

A _tracing session_ is a stateful dialogue between you and
1313 a <<lttng-sessiond,session daemon>>. You can
1314 <<creating-destroying-tracing-sessions,create a new tracing
1315 session>> with the `lttng create` command.
1317 Anything that you do when you control LTTng tracers happens within a
1318 tracing session. In particular, a tracing session:
1321 * Has its own set of trace files.
1322 * Has its own state of activity (started or stopped).
* Has its own <<tracing-session-mode,mode>> (local, network streaming,
  snapshot, or live).
1325 * Has its own <<channel,channels>> which have their own
1326 <<event,event rules>>.
1329 .A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1330 image::concepts.png[]
Those attributes and objects are completely isolated between different
tracing sessions.
1335 A tracing session is analogous to a cash machine session:
1336 the operations you do on the banking system through the cash machine do
1337 not alter the data of other users of the same system. In the case of
1338 the cash machine, a session lasts as long as your bank card is inside.
1339 In the case of LTTng, a tracing session lasts from the `lttng create`
1340 command to the `lttng destroy` command.
1343 .Each Unix user has its own set of tracing sessions.
1344 image::many-sessions.png[]
1347 [[tracing-session-mode]]
1348 ==== Tracing session mode
1350 LTTng can send the generated trace data to different locations. The
1351 _tracing session mode_ dictates where to send it. The following modes
1352 are available in LTTng{nbsp}{revision}:
Local mode::
LTTng writes the traces to the file system of the machine being traced
(target system).
1358 Network streaming mode::
1359 LTTng sends the traces over the network to a
1360 <<lttng-relayd,relay daemon>> running on a remote system.
Snapshot mode::
LTTng does not write the traces by default. Instead, you can request
1364 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1365 current tracing buffers, and to write it to the target's file system
1366 or to send it over the network to a <<lttng-relayd,relay daemon>>
1367 running on a remote system.
Live mode::
This mode is similar to the network streaming mode, but a live
1371 trace viewer can connect to the distant relay daemon to
<<lttng-live,view event records as LTTng generates them>> by design.
[[domain]]
=== Tracing domain

A _tracing domain_ is a namespace for event sources. A tracing domain
1380 has its own properties and features.
There are currently five available tracing domains:

* Linux kernel
* User space
* `java.util.logging` (JUL)
* log4j
* Python
1390 You must specify a tracing domain when using some commands to avoid
1391 ambiguity. For example, since all the domains support named tracepoints
1392 as event sources (instrumentation points that you manually insert in the
1393 source code), you need to specify a tracing domain when
1394 <<enabling-disabling-events,creating an event rule>> because all the
1395 tracing domains could have tracepoints with the same names.
1397 Some features are reserved to specific tracing domains. Dynamic function
1398 entry and return instrumentation points, for example, are currently only
1399 supported in the Linux kernel tracing domain, but support for other
1400 tracing domains could be added in the future.
1402 You can create <<channel,channels>> in the Linux kernel and user space
tracing domains. The other tracing domains have a single default
channel.
[[channel]]
=== Channel and ring buffer
1410 A _channel_ is an object which is responsible for a set of ring buffers.
1411 Each ring buffer is divided into multiple sub-buffers. When an LTTng
1412 tracer emits an event, it can record it to one or more
1413 sub-buffers. The attributes of a channel determine what to do when
1414 there's no space left for a new event record because all sub-buffers
1415 are full, where to send a full sub-buffer, and other behaviours.
1417 A channel is always associated to a <<domain,tracing domain>>. The
1418 `java.util.logging` (JUL), log4j, and Python tracing domains each have
1419 a default channel which you cannot configure.
1421 A channel also owns <<event,event rules>>. When an LTTng tracer emits
1422 an event, it records it to the sub-buffers of all
1423 the enabled channels with a satisfied event rule, as long as those
1424 channels are part of active <<tracing-session,tracing sessions>>.
1427 [[channel-buffering-schemes]]
1428 ==== Per-user vs. per-process buffering schemes
A channel has at least one ring buffer _per CPU_. LTTng always
records an event to the ring buffer associated with the CPU on which it
occurs.
1434 Two _buffering schemes_ are available when you
1435 <<enabling-disabling-channels,create a channel>> in the
1436 user space <<domain,tracing domain>>:
1438 Per-user buffering::
1439 Allocate one set of ring buffers--one per CPU--shared by all the
1440 instrumented processes of each Unix user.
1444 .Per-user buffering scheme.
1445 image::per-user-buffering.png[]
1448 Per-process buffering::
1449 Allocate one set of ring buffers--one per CPU--for each
1450 instrumented process.
1454 .Per-process buffering scheme.
1455 image::per-process-buffering.png[]
1458 The per-process buffering scheme tends to consume more memory than the
1459 per-user option because systems generally have more instrumented
1460 processes than Unix users running instrumented processes. However, the
1461 per-process buffering scheme ensures that one process having a high
event throughput won't fill all the shared sub-buffers of the same
user, only its own.
1465 The Linux kernel tracing domain has only one available buffering scheme
1466 which is to allocate a single set of ring buffers for the whole system.
1467 This scheme is similar to the per-user option, but with a single, global
1468 user "running" the kernel.
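The memory trade-off between the two user space buffering schemes can be sketched with back-of-the-envelope arithmetic; all the numbers below are hypothetical, not LTTng defaults:

[source,python]
----
MIB = 1024 * 1024

def ring_buffer_memory(nowners, ncpus, subbuf_count, subbuf_size):
    """One set of ring buffers (one per CPU) per 'owner': the owner is
    a Unix user (per-user) or an instrumented process (per-process)."""
    return nowners * ncpus * subbuf_count * subbuf_size

NCPUS = 4
SUBBUF_COUNT = 4
SUBBUF_SIZE = 1 * MIB

# Per-user: a single set of buffers shared by all of one user's processes
per_user = ring_buffer_memory(1, NCPUS, SUBBUF_COUNT, SUBBUF_SIZE)

# Per-process: one set for each of, say, 10 instrumented processes
per_process = ring_buffer_memory(10, NCPUS, SUBBUF_COUNT, SUBBUF_SIZE)

print(per_user // MIB, 'MiB')     # 16 MiB
print(per_process // MIB, 'MiB')  # 160 MiB
----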
1471 [[channel-overwrite-mode-vs-discard-mode]]
1472 ==== Overwrite vs. discard event loss modes
1474 When an event occurs, LTTng records it to a specific sub-buffer (yellow
1475 arc in the following animation) of a specific channel's ring buffer.
1476 When there's no space left in a sub-buffer, the tracer marks it as
1477 consumable (red) and another, empty sub-buffer starts receiving the
1478 following event records. A <<lttng-consumerd,consumer daemon>>
1479 eventually consumes the marked sub-buffer (returns to white).
1482 [role="docsvg-channel-subbuf-anim"]
1487 In an ideal world, sub-buffers are consumed faster than they are filled,
1488 as is the case in the previous animation. In the real world,
1489 however, all sub-buffers can be full at some point, leaving no space to
1490 record the following events.
1492 By design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer is
1493 available, it is acceptable to lose event records when the alternative
1494 would be to cause substantial delays in the instrumented application's
1495 execution. LTTng privileges performance over integrity; it aims at
1496 perturbing the traced system as little as possible in order to make
1497 tracing of subtle race conditions and rare interrupt cascades possible.
1499 When it comes to losing event records because no empty sub-buffer is
1500 available, the channel's _event loss mode_ determines what to do. The
1501 available event loss modes are:
Discard mode::
Drop the newest event records until the tracer
releases a sub-buffer.
Overwrite mode::
Clear the sub-buffer containing the oldest event records and start
writing the newest event records there.
This mode is sometimes called _flight recorder mode_ because it's
similar to a
https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
always keep a fixed amount of the latest data.
1516 Which mechanism you should choose depends on your context: prioritize
1517 the newest or the oldest event records in the ring buffer?
Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
as soon as there's no space left for a new event record, whereas in
discard mode, the tracer only discards the event record that doesn't
fit.
1524 In discard mode, LTTng increments a count of lost event records when an
1525 event record is lost and saves this count to the trace. In overwrite
1526 mode, since LTTng 2.8, LTTng increments a count of lost sub-buffers when
1527 a sub-buffer is lost and saves this count to the trace. In this mode,
1528 the exact number of lost event records in those lost sub-buffers is not
1529 saved to the trace. Trace analyses can use the trace's saved discarded
1530 event record and sub-buffer counts to decide whether or not to perform
1531 the analyses even if trace data is known to be missing.
There are a few ways to decrease your probability of losing event
records.
<<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
how you can fine-tune the sub-buffer count and size of a channel to
virtually stop losing event records, though at the cost of greater
memory usage.
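To make the two event loss modes concrete, here is a toy simulation, not LTTng's actual ring buffer implementation, of sub-buffers filling up while no consumer ever runs:

[source,python]
----
from collections import deque

def fill(events, subbuf_count, subbuf_size, mode):
    """Toy model: write `events` into `subbuf_count` sub-buffers of
    `subbuf_size` event records each; return (kept_records, lost)."""
    full = deque()  # full sub-buffers, oldest first
    cur = []        # sub-buffer currently being written
    lost = 0

    for event in events:
        if len(cur) == subbuf_size:  # current sub-buffer is full
            if len(full) == subbuf_count - 1:  # whole ring is full
                if mode == 'discard':
                    lost += 1        # drop the newest event record
                    continue
                # overwrite: clear the oldest sub-buffer entirely
                lost += len(full.popleft())
            full.append(cur)
            cur = []
        cur.append(event)

    full.append(cur)
    return [ev for buf in full for ev in buf], lost

print(fill(range(8), 2, 2, 'discard'))    # ([0, 1, 2, 3], 4)
print(fill(range(8), 2, 2, 'overwrite'))  # ([4, 5, 6, 7], 4)
----

For the same total loss, discard mode keeps the oldest event records while overwrite mode keeps the newest ones.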
1541 [[channel-subbuf-size-vs-subbuf-count]]
1542 ==== Sub-buffer count and size
1544 When you <<enabling-disabling-channels,create a channel>>, you can
1545 set its number of sub-buffers and their size.
1547 Note that there is noticeable CPU overhead introduced when
1548 switching sub-buffers (marking a full one as consumable and switching
1549 to an empty one for the following events to be recorded). Knowing this,
1550 the following list presents a few practical situations along with how
1551 to configure the sub-buffer count and size for them:
1553 * **High event throughput**: In general, prefer bigger sub-buffers to
1554 lower the risk of losing event records.
1556 Having bigger sub-buffers also ensures a lower
1557 <<channel-switch-timer,sub-buffer switching frequency>>.
1559 The number of sub-buffers is only meaningful if you create the channel
1560 in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1561 other sub-buffers are left unaltered.
1563 * **Low event throughput**: In general, prefer smaller sub-buffers
1564 since the risk of losing event records is low.
1566 Because events occur less frequently, the sub-buffer switching frequency
should remain low and thus the tracer's overhead should not be a
problem.
1570 * **Low memory system**: If your target system has a low memory
1571 limit, prefer fewer first, then smaller sub-buffers.
1573 Even if the system is limited in memory, you want to keep the
sub-buffers as big as possible to avoid a high sub-buffer switching
frequency.
Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
which means event data is very compact. For example, the average
LTTng kernel event record weighs about 32{nbsp}bytes. Thus, a
sub-buffer size of 1{nbsp}MiB is considered big.
1582 The previous situations highlight the major trade-off between a few big
1583 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1584 frequency vs. how much data is lost in overwrite mode. Assuming a
1585 constant event throughput and using the overwrite mode, the two
1586 following configurations have the same ring buffer total size:
1589 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1594 * **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1595 switching frequency, but if a sub-buffer overwrite happens, half of
1596 the event records so far (4{nbsp}MiB) are definitely lost.
* **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
overhead of the previous configuration, but if a sub-buffer
overwrite happens, only one eighth of the event records so far
(1{nbsp}MiB) are definitely lost.
In discard mode, the sub-buffer count parameter is pointless: use two
sub-buffers and set their size according to the requirements of your
situation.
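The two configurations above can also be compared numerically, using the rough 32-byte average event record size quoted earlier:

[source,python]
----
MIB = 1024 * 1024
AVG_EVENT_SIZE = 32  # rough average kernel event record size (bytes)

def overwrite_loss(subbuf_count, subbuf_size):
    """Event records definitely lost when one sub-buffer is
    overwritten, and the fraction of the ring buffer that represents."""
    return subbuf_size // AVG_EVENT_SIZE, 1 / subbuf_count

few_big = overwrite_loss(2, 4 * MIB)     # 2 sub-buffers of 4 MiB
many_small = overwrite_loss(8, 1 * MIB)  # 8 sub-buffers of 1 MiB

print(few_big)     # (131072, 0.5):  ~131 k records, half the buffer
print(many_small)  # (32768, 0.125): ~33 k records, one eighth
----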
1607 [[channel-switch-timer]]
1608 ==== Switch timer period
1610 The _switch timer period_ is an important configurable attribute of
1611 a channel to ensure periodic sub-buffer flushing.
1613 When the _switch timer_ expires, a sub-buffer switch happens. You can
1614 set the switch timer period attribute when you
1615 <<enabling-disabling-channels,create a channel>> to ensure that event
1616 data is consumed and committed to trace files or to a distant relay
1617 daemon periodically in case of a low event throughput.
1620 [role="docsvg-channel-switch-timer"]
This attribute is also convenient when you use big sub-buffers to cope
with a sporadic high event throughput, even if the throughput is
normally lower.
1630 [[channel-read-timer]]
1631 ==== Read timer period
1633 By default, the LTTng tracers use a notification mechanism to signal a
1634 full sub-buffer so that a consumer daemon can consume it. When such
1635 notifications must be avoided, for example in real-time applications,
1636 you can use the channel's _read timer_ instead. When the read timer
1637 fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1638 consumable sub-buffers.
1641 [[tracefile-rotation]]
1642 ==== Trace file count and size
1644 By default, trace files can grow as large as needed. You can set the
1645 maximum size of each trace file that a channel writes when you
1646 <<enabling-disabling-channels,create a channel>>. When the size of
1647 a trace file reaches the channel's fixed maximum size, LTTng creates
1648 another file to contain the next event records. LTTng appends a file
1649 count to each trace file name in this case.
1651 If you set the trace file size attribute when you create a channel, the
1652 maximum number of trace files that LTTng creates is _unlimited_ by
1653 default. To limit them, you can also set a maximum number of trace
1654 files. When the number of trace files reaches the channel's fixed
1655 maximum count, the oldest trace file is overwritten. This mechanism is
1656 called _trace file rotation_.
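Trace file rotation therefore implies a hard upper bound on disk usage, sketched below; treating the number of trace data streams per channel (typically one per CPU) as a multiplier is an assumption of this sketch:

[source,python]
----
MIB = 1024 * 1024

def max_disk_usage(max_file_size, max_file_count, nstreams):
    """Upper bound on the disk space a channel's trace files can use
    once rotation kicks in, assuming `nstreams` trace data streams."""
    return max_file_size * max_file_count * nstreams

# Hypothetical channel: 16 MiB files, at most 8 files, 4 CPUs
print(max_disk_usage(16 * MIB, 8, 4) // MIB, 'MiB')  # 512 MiB
----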
[[event]]
=== Instrumentation point, event rule, event, and event record
An _event rule_ is a set of conditions which must **all** be satisfied
for LTTng to record an occurring event.
You set the conditions when you <<enabling-disabling-events,create
an event rule>>.
You always attach an event rule to a <<channel,channel>> when you create
it.
1671 When an event passes the conditions of an event rule, LTTng records it
1672 in one of the attached channel's sub-buffers.
1674 The available conditions, as of LTTng{nbsp}{revision}, are:
1676 * The event rule _is enabled_.
1677 * The instrumentation point's type _is{nbsp}T_.
1678 * The instrumentation point's name (sometimes called _event name_)
1679 _matches{nbsp}N_, but _is not{nbsp}E_.
1680 * The instrumentation point's log level _is as severe as{nbsp}L_, or
1681 _is exactly{nbsp}L_.
1682 * The fields of the event's payload _satisfy_ a filter
1683 expression{nbsp}__F__.
1685 As you can see, all the conditions but the dynamic filter are related to
1686 the event rule's status or to the instrumentation point, not to the
1687 occurring events. This is why, without a filter, checking if an event
1688 passes an event rule is not a dynamic task: when you create or modify an
1689 event rule, all the tracers of its tracing domain enable or disable the
1690 instrumentation points themselves once. This is possible because the
1691 attributes of an instrumentation point (type, name, and log level) are
1692 defined statically. In other words, without a dynamic filter, the tracer
1693 _does not evaluate_ the arguments of an instrumentation point unless it
1694 matches an enabled event rule.
1696 Note that, for LTTng to record an event, the <<channel,channel>> to
1697 which a matching event rule is attached must also be enabled, and the
1698 tracing session owning this channel must be active.
1701 .Logical path from an instrumentation point to an event record.
1702 image::event-rule.png[]
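As a toy model (invented names and dict keys, not LTTng source code), the conditions above can be expressed as a single predicate in which only the filter depends on the occurring event:

[source,python]
----
import fnmatch

def event_passes(rule, point, payload):
    """Return True if an event occurring at instrumentation point
    `point` with fields `payload` satisfies every condition of `rule`."""
    # The event rule is enabled
    if not rule['enabled']:
        return False

    # The instrumentation point's type is T
    if point['type'] != rule['type']:
        return False

    # The instrumentation point's name matches N, but is not E
    if not fnmatch.fnmatch(point['name'], rule['match']):
        return False
    if point['name'] in rule['exclude']:
        return False

    # The log level is as severe as L (smaller value = more severe
    # in this toy model)
    if point['loglevel'] > rule['loglevel_at_most']:
        return False

    # The payload fields satisfy the filter expression F -- the only
    # condition evaluated when the event actually occurs
    return rule['filter'](payload)

rule = {
    'enabled': True,
    'type': 'tracepoint',
    'match': 'my_app:*',
    'exclude': {'my_app:debug'},
    'loglevel_at_most': 7,
    'filter': lambda fields: fields.get('size', 0) >= 1024,
}
point = {'type': 'tracepoint', 'name': 'my_app:rw', 'loglevel': 6}

print(event_passes(rule, point, {'size': 4096}))  # True
print(event_passes(rule, point, {'size': 10}))    # False
----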
1704 .Event, event record, or event rule?
1706 With so many similar terms, it's easy to get confused.
1708 An **event** is the consequence of the execution of an _instrumentation
1709 point_, like a tracepoint that you manually place in some source code,
1710 or a Linux kernel KProbe. An event is said to _occur_ at a specific
1711 time. Different actions can be taken upon the occurrence of an event,
1712 like record the event's payload to a buffer.
1714 An **event record** is the representation of an event in a sub-buffer. A
1715 tracer is responsible for capturing the payload of an event, current
1716 context variables, the event's ID, and the event's timestamp. LTTng
1717 can append this sub-buffer to a trace file.
An **event rule** is a set of conditions which must all be satisfied for
LTTng to record an occurring event. Events still occur without
1721 satisfying event rules, but LTTng does not record them.
[[plumbing]]
== Components of noch:{LTTng}
1728 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1729 to call LTTng a simple _tool_ since it is composed of multiple
1730 interacting components. This section describes those components,
1731 explains their respective roles, and shows how they connect together to
1732 form the LTTng ecosystem.
1734 The following diagram shows how the most important components of LTTng
1735 interact with user applications, the Linux kernel, and you:
1738 .Control and trace data paths between LTTng components.
1739 image::plumbing.png[]
1741 The LTTng project incorporates:
1743 * **LTTng-tools**: Libraries and command-line interface to
1744 control tracing sessions.
1745 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1746 ** <<lttng-consumerd,Consumer daemon>> (cmd:lttng-consumerd).
1747 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1748 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1749 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
* **LTTng-UST**: Libraries and Java/Python packages to trace user
  applications:
1752 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1753 headers to instrument and trace any native user application.
1754 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1755 *** `liblttng-ust-libc-wrapper`
1756 *** `liblttng-ust-pthread-wrapper`
1757 *** `liblttng-ust-cyg-profile`
1758 *** `liblttng-ust-cyg-profile-fast`
1759 *** `liblttng-ust-dl`
1760 ** User space tracepoint provider source files generator command-line
1761 tool (man:lttng-gen-tp(1)).
1762 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1763 Java applications using `java.util.logging` or
1764 Apache log4j 1.2 logging.
1765 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1766 Python applications using the standard `logging` package.
* **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
  the Linux kernel:
1769 ** LTTng kernel tracer module.
1770 ** Tracing ring buffer kernel modules.
1771 ** Probe kernel modules.
1772 ** LTTng logger kernel module.
[[lttng-cli]]
=== Tracing control command-line interface
1779 .The tracing control command-line interface.
1780 image::plumbing-lttng-cli.png[]
1782 The _man:lttng(1) command-line tool_ is the standard user interface to
1783 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1784 is part of LTTng-tools.
1786 The cmd:lttng tool is linked with
1787 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1788 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1790 The cmd:lttng tool has a Git-like interface:
----
$ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
----
1797 The <<controlling-tracing,Tracing control>> section explores the
1798 available features of LTTng using the cmd:lttng tool.
1801 [[liblttng-ctl-lttng]]
1802 === Tracing control library
1805 .The tracing control library.
1806 image::plumbing-liblttng-ctl.png[]
1808 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1809 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1810 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1812 The <<lttng-cli,cmd:lttng command-line tool>>
1813 is linked with `liblttng-ctl`.
You can use `liblttng-ctl` in C or $$C++$$ source code by including its
main header:
[source,c]
----
#include <lttng/lttng.h>
----
Some objects are referenced by name (C string), such as tracing
sessions, but most of them require you to create a handle first using
`lttng_create_handle()`.
1827 The best available developer documentation for `liblttng-ctl` is, as of
1828 LTTng{nbsp}{revision}, its installed header files. Every function and
1829 structure is thoroughly documented.
[[lttng-ust]]
=== User space tracing library
1836 .The user space tracing library.
1837 image::plumbing-liblttng-ust.png[]
1839 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1840 is the LTTng user space tracer. It receives commands from a
1841 <<lttng-sessiond,session daemon>>, for example to
1842 enable and disable specific instrumentation points, and writes event
1843 records to ring buffers shared with a
1844 <<lttng-consumerd,consumer daemon>>.
1845 `liblttng-ust` is part of LTTng-UST.
1847 Public C header files are installed beside `liblttng-ust` to
1848 instrument any <<c-application,C or $$C++$$ application>>.
1850 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1851 packages, use their own library providing tracepoints which is
1852 linked with `liblttng-ust`.
1854 An application or library does not have to initialize `liblttng-ust`
1855 manually: its constructor does the necessary tasks to properly register
1856 to a session daemon. The initialization phase also enables the
instrumentation points matching the <<event,event rules>> that you
created.
1861 [[lttng-ust-agents]]
1862 === User space tracing agents
1865 .The user space tracing agents.
1866 image::plumbing-lttng-ust-agents.png[]
1868 The _LTTng-UST Java and Python agents_ are regular Java and Python
1869 packages which add LTTng tracing capabilities to the
1870 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1872 In the case of Java, the
1873 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1874 core logging facilities] and
1875 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
Note that Apache Log4j{nbsp}2 is not supported.
1878 In the case of Python, the standard
1879 https://docs.python.org/3/library/logging.html[`logging`] package
1880 is supported. Both Python 2 and Python 3 modules can import the
1881 LTTng-UST Python agent package.
1883 The applications using the LTTng-UST agents are in the
1884 `java.util.logging` (JUL),
1885 log4j, and Python <<domain,tracing domains>>.
1887 Both agents use the same mechanism to trace the log statements. When an
1888 agent is initialized, it creates a log handler that attaches to the root
1889 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1890 When the application executes a log statement, it is passed to the
1891 agent's log handler by the root logger. The agent's log handler calls a
1892 native function in a tracepoint provider package shared library linked
1893 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1894 other fields, like its logger name and its log level. This native
function contains a user space instrumentation point, hence tracing the
log statement.
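The handler-on-the-root-logger mechanism can be sketched with the standard `logging` package; the class below is a toy stand-in for the agent, collecting the would-be event record fields instead of calling a native tracepoint provider:

[source,python]
----
import logging

class ToyAgentHandler(logging.Handler):
    """Stand-in for the agent's log handler (not the LTTng-UST agent)."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # The real agent passes the formatted message, logger name,
        # and log level to a native function with a tracepoint
        self.records.append((record.name, record.levelname,
                             record.getMessage()))

handler = ToyAgentHandler()
root = logging.getLogger()
root.addHandler(handler)   # attach to the root logger
root.setLevel(logging.DEBUG)

logging.getLogger('my.logger').warning('hello %s', 'world')
print(handler.records)  # [('my.logger', 'WARNING', 'hello world')]
----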
1898 The log level condition of an
1899 <<event,event rule>> is considered when tracing
1900 a Java or a Python application, and it's compatible with the standard
1901 JUL, log4j, and Python log levels.
[[lttng-modules]]
=== LTTng kernel modules
1908 .The LTTng kernel modules.
1909 image::plumbing-lttng-modules.png[]
1911 The _LTTng kernel modules_ are a set of Linux kernel modules
1912 which implement the kernel tracer of the LTTng project. The LTTng
1913 kernel modules are part of LTTng-modules.
1915 The LTTng kernel modules include:
1917 * A set of _probe_ modules.
Each module attaches to a specific subsystem
of the Linux kernel using its tracepoint instrumentation points. There
are also modules to attach to the entry and return points of the Linux
system call functions.
1924 * _Ring buffer_ modules.
1926 A ring buffer implementation is provided as kernel modules. The LTTng
1927 kernel tracer writes to the ring buffer; a
1928 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1930 * The _LTTng kernel tracer_ module.
1931 * The _LTTng logger_ module.
1933 The LTTng logger module implements the special path:{/proc/lttng-logger}
1934 file so that any executable can generate LTTng events by opening and
1935 writing to this file.
1937 See <<proc-lttng-logger-abi,LTTng logger>>.
1939 Generally, you do not have to load the LTTng kernel modules manually
1940 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
1941 daemon>> loads the necessary modules when starting. If you have extra
probe modules, you can instruct the session daemon to load them on
the command line.
1945 The LTTng kernel modules are installed in
1946 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1947 the kernel release (see `uname --kernel-release`).
[[lttng-sessiond]]
=== Session daemon

.The session daemon.
1955 image::plumbing-sessiond.png[]
1957 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
1958 managing tracing sessions and for controlling the various components of
1959 LTTng. The session daemon is part of LTTng-tools.
The session daemon sends control requests to and receives control
responses from:
1964 * The <<lttng-ust,user space tracing library>>.
1966 Any instance of the user space tracing library first registers to
1967 a session daemon. Then, the session daemon can send requests to
1968 this instance, such as:
1971 ** Get the list of tracepoints.
** Share an <<event,event rule>> so that the user space tracing library
can enable or disable tracepoints. Amongst the possible conditions
of an event rule is a filter expression which `liblttng-ust` evaluates
when an event occurs.
1976 ** Share <<channel,channel>> attributes and ring buffer locations.
1979 The session daemon and the user space tracing library use a Unix
1980 domain socket for their communication.
1982 * The <<lttng-ust-agents,user space tracing agents>>.
1984 Any instance of a user space tracing agent first registers to
1985 a session daemon. Then, the session daemon can send requests to
1986 this instance, such as:
1989 ** Get the list of loggers.
1990 ** Enable or disable a specific logger.
1993 The session daemon and the user space tracing agent use a TCP connection
1994 for their communication.
1996 * The <<lttng-modules,LTTng kernel tracer>>.
1997 * The <<lttng-consumerd,consumer daemon>>.
1999 The session daemon sends requests to the consumer daemon to instruct
2000 it where to send the trace data streams, amongst other information.
2002 * The <<lttng-relayd,relay daemon>>.
2004 The session daemon receives commands from the
2005 <<liblttng-ctl-lttng,tracing control library>>.
2007 The root session daemon loads the appropriate
2008 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2009 a <<lttng-consumerd,consumer daemon>> as soon as you create
2010 an <<event,event rule>>.
2012 The session daemon does not send and receive trace data: this is the
2013 role of the <<lttng-consumerd,consumer daemon>> and
2014 <<lttng-relayd,relay daemon>>. It does, however, generate the
2015 http://diamon.org/ctf/[CTF] metadata stream.
2017 Each Unix user can have its own session daemon instance. The
tracing sessions managed by different session daemons are completely
isolated.
2021 The root user's session daemon is the only one which is
2022 allowed to control the LTTng kernel tracer, and its spawned consumer
2023 daemon is the only one which is allowed to consume trace data from the
2024 LTTng kernel tracer. Note, however, that any Unix user which is a member
2025 of the <<tracing-group,tracing group>> is allowed
2026 to create <<channel,channels>> in the
Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
kernel.
The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
session daemon when using its `create` command if none is currently
running. You can also start the session daemon manually.
[[lttng-consumerd]]
=== Consumer daemon

.The consumer daemon.
image::plumbing-consumerd.png[]
The _consumer daemon_, cmd:lttng-consumerd, is a daemon which shares
ring buffers with user applications or with the LTTng kernel modules to
collect trace data and send it to some location (on disk or to a
<<lttng-relayd,relay daemon>> over the network). The consumer daemon
is part of LTTng-tools.

You do not start a consumer daemon manually: a consumer daemon is always
spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
<<event,event rule>>, that is, before you start tracing. When you kill
its owner session daemon, the consumer daemon also exits because it is
the session daemon's child process. Command-line options of
man:lttng-sessiond(8) target the consumer daemon process.
There are up to two running consumer daemons per Unix user, whereas only
one session daemon can run per user. This is because each process can be
either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
and 64-bit processes, it is more efficient to have separate
corresponding 32-bit and 64-bit consumer daemons. The root user is an
exception: it can have up to _three_ running consumer daemons: 32-bit
and 64-bit instances for its user applications, and one more
reserved for collecting kernel trace data.
[[lttng-relayd]]
=== Relay daemon

.The relay daemon.
image::plumbing-relayd.png[]

The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
between remote session and consumer daemons, local trace files, and a
remote live trace viewer. The relay daemon is part of LTTng-tools.
The main purpose of the relay daemon is to implement a receiver of
<<sending-trace-data-over-the-network,trace data over the network>>.
This is useful when the target system does not have much file system
space to record trace files locally.

The relay daemon is also a server to which a
<<lttng-live,live trace viewer>> can
connect. The live trace viewer sends requests to the relay daemon to
receive trace data as the target system emits events. The
communication protocol is named _LTTng live_; it is used over TCP
connections.

Note that you can start the relay daemon on the target system directly.
This is the setup of choice when the use case is to view events as
the target system emits them without the need of a remote system.
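
As a sketch of this local live setup (assuming LTTng-tools is installed;
the session name `my-session` is arbitrary):

[role="term"]
----
$ lttng-relayd --daemonize
$ lttng create my-session --live
----

A <<lttng-live,live trace viewer>> can then attach to the relay daemon
on the same host to display events as the target system emits them.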
== [[instrumenting]]Instrumentation
There are many examples of tracing and monitoring in our everyday life:

* You have access to real-time and historical weather reports and
  forecasts thanks to weather stations installed around the country.
* You know your heart is safe thanks to an electrocardiogram.
* You make sure not to drive your car too fast and to have enough fuel
  to reach your destination thanks to gauges visible on your dashboard.

All the previous examples have something in common: they rely on
**instruments**. Without the electrodes attached to your skin, cardiac
monitoring is futile.

LTTng, as a tracer, is no different from those real life examples. If
you're about to trace a software system or, in other words, record its
history of execution, you need **instrumentation points** in the
subject you're tracing, that is, the actual software.
Various ways were developed to instrument a piece of software for LTTng
tracing. The most straightforward one is to manually place
instrumentation points, called _tracepoints_, in the software's source
code. It is also possible to add instrumentation points dynamically in
the Linux kernel <<domain,tracing domain>>.

If you're only interested in tracing the Linux kernel, your
instrumentation needs are probably already covered by LTTng's built-in
<<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
user application which is already instrumented for LTTng tracing.
In such cases, you can skip this whole section and read the topics of
the <<controlling-tracing,Tracing control>> section.

Many methods are available to instrument a piece of software for LTTng
tracing:

* <<c-application,User space instrumentation for C and $$C++$$
  applications>>.
* <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
* <<java-application,User space Java agent>>.
* <<python-application,User space Python agent>>.
* <<proc-lttng-logger-abi,LTTng logger>>.
* <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
=== [[c-application]]User space instrumentation for C and $$C++$$ applications

The procedure to instrument a C or $$C++$$ user application with
the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:

. <<tracepoint-provider,Create the source files of a tracepoint provider
  package>>.
. <<probing-the-application-source-code,Add tracepoints to
  the application's source code>>.
. <<building-tracepoint-providers-and-user-application,Build and link
  a tracepoint provider package and the user application>>.

If you need quick, man:printf(3)-like instrumentation, you can skip
those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
instead.

IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
instrument a user application with `liblttng-ust`.
[[tracepoint-provider]]
==== Create the source files of a tracepoint provider package

A _tracepoint provider_ is a set of compiled functions which provide
**tracepoints** to an application, the type of instrumentation point
supported by LTTng-UST. Those functions can emit events with
user-defined fields and serialize those events as event records to one
or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
macro, which you <<probing-the-application-source-code,insert in a user
application's source code>>, calls those functions.

A _tracepoint provider package_ is an object file (`.o`) or a shared
library (`.so`) which contains one or more tracepoint providers.
Its source files are:

* One or more <<tpp-header,tracepoint provider header files>> (`.h`).
* A <<tpp-source,tracepoint provider package source file>> (`.c`).

A tracepoint provider package is dynamically linked with `liblttng-ust`,
the LTTng user space tracer, at run time.

.User application linked with `liblttng-ust` and containing a tracepoint provider.
image::ust-app.png[]

NOTE: If you need quick, man:printf(3)-like instrumentation, you can
skip creating and using a tracepoint provider and use
<<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
[[tpp-header]]
===== Create a tracepoint provider header file template

A _tracepoint provider header file_ contains the tracepoint
definitions of a tracepoint provider.

To create a tracepoint provider header file:

. Start from this template:
+
--
[source,c]
.Tracepoint provider header file template (`.h` file extension).
----
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER provider_name

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

/*
 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
 */

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>
----
--

. Replace:
+
* `provider_name` with the name of your tracepoint provider.
* `"tp.h"` with the name of your tracepoint provider header file.

. Below the `#include <lttng/tracepoint.h>` line, put your
  <<defining-tracepoints,tracepoint definitions>>.

Your tracepoint provider name must be unique amongst all the possible
tracepoint provider names used on the same target system. We
suggest to include the name of your project or company in the name,
for example, `org_lttng_my_project_tpp`.

TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
write are the <<defining-tracepoints,tracepoint definitions>>.
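
For example, a sketch of invoking cmd:lttng-gen-tp on a hypothetical
template file named path:{tp.tp} which contains only your tracepoint
definitions; by default, the tool generates the corresponding
path:{tp.h}, path:{tp.c}, and path:{tp.o} files:

[role="term"]
----
$ lttng-gen-tp tp.tp
----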
[[defining-tracepoints]]
===== Create a tracepoint definition

A _tracepoint definition_ defines, for a given tracepoint:

* Its **input arguments**. They are the macro parameters that the
  `tracepoint()` macro accepts for this particular tracepoint
  in the user application's source code.
* Its **output event fields**. They are the sources of event fields
  that form the payload of any event that the execution of the
  `tracepoint()` macro emits for this particular tracepoint.

You can create a tracepoint definition by using the
`TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
line in the
<<tpp-header,tracepoint provider header file template>>.
The syntax of the `TRACEPOINT_EVENT()` macro is:

[source,c]
.`TRACEPOINT_EVENT()` macro syntax.
----
TRACEPOINT_EVENT(
    /* Tracepoint provider name */
    provider_name,

    /* Tracepoint name */
    tracepoint_name,

    /* Input arguments */
    TP_ARGS(
        arguments
    ),

    /* Output event fields */
    TP_FIELDS(
        fields
    )
)
----

Replace:

* `provider_name` with your tracepoint provider name.
* `tracepoint_name` with your tracepoint name.
* `arguments` with the <<tpp-def-input-args,input arguments>>.
* `fields` with the <<tpp-def-output-fields,output event field>>
  definitions.

This tracepoint emits events named `provider_name:tracepoint_name`.
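
For instance, assuming a tracepoint provider named `my_provider` and a
tracepoint named `my_tracepoint`, a sketch of targeting the resulting
events from the command line with an event rule:

[role="term"]
----
$ lttng enable-event --userspace my_provider:my_tracepoint
----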
[IMPORTANT]
.Event name's length limitation
====
The concatenation of the tracepoint provider name and the
tracepoint name must not exceed **254 characters**. If it does, the
instrumented application compiles and runs, but LTTng throws multiple
warnings and you could experience serious issues.
====
[[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:

[source,c]
.`TP_ARGS()` macro syntax.
----
TP_ARGS(
    type, arg_name
)
----

Replace:

* `type` with the C type of the argument.
* `arg_name` with the argument name.

You can repeat `type` and `arg_name` up to 10 times to have
more than one argument.

.`TP_ARGS()` usage with three arguments.
====
[source,c]
----
TP_ARGS(
    int, count,
    float, ratio,
    const char*, query
)
----
====

The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
tracepoint definition with no input arguments.
[[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
`ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
man:lttng-ust(3) for a complete description of the available `ctf_*()`
macros. A `ctf_*()` macro specifies the type, size, and byte order of
one event field.

Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
C expression that the tracer evaluates at the `tracepoint()` macro site
in the application's source code. This expression provides a field's
source of data. The argument expression can include input argument names
listed in the `TP_ARGS()` macro.

Each `ctf_*()` macro also takes a _field name_ parameter. Field names
must be unique within a given tracepoint definition.
Here's a complete tracepoint definition example:

.Tracepoint definition.
====
The following tracepoint definition defines a tracepoint which takes
three input arguments and has four output event fields.

[source,c]
----
#include "my-custom-structure.h"

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        const struct my_custom_structure*, my_custom_structure,
        float, ratio,
        const char*, query
    ),
    TP_FIELDS(
        ctf_string(query_field, query)
        ctf_float(double, ratio_field, ratio)
        ctf_integer(int, recv_size, my_custom_structure->recv_size)
        ctf_integer(int, send_size, my_custom_structure->send_size)
    )
)
----

You can refer to this tracepoint definition with the `tracepoint()`
macro in your application's source code like this:

[source,c]
----
tracepoint(my_provider, my_tracepoint,
           my_structure, some_ratio, the_query);
----
====

NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
if they satisfy an enabled <<event,event rule>>.
[[using-tracepoint-classes]]
===== Use a tracepoint class

A _tracepoint class_ is a class of tracepoints which share the same
output event field definitions. A _tracepoint instance_ is one
instance of such a defined tracepoint class, with its own tracepoint
name.

The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
shorthand which defines both a tracepoint class and a tracepoint
instance at the same time.

When you build a tracepoint provider package, the C or $$C++$$ compiler
creates one serialization function for each **tracepoint class**. A
serialization function is responsible for serializing the event fields
of a tracepoint to a sub-buffer when tracing.

For various performance reasons, when your situation requires multiple
tracepoint definitions with different names, but with the same event
fields, we recommend that you manually create a tracepoint class
and instantiate as many tracepoint instances as needed. One positive
effect of such a design, amongst other advantages, is that all
tracepoint instances of the same tracepoint class reuse the same
serialization function, thus reducing
https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
.Use a tracepoint class and tracepoint instances.
====
Consider the following three tracepoint definitions:

[source,c]
----
TRACEPOINT_EVENT(
    my_app,
    get_account,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

TRACEPOINT_EVENT(
    my_app,
    get_settings,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

TRACEPOINT_EVENT(
    my_app,
    get_transaction,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)
----

In this case, we create three tracepoint classes, with one implicit
tracepoint instance for each of them: `get_account`, `get_settings`, and
`get_transaction`. However, they all share the same event field names
and types. Hence three identical, yet independent serialization
functions are created when you build the tracepoint provider package.

A better design choice is to define a single tracepoint class and three
tracepoint instances:

[source,c]
----
/* The tracepoint class */
TRACEPOINT_EVENT_CLASS(
    /* Tracepoint provider name */
    my_app,

    /* Tracepoint class name */
    my_class,

    /* Input arguments */
    TP_ARGS(
        int, userid,
        size_t, len
    ),

    /* Output event fields */
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

/* The tracepoint instances */
TRACEPOINT_EVENT_INSTANCE(
    /* Tracepoint provider name */
    my_app,

    /* Tracepoint class name */
    my_class,

    /* Tracepoint name */
    get_account,

    /* Input arguments */
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    get_settings,
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
TRACEPOINT_EVENT_INSTANCE(
    my_app,
    my_class,
    get_transaction,
    TP_ARGS(
        int, userid,
        size_t, len
    )
)
----
====
[[assigning-log-levels]]
===== Assign a log level to a tracepoint definition

You can assign an optional _log level_ to a
<<defining-tracepoints,tracepoint definition>>.

Assigning different levels of severity to tracepoint definitions can
be useful: when you <<enabling-disabling-events,create an event rule>>,
you can target tracepoints having a log level as severe as a specific
value.

The concept of LTTng-UST log levels is similar to the levels found
in typical logging frameworks:

* In a logging framework, the log level is given by the function
  or method name you use at the log statement site: `debug()`,
  `info()`, `warn()`, `error()`, and so on.
* In LTTng-UST, you statically assign the log level to a tracepoint
  definition; any `tracepoint()` macro invocation which refers to
  this definition has this log level.

You can assign a log level to a tracepoint definition with the
`TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
<<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
<<using-tracepoint-classes,`TRACEPOINT_EVENT_INSTANCE()`>> macro for a
given tracepoint.
The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:

[source,c]
.`TRACEPOINT_LOGLEVEL()` macro syntax.
----
TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
----

Replace:

* `provider_name` with the tracepoint provider name.
* `tracepoint_name` with the tracepoint name.
* `log_level` with the log level to assign to the tracepoint
  definition named `tracepoint_name` in the `provider_name`
  tracepoint provider.

See man:lttng-ust(3) for a list of available log level names.

.Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
====
[source,c]
----
/* Tracepoint definition */
TRACEPOINT_EVENT(
    my_app,
    get_transaction,
    TP_ARGS(
        int, userid,
        size_t, len
    ),
    TP_FIELDS(
        ctf_integer(int, userid, userid)
        ctf_integer(size_t, len, len)
    )
)

/* Log level assignment */
TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
----
====
[[tpp-source]]
===== Create a tracepoint provider package source file

A _tracepoint provider package source file_ is a C source file which
includes a <<tpp-header,tracepoint provider header file>> to expand its
macros into event serialization and other functions.

You can always use the following tracepoint provider package source
file template:

[source,c]
.Tracepoint provider package source file template.
----
#define TRACEPOINT_CREATE_PROBES

#include "tp.h"
----

Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
header file>>. You may also include more than one tracepoint
provider header file here to create a tracepoint provider package
holding more than one tracepoint provider.
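
For example, a sketch of a package source file which includes two
hypothetical tracepoint provider header files, path:{tp-net.h} and
path:{tp-disk.h}, to bundle both providers into a single package:

[source,c]
----
#define TRACEPOINT_CREATE_PROBES

#include "tp-net.h"
#include "tp-disk.h"
----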
[[probing-the-application-source-code]]
==== Add tracepoints to an application's source code

Once you <<tpp-header,create a tracepoint provider header file>>, you
can use the `tracepoint()` macro in your application's
source code to insert the tracepoints that this header
<<defining-tracepoints,defines>>.

The `tracepoint()` macro takes at least two parameters: the tracepoint
provider name and the tracepoint name. The corresponding tracepoint
definition defines the other parameters.
.`tracepoint()` usage.
====
The following <<defining-tracepoints,tracepoint definition>> defines a
tracepoint which takes two input arguments and has two output event
fields.

[source,c]
.Tracepoint provider header file.
----
#include "my-custom-structure.h"

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, argc,
        const char*, cmd_name
    ),
    TP_FIELDS(
        ctf_string(cmd_name, cmd_name)
        ctf_integer(int, number_of_args, argc)
    )
)
----

You can refer to this tracepoint definition with the `tracepoint()`
macro in your application's source code like this:

[source,c]
.Application's source file.
----
#define TRACEPOINT_DEFINE
#include "tp.h"

int main(int argc, char* argv[])
{
    tracepoint(my_provider, my_tracepoint, argc, argv[0]);

    return 0;
}
----

Note how the application's source code includes
the tracepoint provider header file containing the tracepoint
definitions to use, path:{tp.h}.
====
.`tracepoint()` usage with a complex tracepoint definition.
====
Consider this complex tracepoint definition, where multiple event
fields refer to the same input arguments in their argument expression
parameter:

[source,c]
.Tracepoint provider header file.
----
/* For `struct stat` */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

TRACEPOINT_EVENT(
    my_provider,
    my_tracepoint,
    TP_ARGS(
        int, my_int_arg,
        char*, my_str_arg,
        struct stat*, st
    ),
    TP_FIELDS(
        ctf_integer(int, my_constant_field, 23 + 17)
        ctf_integer(int, my_int_arg_field, my_int_arg)
        ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
        ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
                                     my_str_arg[2] + my_str_arg[3])
        ctf_string(my_str_arg_field, my_str_arg)
        ctf_integer_hex(off_t, size_field, st->st_size)
        ctf_float(double, size_dbl_field, (double) st->st_size)
        ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
                          size_t, strlen(my_str_arg) / 2)
    )
)
----

You can refer to this tracepoint definition with the `tracepoint()`
macro in your application's source code like this:

[source,c]
.Application's source file.
----
#define TRACEPOINT_DEFINE
#include "tp.h"

int main(void)
{
    struct stat s;

    stat("/etc/fstab", &s);
    tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);

    return 0;
}
----

If you look at the event record that LTTng writes when tracing this
program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
it should look like this:

.Event record fields
|====
|Field's name |Field's value

|`my_constant_field` |40
|`my_int_arg_field` |23
|`my_int_arg_field2` |529
|`sum4_field` |389
|`my_str_arg_field` |`Hello, World!`
|`size_field` |0x12d
|`size_dbl_field` |301.0
|`half_my_str_arg_field` |`Hello,`
|====
====
Sometimes, the arguments you pass to `tracepoint()` are expensive to
compute--they use the call stack, for example. To avoid this
computation when the tracepoint is disabled, you can use the
`tracepoint_enabled()` and `do_tracepoint()` macros.

The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
is:

[source,c]
.`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
----
tracepoint_enabled(provider_name, tracepoint_name)
do_tracepoint(provider_name, tracepoint_name, ...)
----

Replace:

* `provider_name` with the tracepoint provider name.
* `tracepoint_name` with the tracepoint name.

`tracepoint_enabled()` returns a non-zero value if the tracepoint named
`tracepoint_name` from the provider named `provider_name` is enabled
at run time.

`do_tracepoint()` is like `tracepoint()`, except that it doesn't check
if the tracepoint is enabled. Using `tracepoint()` with
`tracepoint_enabled()` is dangerous since `tracepoint()` also contains
the `tracepoint_enabled()` check, thus a race condition is
possible in this situation:

[source,c]
.Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    stuff = prepare_stuff();
}

tracepoint(my_provider, my_tracepoint, stuff);
----

If the tracepoint is enabled after the condition, then `stuff` is not
prepared: the emitted event will either contain wrong data, or the whole
application could crash (segmentation fault, for example).
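
To avoid this race, check the tracepoint's state and emit the event
within the same conditional block. A sketch of the safe pattern, using
the `my_provider` and `my_tracepoint` names from the example above:

[source,c]
----
if (tracepoint_enabled(my_provider, my_tracepoint)) {
    /* Prepare the expensive arguments only when needed */
    stuff = prepare_stuff();

    /* Emit without a second enabled check */
    do_tracepoint(my_provider, my_tracepoint, stuff);
}
----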
NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` have an
`STAP_PROBEV()` call. If you need it, you must emit
this call yourself.
[[building-tracepoint-providers-and-user-application]]
==== Build and link a tracepoint provider package and an application

Once you have one or more <<tpp-header,tracepoint provider header
files>> and a <<tpp-source,tracepoint provider package source file>>,
you can create the tracepoint provider package by compiling its source
file. From here, multiple build and run scenarios are possible. The
following table shows common application and library configurations
along with the required command lines to achieve them.

In the following diagrams, we use the following file names:

`app`::
  Executable application.

`app.o`::
  Application's object file.

`tpp.o`::
  Tracepoint provider package object file.

`tpp.a`::
  Tracepoint provider package archive file.

`libtpp.so`::
  Tracepoint provider package shared object file.

`emon.o`::
  User library object file.

`libemon.so`::
  User library shared object file.

We use the following symbols in the diagrams of the table below:

.Symbols used in the build scenario diagrams.
image::ust-sit-symbols.png[]

We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
variable in the following instructions.
[role="growable ust-scenarios",cols="asciidoc,asciidoc"]
.Common tracepoint provider package scenarios.
|====
|Scenario |Instructions

|
The instrumented application is statically linked with
the tracepoint provider package object.

image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]

|
include::../common/ust-sit-step-tp-o.txt[]

To build the instrumented application:

. In path:{app.c}, before including path:{tpp.h}, add the following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o tpp.o -llttng-ust -ldl
----
--

To run the instrumented application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The instrumented application is statically linked with the
tracepoint provider package archive file.

image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]

|
To create the tracepoint provider package archive file:

. Compile the <<tpp-source,tracepoint provider package source file>>:
+
--
[role="term"]
----
$ gcc -I. -c tpp.c
----
--

. Create the tracepoint provider package archive file:
+
--
[role="term"]
----
$ ar rcs tpp.a tpp.o
----
--

To build the instrumented application:

. In path:{app.c}, before including path:{tpp.h}, add the following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o tpp.a -llttng-ust -ldl
----
--

To run the instrumented application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The instrumented application is linked with the tracepoint provider
package shared object.

image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented application:

. In path:{app.c}, before including path:{tpp.h}, add the following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -ldl -L. -ltpp
----
--

To run the instrumented application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The tracepoint provider package shared object is preloaded before the
instrumented application starts.

image::ust-sit+tp-so-preloaded+app-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented application:

. In path:{app.c}, before including path:{tpp.h}, add the
  following lines:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
----
--

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -ldl
----
--

To run the instrumented application with tracing support:

* Preload the tracepoint provider package shared object and
  start the application:
+
--
[role="term"]
----
$ LD_PRELOAD=./libtpp.so ./app
----
--

To run the instrumented application without tracing support:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The instrumented application dynamically loads the tracepoint provider
package shared object.

See the <<dlclose-warning,warning about `dlclose()`>>.

image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented application:

. In path:{app.c}, before including path:{tpp.h}, add the
  following lines:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
----
--

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -ldl
----
--

To run the instrumented application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The application is linked with the instrumented user library.

The instrumented user library is statically linked with the tracepoint
provider package object file.

image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-o-fpic.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -L. -lemon
----
--

To run the application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The application is linked with the instrumented user library.

The instrumented user library is linked with the tracepoint provider
package shared object.

image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -L. -lemon
----
--

To run the application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The tracepoint provider package shared object is preloaded before the
application starts.

The application is linked with the instrumented user library.

image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following lines:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o -ldl
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -L. -lemon
----
--

To run the application with tracing support:

* Preload the tracepoint provider package shared object and
  start the application:
+
--
[role="term"]
----
$ LD_PRELOAD=./libtpp.so ./app
----
--

To run the application without tracing support:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The application is linked with the instrumented user library.

The instrumented user library dynamically loads the tracepoint provider
package shared object.

See the <<dlclose-warning,warning about `dlclose()`>>.

image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following lines:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o -ldl
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -L. -lemon
----
--

To run the application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The application dynamically loads the instrumented user library.

The instrumented user library is linked with the tracepoint provider
package shared object.

See the <<dlclose-warning,warning about `dlclose()`>>.

image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following line:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -ldl -L. -lemon
----
--

To run the application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
|
The application dynamically loads the instrumented user library.

The instrumented user library dynamically loads the tracepoint provider
package shared object.

See the <<dlclose-warning,warning about `dlclose()`>>.

image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]

|
include::../common/ust-sit-step-tp-so.txt[]

To build the instrumented user library:

. In path:{emon.c}, before including path:{tpp.h}, add the
  following lines:
+
--
[source,c]
----
#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
----
--

. Compile the user library source file:
+
--
[role="term"]
----
$ gcc -I. -fpic -c emon.c
----
--

. Build the user library shared object:
+
--
[role="term"]
----
$ gcc -shared -o libemon.so emon.o -ldl
----
--

To build the application:

. Compile the application source file:
+
--
[role="term"]
----
$ gcc -c app.c
----
--

. Build the application:
+
--
[role="term"]
----
$ gcc -o app app.o -ldl -L. -lemon
----
--

To run the application:

* Start the application:
+
--
[role="term"]
----
$ ./app
----
--
3593 The tracepoint provider package shared object is preloaded before the
3596 The application dynamically loads the instrumented user library.
3598 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3601 include::../common/ust-sit-step-tp-so.txt[]
3603 To build the instrumented user library:
3605 . In path:{emon.c}, before including path:{tpp.h}, add the following lines:
3611 #define TRACEPOINT_DEFINE
3612 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3616 . Compile the user library source file:
3621 $ gcc -I. -fpic -c emon.c
3625 . Build the user library shared object:
3630 $ gcc -shared -o libemon.so emon.o -ldl
3634 To build the application:
3636 . Compile the application source file:
3645 . Build the application:
3650 $ gcc -o app app.o -L. -lemon
3654 To run the application with tracing support:
3656 * Preload the tracepoint provider package shared object and
3657 start the application:
3662 $ LD_PRELOAD=./libtpp.so ./app
3666 To run the application without tracing support:
3668 * Start the application:
3678 The application is statically linked with the tracepoint provider
3679 package object file.
3681 The application is linked with the instrumented user library.
3683 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3686 include::../common/ust-sit-step-tp-o.txt[]
3688 To build the instrumented user library:
3690 . In path:{emon.c}, before including path:{tpp.h}, add the following line:
3696 #define TRACEPOINT_DEFINE
3700 . Compile the user library source file:
3705 $ gcc -I. -fpic -c emon.c
3709 . Build the user library shared object:
3714 $ gcc -shared -o libemon.so emon.o
3718 To build the application:
3720 . Compile the application source file:
3729 . Build the application:
3734 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3738 To run the instrumented application:
3740 * Start the application:
3750 The application is statically linked with the tracepoint provider
3751 package object file.
3753 The application dynamically loads the instrumented user library.
3755 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3758 include::../common/ust-sit-step-tp-o.txt[]
3760 To build the application:
3762 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3767 #define TRACEPOINT_DEFINE
3771 . Compile the application source file:
3780 . Build the application:
3785 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3790 The `--export-dynamic` option passed to the linker is necessary for the
3791 dynamically loaded library to ``see'' the tracepoint symbols defined in the application.
3794 To build the instrumented user library:
3796 . Compile the user library source file:
3801 $ gcc -I. -fpic -c emon.c
3805 . Build the user library shared object:
3810 $ gcc -shared -o libemon.so emon.o
3814 To run the application:
3816 * Start the application:
3828 .Do not use man:dlclose(3) on a tracepoint provider package
3830 Never use man:dlclose(3) on any shared object which:
3832 * Is linked with, statically or dynamically, a tracepoint provider package.
3834 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3835 package shared object.
3837 This is currently considered **unsafe** due to a lack of reference
3838 counting from LTTng-UST to the shared object.
3840 A known workaround (available since glibc 2.2) is to use the
3841 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3842 effect of not unloading the loaded shared object, even if man:dlclose(3) is called.
3845 You can also preload the tracepoint provider package shared object with
3846 the env:LD_PRELOAD environment variable to overcome this limitation.
3850 [[using-lttng-ust-with-daemons]]
3851 ===== Use noch:{LTTng-UST} with daemons
3853 If your instrumented application calls man:fork(2), man:clone(2),
3854 or BSD's man:rfork(2), without a following man:exec(3)-family
3855 system call, you must preload the path:{liblttng-ust-fork.so} shared
3856 object when you start the application.
3860 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3863 If your tracepoint provider package is
3864 a shared library which you also preload, you must put both
3865 shared objects in env:LD_PRELOAD:
3869 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3875 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3877 If your instrumented application closes one or more file descriptors
3878 which it did not open itself, you must preload the
3879 path:{liblttng-ust-fd.so} shared object when you start the application:
3883 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3886 Typical use cases include closing all the file descriptors after
3887 man:fork(2) or man:rfork(2) and buggy applications doing ``double closes''.
3891 [[lttng-ust-pkg-config]]
3892 ===== Use noch:{pkg-config}
3894 On some distributions, LTTng-UST ships with a
3895 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3896 metadata file. If this is your case, then you can use cmd:pkg-config to
3897 build an application on the command line:
3901 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3905 [[instrumenting-32-bit-app-on-64-bit-system]]
3906 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3908 In order to trace a 32-bit application running on a 64-bit system,
3909 LTTng must use a dedicated 32-bit
3910 <<lttng-consumerd,consumer daemon>>.
3912 The following steps show how to build and install a 32-bit consumer
3913 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3914 build and install the 32-bit LTTng-UST libraries, and how to build and
3915 link an instrumented 32-bit application in that context.
3917 To build a 32-bit instrumented application for a 64-bit target system,
3918 assuming you have a fresh target system with no installed Userspace RCU
3921 . Download, build, and install a 32-bit version of Userspace RCU:
3926 $ cd $(mktemp -d) &&
3927 wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3928 tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3929 cd userspace-rcu-0.9.* &&
3930 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3932 sudo make install &&
3937 . Using your distribution's package manager, or from source, install
3938 the 32-bit versions of the following dependencies of
3939 LTTng-tools and LTTng-UST:
3942 * https://sourceforge.net/projects/libuuid/[libuuid]
3943 * http://directory.fsf.org/wiki/Popt[popt]
3944 * http://www.xmlsoft.org/[libxml2]
3947 . Download, build, and install a 32-bit version of the latest
3948 LTTng-UST{nbsp}{revision}:
3953 $ cd $(mktemp -d) &&
3954 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
3955 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
3956 cd lttng-ust-2.9.* &&
3957 ./configure --libdir=/usr/local/lib32 \
3958 CFLAGS=-m32 CXXFLAGS=-m32 \
3959 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
3961 sudo make install &&
3968 Depending on your distribution,
3969 32-bit libraries could be installed at a different location than
3970 `/usr/lib32`. For example, Debian is known to install
3971 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
3973 In this case, make sure to set `LDFLAGS` to all the
3974 relevant 32-bit library paths, for example:
3978 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
3982 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
3983 the 32-bit consumer daemon:
3988 $ cd $(mktemp -d) &&
3989 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
3990 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
3991 cd lttng-tools-2.9.* &&
3992 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
3993 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
3994 --disable-bin-lttng --disable-bin-lttng-crash \
3995 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
3997 cd src/bin/lttng-consumerd &&
3998 sudo make install &&
4003 . From your distribution or from source,
4004 <<installing-lttng,install>> the 64-bit versions of
4005 LTTng-UST and Userspace RCU.
4006 . Download, build, and install the 64-bit version of the
4007 latest LTTng-tools{nbsp}{revision}:
4012 $ cd $(mktemp -d) &&
4013 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
4014 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
4015 cd lttng-tools-2.9.* &&
4016 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4017 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4019 sudo make install &&
4024 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4025 when linking your 32-bit application:
4028 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4029 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4032 For example, let's rebuild the quick start example in
4033 <<tracing-your-own-user-application,Trace a user application>> as an
4034 instrumented 32-bit application:
4039 $ gcc -m32 -c -I. hello-tp.c
4040 $ gcc -m32 -c hello.c
4041 $ gcc -m32 -o hello hello.o hello-tp.o \
4042 -L/usr/lib32 -L/usr/local/lib32 \
4043 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4048 No special action is required to execute the 32-bit application and
4049 to trace it: use the command-line man:lttng(1) tool as usual.
4056 man:tracef(3) is a small LTTng-UST API designed for quick,
4057 man:printf(3)-like instrumentation without the burden of
4058 <<tracepoint-provider,creating>> and
4059 <<building-tracepoint-providers-and-user-application,building>>
4060 a tracepoint provider package.
4062 To use `tracef()` in your application:
4064 . In the C or C++ source files where you need to use `tracef()`,
4065 include `<lttng/tracef.h>`:
4070 #include <lttng/tracef.h>
4074 . In the application's source code, use `tracef()` like you would use
4082 tracef("my message: %d (%s)", my_integer, my_string);
4088 . Link your application with `liblttng-ust`:
4093 $ gcc -o app app.c -llttng-ust
4097 To trace the events that `tracef()` calls emit:
4099 * <<enabling-disabling-events,Create an event rule>> which matches the
4100 `lttng_ust_tracef:*` event name:
4105 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
4110 .Limitations of `tracef()`
4112 The `tracef()` utility function was developed to make user space tracing
4113 super simple, albeit with notable disadvantages compared to
4114 <<defining-tracepoints,user-defined tracepoints>>:
4116 * All the emitted events have the same tracepoint provider and
4117 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4118 * There is no static type checking.
4119 * The only event record field you actually get, named `msg`, is a string
4120 potentially containing the values you passed to `tracef()`
4121 using your own format string. This also means that you cannot filter
4122 events with a custom expression at run time because there are no isolated fields.
4124 * Since `tracef()` uses the C standard library's man:vasprintf(3)
4125 function behind the scenes to format the strings at run time, its
4126 expected performance is lower than with user-defined tracepoints,
4127 which do not require a conversion to a string.
4129 Taking this into consideration, `tracef()` is useful for some quick
4130 prototyping and debugging, but you should not consider it for any
4131 permanent and serious application instrumentation.
4137 ==== Use `tracelog()`
4139 The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
4140 the difference that it accepts an additional log level parameter.
4142 The goal of `tracelog()` is to ease the migration from logging to tracing.
4145 To use `tracelog()` in your application:
4147 . In the C or C++ source files where you need to use `tracelog()`,
4148 include `<lttng/tracelog.h>`:
4153 #include <lttng/tracelog.h>
4157 . In the application's source code, use `tracelog()` like you would use
4158 man:printf(3), except for the first parameter which is the log level:
4166 tracelog(TRACE_WARNING, "my message: %d (%s)",
4167 my_integer, my_string);
4173 See man:lttng-ust(3) for a list of available log level names.
4175 . Link your application with `liblttng-ust`:
4180 $ gcc -o app app.c -llttng-ust
4184 To trace the events that `tracelog()` calls emit with a log level
4185 _as severe as_ a specific log level:
4187 * <<enabling-disabling-events,Create an event rule>> which matches the
4188 `lttng_ust_tracelog:*` event name and a minimum log level:
4194 $ lttng enable-event --userspace 'lttng_ust_tracelog:*'
4195 --loglevel=TRACE_WARNING
4199 To trace the events that `tracelog()` calls emit with a
4200 _specific log level_:
4202 * Create an event rule which matches the `lttng_ust_tracelog:*`
4203 event name and a specific log level:
4208 $ lttng enable-event --userspace 'lttng_ust_tracelog:*'
4209 --loglevel-only=TRACE_INFO
4214 [[prebuilt-ust-helpers]]
4215 === Prebuilt user space tracing helpers
4217 The LTTng-UST package provides a few helpers in the form of preloadable
4218 shared objects which automatically instrument system functions and calls.
4221 The helper shared objects are normally found in dir:{/usr/lib}. If you
4222 built LTTng-UST <<building-from-source,from source>>, they are probably
4223 located in dir:{/usr/local/lib}.
4225 The installed user space tracing helpers in LTTng-UST{nbsp}{revision} are:
4228 path:{liblttng-ust-libc-wrapper.so}::
4229 path:{liblttng-ust-pthread-wrapper.so}::
4230 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4231 memory and POSIX threads function tracing>>.
4233 path:{liblttng-ust-cyg-profile.so}::
4234 path:{liblttng-ust-cyg-profile-fast.so}::
4235 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4237 path:{liblttng-ust-dl.so}::
4238 <<liblttng-ust-dl,Dynamic linker tracing>>.
4240 To use a user space tracing helper with any user application:
4242 * Preload the helper shared object when you start the application:
4247 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4251 You can preload more than one helper:
4256 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4262 [[liblttng-ust-libc-pthread-wrapper]]
4263 ==== Instrument C standard library memory and POSIX threads functions
4265 The path:{liblttng-ust-libc-wrapper.so} and
4266 path:{liblttng-ust-pthread-wrapper.so} helpers
4267 add instrumentation to some C standard library and POSIX threads functions.
4271 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4273 |TP provider name |TP name |Instrumented function
4275 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4276 |`calloc` |man:calloc(3)
4277 |`realloc` |man:realloc(3)
4278 |`free` |man:free(3)
4279 |`memalign` |man:memalign(3)
4280 |`posix_memalign` |man:posix_memalign(3)
4284 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4286 |TP provider name |TP name |Instrumented function
4288 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4289 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4290 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4291 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4294 When you preload the shared object, it replaces the functions listed
4295 in the previous tables with wrappers which contain tracepoints and call
4296 the replaced functions.
4299 [[liblttng-ust-cyg-profile]]
4300 ==== Instrument function entry and exit
4302 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4303 to the entry and exit points of functions.
4305 man:gcc(1) and man:clang(1) have an option named
4306 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4307 which generates instrumentation calls for entry and exit to functions.
4308 The LTTng-UST function tracing helpers,
4309 path:{liblttng-ust-cyg-profile.so} and
4310 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
4311 to add tracepoints to the two generated functions (which contain
4312 `cyg_profile` in their names, hence the helper's name).
4314 To use the LTTng-UST function tracing helper, the source files to
4315 instrument must be built using the `-finstrument-functions` compiler flag.
4318 There are two versions of the LTTng-UST function tracing helper:
4320 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4321 that you should only use when it can be _guaranteed_ that the
4322 complete event stream is recorded without any lost event record.
4323 Any kind of duplicate information is left out.
4325 Assuming no event record is lost, having only the function addresses on
4326 entry is enough to create a call graph, since an event record always
4327 contains the ID of the CPU that generated it.
4329 You can use a tool like man:addr2line(1) to convert function addresses
4330 back to source file names and line numbers.
4332 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4333 which also works in use cases where event records might get discarded or
4334 not recorded from application startup.
4335 In these cases, the trace analyzer needs more information to be
4336 able to reconstruct the program flow.
4338 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4339 points of this helper.
4341 All the tracepoints that this helper provides have the
4342 log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4344 TIP: It's sometimes a good idea to limit the number of source files that
4345 you compile with the `-finstrument-functions` option to prevent LTTng
4346 from writing an excessive amount of trace data at run time. When using
4347 man:gcc(1), you can use the
4348 `-finstrument-functions-exclude-function-list` option to avoid
4349 instrumenting the entries and exits of specific function names.
4354 ==== Instrument the dynamic linker
4356 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4357 man:dlopen(3) and man:dlclose(3) function calls.
4359 See man:lttng-ust-dl(3) to learn more about the instrumentation points of this helper.
4364 [[java-application]]
4365 === User space Java agent
4367 You can instrument any Java application which uses one of the following logging frameworks:
4370 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4371 (JUL) core logging facilities.
4372 * http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4373 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4376 .LTTng-UST Java agent imported by a Java application.
4377 image::java-app.png[]
4379 Note that the methods described below are new in LTTng{nbsp}2.8.
4380 Previous LTTng versions use another technique.
4382 NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4383 and https://ci.lttng.org/[continuous integration], thus this version is
4384 directly supported. However, the LTTng-UST Java agent is also tested
4385 with OpenJDK{nbsp}7.
4390 ==== Use the LTTng-UST Java agent for `java.util.logging`
4392 To use the LTTng-UST Java agent in a Java application which uses
4393 `java.util.logging` (JUL):
4395 . In the Java application's source code, import the LTTng-UST
4396 log handler package for `java.util.logging`:
4401 import org.lttng.ust.agent.jul.LttngLogHandler;
4405 . Create an LTTng-UST JUL log handler:
4410 Handler lttngUstLogHandler = new LttngLogHandler();
4414 . Add this handler to the JUL loggers which should emit LTTng events:
4419 Logger myLogger = Logger.getLogger("some-logger");
4421 myLogger.addHandler(lttngUstLogHandler);
4425 . Use `java.util.logging` log statements and configuration as usual.
4426 The loggers with an attached LTTng-UST log handler can emit
4429 . Before exiting the application, remove the LTTng-UST log handler from
4430 the loggers attached to it and call its `close()` method:
4435 myLogger.removeHandler(lttngUstLogHandler);
4436 lttngUstLogHandler.close();
4440 This is not strictly necessary, but it is recommended for a clean
4441 disposal of the handler's resources.
4443 . Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4444 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4446 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4447 path] when you build the Java application.
4449 The JAR files are typically located in dir:{/usr/share/java}.
4451 IMPORTANT: The LTTng-UST Java agent must be
4452 <<installing-lttng,installed>> for the logging framework your application uses.
4455 .Use the LTTng-UST Java agent for `java.util.logging`.
4460 import java.io.IOException;
4461 import java.util.logging.Handler;
4462 import java.util.logging.Logger;
4463 import org.lttng.ust.agent.jul.LttngLogHandler;
4467 private static final int answer = 42;
4469 public static void main(String[] argv) throws Exception
4472 Logger logger = Logger.getLogger("jello");
4474 // Create an LTTng-UST log handler
4475 Handler lttngUstLogHandler = new LttngLogHandler();
4477 // Add the LTTng-UST log handler to our logger
4478 logger.addHandler(lttngUstLogHandler);
4481 logger.info("some info");
4482 logger.warning("some warning");
4484 logger.finer("finer information; the answer is " + answer);
4486 logger.severe("error!");
4488 // Not mandatory, but cleaner
4489 logger.removeHandler(lttngUstLogHandler);
4490 lttngUstLogHandler.close();
4499 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4502 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4503 <<enabling-disabling-events,create an event rule>> matching the
4504 `jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4509 $ lttng enable-event --jul jello
4513 Run the compiled class:
4517 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4520 <<basic-tracing-session-control,Stop tracing>> and inspect the
4530 In the resulting trace, an <<event,event record>> generated by a Java
4531 application using `java.util.logging` is named `lttng_jul:event` and
4532 has the following fields:
4535 Log record's message.
4541 Name of the class in which the log statement was executed.
4544 Name of the method in which the log statement was executed.
4547 Logging time (timestamp in milliseconds).
4550 Log level integer value.
4553 ID of the thread in which the log statement was executed.
4555 You can use the opt:lttng-enable-event(1):--loglevel or
4556 opt:lttng-enable-event(1):--loglevel-only option of the
4557 man:lttng-enable-event(1) command to target a range of JUL log levels
4558 or a specific JUL log level.
4563 ==== Use the LTTng-UST Java agent for Apache log4j
4565 To use the LTTng-UST Java agent in a Java application which uses Apache log4j:
4568 . In the Java application's source code, import the LTTng-UST
4569 log appender package for Apache log4j:
4574 import org.lttng.ust.agent.log4j.LttngLogAppender;
4578 . Create an LTTng-UST log4j log appender:
4583 Appender lttngUstLogAppender = new LttngLogAppender();
4587 . Add this appender to the log4j loggers which should emit LTTng events:
4592 Logger myLogger = Logger.getLogger("some-logger");
4594 myLogger.addAppender(lttngUstLogAppender);
4598 . Use Apache log4j log statements and configuration as usual. The
4599 loggers with an attached LTTng-UST log appender can emit LTTng events.
4601 . Before exiting the application, remove the LTTng-UST log appender from
4602 the loggers attached to it and call its `close()` method:
4607 myLogger.removeAppender(lttngUstLogAppender);
4608 lttngUstLogAppender.close();
4612 This is not strictly necessary, but it is recommended for a clean
4613 disposal of the appender's resources.
4615 . Include the LTTng-UST Java agent's common and log4j-specific JAR
4616 files, path:{lttng-ust-agent-common.jar} and
4617 path:{lttng-ust-agent-log4j.jar}, in the
4618 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4619 path] when you build the Java application.
4621 The JAR files are typically located in dir:{/usr/share/java}.
4623 IMPORTANT: The LTTng-UST Java agent must be
4624 <<installing-lttng,installed>> for the logging framework your application uses.
4627 .Use the LTTng-UST Java agent for Apache log4j.
4632 import org.apache.log4j.Appender;
4633 import org.apache.log4j.Logger;
4634 import org.lttng.ust.agent.log4j.LttngLogAppender;
4638 private static final int answer = 42;
4640 public static void main(String[] argv) throws Exception
4643 Logger logger = Logger.getLogger("jello");
4645 // Create an LTTng-UST log appender
4646 Appender lttngUstLogAppender = new LttngLogAppender();
4648 // Add the LTTng-UST log appender to our logger
4649 logger.addAppender(lttngUstLogAppender);
4652 logger.info("some info");
4653 logger.warn("some warning");
4655 logger.debug("debug information; the answer is " + answer);
4657 logger.fatal("error!");
4659 // Not mandatory, but cleaner
4660 logger.removeAppender(lttngUstLogAppender);
4661 lttngUstLogAppender.close();
4667 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR file):
4672 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4675 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4676 <<enabling-disabling-events,create an event rule>> matching the
4677 `jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4682 $ lttng enable-event --log4j jello
4686 Run the compiled class:
4690 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4693 <<basic-tracing-session-control,Stop tracing>> and inspect the
4703 In the resulting trace, an <<event,event record>> generated by a Java
4704 application using log4j is named `lttng_log4j:event` and
4705 has the following fields:
4708 Log record's message.
4714 Name of the class in which the log statement was executed.
4717 Name of the method in which the log statement was executed.
4720 Name of the file in which the executed log statement is located.
4723 Line number at which the log statement was executed.
4729 Log level integer value.
4732 Name of the Java thread in which the log statement was executed.
4734 You can use the opt:lttng-enable-event(1):--loglevel or
4735 opt:lttng-enable-event(1):--loglevel-only option of the
4736 man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4737 or a specific log4j log level.
4741 [[java-application-context]]
4742 ==== Provide application-specific context fields in a Java application
4744 A Java application-specific context field is a piece of state provided
4745 by the application which <<adding-context,you can add>>, using the
4746 man:lttng-add-context(1) command, to each <<event,event record>>
4747 produced by the log statements of this application.
4749 For example, a given object might have a current request ID variable.
4750 You can create a context information retriever for this object and
4751 assign a name to this current request ID. You can then, using the
4752 man:lttng-add-context(1) command, add this context field by name to
4753 the JUL or log4j <<channel,channel>>.
4755 To provide application-specific context fields in a Java application:
4757 . In the Java application's source code, import the LTTng-UST
4758 Java agent context classes and interfaces:
4763 import org.lttng.ust.agent.context.ContextInfoManager;
4764 import org.lttng.ust.agent.context.IContextInfoRetriever;
4768 . Create a context information retriever class, that is, a class which
4769 implements the `IContextInfoRetriever` interface:
4774 class MyContextInfoRetriever implements IContextInfoRetriever
4777 public Object retrieveContextInfo(String key)
4779 if (key.equals("intCtx")) {
4781 } else if (key.equals("strContext")) {
4782 return "context value!";
4791 This `retrieveContextInfo()` method is the only member of the
4792 `IContextInfoRetriever` interface. Its role is to return the current
4793 value of a state by name to create a context field. The names of the
4794 context fields and which state variables they return depend on your application.
4797 All primitive types and objects are supported as context fields.
4798 When `retrieveContextInfo()` returns an object, the context field
4799 serializer calls its `toString()` method to add a string field to
4800 event records. The method can also return `null`, which means that
4801 no context field is available for the required name.
4803 . Register an instance of your context information retriever class to
4804 the context information manager singleton:
4809 IContextInfoRetriever cir = new MyContextInfoRetriever();
4810 ContextInfoManager cim = ContextInfoManager.getInstance();
4811 cim.registerContextInfoRetriever("retrieverName", cir);
4815 . Before exiting the application, remove your context information
4816 retriever from the context information manager singleton:
4821 ContextInfoManager cim = ContextInfoManager.getInstance();
4822 cim.unregisterContextInfoRetriever("retrieverName");
4826 This is not strictly necessary, but it is recommended for a clean
4827 disposal of the manager's resources.
4829 . Build your Java application with LTTng-UST Java agent support as
4830 usual, following the procedure for either the <<jul,JUL>> or
4831 <<log4j,Apache log4j>> framework.
4834 .Provide application-specific context fields in a Java application.
4839 import java.util.logging.Handler;
4840 import java.util.logging.Logger;
4841 import org.lttng.ust.agent.jul.LttngLogHandler;
4842 import org.lttng.ust.agent.context.ContextInfoManager;
4843 import org.lttng.ust.agent.context.IContextInfoRetriever;
4847 // Our context information retriever class
4848 private static class MyContextInfoRetriever
4849 implements IContextInfoRetriever
4852 public Object retrieveContextInfo(String key) {
4853 if (key.equals("intCtx")) {
4855 } else if (key.equals("strContext")) {
4856 return "context value!";
4863 private static final int answer = 42;
4865 public static void main(String args[]) throws Exception
4867 // Get the context information manager instance
4868 ContextInfoManager cim = ContextInfoManager.getInstance();
4870 // Create and register our context information retriever
4871 IContextInfoRetriever cir = new MyContextInfoRetriever();
4872 cim.registerContextInfoRetriever("myRetriever", cir);
4875 Logger logger = Logger.getLogger("jello");
4877 // Create an LTTng-UST log handler
4878 Handler lttngUstLogHandler = new LttngLogHandler();
4880 // Add the LTTng-UST log handler to our logger
4881 logger.addHandler(lttngUstLogHandler);
4884 logger.info("some info");
4885 logger.warning("some warning");
4887 logger.finer("finer information; the answer is " + answer);
4889 logger.severe("error!");
4891 // Not mandatory, but cleaner
4892 logger.removeHandler(lttngUstLogHandler);
4893 lttngUstLogHandler.close();
4894 cim.unregisterContextInfoRetriever("myRetriever");
4903 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4906 <<creating-destroying-tracing-sessions,Create a tracing session>>
4907 and <<enabling-disabling-events,create an event rule>> matching the `jello` JUL logger:
4913 $ lttng enable-event --jul jello
4916 <<adding-context,Add the application-specific context fields>> to the JUL channel:
4921 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
4922 $ lttng add-context --jul --type='$app.myRetriever:strContext'
4925 <<basic-tracing-session-control,Start tracing>>:
4932 Run the compiled class:
4936 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4939 <<basic-tracing-session-control,Stop tracing>> and inspect the
4951 [[python-application]]
4952 === User space Python agent
4954 You can instrument a Python 2 or Python 3 application which uses the
4955 standard https://docs.python.org/3/library/logging.html[`logging`] package.
4958 Each log statement emits an LTTng event once the
4959 application module imports the
4960 <<lttng-ust-agents,LTTng-UST Python agent>> package.
4963 .A Python application importing the LTTng-UST Python agent.
4964 image::python-app.png[]
4966 To use the LTTng-UST Python agent:
4968 . In the Python application's source code, import the LTTng-UST Python agent:
4978 The LTTng-UST Python agent automatically adds its logging handler to the
4979 root logger at import time.
4981 Any log statement that the application executes before this import does
4982 not emit an LTTng event.
4984 IMPORTANT: The LTTng-UST Python agent must be
4985 <<installing-lttng,installed>>.
4987 . Use log statements and logging configuration as usual.
4988 Since the LTTng-UST Python agent adds a handler to the _root_
4989 logger, you can trace any log statement from any logger.
.Use the LTTng-UST Python agent.

import lttngust
import logging


def main():
    logging.basicConfig()
    logger = logging.getLogger('my-logger')

    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')


if __name__ == '__main__':
    main()
NOTE: `logging.basicConfig()`, which adds a basic logging handler that
prints to the standard error stream to the root logger, is not strictly
required for LTTng-UST tracing to work. Without it, however, versions of
Python preceding 3.2 print a warning message indicating that no handler
exists for the logger `my-logger`.
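In practice, you may want the same script to run on hosts where the agent is not installed. The following sketch, which assumes `lttngust` is the agent's import name, guards the import so the application degrades gracefully to plain `logging` when the agent is absent:

```python
import logging

# Import the LTTng-UST Python agent if it is available; importing it is
# enough to attach the agent's handler to the root logger.
try:
    import lttngust  # noqa: F401
    agent_loaded = True
except ImportError:
    agent_loaded = False

logging.basicConfig()
logger = logging.getLogger('my-logger')
logger.info('LTTng-UST Python agent loaded: %s', agent_loaded)
```

When the agent is absent, the log statements still reach the basic handler on standard error; when it is present, they also emit LTTng events.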
<<creating-destroying-tracing-sessions,Create a tracing session>>,
<<enabling-disabling-events,create an event rule>> matching the
`my-logger` Python logger, and <<basic-tracing-session-control,start
tracing>>:

$ lttng create
$ lttng enable-event --python my-logger
$ lttng start

Run the Python script:

<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
events.
In the resulting trace, an <<event,event record>> generated by a Python
application is named `lttng_python:event` and has the following fields:

`asctime`::
    Logging time (string).

`msg`::
    Log record's message.

`funcName`::
    Name of the function in which the log statement was executed.

`lineno`::
    Line number at which the log statement was executed.

`int_loglevel`::
    Log level integer value.

`thread`::
    ID of the Python thread in which the log statement was executed.

`threadName`::
    Name of the Python thread in which the log statement was executed.
You can use the opt:lttng-enable-event(1):--loglevel or
opt:lttng-enable-event(1):--loglevel-only option of the
man:lttng-enable-event(1) command to target a range of Python log levels
or a specific Python log level.
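The log level integers are the standard `logging` module values; assuming the agent records them unchanged in the `int_loglevel` field, the usual thresholds are:

```python
import logging

# Standard library logging level integers, from least to most severe;
# a range rule targets levels at least as severe as the given one.
levels = [
    ('DEBUG', logging.DEBUG),        # 10
    ('INFO', logging.INFO),          # 20
    ('WARNING', logging.WARNING),    # 30
    ('ERROR', logging.ERROR),        # 40
    ('CRITICAL', logging.CRITICAL),  # 50
]
```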
When an application imports the LTTng-UST Python agent, the agent tries
to register to a <<lttng-sessiond,session daemon>>. Note that you must
<<start-sessiond,start the session daemon>> _before_ you run the Python
application. If a session daemon is found, the agent tries to register
to it for 5{nbsp}seconds, after which the application continues
without LTTng tracing support. You can override this timeout value with
the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable.

If the session daemon stops while a Python application with an imported
LTTng-UST Python agent runs, the agent retries to connect and to
register to a session daemon every 3{nbsp}seconds. You can override this
delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
variable.
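Both variables must be in the environment before the application imports the agent, since the agent reads them at import time. A sketch (the values shown are arbitrary, and how each value is interpreted, seconds or milliseconds, depends on your agent version, so check your LTTng-UST documentation):

```python
import os

# Override the registration timeout and retry delay. These must be set
# before `import lttngust` runs, e.g. at the very top of the main
# module or in the launching shell.
os.environ['LTTNG_UST_PYTHON_REGISTER_TIMEOUT'] = '10000'
os.environ['LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY'] = '5'
```

Setting the variables in the launching shell (`export LTTNG_UST_PYTHON_REGISTER_TIMEOUT=…`) avoids any import-order concern.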
[[proc-lttng-logger-abi]]
=== LTTng logger

The `lttng-tracer` Linux kernel module, part of
<<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
path:{/proc/lttng-logger} when it's loaded. Any application can write
text data to this file to emit an LTTng event.

.An application writes to the LTTng logger file to emit an LTTng event.
image::lttng-logger.png[]

The LTTng logger is the quickest, though not the most efficient, method
to add instrumentation to an application. It is designed
mostly to instrument shell scripts:
$ echo "Some message, some $variable" > /proc/lttng-logger

Any event that the LTTng logger emits is named `lttng_logger` and
belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
other instrumentation points in the kernel tracing domain, **any Unix
user** can <<enabling-disabling-events,create an event rule>> which
matches its event name, not only the root user or users in the
<<tracing-group,tracing group>>.
To use the LTTng logger:

* From any application, write text data to the path:{/proc/lttng-logger}
file.

The `msg` field of `lttng_logger` event records contains the
written data.

NOTE: The maximum message length of an LTTng logger event is
1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
than one event to contain the remaining data.
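If an application can produce messages longer than 1024{nbsp}bytes and you prefer to control where the payload is split, you can chunk it yourself before writing. A minimal sketch (the `log_message()` helper and its fallback to doing nothing when path:{/proc/lttng-logger} is absent are illustrative choices, not part of the LTTng logger ABI):

```python
import os

LTTNG_LOGGER = '/proc/lttng-logger'
MAX_MSG = 1024  # maximum bytes carried by one lttng_logger event


def log_message(text):
    """Write text to the LTTng logger, one write per 1024-byte chunk.

    Returns the number of chunks, i.e. the number of events emitted
    when the LTTng logger file exists.
    """
    data = text.encode('utf-8')
    chunks = [data[i:i + MAX_MSG] for i in range(0, len(data), MAX_MSG)] or [b'']
    if os.path.exists(LTTNG_LOGGER):
        for chunk in chunks:
            with open(LTTNG_LOGGER, 'wb') as logger_file:
                logger_file.write(chunk)
    return len(chunks)
```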
You should not use the LTTng logger to trace a user application which
can be instrumented in a more efficient way, namely:

* <<c-application,C and $$C++$$ applications>>.
* <<java-application,Java applications>>.
* <<python-application,Python applications>>.

.Use the LTTng logger.

echo 'Hello, World!' > /proc/lttng-logger

df --human-readable --print-type / > /proc/lttng-logger
<<creating-destroying-tracing-sessions,Create a tracing session>>,
<<enabling-disabling-events,create an event rule>> matching the
`lttng_logger` Linux kernel tracepoint, and
<<basic-tracing-session-control,start tracing>>:

$ lttng enable-event --kernel lttng_logger

Run the Bash script:

<<basic-tracing-session-control,Stop tracing>> and inspect the recorded
events.
[[instrumenting-linux-kernel]]
=== LTTng kernel tracepoints

NOTE: This section shows how to _add_ instrumentation points to the
Linux kernel. The kernel's subsystems are already thoroughly
instrumented at strategic places for LTTng when you
<<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
package.

There are two methods to instrument the Linux kernel:

. <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
tracepoint which uses the `TRACE_EVENT()` API.

Choose this if you want to instrument a Linux kernel tree with an
instrumentation point compatible with ftrace, perf, and SystemTap.

. Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
instrument an out-of-tree kernel module.

Choose this if you don't need ftrace, perf, or SystemTap support.
[[linux-add-lttng-layer]]
==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint

This section shows how to add an LTTng layer to existing ftrace
instrumentation using the `TRACE_EVENT()` API.

This section does not document the `TRACE_EVENT()` macro. You can
read the following articles to learn more about this API:

* http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
* http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
* http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]

The following procedure assumes that your ftrace tracepoints are
correctly defined in their own header and that they are created in
one source file using the `CREATE_TRACE_POINTS` definition.

To add an LTTng layer over an existing ftrace tracepoint:

. Make sure the following kernel configuration options are
enabled:

* `CONFIG_HIGH_RES_TIMERS`
* `CONFIG_TRACEPOINTS`
. Build the Linux source tree with your custom ftrace tracepoints.
. Boot the resulting Linux image on your target system.

Confirm that the tracepoints exist by looking for their names in the
dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
is your subsystem's name.

. Get a copy of the latest LTTng-modules{nbsp}{revision}:

$ cd $(mktemp -d) &&
wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
tar -xf lttng-modules-latest-2.9.tar.bz2 &&
cd lttng-modules-2.9.*
. In dir:{instrumentation/events/lttng-module}, relative to the root
of the LTTng-modules source tree, create a header file named
+__subsys__.h+ for your custom subsystem +__subsys__+ and write your
LTTng-modules tracepoint definitions using the LTTng-modules
macros.

Start with this template:
.path:{instrumentation/events/lttng-module/my_subsys.h}

#undef TRACE_SYSTEM
#define TRACE_SYSTEM my_subsys

#if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _LTTNG_MY_SUBSYS_H

#include "../../../probes/lttng-tracepoint-event.h"
#include <linux/tracepoint.h>

LTTNG_TRACEPOINT_EVENT(
    /*
     * Format is identical to TRACE_EVENT()'s version for the three
     * following macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* LTTng-modules specific macros */
    TP_FIELDS(
        ctf_integer(int, my_int_field, my_int)
        ctf_string(my_bar_field, my_string)
    )
)

#endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */

#include "../../../probes/define_trace.h"
The entries in the `TP_FIELDS()` section are the list of fields for the
LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
ftrace's `TRACE_EVENT()` macro.

See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
complete description of the available `ctf_*()` macros.
. Create the LTTng-modules probe's kernel module C source file,
+probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
subsystem name:

.path:{probes/lttng-probe-my-subsys.c}

#include <linux/module.h>
#include "../lttng-tracer.h"

/*
 * Build-time verification of mismatch between mainline
 * TRACE_EVENT() arguments and the LTTng-modules adaptation
 * layer LTTNG_TRACEPOINT_EVENT() arguments.
 */
#include <trace/events/my_subsys.h>

/* Create LTTng tracepoint probes */
#define LTTNG_PACKAGE_BUILD
#define CREATE_TRACE_POINTS
#define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module

#include "../instrumentation/events/lttng-module/my_subsys.h"

MODULE_LICENSE("GPL and additional rights");
MODULE_AUTHOR("Your name <your-email>");
MODULE_DESCRIPTION("LTTng my_subsys probes");
MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
    __stringify(LTTNG_MODULES_MINOR_VERSION) "."
    __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
    LTTNG_MODULES_EXTRAVERSION);
. Edit path:{probes/KBuild} and add your new kernel module object
next to the existing ones:

.path:{probes/KBuild}

obj-m += lttng-probe-module.o
obj-m += lttng-probe-power.o

obj-m += lttng-probe-my-subsys.o

. Build and install the LTTng kernel modules:

$ make KERNELDIR=/path/to/linux
# make modules_install && depmod -a

Replace `/path/to/linux` with the path to the Linux source tree where
you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
Note that you can also use the
<<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
C code that need to be executed before the event fields are recorded.

The best way to learn how to use the previous LTTng-modules macros is to
inspect the existing LTTng-modules tracepoint definitions in the
dir:{instrumentation/events/lttng-module} header files. Compare them
with the Linux kernel mainline versions in the
dir:{include/trace/events} directory of the Linux source tree.
[[lttng-tracepoint-event-code]]
===== Use custom C code to access the data for tracepoint fields

Although we recommend always using the
<<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
the arguments and fields of an LTTng-modules tracepoint when possible,
you sometimes need a more complex process to access the data that the
tracer records as event record fields. In other words, you need local
variables and multiple C{nbsp}statements instead of the simple
argument-based expressions that you pass to the
<<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.

You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
`LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
a block of C{nbsp}code to be executed before LTTng records the fields.
The structure of this macro is:
.`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.

LTTNG_TRACEPOINT_EVENT_CODE(
    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for the following three macro parameters:
     */
    my_subsys_my_event,
    TP_PROTO(int my_int, const char *my_string),
    TP_ARGS(my_int, my_string),

    /* Declarations of custom local variables */
    TP_locvar(
        int a = 0;
        unsigned long b = 0;
        const char *name = "(undefined)";
        struct my_struct *my_struct;
    ),

    /*
     * Custom code which uses both tracepoint arguments
     * (in TP_ARGS()) and local variables (in TP_locvar()).
     *
     * Local variables are actually members of a structure pointed
     * to by the special variable tp_locvar.
     */
    TP_code(
        tp_locvar->a = my_int + 17;
        tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
        tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
        tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
        put_my_struct(tp_locvar->my_struct);
    ),

    /*
     * Format identical to the LTTNG_TRACEPOINT_EVENT()
     * version for this, except that tp_locvar members can be
     * used in the argument expression parameters of
     * the ctf_*() macros.
     */
    TP_FIELDS(
        ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
        ctf_integer(int, my_struct_a, tp_locvar->a)
        ctf_string(my_string_field, my_string)
        ctf_string(my_struct_name, tp_locvar->name)
    )
)
IMPORTANT: The C code defined in `TP_code()` must not have any side
effects when executed. In particular, the code must not allocate
memory or get resources without deallocating this memory or putting
those resources afterwards.
[[instrumenting-linux-kernel-tracing]]
==== Load and unload a custom probe kernel module

You must load a <<lttng-adaptation-layer,created LTTng-modules probe
kernel module>> in the kernel before it can emit LTTng events.

To load the default probe kernel modules and a custom probe kernel
module:

* Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
probe modules to load when starting a root <<lttng-sessiond,session
daemon>>:

.Load the `my_subsys`, `usb`, and the default probe modules.

# lttng-sessiond --extra-kmod-probes=my_subsys,usb

You only need to pass the subsystem name, not the whole kernel module
name.

To load _only_ a given custom probe kernel module:

* Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
modules to load when starting a root session daemon:

.Load only the `my_subsys` and `usb` probe modules.

# lttng-sessiond --kmod-probes=my_subsys,usb

To confirm that a probe module is loaded:

* Use man:lsmod(8):

$ lsmod | grep lttng_probe_usb

To unload the loaded probe modules:

* Kill the session daemon with `SIGTERM`:

# pkill lttng-sessiond

You can also use man:modprobe(8)'s `--remove` option if the session
daemon terminates abnormally.
[[controlling-tracing]]
== Tracing control

Once an application or a Linux kernel is
<<instrumenting,instrumented>> for LTTng tracing,
you can _trace_ it.

This section is divided into topics on how to use the various
<<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
command-line tool>>, to _control_ the LTTng daemons and tracers.

NOTE: In the following subsections, we refer to a man:lttng(1) command
using its man page name. For example, instead of _Run the `create`
command to..._, we use _Run the man:lttng-create(1) command to..._.
[[start-sessiond]]
=== Start a session daemon

In some situations, you need to run a <<lttng-sessiond,session daemon>>
(man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
command-line tool.

You will see the following error when you run a command while no session
daemon is running:

Error: No session daemon is available

The only command that automatically runs a session daemon is
man:lttng-create(1), which you use to
<<creating-destroying-tracing-sessions,create a tracing session>>. While
this is most of the time the first operation that you do, sometimes it's
not. Some examples are:

* <<list-instrumentation-points,List the available instrumentation points>>.
* <<saving-loading-tracing-session,Load a tracing session configuration>>.

[[tracing-group]] Each Unix user must have its own running session
daemon to trace user applications. The session daemon that the root user
starts is the only one allowed to control the LTTng kernel tracer. Users
that are part of the _tracing group_ can control the root session
daemon. The default tracing group name is `tracing`; you can set it to
something else with the opt:lttng-sessiond(8):--group option when you
start the root session daemon.
To start a user session daemon:

* Run man:lttng-sessiond(8):

$ lttng-sessiond --daemonize

To start the root session daemon:

* Run man:lttng-sessiond(8) as the root user:

# lttng-sessiond --daemonize

In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
start the session daemon in foreground.

To stop a session daemon, use man:kill(1) on its process ID (standard
`TERM` signal).

Note that some Linux distributions could manage the LTTng session daemon
as a service. In this case, you should use the service manager to
start, restart, and stop session daemons.
[[creating-destroying-tracing-sessions]]
=== Create and destroy a tracing session

Almost all the LTTng control operations happen in the scope of
a <<tracing-session,tracing session>>, which is the dialogue between the
<<lttng-sessiond,session daemon>> and you.

To create a tracing session with a generated name:

* Use the man:lttng-create(1) command:

$ lttng create

The created tracing session's name is `auto` followed by the
creation date.

To create a tracing session with a specific name:

* Use the optional argument of the man:lttng-create(1) command:

$ lttng create my-session

Replace `my-session` with the specific tracing session name.

LTTng appends the creation date to the created tracing session's name.

LTTng writes the traces of a tracing session in
+$LTTNG_HOME/lttng-traces/__name__+ by default, where +__name__+ is the
name of the tracing session. Note that the env:LTTNG_HOME environment
variable defaults to `$HOME` if not set.
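The default output location resolves like this; a sketch, assuming the `lttng-traces` directory name used by common LTTng versions (adjust to match your installation):

```python
import os


def default_trace_dir(session_name):
    """Return the default trace output directory for a session name."""
    # LTTNG_HOME falls back to HOME when it is not set.
    home = os.environ.get('LTTNG_HOME') or os.path.expanduser('~')
    return os.path.join(home, 'lttng-traces', session_name)
```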
To output LTTng traces to a non-default location:

* Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:

$ lttng create my-session --output=/tmp/some-directory

You may create as many tracing sessions as you wish.

To list all the existing tracing sessions for your Unix user:

* Use the man:lttng-list(1) command:

$ lttng list

When you create a tracing session, it is set as the _current tracing
session_. The following man:lttng(1) commands operate on the current
tracing session when you don't specify one:
[role="list-3-cols"]

To change the current tracing session:

* Use the man:lttng-set-session(1) command:

$ lttng set-session new-session

Replace `new-session` by the name of the new current tracing session.

When you are done tracing in a given tracing session, you can destroy
it. This operation frees the resources held by the tracing session;
it does not destroy the trace data that LTTng wrote for
this tracing session.

To destroy the current tracing session:

* Use the man:lttng-destroy(1) command:

$ lttng destroy
[[list-instrumentation-points]]
=== List the available instrumentation points

The <<lttng-sessiond,session daemon>> can query the running instrumented
user applications and the Linux kernel to get a list of available
instrumentation points. For the Linux kernel <<domain,tracing domain>>,
they are tracepoints and system calls. For the user space tracing
domain, they are tracepoints. For the other tracing domains, they are
loggers.

To list the available instrumentation points:

* Use the man:lttng-list(1) command with the requested tracing domain's
option:

* opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
must be a root user, or it must be a member of the
<<tracing-group,tracing group>>).
* opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
kernel system calls (your Unix user must be a root user, or it must be
a member of the tracing group).
* opt:lttng-list(1):--userspace: user space tracepoints.
* opt:lttng-list(1):--jul: `java.util.logging` loggers.
* opt:lttng-list(1):--log4j: Apache log4j loggers.
* opt:lttng-list(1):--python: Python loggers.
.List the available user space tracepoints.

$ lttng list --userspace

.List the available Linux kernel system call tracepoints.

$ lttng list --kernel --syscall
[[enabling-disabling-events]]
=== Create and enable an event rule

Once you <<creating-destroying-tracing-sessions,create a tracing
session>>, you can create <<event,event rules>> with the
man:lttng-enable-event(1) command.

You specify each condition with a command-line option. The available
condition options are shown in the following table.

[role="growable",cols="asciidoc,asciidoc,default"]
.Condition command-line options for the man:lttng-enable-event(1) command.
|====
|Option |Description |Applicable tracing domains

|
. `--syscall`
. +--probe=__ADDR__+
. +--function=__ADDR__+

|
Instead of using the default _tracepoint_ instrumentation type, use:

. A Linux system call.
. A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
. The entry and return points of a Linux function (symbol or address).

|Linux kernel.

|First positional argument.

|
Tracepoint or system call name. In the case of a Linux KProbe or
function, this is a custom name given to the event rule. With the
JUL, log4j, and Python domains, this is a logger name.

With a tracepoint, logger, or system call name, the last character
can be `*` to match anything that remains.

|All.

|
. +--loglevel=__LEVEL__+
. +--loglevel-only=__LEVEL__+

|
. Match only tracepoints or log statements with a logging level at
least as severe as +__LEVEL__+.
. Match only tracepoints or log statements with a logging level
equal to +__LEVEL__+.

See man:lttng-enable-event(1) for the list of available logging level
names.

|User space, JUL, log4j, and Python.

|+--exclude=__EXCLUSIONS__+

|
When you use a `*` character at the end of the tracepoint or logger
name (first positional argument), exclude the specific names in the
comma-delimited list +__EXCLUSIONS__+.

|
User space, JUL, log4j, and Python.

|+--filter=__EXPR__+

|
Match only events which satisfy the expression +__EXPR__+.

See man:lttng-enable-event(1) to learn more about the syntax of a
filter expression.

|All.
|====
You attach an event rule to a <<channel,channel>> on creation. If you do
not specify the channel with the opt:lttng-enable-event(1):--channel
option, and if the event rule to create is the first in its
<<domain,tracing domain>> for a given tracing session, then LTTng
creates a _default channel_ for you. This default channel is reused in
subsequent invocations of the man:lttng-enable-event(1) command for the
same tracing domain.

An event rule is always enabled at creation time.

The following examples show how you can combine the previous
command-line options to create simple to more complex event rules.
.Create an event rule targeting a Linux kernel tracepoint (default channel).

$ lttng enable-event --kernel sched_switch

.Create an event rule matching four Linux kernel system calls (default channel).

$ lttng enable-event --kernel --syscall open,write,read,close

.Create event rules matching tracepoints with filter expressions (default channel).

$ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'

$ lttng enable-event --kernel --all \
        --filter='$ctx.tid == 1988 || $ctx.tid == 1534'

$ lttng enable-event --jul my_logger \
        --filter='$app.retriever:cur_msg_id > 3'

IMPORTANT: Make sure to always quote the filter string when you
use man:lttng(1) from a shell.

.Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).

$ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO

IMPORTANT: Make sure to always quote the wildcard character when you
use man:lttng(1) from a shell.

.Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).

$ lttng enable-event --python my-app.'*' \
        --exclude='my-app.module,my-app.hello'

.Create an event rule matching any Apache log4j logger with a specific log level (default channel).

$ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN

.Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.

$ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel

The event rules of a given channel form a whitelist: as soon as an
emitted event passes one of them, LTTng can record the event. For
example, an event named `my_app:my_tracepoint` emitted from a user space
tracepoint with a `TRACE_ERROR` log level passes both of the following
rules:

$ lttng enable-event --userspace my_app:my_tracepoint
$ lttng enable-event --userspace my_app:my_tracepoint \
        --loglevel=TRACE_INFO

The second event rule is redundant: the first one includes
the second one.
[[disable-event-rule]]
=== Disable an event rule

To disable an event rule that you <<enabling-disabling-events,created>>
previously, use the man:lttng-disable-event(1) command. This command
disables _all_ the event rules (of a given tracing domain and channel)
which match an instrumentation point. The other conditions are not
supported as of LTTng{nbsp}{revision}.

The LTTng tracer does not record an emitted event which passes
a _disabled_ event rule.

.Disable an event rule matching a Python logger (default channel).

$ lttng disable-event --python my-logger

.Disable an event rule matching all `java.util.logging` loggers (default channel).

$ lttng disable-event --jul '*'

.Disable _all_ the event rules of the default channel.

The opt:lttng-disable-event(1):--all-events option is not, like the
opt:lttng-enable-event(1):--all option of man:lttng-enable-event(1), the
equivalent of the event name `*` (wildcard): it disables _all_ the event
rules of a given channel.

$ lttng disable-event --jul --all-events

NOTE: You cannot delete an event rule once you create it.
=== Get the status of a tracing session

To get the status of the current tracing session, that is, its
parameters, its channels, event rules, and their attributes:

* Use the man:lttng-status(1) command:

$ lttng status

To get the status of any tracing session:

* Use the man:lttng-list(1) command with the tracing session's name:

$ lttng list my-session

Replace `my-session` with the desired tracing session's name.
[[basic-tracing-session-control]]
=== Start and stop a tracing session

Once you <<creating-destroying-tracing-sessions,create a tracing
session>> and
<<enabling-disabling-events,create one or more event rules>>,
you can start and stop the tracers for this tracing session.

To start tracing in the current tracing session:

* Use the man:lttng-start(1) command:

$ lttng start

LTTng is very flexible: you can launch user applications before
or after you start the tracers. The tracers only record the events
if they pass enabled event rules and if they occur while the tracers are
started.

To stop tracing in the current tracing session:

* Use the man:lttng-stop(1) command:

$ lttng stop

If there were <<channel-overwrite-mode-vs-discard-mode,lost event
records>> or lost sub-buffers since the last time you ran
man:lttng-start(1), warnings are printed when you run the
man:lttng-stop(1) command.
[[enabling-disabling-channels]]
=== Create a channel

Once you create a tracing session, you can create a <<channel,channel>>
with the man:lttng-enable-channel(1) command.

Note that LTTng automatically creates a default channel when, for a
given <<domain,tracing domain>>, no channels exist and you
<<enabling-disabling-events,create>> the first event rule. This default
channel is named `channel0` and its attributes are set to reasonable
values. Therefore, you only need to create a channel when you need
non-default attributes.

You specify each non-default channel attribute with a command-line
option when you use the man:lttng-enable-channel(1) command. The
available command-line options are:
[role="growable",cols="asciidoc,asciidoc"]
.Command-line options for the man:lttng-enable-channel(1) command.
|====
|Option |Description

|`--overwrite`

|
Use the _overwrite_
<<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
the default _discard_ mode.

|`--buffers-pid` (user space tracing domain only)

|
Use the per-process <<channel-buffering-schemes,buffering scheme>>
instead of the default per-user buffering scheme.

|+--subbuf-size=__SIZE__+

|
Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
either for each Unix user (default), or for each instrumented process.

See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.

|+--num-subbuf=__COUNT__+

|
Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
for each Unix user (default), or for each instrumented process.

See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.

|+--tracefile-size=__SIZE__+

|
Set the maximum size of each trace file that this channel writes within
a stream to +__SIZE__+ bytes instead of no maximum.

See <<tracefile-rotation,Trace file count and size>>.

|+--tracefile-count=__COUNT__+

|
Limit the number of trace files that this channel creates to
+__COUNT__+ files instead of no limit.

See <<tracefile-rotation,Trace file count and size>>.

|+--switch-timer=__PERIODUS__+

|
Set the <<channel-switch-timer,switch timer period>>
to +__PERIODUS__+{nbsp}µs.

|+--read-timer=__PERIODUS__+

|
Set the <<channel-read-timer,read timer period>>
to +__PERIODUS__+{nbsp}µs.

|+--output=__TYPE__+ (Linux kernel tracing domain only)

|
Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
|====
You can only create a channel in the Linux kernel and user space
<<domain,tracing domains>>: other tracing domains have their own channel
created on the fly when <<enabling-disabling-events,creating event
rules>>.

Because of a current LTTng limitation, you must create all channels
_before_ you <<basic-tracing-session-control,start tracing>> in a given
tracing session, that is, before the first time you run
man:lttng-start(1).

Since LTTng automatically creates a default channel when you use the
man:lttng-enable-event(1) command with a specific tracing domain, you
cannot, for example, create a Linux kernel event rule, start tracing,
and then create a user space event rule, because no user space channel
exists yet and it's too late to create one.

For this reason, make sure to configure your channels properly
before starting the tracers for the first time!
6188 The following examples show how you can combine the previous
6189 command-line options to create simple to more complex channels.
6191 .Create a Linux kernel channel with default attributes.
6195 $ lttng enable-channel --kernel my-channel
6199 .Create a user space channel with 4 sub-buffers of 1{nbsp}MiB each, per CPU, per instrumented process.
6203 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6204 --buffers-pid my-channel
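As a rough capacity check, you can estimate the total ring buffer memory that such a channel allocates. The sketch below is not LTTng code; it assumes a hypothetical 4-CPU target with 2 instrumented processes, so the real numbers depend on your system:

```shell
# Hypothetical sizing sketch: total ring buffer memory for a per-process
# (--buffers-pid) channel is subbuf-size x num-subbuf x CPU count x process count.
subbuf_size=$((1024 * 1024))   # --subbuf-size=1M
num_subbuf=4                   # --num-subbuf=4
cpus=4                         # assumed CPU count
procs=2                        # assumed instrumented process count
total=$((subbuf_size * num_subbuf * cpus * procs))
echo "$total bytes ($((total / 1024 / 1024)) MiB)"
```

With those assumed values, the channel holds 32{nbsp}MiB of sub-buffers in total.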
6208 .Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream.
6212 $ lttng enable-channel --kernel --tracefile-count=8 \
6213 --tracefile-size=4194304 my-channel
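The rotation attributes above also bound the disk usage: per stream, the channel never writes more than the trace file count multiplied by the trace file size. A quick sketch of that bound for the example values:

```shell
# Maximum disk usage per stream with --tracefile-count=8 --tracefile-size=4194304:
tracefile_size=4194304   # 4 MiB per trace file
tracefile_count=8
max_per_stream=$((tracefile_size * tracefile_count))
echo "$((max_per_stream / 1024 / 1024)) MiB per stream"
```

Once the limit is reached, LTTng overwrites the oldest trace file of the stream, so the bound holds for the whole tracing session.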
6217 .Create a user space channel in overwrite (or _flight recorder_) mode.
6221 $ lttng enable-channel --userspace --overwrite my-channel
6225 You can <<enabling-disabling-events,create>> the same event rule in
6226 two different channels:
6230 $ lttng enable-event --userspace --channel=my-channel app:tp
6231 $ lttng enable-event --userspace --channel=other-channel app:tp
6234 If both channels are enabled, when a tracepoint named `app:tp` is
6235 reached, LTTng records two events, one for each channel.
6239 === Disable a channel
6241 To disable a specific channel that you <<enabling-disabling-channels,created>>
6242 previously, use the man:lttng-disable-channel(1) command.
6244 .Disable a specific Linux kernel channel.
6248 $ lttng disable-channel --kernel my-channel
6252 The state of a channel precedes the individual states of event rules
6253 attached to it: event rules which belong to a disabled channel, even if
6254 they are enabled, are also considered disabled.
6258 === Add context fields to a channel
6260 Event record fields in trace files provide important information about
6261 events that occurred previously, but sometimes some external context may
6262 help you solve a problem faster. Examples of context fields are:
6264 * The **process ID**, **thread ID**, **process name**, and
6265 **process priority** of the thread in which the event occurs.
6266 * The **hostname** of the system on which the event occurs.
6267 * The current values of many possible **performance counters** using perf, for example:
6269 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6271 ** Branch instructions, misses, and loads.
6273 * Any context defined at the application level (supported for the
6274 JUL and log4j <<domain,tracing domains>>).
6276 To get the full list of available context fields, see
6277 `lttng add-context --list`. Some context fields are reserved for a
6278 specific <<domain,tracing domain>> (Linux kernel or user space).
6280 You add context fields to <<channel,channels>>. All the events
6281 that a channel with added context fields records contain those fields.
6283 To add context fields to one or all the channels of a given tracing session:
6286 * Use the man:lttng-add-context(1) command.
6288 .Add context fields to all the channels of the current tracing session.
6290 The following command line adds the virtual process identifier and
6291 the per-thread CPU cycles count fields to all the user space channels
6292 of the current tracing session.
6296 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6300 .Add performance counter context fields by raw ID
6302 See man:lttng-add-context(1) for the exact format of the context field
6303 type, which is partly compatible with the format used in man:perf-record(1).
6308 $ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6309 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6313 .Add a context field to a specific channel.
6315 The following command line adds the thread identifier context field
6316 to the Linux kernel channel named `my-channel` in the current tracing session:
6321 $ lttng add-context --kernel --channel=my-channel --type=tid
6325 .Add an application-specific context field to a specific channel.
6327 The following command line adds the `cur_msg_id` context field of the
6328 `retriever` context retriever for all the instrumented
6329 <<java-application,Java applications>> recording <<event,event records>>
6330 in the channel named `my-channel`:
6334 $ lttng add-context --jul --channel=my-channel \
6335 --type='$app:retriever:cur_msg_id'
6338 IMPORTANT: Make sure to always quote the `$` character when you
6339 use man:lttng-add-context(1) from a shell.
6342 NOTE: You cannot remove context fields from a channel once you add them.
6347 === Track process IDs
6349 It's often useful to allow only specific process IDs (PIDs) to emit
6350 events. For example, you may wish to record all the system calls made by
6351 a given process (à la http://linux.die.net/man/1/strace[strace]).
6353 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6354 purpose. Both commands operate on a whitelist of process IDs. You _add_
6355 entries to this whitelist with the man:lttng-track(1) command and remove
6356 entries with the man:lttng-untrack(1) command. Any process which has one
6357 of the PIDs in the whitelist is allowed to emit LTTng events which pass
6358 an enabled <<event,event rule>>.
6360 NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6361 process with a given tracked ID exit and another process be given this
6362 ID, then the latter would also be allowed to emit events.
6364 .Track and untrack process IDs.
6366 For the sake of the following example, assume the target system has 16 possible PIDs. When you
6370 <<creating-destroying-tracing-sessions,create a tracing session>>,
6371 the whitelist contains all the possible PIDs:
6374 .All PIDs are tracked.
6375 image::track-all.png[]
6377 When the whitelist is full and you use the man:lttng-track(1) command to
6378 specify some PIDs to track, LTTng first clears the whitelist, then it
6379 tracks the specific PIDs. After:
6383 $ lttng track --pid=3,4,7,10,13
6389 .PIDs 3, 4, 7, 10, and 13 are tracked.
6390 image::track-3-4-7-10-13.png[]
6392 You can add more PIDs to the whitelist afterwards:
6396 $ lttng track --pid=1,15,16
6402 .PIDs 1, 15, and 16 are added to the whitelist.
6403 image::track-1-3-4-7-10-13-15-16.png[]
6405 The man:lttng-untrack(1) command removes entries from the PID tracker's
6406 whitelist. Given the previous example, the following command:
6410 $ lttng untrack --pid=3,7,10,13
6413 leads to this whitelist:
6416 .PIDs 3, 7, 10, and 13 are removed from the whitelist.
6417 image::track-1-4-15-16.png[]
6419 LTTng can track all possible PIDs again using the
6420 opt:lttng-track(1):--all option:
6424 $ lttng track --pid --all
6427 The result is, again:
6430 .All PIDs are tracked.
6431 image::track-all.png[]
6434 .Track only specific PIDs
6436 A very typical use case with PID tracking is to start with an empty
6437 whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6438 then add PIDs manually while tracers are active. You can accomplish this
6439 by using the opt:lttng-untrack(1):--all option of the
6440 man:lttng-untrack(1) command to clear the whitelist after you
6441 <<creating-destroying-tracing-sessions,create a tracing session>>:
6445 $ lttng untrack --pid --all
6451 .No PIDs are tracked.
6452 image::untrack-all.png[]
6454 If you trace with this whitelist configuration, the tracer records no
6455 events for this <<domain,tracing domain>> because no processes are
6456 tracked. You can use the man:lttng-track(1) command as usual to track
6457 specific PIDs, for example:
6461 $ lttng track --pid=6,11
6467 .PIDs 6 and 11 are tracked.
6468 image::track-6-11.png[]
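The whitelist behavior described above can be summarized by a toy shell model (this is an illustration, not LTTng code): tracking specific PIDs while the whitelist is full first clears it, and subsequent track/untrack calls add and remove individual entries.

```shell
# Toy model of the PID tracker whitelist semantics.
whitelist="ALL"
track() {
  # Tracking specific PIDs on a full whitelist first clears it.
  [ "$whitelist" = "ALL" ] && whitelist=""
  whitelist="$whitelist $*"
}
untrack() {
  for pid in "$@"; do
    whitelist=$(printf '%s\n' $whitelist | grep -v -x "$pid" | tr '\n' ' ')
  done
}
track 3 4 7 10 13    # whitelist cleared, then these PIDs added
track 1 15 16        # more PIDs added to the existing whitelist
untrack 3 7 10 13    # these PIDs removed
echo $whitelist      # remaining tracked PIDs: 4 1 15 16
```

This mirrors the sequence of figures above: the final whitelist contains PIDs 4, 1, 15, and 16.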
6473 [[saving-loading-tracing-session]]
6474 === Save and load tracing session configurations
6476 Configuring a <<tracing-session,tracing session>> can be long. Some of
6477 the tasks involved are:
6479 * <<enabling-disabling-channels,Create channels>> with
6480 specific attributes.
6481 * <<adding-context,Add context fields>> to specific channels.
6482 * <<enabling-disabling-events,Create event rules>> with specific log
6483 level and filter conditions.
6485 If you use LTTng to solve real-world problems, chances are you have to
6486 record events using the same tracing session setup over and over,
6487 modifying a few variables each time in your instrumented program
6488 or environment. To avoid constant tracing session reconfiguration,
6489 the man:lttng(1) command-line tool can save and load tracing session
6490 configurations to/from XML files.
6492 To save a given tracing session configuration:
6494 * Use the man:lttng-save(1) command:
6499 $ lttng save my-session
6503 Replace `my-session` with the name of the tracing session to save.
6505 LTTng saves tracing session configurations to
6506 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6507 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6508 the opt:lttng-save(1):--output-path option to change this destination directory.
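The default destination derives from env:LTTNG_HOME with a fallback to `$HOME`. This shell sketch shows the resolution rule only (the paths are hypothetical):

```shell
# How the default session configuration directory resolves:
# LTTNG_HOME if set, otherwise HOME.
unset LTTNG_HOME
HOME=/home/user
echo "${LTTNG_HOME:-$HOME}/.lttng/sessions"    # uses HOME
LTTNG_HOME=/var/lttng
echo "${LTTNG_HOME:-$HOME}/.lttng/sessions"    # uses LTTNG_HOME
```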
6511 LTTng saves all configuration parameters, for example:
6513 * The tracing session name.
6514 * The trace data output path.
6515 * The channels with their state and all their attributes.
6516 * The context fields you added to channels.
6517 * The event rules with their state, log level and filter conditions.
6519 To load a tracing session:
6521 * Use the man:lttng-load(1) command:
6526 $ lttng load my-session
6530 Replace `my-session` with the name of the tracing session to load.
6532 When LTTng loads a configuration, it restores your saved tracing session
6533 as if you just configured it manually.
6535 See man:lttng(1) for the complete list of command-line options. You
6536 can also save and load more than one session at a time, and decide in which
6537 directory to output the XML files.
6540 [[sending-trace-data-over-the-network]]
6541 === Send trace data over the network
6543 LTTng can send the recorded trace data to a remote system over the
6544 network instead of writing it to the local file system.
6546 To send the trace data over the network:
6548 . On the _remote_ system (which can also be the target system),
6549 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6558 . On the _target_ system, create a tracing session configured to
6559 send trace data over the network:
6564 $ lttng create my-session --set-url=net://remote-system
6568 Replace `remote-system` by the host name or IP address of the
6569 remote system. See man:lttng-create(1) for the exact URL format.
6571 . On the target system, use the man:lttng(1) command-line tool as usual.
6572 When tracing is active, the target's consumer daemon sends sub-buffers
6573 to the relay daemon running on the remote system instead of flushing
6574 them to the local file system. The relay daemon writes the received
6575 packets to the local file system.
6577 The relay daemon writes trace files to
6578 +$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6579 +__hostname__+ is the host name of the target system and +__session__+
6580 is the tracing session name. Note that the env:LTTNG_HOME environment
6581 variable defaults to `$HOME` if not set. Use the
6582 opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6583 trace files to another base directory.
6588 === View events as LTTng emits them (noch:{LTTng} live)
6590 LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6591 daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6592 display events as LTTng emits them on the target system while tracing is active.
6595 The relay daemon creates a _tee_: it forwards the trace data to both
6596 the local file system and to connected live viewers:
6599 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6604 . On the _target system_, create a <<tracing-session,tracing session>> in _live mode_:
6610 $ lttng create my-session --live
6614 This spawns a local relay daemon.
6616 . Start the live viewer and configure it to connect to the relay
6617 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6622 $ babeltrace --input-format=lttng-live \
6623 net://localhost/host/hostname/my-session
Replace:
6630 * `hostname` with the host name of the target system.
6631 * `my-session` with the name of the tracing session to view.
6634 . Configure the tracing session as usual with the man:lttng(1)
6635 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6637 You can list the available live tracing sessions with Babeltrace:
6641 $ babeltrace --input-format=lttng-live net://localhost
6644 You can start the relay daemon on another system. In this case, you need
6645 to specify the relay daemon's URL when you create the tracing session
6646 with the opt:lttng-create(1):--set-url option. You also need to replace
6647 `localhost` in the procedure above with the host name of the system on
6648 which the relay daemon is running.
6650 See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6651 command-line options.
6655 [[taking-a-snapshot]]
6656 === Take a snapshot of the current sub-buffers of a tracing session
6658 The normal behavior of LTTng is to append full sub-buffers to growing
6659 trace data files. This is ideal to keep a full history of the events
6660 that occurred on the target system, but it can
6661 represent too much data in some situations. For example, you may wish
6662 to trace your application continuously until some critical situation
6663 happens, in which case you only need the latest few recorded
6664 events to perform the desired analysis, not multi-gigabyte trace files.
6666 With the man:lttng-snapshot(1) command, you can take a snapshot of the
6667 current sub-buffers of a given <<tracing-session,tracing session>>.
6668 LTTng can write the snapshot to the local file system or send it over
6673 . Create a tracing session in _snapshot mode_:
6678 $ lttng create my-session --snapshot
6682 The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6683 <<channel,channels>> created in this mode is automatically set to
6684 _overwrite_ (flight recorder mode).
6686 . Configure the tracing session as usual with the man:lttng(1)
6687 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6689 . **Optional**: When you need to take a snapshot,
6690 <<basic-tracing-session-control,stop tracing>>.
6692 You can take a snapshot when the tracers are active, but if you stop
6693 them first, you are sure that the data in the sub-buffers does not
6694 change before you actually take the snapshot.
6701 $ lttng snapshot record --name=my-first-snapshot
6705 LTTng writes the current sub-buffers of all the current tracing
6706 session's channels to trace files on the local file system. Those trace
6707 files have `my-first-snapshot` in their name.
6709 There is no difference between the format of a normal trace file and the
6710 format of a snapshot: viewers of LTTng traces also support LTTng snapshots.
6713 By default, LTTng writes snapshot files to the path shown by
6714 `lttng snapshot list-output`. You can change this path or decide to send
6715 snapshots over the network using either:
6717 . An output path or URL that you specify when you create the
6719 . A snapshot output path or URL that you add using
6720 `lttng snapshot add-output`
6721 . An output path or URL that you provide directly to the
6722 `lttng snapshot record` command.
6724 Method 3 overrides method 2, which overrides method 1. When you
6725 specify a URL, a relay daemon must listen on a remote system (see
6726 <<sending-trace-data-over-the-network,Send trace data over the network>>).
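The precedence between those three methods can be sketched as a first-match lookup. The following is an illustration of the rule only, with hypothetical paths and URLs:

```shell
# Toy resolver for the snapshot output precedence: the record-time output
# (method 3) wins over an added output (method 2), which wins over the
# create-time output (method 1).
resolve_snapshot_output() {
  record_time=$1; added=$2; create_time=$3
  for candidate in "$record_time" "$added" "$create_time"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return
    fi
  done
}
resolve_snapshot_output "" "net://collector" "/tmp/snapshots"   # added output wins
resolve_snapshot_output "/tmp/now" "net://collector" "/tmp/snapshots"   # record-time wins
```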
6731 === Use the machine interface
6733 With any command of the man:lttng(1) command-line tool, you can set the
6734 opt:lttng(1):--mi option to `xml` (before the command name) to get an
6735 XML machine interface output, for example:
6739 $ lttng --mi=xml enable-event --kernel --syscall open
6742 A schema definition (XSD) is
6743 https://github.com/lttng/lttng-tools/blob/stable-2.9/src/common/mi-lttng-3.0.xsd[available]
6744 to ease the integration with external tools as much as possible.
6748 [[metadata-regenerate]]
6749 === Regenerate the metadata of an LTTng trace
6751 An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6752 data stream files and a metadata file. This metadata file contains,
6753 amongst other things, information about the offset of the clock sources
6754 used to timestamp <<event,event records>> when tracing.
6756 If, once a <<tracing-session,tracing session>> is
6757 <<basic-tracing-session-control,started>>, a major
6758 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6759 happens, the trace's clock offset also needs to be updated. You
6760 can use the `metadata` item of the man:lttng-regenerate(1) command to do so.
6763 The main use case of this command is to allow a system to boot with
6764 an incorrect wall time and trace it with LTTng before its wall time
6765 is corrected. Once the system is known to be in a state where its
6766 wall time is correct, it can run `lttng regenerate metadata`.
6768 To regenerate the metadata of an LTTng trace:
6770 * Use the `metadata` item of the man:lttng-regenerate(1) command:
6775 $ lttng regenerate metadata
6781 `lttng regenerate metadata` has the following limitations:
6783 * Tracing session <<creating-destroying-tracing-sessions,created>> in non-live mode.
6785 * User space <<channel,channels>>, if any, are using
6786 <<channel-buffering-schemes,per-user buffering>>.
6791 [[regenerate-statedump]]
6792 === Regenerate the state dump of a tracing session
6794 The LTTng kernel and user space tracers generate state dump
6795 <<event,event records>> when the application starts or when you
6796 <<basic-tracing-session-control,start a tracing session>>. An analysis
6797 can use the state dump event records to set an initial state before it
6798 builds the rest of the state from the following event records.
6799 http://tracecompass.org/[Trace Compass] is a notable example of an
6800 application which uses the state dump of an LTTng trace.
6802 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6803 state dump event records are not included in the snapshot because they
6804 were recorded to a sub-buffer that has been consumed or overwritten already.
6807 You can use the `lttng regenerate statedump` command to emit the state
6808 dump event records again.
6810 To regenerate the state dump of the current tracing session, provided
6811 you created it in snapshot mode, before you take a snapshot:
6813 . Use the `statedump` item of the man:lttng-regenerate(1) command:
6818 $ lttng regenerate statedump
6822 . <<basic-tracing-session-control,Stop the tracing session>>:
6831 . <<taking-a-snapshot,Take a snapshot>>:
6836 $ lttng snapshot record --name=my-snapshot
6840 Depending on the event throughput, you should run steps 1 and 2 as close
6841 together as possible.
6843 NOTE: To record the state dump events, you need to
6844 <<enabling-disabling-events,create event rules>> which enable them.
6845 LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6846 LTTng-modules state dump tracepoints start with `lttng_statedump_`.
6850 [[persistent-memory-file-systems]]
6851 === Record trace data on persistent memory file systems
6853 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6854 (NVRAM) is random-access memory that retains its information when power
6855 is turned off (non-volatile). Systems with such memory can store data
6856 structures in RAM and retrieve them after a reboot, without flushing
6857 to typical _storage_.
6859 Linux supports NVRAM file systems thanks to either
6860 http://pramfs.sourceforge.net/[PRAMFS] or
6861 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6862 (requires Linux 4.1+).
6864 This section does not describe how to operate such file systems;
6865 we assume that you have a working persistent memory file system.
6867 When you create a <<tracing-session,tracing session>>, you can specify
6868 the path of the shared memory holding the sub-buffers. If you specify a
6869 location on an NVRAM file system, then you can retrieve the latest
6870 recorded trace data when the system reboots after a crash.
6872 To record trace data on a persistent memory file system and retrieve the
6873 trace data after a system crash:
6875 . Create a tracing session with a sub-buffer shared memory path located
6876 on an NVRAM file system:
6881 $ lttng create my-session --shm-path=/path/to/shm
6885 . Configure the tracing session as usual with the man:lttng(1)
6886 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6888 . After a system crash, use the man:lttng-crash(1) command-line tool to
6889 view the trace data recorded on the NVRAM file system:
6894 $ lttng-crash /path/to/shm
6898 The binary layout of the ring buffer files is not exactly the same as
6899 the trace files layout. This is why you need to use man:lttng-crash(1)
6900 instead of your preferred trace viewer directly.
6902 To convert the ring buffer files to LTTng trace files:
6904 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6909 $ lttng-crash --extract=/path/to/trace /path/to/shm
6917 [[lttng-modules-ref]]
6918 === noch:{LTTng-modules}
6922 [[lttng-tracepoint-enum]]
6923 ==== `LTTNG_TRACEPOINT_ENUM()` usage
6925 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6929 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
Replace:
6934 * `name` with the name of the enumeration (C identifier, unique
6935 amongst all the defined enumerations).
6936 * `entries` with a list of enumeration entries.
6938 The available enumeration entry macros are:
6940 +ctf_enum_value(__name__, __value__)+::
6941 Entry named +__name__+ mapped to the integral value +__value__+.
6943 +ctf_enum_range(__name__, __begin__, __end__)+::
6944 Entry named +__name__+ mapped to the range of integral values between
6945 +__begin__+ (included) and +__end__+ (included).
6947 +ctf_enum_auto(__name__)+::
6948 Entry named +__name__+ mapped to the integral value following the
6949 last mapping's value.
6951 The last value of a `ctf_enum_value()` entry is its +__value__+ parameter.
6954 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
6956 If `ctf_enum_auto()` is the first entry in the list, its integral value is 0.
6959 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
6960 to use a defined enumeration as a tracepoint field.
6962 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
6966 LTTNG_TRACEPOINT_ENUM(
6969 ctf_enum_auto("AUTO: EXPECT 0")
6970 ctf_enum_value("VALUE: 23", 23)
6971 ctf_enum_value("VALUE: 27", 27)
6972 ctf_enum_auto("AUTO: EXPECT 28")
6973 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
6974 ctf_enum_auto("AUTO: EXPECT 304")
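The `EXPECT` comments in the example follow from the auto-value rule stated above. A small shell sketch of that rule (an illustration, not LTTng code) reproduces the assigned values:

```shell
# Reproduce the automatic value assignment of the enumeration example:
# ctf_enum_auto() takes the last mapping's last value + 1 (0 when first).
last=-1
for entry in auto value=23 value=27 auto range=101-303 auto; do
  case $entry in
    auto)    last=$((last + 1)); echo "auto  -> $last" ;;
    value=*) last=${entry#value=}; echo "value -> $last" ;;
    range=*) r=${entry#range=}; last=${r#*-}; echo "range -> $r" ;;
  esac
done
```

This prints the values 0, 23, 27, 28, 101-303, and 304, matching the entry names in the example.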
6982 [[lttng-modules-tp-fields]]
6983 ==== Tracepoint fields macros (for `TP_FIELDS()`)
6985 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
6986 tracepoint fields, which must be listed within `TP_FIELDS()` in
6987 `LTTNG_TRACEPOINT_EVENT()`, are:
6989 [role="func-desc growable",cols="asciidoc,asciidoc"]
6990 .Available macros to define LTTng-modules tracepoint fields
6992 |Macro |Description and parameters
6995 +ctf_integer(__t__, __n__, __e__)+
6997 +ctf_integer_nowrite(__t__, __n__, __e__)+
6999 +ctf_user_integer(__t__, __n__, __e__)+
7001 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
7003 Standard integer, displayed in base 10.
7006 Integer C type (`int`, `long`, `size_t`, ...).
7012 Argument expression.
7015 +ctf_integer_hex(__t__, __n__, __e__)+
7017 +ctf_user_integer_hex(__t__, __n__, __e__)+
7019 Standard integer, displayed in base 16.
7028 Argument expression.
7030 |+ctf_integer_oct(__t__, __n__, __e__)+
7032 Standard integer, displayed in base 8.
7041 Argument expression.
7044 +ctf_integer_network(__t__, __n__, __e__)+
7046 +ctf_user_integer_network(__t__, __n__, __e__)+
7048 Integer in network byte order (big-endian), displayed in base 10.
7057 Argument expression.
7060 +ctf_integer_network_hex(__t__, __n__, __e__)+
7062 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
7064 Integer in network byte order, displayed in base 16.
7073 Argument expression.
7076 +ctf_enum(__N__, __t__, __n__, __e__)+
7078 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7080 +ctf_user_enum(__N__, __t__, __n__, __e__)+
7082 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7087 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7090 Integer C type (`int`, `long`, `size_t`, ...).
7096 Argument expression.
7099 +ctf_string(__n__, __e__)+
7101 +ctf_string_nowrite(__n__, __e__)+
7103 +ctf_user_string(__n__, __e__)+
7105 +ctf_user_string_nowrite(__n__, __e__)+
7107 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7113 Argument expression.
7116 +ctf_array(__t__, __n__, __e__, __s__)+
7118 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7120 +ctf_user_array(__t__, __n__, __e__, __s__)+
7122 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7124 Statically-sized array of integers.
7127 Array element C type.
7133 Argument expression.
7139 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7141 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7143 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7145 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7147 Statically-sized array of bits.
7149 The type of +__e__+ must be an integer type. +__s__+ is the number
7150 of elements of such type in +__e__+, not the number of bits.
7153 Array element C type.
7159 Argument expression.
7165 +ctf_array_text(__t__, __n__, __e__, __s__)+
7167 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7169 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
7171 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7173 Statically-sized array, printed as text.
7175 The string does not need to be null-terminated.
7178 Array element C type (always `char`).
7184 Argument expression.
7190 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7192 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7194 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7196 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7198 Dynamically-sized array of integers.
7200 The type of +__E__+ must be unsigned.
7203 Array element C type.
7209 Argument expression.
7212 Length expression C type.
7218 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7220 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7222 Dynamically-sized array of integers, displayed in base 16.
7224 The type of +__E__+ must be unsigned.
7227 Array element C type.
7233 Argument expression.
7236 Length expression C type.
7241 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7243 Dynamically-sized array of integers in network byte order (big-endian),
7244 displayed in base 10.
7246 The type of +__E__+ must be unsigned.
7249 Array element C type.
7255 Argument expression.
7258 Length expression C type.
7264 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7266 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7268 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7270 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7272 Dynamically-sized array of bits.
7274 The type of +__e__+ must be an integer type. +__E__+ is the number
7275 of elements of such type in +__e__+, not the number of bits.
7277 The type of +__E__+ must be unsigned.
7280 Array element C type.
7286 Argument expression.
7289 Length expression C type.
7295 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7297 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7299 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7301 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7303 Dynamically-sized array, displayed as text.
7305 The string does not need to be null-terminated.
7307 The type of +__E__+ must be unsigned.
7309 The behaviour is undefined if +__e__+ is `NULL`.
7312 Sequence element C type (always `char`).
7318 Argument expression.
7321 Length expression C type.
7327 Use the `_user` versions when the argument expression, `e`, is
7328 a user space address. In the cases of `ctf_user_integer*()` and
7329 `ctf_user_float*()`, `&e` must be a user space address, thus `e` must be addressable.
7332 The `_nowrite` versions omit themselves from the session trace, but are
7333 otherwise identical. This means the `_nowrite` fields won't be written
7334 in the recorded trace. Their primary purpose is to make some
7335 of the event context available to the
7336 <<enabling-disabling-events,event filters>> without having to
7337 commit the data to sub-buffers.
7343 Terms related to LTTng and to tracing in general:
7346 The http://diamon.org/babeltrace[Babeltrace] project, which includes
7347 the cmd:babeltrace command, some libraries, and Python bindings.
7349 <<channel-buffering-schemes,buffering scheme>>::
7350 A layout of sub-buffers applied to a given channel.
7352 <<channel,channel>>::
7353 An entity which is responsible for a set of ring buffers.
7355 <<event,Event rules>> are always attached to a specific channel.
7358 A reference of time for a tracer.
7360 <<lttng-consumerd,consumer daemon>>::
7361 A process which is responsible for consuming the full sub-buffers
6362 and writing them to a file system or sending them over the network.
7364 <<channel-overwrite-mode-vs-discard-mode,discard mode>>:: The event loss
7365 mode in which the tracer _discards_ new event records when there's no
7366 sub-buffer space left to store them.
7369 The consequence of the execution of an instrumentation
7370 point, like a tracepoint that you manually place in some source code,
7371 or a Linux kernel KProbe.
7373 An event is said to _occur_ at a specific time. Different actions can
7374 be taken upon the occurrence of an event, like recording the event's payload in a sub-buffer.
7377 <<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
7378 The mechanism by which event records of a given channel are lost
7379 (not recorded) when there is no sub-buffer space left to store them.
7381 [[def-event-name]]event name::
7382 The name of an event, which is also the name of the event record.
7383 This is also called the _instrumentation point name_.
7386 A record, in a trace, of the payload of an event which occurred.
7388 <<event,event rule>>::
7389 Set of conditions which must be satisfied for one or more occurring
7390 events to be recorded.
7392 `java.util.logging`::
7394 Java's https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].
7396 <<instrumenting,instrumentation>>::
7397 The use of LTTng probes to make a piece of software traceable.
7399 instrumentation point::
7400 A point in the execution path of a piece of software that, when
7401 reached by this execution, can emit an event.
7403 instrumentation point name::
7404 See _<<def-event-name,event name>>_.
7407 A http://logging.apache.org/log4j/1.2/[logging library] for Java
7408 developed by the Apache Software Foundation.
7411 Level of severity of a log statement or user space
7412 instrumentation point.
7415 The _Linux Trace Toolkit: next generation_ project.
7417 <<lttng-cli,cmd:lttng>>::
7418 A command-line tool provided by the LTTng-tools project which you
7419 can use to send and receive control messages to and from a session daemon.
7423 The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
7424 which is a set of analyzing programs that are used to obtain a
7425 higher level view of an LTTng trace.
7427 cmd:lttng-consumerd::
7428 The name of the consumer daemon program.
cmd:lttng-crash::
7431 A utility provided by the LTTng-tools project which can convert
7432 ring buffer files (usually
7433 <<persistent-memory-file-systems,saved on a persistent memory file system>>) to trace files.
7436 LTTng Documentation::
This document.
7439 <<lttng-live,LTTng live>>::
7440 A communication protocol between the relay daemon and live viewers
7441 which makes it possible to see events "live", as they are received by the relay daemon.
7444 <<lttng-modules,LTTng-modules>>::
7445 The https://github.com/lttng/lttng-modules[LTTng-modules] project,
7446 which contains the Linux kernel modules to make the Linux kernel
7447 instrumentation points available for LTTng tracing.
cmd:lttng-relayd::
The name of the relay daemon program.

cmd:lttng-sessiond::
The name of the session daemon program.

<<lttng-tools,LTTng-tools>>::
The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
contains the various programs and libraries used to
<<controlling-tracing,control tracing>>.

<<lttng-ust,LTTng-UST>>::
The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
contains libraries to instrument user applications.
<<lttng-ust-agents,LTTng-UST Java agent>>::
A Java package provided by the LTTng-UST project to allow the
LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
logging statements.

<<lttng-ust-agents,LTTng-UST Python agent>>::
A Python package provided by the LTTng-UST project to allow the
LTTng instrumentation of Python logging statements.
<<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
The event loss mode in which new event records overwrite older
event records when there's no sub-buffer space left to store them.

<<channel-buffering-schemes,per-process buffering>>::
A buffering scheme in which each instrumented process has its own
sub-buffers for a given user space channel.

<<channel-buffering-schemes,per-user buffering>>::
A buffering scheme in which all the processes of a Unix user share the
same sub-buffer for a given user space channel.
<<lttng-relayd,relay daemon>>::
A process which is responsible for receiving the trace data sent by
a distant consumer daemon.

ring buffer::
A set of sub-buffers.

<<lttng-sessiond,session daemon>>::
A process which receives control commands from you and orchestrates
the tracers and various LTTng daemons.

<<taking-a-snapshot,snapshot>>::
A copy of the current data of all the sub-buffers of a given tracing
session, saved as trace files.
sub-buffer::
One part of an LTTng ring buffer which contains event records.

timestamp::
The time information attached to an event when it is emitted.

trace (_noun_)::
A set of files which are the concatenations of one or more
flushed sub-buffers.

trace (_verb_)::
The action of recording the events emitted by an application
or by a system, or to initiate such recording by controlling
a tracer.

Trace Compass::
The http://tracecompass.org[Trace Compass] project and application.

tracepoint::
An instrumentation point using the tracepoint mechanism of the Linux
kernel or of LTTng-UST.
tracepoint definition::
The definition of a single tracepoint.

tracepoint name::
The name of a tracepoint.

tracepoint provider::
A set of functions providing tracepoints to an instrumented user
application.
+
Not to be confused with a _tracepoint provider package_: many tracepoint
providers can exist within a tracepoint provider package.

tracepoint provider package::
One or more tracepoint providers compiled as an object file or as
a shared library.
tracer::
Software which records emitted events.

<<domain,tracing domain>>::
A namespace for event sources.

<<tracing-group,tracing group>>::
The Unix group of which a Unix user must be a member to be allowed
to trace the Linux kernel.

<<tracing-session,tracing session>>::
A stateful dialogue between you and a <<lttng-sessiond,session
daemon>>.

user application::
An application running in user space, as opposed to a Linux kernel
module, for example.