1 The LTTng Documentation
2 =======================
3 Philippe Proulx <pproulx@efficios.com>
7 include::../common/copyright.txt[]
10 include::../common/welcome.txt[]
13 include::../common/audience.txt[]
17 === What's in this documentation?
19 The LTTng Documentation is divided into the following sections:
21 * **<<nuts-and-bolts,Nuts and bolts>>** explains the
22 rudiments of software tracing and the rationale behind the
25 You can skip this section if you’re familiar with software tracing and
26 with the LTTng project.
28 * **<<installing-lttng,Installation>>** describes the steps to
29 install the LTTng packages on common Linux distributions and from
32 You can skip this section if you already properly installed LTTng on
35 * **<<getting-started,Quick start>>** is a concise guide to
36 getting started quickly with LTTng kernel and user space tracing.
38 We recommend this section if you're new to LTTng or to software tracing
41 You can skip this section if you're not new to LTTng.
43 * **<<core-concepts,Core concepts>>** explains the concepts at
46 It's a good idea to become familiar with the core concepts
47 before attempting to use the toolkit.
49 * **<<plumbing,Components of LTTng>>** describes the various components
50 of the LTTng machinery, like the daemons, the libraries, and the
51 command-line interface.
52 * **<<instrumenting,Instrumentation>>** shows different ways to
53 instrument user applications and the Linux kernel.
55 Instrumenting source code is essential to provide a meaningful
58 You can skip this section if you do not have a programming background.
60 * **<<controlling-tracing,Tracing control>>** is divided into topics
61 which demonstrate how to use the vast array of features that
62 LTTng{nbsp}{revision} offers.
63 * **<<reference,Reference>>** contains reference tables.
64 * **<<glossary,Glossary>>** is a specialized dictionary of terms related
65 to LTTng or to the field of software tracing.
68 include::../common/convention.txt[]
71 include::../common/acknowledgements.txt[]
75 == What's new in LTTng {revision}?
77 LTTng{nbsp}{revision} bears the name _Joannès_. A Berliner Weisse style
78 beer from the http://letreflenoir.com/[Trèfle Noir] microbrewery in
79 https://en.wikipedia.org/wiki/Rouyn-Noranda[Rouyn-Noranda], the
80 https://www.beeradvocate.com/beer/profile/20537/238967/[_**Joannès**_]
81 is a tangy beer with a distinct pink dress and intense fruit flavor,
82 thanks to the presence of fresh blackcurrant grown in Témiscamingue.
84 New features and changes in LTTng{nbsp}{revision}:
86 * **Tracing control**:
87 ** You can override the name or the URL of a tracing session
88 configuration when you use man:lttng-load(1) thanks to the new
89 opt:lttng-load(1):--override-name and
90 opt:lttng-load(1):--override-url options.
91 ** The new `lttng regenerate` command replaces the now deprecated
92 `lttng metadata` command of LTTng 2.8. man:lttng-regenerate(1) can
93 also <<regenerate-statedump,generate the state dump event records>>
94 of a given tracing session on demand, a handy feature when
95 <<taking-a-snapshot,taking a snapshot>>.
96 ** You can add PMU counters by raw ID with man:lttng-add-context(1):
101 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
105 The format of the raw ID is the same as used with man:perf-record(1).
106 See <<adding-context,Add context fields to a channel>> for more
109 ** The LTTng <<lttng-relayd,relay daemon>> is now supported on
110 OS{nbsp}X and macOS for a smoother integration within a trace
111 analysis workflow, regardless of the platform used.
113 * **User space tracing**:
114 ** Improved performance (tested on x86-64 and ARMv7-A
115 (https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
117 ** New helper library (`liblttng-ust-fd`) to help with
118 <<liblttng-ust-fd,applications which close file descriptors that
119 don't belong to them>>, for example, in a loop which closes file
120 descriptors after man:fork(2), or BSD's `closeall()`.
121 ** More accurate <<liblttng-ust-dl,dynamic linker instrumentation>> and
122 state dump event records, especially when a dynamically loaded
123 library manually loads its own dependencies.
124 ** New `ctf_*()` field definition macros (see man:lttng-ust(3)):
125 *** `ctf_array_hex()`
126 *** `ctf_array_network()`
127 *** `ctf_array_network_hex()`
128 *** `ctf_sequence_hex()`
129 *** `ctf_sequence_network()`
130 *** `ctf_sequence_network_hex()`
131 ** New `lttng_ust_loaded` weak symbol defined by `liblttng-ust` for
132 an application to know if the LTTng-UST shared library is loaded
140 int lttng_ust_loaded __attribute__((weak));
144 if (lttng_ust_loaded) {
145 puts("LTTng-UST is loaded!");
147 puts("LTTng-UST is not loaded!");
155 ** LTTng-UST thread names have the `-ust` suffix.
157 * **Linux kernel tracing**:
158 ** Improved performance (tested on x86-64 and ARMv7-A
159 (https://en.wikipedia.org/wiki/Cubieboard[Cubieboard])
161 ** New enumeration <<lttng-modules-tp-fields,field definition macros>>:
162 `ctf_enum()` and `ctf_user_enum()`.
163 ** IPv4, IPv6, and TCP header data is recorded in the event records
164 produced by tracepoints starting with `net_`.
165 ** Detailed system call event records: `select`, `pselect6`, `poll`,
166 `ppoll`, `epoll_wait`, `epoll_pwait`, and `epoll_ctl` on all
167 architectures supported by LTTng-modules, and `accept4` on x86-64.
168 ** New I²C instrumentation: the `extract_sensitive_payload` parameter
169 of the new `lttng-probe-i2c` LTTng module controls whether or not
170 the payloads of I²C messages are recorded in I²C event records, since
171 they may contain sensitive data (for example, keystrokes).
172 ** When the LTTng kernel modules are built into the Linux kernel image,
173 the `CONFIG_TRACEPOINTS` configuration option is automatically
180 What is LTTng? As its name suggests, the _Linux Trace Toolkit: next
181 generation_ is a modern toolkit for tracing Linux systems and
182 applications. So your first question might be:
189 As the history of software engineering progressed and led to what
190 we now take for granted--complex, numerous and
191 interdependent software applications running in parallel on
192 sophisticated operating systems like Linux--the authors of such
193 components, software developers, began feeling a natural
194 urge to have tools that would ensure the robustness and good performance
195 of their masterpieces.
197 One major achievement in this field is, inarguably, the
198 https://www.gnu.org/software/gdb/[GNU debugger (GDB)],
199 an essential tool for developers to find and fix bugs. But even the best
200 debugger won't help make your software run faster, and nowadays, faster
201 software means either more work done by the same hardware, or cheaper
202 hardware for the same work.
204 A _profiler_ is often the tool of choice to identify performance
205 bottlenecks. Profiling is suitable to identify _where_ performance is
206 lost in a given software. The profiler outputs a profile, a statistical
207 summary of observed events, which you may use to discover which
208 functions took the most time to execute. However, a profiler won't
209 report _why_ some identified functions are the bottleneck. Bottlenecks
210 might only occur when specific conditions are met, conditions that are
211 sometimes impossible to capture by a statistical profiler, or impossible
212 to reproduce with an application altered by the overhead of an
213 event-based profiler. For a thorough investigation of software
214 performance issues, a history of execution is essential, with the
215 recorded values of variables and context fields you choose, and
216 with as little influence as possible on the instrumented software. This
217 is where tracing comes in handy.
219 _Tracing_ is a technique used to understand what goes on in a running
220 software system. The software used for tracing is called a _tracer_,
221 which is conceptually similar to a tape recorder. When recording,
222 specific instrumentation points placed in the software source code
223 generate events that are saved on a giant tape: a _trace_ file. You
224 can trace user applications and the operating system at the same time,
225 opening the possibility of resolving a wide range of problems that would
226 otherwise be extremely challenging.
228 Tracing is often compared to _logging_. However, tracers and loggers are
229 two different tools, serving two different purposes. Tracers are
230 designed to record much lower-level events that occur much more
231 frequently than log messages, often in the range of thousands per
232 second, with very little execution overhead. Logging is more appropriate
233 for a very high-level analysis of less frequent events: user accesses,
234 exceptional conditions (errors and warnings, for example), database
235 transactions, instant messaging communications, and such. Simply put,
236 logging is one of the many use cases that can be satisfied with tracing.
238 The list of recorded events inside a trace file can be read manually
239 like a log file for the maximum level of detail, but it is generally
240 much more interesting to perform application-specific analyses to
241 produce reduced statistics and graphs that are useful to resolve a
242 given problem. Trace viewers and analyzers are specialized tools
245 In the end, this is what LTTng is: a powerful, open source set of
246 tools to trace the Linux kernel and user applications at the same time.
247 LTTng is composed of several components actively maintained and
248 developed by its link:/community/#where[community].
251 [[lttng-alternatives]]
252 === Alternatives to noch:{LTTng}
254 Excluding proprietary solutions, a few competing software tracers
257 * https://github.com/dtrace4linux/linux[dtrace4linux] is a port of
258 Sun Microsystems's DTrace to Linux. The cmd:dtrace tool interprets
259 user scripts and is responsible for loading code into the
260 Linux kernel for further execution and collecting the outputted data.
261 * https://en.wikipedia.org/wiki/Berkeley_Packet_Filter[eBPF] is a
262 subsystem in the Linux kernel in which a virtual machine can execute
263 programs passed from the user space to the kernel. You can attach
264 such programs to tracepoints and KProbes thanks to a system call, and
265 they can output data to the user space when executed thanks to
266 different mechanisms (pipe, VM register values, and eBPF maps, to name
268 * https://www.kernel.org/doc/Documentation/trace/ftrace.txt[ftrace]
269 is the de facto function tracer of the Linux kernel. Its user
270 interface is a set of special files in sysfs.
271 * https://perf.wiki.kernel.org/[perf] is
272 a performance analyzing tool for Linux which supports hardware
273 performance counters, tracepoints, as well as other counters and
274 types of probes. perf's controlling utility is the cmd:perf command
276 * http://linux.die.net/man/1/strace[strace]
277 is a command-line utility which records system calls made by a
278 user process, as well as signal deliveries and changes of process
279 state. strace makes use of https://en.wikipedia.org/wiki/Ptrace[ptrace]
280 to fulfill its function.
281 * http://www.sysdig.org/[sysdig], like SystemTap, uses scripts to
282 analyze Linux kernel events. You write scripts, or _chisels_ in
283 sysdig's jargon, in Lua and sysdig executes them while the system is
284 being traced or afterwards. sysdig's interface is the cmd:sysdig
285 command-line tool as well as the curses-based cmd:csysdig tool.
286 * https://sourceware.org/systemtap/[SystemTap] is a Linux kernel and
287 user space tracer which uses custom user scripts to produce plain text
288 traces. SystemTap converts the scripts to the C language, and then
289 compiles them as Linux kernel modules which are loaded to produce
290 trace data. SystemTap's primary user interface is the cmd:stap
293 The main distinctive features of LTTng is that it produces correlated
294 kernel and user space traces, as well as doing so with the lowest
295 overhead amongst other solutions. It produces trace files in the
296 http://diamon.org/ctf[CTF] format, a file format optimized
297 for the production and analyses of multi-gigabyte data.
299 LTTng is the result of more than 10 years of active open source
300 development by a community of passionate developers.
301 LTTng{nbsp}{revision} is currently available on major desktop and server
304 The main interface for tracing control is a single command-line tool
305 named cmd:lttng. The latter can create several tracing sessions, enable
306 and disable events on the fly, filter events efficiently with custom
307 user expressions, start and stop tracing, and much more. LTTng can
308 record the traces on the file system or send them over the network, and
309 keep them totally or partially. You can view the traces once tracing
310 becomes inactive or in real-time.
312 <<installing-lttng,Install LTTng now>> and
313 <<getting-started,start tracing>>!
319 **LTTng** is a set of software <<plumbing,components>> which interact to
320 <<instrumenting,instrument>> the Linux kernel and user applications, and
321 to <<controlling-tracing,control tracing>> (start and stop
322 tracing, enable and disable event rules, and the rest). Those
323 components are bundled into the following packages:
325 * **LTTng-tools**: Libraries and command-line interface to
327 * **LTTng-modules**: Linux kernel modules to instrument and
329 * **LTTng-UST**: Libraries and Java/Python packages to instrument and
330 trace user applications.
332 Most distributions mark the LTTng-modules and LTTng-UST packages as
333 optional when installing LTTng-tools (which is always required). In the
334 following sections, we always provide the steps to install all three,
337 * You only need to install LTTng-modules if you intend to trace the
339 * You only need to install LTTng-UST if you intend to trace user
343 .Availability of LTTng{nbsp}{revision} for major Linux distributions as of 3 October 2017.
345 |Distribution |Available in releases |Alternatives
347 |https://www.ubuntu.com/[Ubuntu]
348 |<<ubuntu,Ubuntu{nbsp}17.04 _Zesty Zapus_ and Ubuntu{nbsp}17.10 _Artful Aardvark_>>.
350 Ubuntu{nbsp}14.04 _Trusty Tahr_ and Ubuntu{nbsp}16.04 _Xenial Xerus_:
351 <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
352 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
353 other Ubuntu releases.
355 |https://getfedora.org/[Fedora]
356 |<<fedora,Fedora{nbsp}26>>.
357 |link:/docs/v2.10#doc-fedora[LTTng{nbsp}2.10 for Fedora 27].
359 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
360 other Fedora releases.
362 |https://www.debian.org/[Debian]
363 |xref:debian[Debian "stretch" (stable)].
364 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
365 other Debian releases.
367 |https://www.archlinux.org/[Arch Linux]
369 |link:/docs/v2.10#doc-arch-linux[LTTng{nbsp}2.10 for the current Arch Linux build].
371 <<building-from-source,Build LTTng{nbsp}{revision} from source>>.
373 |https://alpinelinux.org/[Alpine Linux]
374 |<<alpine-linux,Alpine Linux "edge">>.
375 |<<building-from-source,Build LTTng{nbsp}{revision} from source>> for
376 other Alpine Linux releases.
378 |https://www.redhat.com/[RHEL] and https://www.suse.com/[SLES]
379 |See http://packages.efficios.com/[EfficiOS Enterprise Packages].
382 |https://buildroot.org/[Buildroot]
383 |xref:buildroot[Buildroot{nbsp}2017.02, Buildroot{nbsp}2017.05, and
384 Buildroot{nbsp}2017.08].
385 |link:/docs/v2.8#doc-buildroot[LTTng{nbsp}2.8 for Buildroot{nbsp}2016.11].
387 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
388 other Buildroot releases.
390 |http://www.openembedded.org/wiki/Main_Page[OpenEmbedded] and
391 https://www.yoctoproject.org/[Yocto]
392 |<<oe-yocto,Yocto Project{nbsp}2.3 _Pyro_>> (`openembedded-core` layer).
393 |link:/docs/v2.8#doc-oe-yocto[LTTng{nbsp}2.8 for Yocto Project{nbsp}2.2 _Morty_]
394 (`openembedded-core` layer).
396 <<building-from-source,Build LTTng{nbsp}{revision} from source>> for
397 other OpenEmbedded releases.
402 === [[ubuntu-official-repositories]]Ubuntu
404 LTTng{nbsp}{revision} is available on Ubuntu{nbsp}17.04 _Zesty Zapus_
405 and Ubuntu{nbsp}17.10 _Artful Aardvark_. For previous releases of
406 Ubuntu, <<ubuntu-ppa,use the LTTng Stable{nbsp}{revision} PPA>>.
408 To install LTTng{nbsp}{revision} on Ubuntu{nbsp}17.04 _Zesty Zapus_:
410 . Install the main LTTng{nbsp}{revision} packages:
415 # apt-get install lttng-tools
416 # apt-get install lttng-modules-dkms
417 # apt-get install liblttng-ust-dev
421 . **If you need to instrument and trace
422 <<java-application,Java applications>>**, install the LTTng-UST
428 # apt-get install liblttng-ust-agent-java
432 . **If you need to instrument and trace
433 <<python-application,Python{nbsp}3 applications>>**, install the
434 LTTng-UST Python agent:
439 # apt-get install python3-lttngust
445 ==== noch:{LTTng} Stable {revision} PPA
447 The https://launchpad.net/~lttng/+archive/ubuntu/stable-{revision}[LTTng
448 Stable{nbsp}{revision} PPA] offers the latest stable
449 LTTng{nbsp}{revision} packages for:
451 * Ubuntu{nbsp}14.04 _Trusty Tahr_
452 * Ubuntu{nbsp}16.04 _Xenial Xerus_
454 To install LTTng{nbsp}{revision} from the LTTng Stable{nbsp}{revision} PPA:
456 . Add the LTTng Stable{nbsp}{revision} PPA repository and update the
462 # apt-add-repository ppa:lttng/stable-2.9
467 . Install the main LTTng{nbsp}{revision} packages:
472 # apt-get install lttng-tools
473 # apt-get install lttng-modules-dkms
474 # apt-get install liblttng-ust-dev
478 . **If you need to instrument and trace
479 <<java-application,Java applications>>**, install the LTTng-UST
485 # apt-get install liblttng-ust-agent-java
489 . **If you need to instrument and trace
490 <<python-application,Python{nbsp}3 applications>>**, install the
491 LTTng-UST Python agent:
496 # apt-get install python3-lttngust
504 To install LTTng{nbsp}{revision} on Fedora{nbsp}26:
506 . Install the LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision}
512 # yum install lttng-tools
513 # yum install lttng-ust
517 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
523 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
524 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
525 cd lttng-modules-2.9.* &&
527 sudo make modules_install &&
533 .Java and Python application instrumentation and tracing
535 If you need to instrument and trace <<java-application,Java
536 applications>> on Fedora, you need to build and install
537 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
538 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
539 `--enable-java-agent-all` options to the `configure` script, depending
540 on which Java logging framework you use.
542 If you need to instrument and trace <<python-application,Python
543 applications>> on Fedora, you need to build and install
544 LTTng-UST{nbsp}{revision} from source and pass the
545 `--enable-python-agent` option to the `configure` script.
552 To install LTTng{nbsp}{revision} on Debian "stretch" (stable):
554 . Install the main LTTng{nbsp}{revision} packages:
559 # apt-get install lttng-modules-dkms
560 # apt-get install liblttng-ust-dev
561 # apt-get install lttng-tools
565 . **If you need to instrument and trace <<java-application,Java
566 applications>>**, install the LTTng-UST Java agent:
571 # apt-get install liblttng-ust-agent-java
575 . **If you need to instrument and trace <<python-application,Python
576 applications>>**, install the LTTng-UST Python agent:
581 # apt-get install python3-lttngust
589 To install LTTng-tools{nbsp}{revision} and LTTng-UST{nbsp}{revision} on
592 . Make sure your system is
593 https://wiki.alpinelinux.org/wiki/Edge[configured for "edge"].
594 . Enable the _testing_ repository by uncommenting the corresponding
595 line in path:{/etc/apk/repositories}.
596 . Add the LTTng packages:
601 # apk add lttng-tools
602 # apk add lttng-ust-dev
606 To install LTTng-modules{nbsp}{revision} (Linux kernel tracing support)
607 on Alpine Linux "edge":
609 . Add the vanilla Linux kernel:
614 # apk add linux-vanilla linux-vanilla-dev
618 . Reboot with the vanilla Linux kernel.
619 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
625 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
626 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
627 cd lttng-modules-2.9.* &&
629 sudo make modules_install &&
635 [[enterprise-distributions]]
636 === RHEL, SUSE, and other enterprise distributions
638 To install LTTng on enterprise Linux distributions, such as Red Hat
639 Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SUSE), please
640 see http://packages.efficios.com/[EfficiOS Enterprise Packages].
646 To install LTTng{nbsp}{revision} on Buildroot{nbsp}2017.02,
647 Buildroot{nbsp}2017.05, or Buildroot{nbsp}2017.08:
649 . Launch the Buildroot configuration tool:
658 . In **Kernel**, check **Linux kernel**.
659 . In **Toolchain**, check **Enable WCHAR support**.
660 . In **Target packages**{nbsp}→ **Debugging, profiling and benchmark**,
661 check **lttng-modules** and **lttng-tools**.
662 . In **Target packages**{nbsp}→ **Libraries**{nbsp}→
663 **Other**, check **lttng-libust**.
667 === OpenEmbedded and Yocto
669 LTTng{nbsp}{revision} recipes are available in the
670 http://layers.openembedded.org/layerindex/branch/master/layer/openembedded-core/[`openembedded-core`]
671 layer for Yocto Project{nbsp}2.3 _Pyro_ under the following names:
677 With BitBake, the simplest way to include LTTng recipes in your target
678 image is to add them to `IMAGE_INSTALL_append` in path:{conf/local.conf}:
681 IMAGE_INSTALL_append = " lttng-tools lttng-modules lttng-ust"
686 . Select a machine and an image recipe.
687 . Click **Edit image recipe**.
688 . Under the **All recipes** tab, search for **lttng**.
689 . Check the desired LTTng recipes.
692 .Java and Python application instrumentation and tracing
694 If you need to instrument and trace <<java-application,Java
695 applications>> on Yocto/OpenEmbedded, you need to build and install
696 LTTng-UST{nbsp}{revision} <<building-from-source,from source>> and pass
697 the `--enable-java-agent-jul`, `--enable-java-agent-log4j`, or
698 `--enable-java-agent-all` options to the `configure` script, depending
699 on which Java logging framework you use.
701 If you need to instrument and trace <<python-application,Python
702 applications>> on Yocto/OpenEmbedded, you need to build and install
703 LTTng-UST{nbsp}{revision} from source and pass the
704 `--enable-python-agent` option to the `configure` script.
708 [[building-from-source]]
709 === Build from source
711 To build and install LTTng{nbsp}{revision} from source:
713 . Using your distribution's package manager, or from source, install
714 the following dependencies of LTTng-tools and LTTng-UST:
717 * https://sourceforge.net/projects/libuuid/[libuuid]
718 * http://directory.fsf.org/wiki/Popt[popt]
719 * http://liburcu.org/[Userspace RCU]
720 * http://www.xmlsoft.org/[libxml2]
723 . Download, build, and install the latest LTTng-modules{nbsp}{revision}:
729 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
730 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
731 cd lttng-modules-2.9.* &&
733 sudo make modules_install &&
738 . Download, build, and install the latest LTTng-UST{nbsp}{revision}:
744 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
745 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
746 cd lttng-ust-2.9.* &&
756 .Java and Python application tracing
758 If you need to instrument and trace <<java-application,Java
759 applications>>, pass the `--enable-java-agent-jul`,
760 `--enable-java-agent-log4j`, or `--enable-java-agent-all` options to the
761 `configure` script, depending on which Java logging framework you use.
763 If you need to instrument and trace <<python-application,Python
764 applications>>, pass the `--enable-python-agent` option to the
765 `configure` script. You can set the `PYTHON` environment variable to the
766 path to the Python interpreter for which to install the LTTng-UST Python
774 By default, LTTng-UST libraries are installed to
775 dir:{/usr/local/lib}, which is the de facto directory in which to
776 keep self-compiled and third-party libraries.
778 When <<building-tracepoint-providers-and-user-application,linking an
779 instrumented user application with `liblttng-ust`>>:
781 * Append `/usr/local/lib` to the env:LD_LIBRARY_PATH environment
783 * Pass the `-L/usr/local/lib` and `-Wl,-rpath,/usr/local/lib` options to
784 man:gcc(1), man:g++(1), or man:clang(1).
788 . Download, build, and install the latest LTTng-tools{nbsp}{revision}:
794 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
795 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
796 cd lttng-tools-2.9.* &&
804 TIP: The https://github.com/eepp/vlttng[vlttng tool] can do all the
805 previous steps automatically for a given version of LTTng and confine
806 the installed files in a specific directory. This can be useful to test
807 LTTng without installing it on your system.
813 This is a short guide to get started quickly with LTTng kernel and user
816 Before you follow this guide, make sure to <<installing-lttng,install>>
819 This tutorial walks you through the steps to:
821 . <<tracing-the-linux-kernel,Trace the Linux kernel>>.
822 . <<tracing-your-own-user-application,Trace a user application>> written
824 . <<viewing-and-analyzing-your-traces,View and analyze the
828 [[tracing-the-linux-kernel]]
829 === Trace the Linux kernel
831 The following command lines start with the `#` prompt because you need
832 root privileges to trace the Linux kernel. You can also trace the kernel
833 as a regular user if your Unix user is a member of the
834 <<tracing-group,tracing group>>.
836 . Create a <<tracing-session,tracing session>> which writes its traces
837 to dir:{/tmp/my-kernel-trace}:
842 # lttng create my-kernel-session --output=/tmp/my-kernel-trace
846 . List the available kernel tracepoints and system calls:
851 # lttng list --kernel
852 # lttng list --kernel --syscall
856 . Create <<event,event rules>> which match the desired instrumentation
857 point names, for example the `sched_switch` and `sched_process_fork`
858 tracepoints, and the man:open(2) and man:close(2) system calls:
863 # lttng enable-event --kernel sched_switch,sched_process_fork
864 # lttng enable-event --kernel --syscall open,close
868 You can also create an event rule which matches _all_ the Linux kernel
869 tracepoints (this will generate a lot of data when tracing):
874 # lttng enable-event --kernel --all
878 . <<basic-tracing-session-control,Start tracing>>:
887 . Do some operation on your system for a few seconds. For example,
888 load a website, or list the files of a directory.
889 . <<basic-tracing-session-control,Stop tracing>> and destroy the
900 The man:lttng-destroy(1) command does not destroy the trace data; it
901 only destroys the state of the tracing session.
903 . For the sake of this example, make the recorded trace accessible to
909 # chown -R $(whoami) /tmp/my-kernel-trace
913 See <<viewing-and-analyzing-your-traces,View and analyze the
914 recorded events>> to view the recorded events.
917 [[tracing-your-own-user-application]]
918 === Trace a user application
920 This section steps you through a simple example to trace a
921 _Hello world_ program written in C.
923 To create the traceable user application:
925 . Create the tracepoint provider header file, which defines the
926 tracepoints and the events they can generate:
932 #undef TRACEPOINT_PROVIDER
933 #define TRACEPOINT_PROVIDER hello_world
935 #undef TRACEPOINT_INCLUDE
936 #define TRACEPOINT_INCLUDE "./hello-tp.h"
938 #if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
941 #include <lttng/tracepoint.h>
951 ctf_string(my_string_field, my_string_arg)
952 ctf_integer(int, my_integer_field, my_integer_arg)
956 #endif /* _HELLO_TP_H */
958 #include <lttng/tracepoint-event.h>
962 . Create the tracepoint provider package source file:
968 #define TRACEPOINT_CREATE_PROBES
969 #define TRACEPOINT_DEFINE
971 #include "hello-tp.h"
975 . Build the tracepoint provider package:
980 $ gcc -c -I. hello-tp.c
984 . Create the _Hello World_ application source file:
991 #include "hello-tp.h"
993 int main(int argc, char *argv[])
997 puts("Hello, World!\nPress Enter to continue...");
1000 * The following getchar() call is only placed here for the purpose
1001 * of this demonstration, to pause the application in order for
1002 * you to have time to list its tracepoints. It is not
1008 * A tracepoint() call.
1010 * Arguments, as defined in hello-tp.h:
1012 * 1. Tracepoint provider name (required)
1013 * 2. Tracepoint name (required)
1014 * 3. my_integer_arg (first user-defined argument)
1015 * 4. my_string_arg (second user-defined argument)
1017 * Notice the tracepoint provider and tracepoint names are
1018 * NOT strings: they are in fact parts of variables that the
1019 * macros in hello-tp.h create.
1021 tracepoint(hello_world, my_first_tracepoint, 23, "hi there!");
1023 for (x = 0; x < argc; ++x) {
1024 tracepoint(hello_world, my_first_tracepoint, x, argv[x]);
1027 puts("Quitting now!");
1028 tracepoint(hello_world, my_first_tracepoint, x * x, "x^2");
1035 . Build the application:
1044 . Link the application with the tracepoint provider package,
1045 `liblttng-ust`, and `libdl`:
1050 $ gcc -o hello hello.o hello-tp.o -llttng-ust -ldl
1054 Here's the whole build process:
1057 .User space tracing tutorial's build steps.
1058 image::ust-flow.png[]
1060 To trace the user application:
1062 . Run the application with a few arguments:
1067 $ ./hello world and beyond
1076 Press Enter to continue...
1080 . Start an LTTng <<lttng-sessiond,session daemon>>:
1085 $ lttng-sessiond --daemonize
1089 Note that a session daemon might already be running, for example as
1090 a service that the distribution's service manager started.
1092 . List the available user space tracepoints:
1097 $ lttng list --userspace
1101 You see the `hello_world:my_first_tracepoint` tracepoint listed
1102 under the `./hello` process.
1104 . Create a <<tracing-session,tracing session>>:
1109 $ lttng create my-user-space-session
1113 . Create an <<event,event rule>> which matches the
1114 `hello_world:my_first_tracepoint` event name:
1119 $ lttng enable-event --userspace hello_world:my_first_tracepoint
1123 . <<basic-tracing-session-control,Start tracing>>:
1132 . Go back to the running `hello` application and press Enter. The
1133 program executes all `tracepoint()` instrumentation points and exits.
1134 . <<basic-tracing-session-control,Stop tracing>> and destroy the
1145 The man:lttng-destroy(1) command does not destroy the trace data; it
1146 only destroys the state of the tracing session.
1148 By default, LTTng saves the traces in
1149 +$LTTNG_HOME/lttng-traces/__name__-__date__-__time__+,
1150 where +__name__+ is the tracing session name. The
1151 env:LTTNG_HOME environment variable defaults to `$HOME` if not set.
1153 See <<viewing-and-analyzing-your-traces,View and analyze the
1154 recorded events>> to view the recorded events.
1157 [[viewing-and-analyzing-your-traces]]
1158 === View and analyze the recorded events
1160 Once you have completed the <<tracing-the-linux-kernel,Trace the Linux
1161 kernel>> and <<tracing-your-own-user-application,Trace a user
1162 application>> tutorials, you can inspect the recorded events.
1164 Many tools are available to read LTTng traces:
1166 * **cmd:babeltrace** is a command-line utility which converts trace
1167 formats; it supports the format that LTTng produces, CTF, as well as a
1168 basic text output which can be ++grep++ed. The cmd:babeltrace command
1169 is part of the http://diamon.org/babeltrace[Babeltrace] project.
1170 * Babeltrace also includes
1171 **https://www.python.org/[Python] bindings** so
1172 that you can easily open and read an LTTng trace with your own script,
1173 benefiting from the power of Python.
1174 * http://tracecompass.org/[**Trace Compass**]
1175 is a graphical user interface for viewing and analyzing any type of
1176 logs or traces, including LTTng's.
1177 * https://github.com/lttng/lttng-analyses[**LTTng analyses**] is a
1178 project which includes many high-level analyses of LTTng kernel
1179 traces, like scheduling statistics, interrupt frequency distribution,
1180 top CPU usage, and more.
1182 NOTE: This section assumes that the traces recorded during the previous
1183 tutorials were saved to their default location, in the
1184 dir:{$LTTNG_HOME/lttng-traces} directory. The env:LTTNG_HOME
1185 environment variable defaults to `$HOME` if not set.
1188 [[viewing-and-analyzing-your-traces-bt]]
1189 ==== Use the cmd:babeltrace command-line tool
1191 The simplest way to list all the recorded events of a trace is to pass
1192 its path to cmd:babeltrace with no options:
1196 $ babeltrace ~/lttng-traces/my-user-space-session*
1199 cmd:babeltrace finds all traces recursively within the given path and
1200 prints all their events, merging them in chronological order.
1202 You can pipe the output of cmd:babeltrace into a tool like man:grep(1) for
1207 $ babeltrace /tmp/my-kernel-trace | grep _switch
1210 You can pipe the output of cmd:babeltrace into a tool like man:wc(1) to
1211 count the recorded events:
1215 $ babeltrace /tmp/my-kernel-trace | grep _open | wc --lines
1219 [[viewing-and-analyzing-your-traces-bt-python]]
1220 ==== Use the Babeltrace Python bindings
1222 The <<viewing-and-analyzing-your-traces-bt,text output of cmd:babeltrace>>
1223 is useful to isolate events by simple matching using man:grep(1) and
1224 similar utilities. However, more elaborate filters, such as keeping only
1225 event records with a field value falling within a specific range, are
1226 not trivial to write using a shell. Moreover, reductions and even the
1227 most basic computations involving multiple event records are virtually
1228 impossible to implement.
1230 Fortunately, Babeltrace ships with Python 3 bindings which makes it easy
1231 to read the event records of an LTTng trace sequentially and compute the
1232 desired information.
1234 The following script accepts an LTTng Linux kernel trace path as its
1235 first argument and prints the short names of the top 5 running processes
1236 on CPU 0 during the whole trace:
1241 from collections import Counter
1247 if len(sys.argv) != 2:
1248 msg = 'Usage: python3 {} TRACEPATH'.format(sys.argv[0])
1249 print(msg, file=sys.stderr)
1252 # A trace collection contains one or more traces
1253 col = babeltrace.TraceCollection()
1255 # Add the trace provided by the user (LTTng traces always have
1257 if col.add_trace(sys.argv[1], 'ctf') is None:
1258 raise RuntimeError('Cannot add trace')
1260 # This counter dict contains execution times:
1262 # task command name -> total execution time (ns)
1263 exec_times = Counter()
1265 # This contains the last `sched_switch` timestamp
1269 for event in col.events:
1270 # Keep only `sched_switch` events
1271 if event.name != 'sched_switch':
1274 # Keep only events which happened on CPU 0
1275 if event['cpu_id'] != 0:
1279 cur_ts = event.timestamp
1285 # Previous task command (short) name
1286 prev_comm = event['prev_comm']
1288 # Initialize entry in our dict if not yet done
1289 if prev_comm not in exec_times:
1290 exec_times[prev_comm] = 0
1292 # Compute previous command execution time
1293 diff = cur_ts - last_ts
1295 # Update execution time of this command
1296 exec_times[prev_comm] += diff
1298 # Update last timestamp
1302 for name, ns in exec_times.most_common(5):
1304 print('{:20}{} s'.format(name, s))
1309 if __name__ == '__main__':
1310 sys.exit(0 if top5proc() else 1)
1317 $ python3 top5proc.py /tmp/my-kernel-trace/kernel
1323 swapper/0 48.607245889 s
1324 chromium 7.192738188 s
1325 pavucontrol 0.709894415 s
1326 Compositor 0.660867933 s
1327 Xorg.bin 0.616753786 s
1330 Note that `swapper/0` is the "idle" process of CPU 0 on Linux; since we
1331 weren't using the CPU that much when tracing, its first position in the
1336 == [[understanding-lttng]]Core concepts
1338 From a user's perspective, the LTTng system is built on a few concepts,
1339 or objects, on which the <<lttng-cli,cmd:lttng command-line tool>>
1340 operates by sending commands to the <<lttng-sessiond,session daemon>>.
1341 Understanding how those objects relate to eachother is key in mastering
1344 The core concepts are:
1346 * <<tracing-session,Tracing session>>
1347 * <<domain,Tracing domain>>
1348 * <<channel,Channel and ring buffer>>
1349 * <<"event","Instrumentation point, event rule, event, and event record">>
1355 A _tracing session_ is a stateful dialogue between you and
1356 a <<lttng-sessiond,session daemon>>. You can
1357 <<creating-destroying-tracing-sessions,create a new tracing
1358 session>> with the `lttng create` command.
1360 Anything that you do when you control LTTng tracers happens within a
1361 tracing session. In particular, a tracing session:
1364 * Has its own set of trace files.
1365 * Has its own state of activity (started or stopped).
1366 * Has its own <<tracing-session-mode,mode>> (local, network streaming,
1368 * Has its own <<channel,channels>> which have their own
1369 <<event,event rules>>.
1372 .A _tracing session_ contains <<channel,channels>> that are members of <<domain,tracing domains>> and contain <<event,event rules>>.
1373 image::concepts.png[]
1375 Those attributes and objects are completely isolated between different
1378 A tracing session is analogous to a cash machine session:
1379 the operations you do on the banking system through the cash machine do
1380 not alter the data of other users of the same system. In the case of
1381 the cash machine, a session lasts as long as your bank card is inside.
1382 In the case of LTTng, a tracing session lasts from the `lttng create`
1383 command to the `lttng destroy` command.
1386 .Each Unix user has its own set of tracing sessions.
1387 image::many-sessions.png[]
1390 [[tracing-session-mode]]
1391 ==== Tracing session mode
1393 LTTng can send the generated trace data to different locations. The
1394 _tracing session mode_ dictates where to send it. The following modes
1395 are available in LTTng{nbsp}{revision}:
1398 LTTng writes the traces to the file system of the machine being traced
1401 Network streaming mode::
1402 LTTng sends the traces over the network to a
1403 <<lttng-relayd,relay daemon>> running on a remote system.
1406 LTTng does not write the traces by default. Instead, you can request
1407 LTTng to <<taking-a-snapshot,take a snapshot>>, that is, a copy of the
1408 current tracing buffers, and to write it to the target's file system
1409 or to send it over the network to a <<lttng-relayd,relay daemon>>
1410 running on a remote system.
1413 This mode is similar to the network streaming mode, but a live
1414 trace viewer can connect to the distant relay daemon to
1415 <<lttng-live,view event records as LTTng generates them>> by
1422 A _tracing domain_ is a namespace for event sources. A tracing domain
1423 has its own properties and features.
1425 There are currently five available tracing domains:
1429 * `java.util.logging` (JUL)
1433 You must specify a tracing domain when using some commands to avoid
1434 ambiguity. For example, since all the domains support named tracepoints
1435 as event sources (instrumentation points that you manually insert in the
1436 source code), you need to specify a tracing domain when
1437 <<enabling-disabling-events,creating an event rule>> because all the
1438 tracing domains could have tracepoints with the same names.
1440 Some features are reserved to specific tracing domains. Dynamic function
1441 entry and return instrumentation points, for example, are currently only
1442 supported in the Linux kernel tracing domain, but support for other
1443 tracing domains could be added in the future.
1445 You can create <<channel,channels>> in the Linux kernel and user space
1446 tracing domains. The other tracing domains have a single default
1451 === Channel and ring buffer
1453 A _channel_ is an object which is responsible for a set of ring buffers.
1454 Each ring buffer is divided into multiple sub-buffers. When an LTTng
1455 tracer emits an event, it can record it to one or more
1456 sub-buffers. The attributes of a channel determine what to do when
1457 there's no space left for a new event record because all sub-buffers
1458 are full, where to send a full sub-buffer, and other behaviours.
1460 A channel is always associated to a <<domain,tracing domain>>. The
1461 `java.util.logging` (JUL), log4j, and Python tracing domains each have
1462 a default channel which you cannot configure.
1464 A channel also owns <<event,event rules>>. When an LTTng tracer emits
1465 an event, it records it to the sub-buffers of all
1466 the enabled channels with a satisfied event rule, as long as those
1467 channels are part of active <<tracing-session,tracing sessions>>.
1470 [[channel-buffering-schemes]]
1471 ==== Per-user vs. per-process buffering schemes
1473 A channel has at least one ring buffer _per CPU_. LTTng always
1474 records an event to the ring buffer associated to the CPU on which it
1477 Two _buffering schemes_ are available when you
1478 <<enabling-disabling-channels,create a channel>> in the
1479 user space <<domain,tracing domain>>:
1481 Per-user buffering::
1482 Allocate one set of ring buffers--one per CPU--shared by all the
1483 instrumented processes of each Unix user.
1487 .Per-user buffering scheme.
1488 image::per-user-buffering.png[]
1491 Per-process buffering::
1492 Allocate one set of ring buffers--one per CPU--for each
1493 instrumented process.
1497 .Per-process buffering scheme.
1498 image::per-process-buffering.png[]
1501 The per-process buffering scheme tends to consume more memory than the
1502 per-user option because systems generally have more instrumented
1503 processes than Unix users running instrumented processes. However, the
1504 per-process buffering scheme ensures that one process having a high
1505 event throughput won't fill all the shared sub-buffers of the same
1508 The Linux kernel tracing domain has only one available buffering scheme
1509 which is to allocate a single set of ring buffers for the whole system.
1510 This scheme is similar to the per-user option, but with a single, global
1511 user "running" the kernel.
1514 [[channel-overwrite-mode-vs-discard-mode]]
1515 ==== Overwrite vs. discard event loss modes
1517 When an event occurs, LTTng records it to a specific sub-buffer (yellow
1518 arc in the following animation) of a specific channel's ring buffer.
1519 When there's no space left in a sub-buffer, the tracer marks it as
1520 consumable (red) and another, empty sub-buffer starts receiving the
1521 following event records. A <<lttng-consumerd,consumer daemon>>
1522 eventually consumes the marked sub-buffer (returns to white).
1525 [role="docsvg-channel-subbuf-anim"]
1530 In an ideal world, sub-buffers are consumed faster than they are filled,
1531 as is the case in the previous animation. In the real world,
1532 however, all sub-buffers can be full at some point, leaving no space to
1533 record the following events.
1535 By design, LTTng is a _non-blocking_ tracer: when no empty sub-buffer is
1536 available, it is acceptable to lose event records when the alternative
1537 would be to cause substantial delays in the instrumented application's
1538 execution. LTTng privileges performance over integrity; it aims at
1539 perturbing the traced system as little as possible in order to make
1540 tracing of subtle race conditions and rare interrupt cascades possible.
1542 When it comes to losing event records because no empty sub-buffer is
1543 available, the channel's _event loss mode_ determines what to do. The
1544 available event loss modes are:
1547 Drop the newest event records until a the tracer
1548 releases a sub-buffer.
1551 Clear the sub-buffer containing the oldest event records and start
1552 writing the newest event records there.
1554 This mode is sometimes called _flight recorder mode_ because it's
1556 https://en.wikipedia.org/wiki/Flight_recorder[flight recorder]:
1557 always keep a fixed amount of the latest data.
1559 Which mechanism you should choose depends on your context: prioritize
1560 the newest or the oldest event records in the ring buffer?
1562 Beware that, in overwrite mode, the tracer abandons a whole sub-buffer
1563 as soon as a there's no space left for a new event record, whereas in
1564 discard mode, the tracer only discards the event record that doesn't
1567 In discard mode, LTTng increments a count of lost event records when an
1568 event record is lost and saves this count to the trace. In overwrite
1569 mode, since LTTng 2.8, LTTng increments a count of lost sub-buffers when
1570 a sub-buffer is lost and saves this count to the trace. In this mode,
1571 the exact number of lost event records in those lost sub-buffers is not
1572 saved to the trace. Trace analyses can use the trace's saved discarded
1573 event record and sub-buffer counts to decide whether or not to perform
1574 the analyses even if trace data is known to be missing.
1576 There are a few ways to decrease your probability of losing event
1578 <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>> shows
1579 how you can fine-une the sub-buffer count and size of a channel to
1580 virtually stop losing event records, though at the cost of greater
1584 [[channel-subbuf-size-vs-subbuf-count]]
1585 ==== Sub-buffer count and size
1587 When you <<enabling-disabling-channels,create a channel>>, you can
1588 set its number of sub-buffers and their size.
1590 Note that there is noticeable CPU overhead introduced when
1591 switching sub-buffers (marking a full one as consumable and switching
1592 to an empty one for the following events to be recorded). Knowing this,
1593 the following list presents a few practical situations along with how
1594 to configure the sub-buffer count and size for them:
1596 * **High event throughput**: In general, prefer bigger sub-buffers to
1597 lower the risk of losing event records.
1599 Having bigger sub-buffers also ensures a lower
1600 <<channel-switch-timer,sub-buffer switching frequency>>.
1602 The number of sub-buffers is only meaningful if you create the channel
1603 in overwrite mode: in this case, if a sub-buffer overwrite happens, the
1604 other sub-buffers are left unaltered.
1606 * **Low event throughput**: In general, prefer smaller sub-buffers
1607 since the risk of losing event records is low.
1609 Because events occur less frequently, the sub-buffer switching frequency
1610 should remain low and thus the tracer's overhead should not be a
1613 * **Low memory system**: If your target system has a low memory
1614 limit, prefer fewer first, then smaller sub-buffers.
1616 Even if the system is limited in memory, you want to keep the
1617 sub-buffers as big as possible to avoid a high sub-buffer switching
1620 Note that LTTng uses http://diamon.org/ctf/[CTF] as its trace format,
1621 which means event data is very compact. For example, the average
1622 LTTng kernel event record weights about 32{nbsp}bytes. Thus, a
1623 sub-buffer size of 1{nbsp}MiB is considered big.
1625 The previous situations highlight the major trade-off between a few big
1626 sub-buffers and more, smaller sub-buffers: sub-buffer switching
1627 frequency vs. how much data is lost in overwrite mode. Assuming a
1628 constant event throughput and using the overwrite mode, the two
1629 following configurations have the same ring buffer total size:
1632 [role="docsvg-channel-subbuf-size-vs-count-anim"]
1637 * **2 sub-buffers of 4{nbsp}MiB each**: Expect a very low sub-buffer
1638 switching frequency, but if a sub-buffer overwrite happens, half of
1639 the event records so far (4{nbsp}MiB) are definitely lost.
1640 * **8 sub-buffers of 1{nbsp}MiB each**: Expect 4{nbsp}times the tracer's
1641 overhead as the previous configuration, but if a sub-buffer
1642 overwrite happens, only the eighth of event records so far are
1645 In discard mode, the sub-buffers count parameter is pointless: use two
1646 sub-buffers and set their size according to the requirements of your
1650 [[channel-switch-timer]]
1651 ==== Switch timer period
1653 The _switch timer period_ is an important configurable attribute of
1654 a channel to ensure periodic sub-buffer flushing.
1656 When the _switch timer_ expires, a sub-buffer switch happens. You can
1657 set the switch timer period attribute when you
1658 <<enabling-disabling-channels,create a channel>> to ensure that event
1659 data is consumed and committed to trace files or to a distant relay
1660 daemon periodically in case of a low event throughput.
1663 [role="docsvg-channel-switch-timer"]
1668 This attribute is also convenient when you use big sub-buffers to cope
1669 with a sporadic high event throughput, even if the throughput is
1673 [[channel-read-timer]]
1674 ==== Read timer period
1676 By default, the LTTng tracers use a notification mechanism to signal a
1677 full sub-buffer so that a consumer daemon can consume it. When such
1678 notifications must be avoided, for example in real-time applications,
1679 you can use the channel's _read timer_ instead. When the read timer
1680 fires, the <<lttng-consumerd,consumer daemon>> checks for full,
1681 consumable sub-buffers.
1684 [[tracefile-rotation]]
1685 ==== Trace file count and size
1687 By default, trace files can grow as large as needed. You can set the
1688 maximum size of each trace file that a channel writes when you
1689 <<enabling-disabling-channels,create a channel>>. When the size of
1690 a trace file reaches the channel's fixed maximum size, LTTng creates
1691 another file to contain the next event records. LTTng appends a file
1692 count to each trace file name in this case.
1694 If you set the trace file size attribute when you create a channel, the
1695 maximum number of trace files that LTTng creates is _unlimited_ by
1696 default. To limit them, you can also set a maximum number of trace
1697 files. When the number of trace files reaches the channel's fixed
1698 maximum count, the oldest trace file is overwritten. This mechanism is
1699 called _trace file rotation_.
1703 === Instrumentation point, event rule, event, and event record
1705 An _event rule_ is a set of conditions which must be **all** satisfied
1706 for LTTng to record an occuring event.
1708 You set the conditions when you <<enabling-disabling-events,create
1711 You always attach an event rule to <<channel,channel>> when you create
1714 When an event passes the conditions of an event rule, LTTng records it
1715 in one of the attached channel's sub-buffers.
1717 The available conditions, as of LTTng{nbsp}{revision}, are:
1719 * The event rule _is enabled_.
1720 * The instrumentation point's type _is{nbsp}T_.
1721 * The instrumentation point's name (sometimes called _event name_)
1722 _matches{nbsp}N_, but _is not{nbsp}E_.
1723 * The instrumentation point's log level _is as severe as{nbsp}L_, or
1724 _is exactly{nbsp}L_.
1725 * The fields of the event's payload _satisfy_ a filter
1726 expression{nbsp}__F__.
1728 As you can see, all the conditions but the dynamic filter are related to
1729 the event rule's status or to the instrumentation point, not to the
1730 occurring events. This is why, without a filter, checking if an event
1731 passes an event rule is not a dynamic task: when you create or modify an
1732 event rule, all the tracers of its tracing domain enable or disable the
1733 instrumentation points themselves once. This is possible because the
1734 attributes of an instrumentation point (type, name, and log level) are
1735 defined statically. In other words, without a dynamic filter, the tracer
1736 _does not evaluate_ the arguments of an instrumentation point unless it
1737 matches an enabled event rule.
1739 Note that, for LTTng to record an event, the <<channel,channel>> to
1740 which a matching event rule is attached must also be enabled, and the
1741 tracing session owning this channel must be active.
1744 .Logical path from an instrumentation point to an event record.
1745 image::event-rule.png[]
1747 .Event, event record, or event rule?
1749 With so many similar terms, it's easy to get confused.
1751 An **event** is the consequence of the execution of an _instrumentation
1752 point_, like a tracepoint that you manually place in some source code,
1753 or a Linux kernel KProbe. An event is said to _occur_ at a specific
1754 time. Different actions can be taken upon the occurrence of an event,
1755 like record the event's payload to a buffer.
1757 An **event record** is the representation of an event in a sub-buffer. A
1758 tracer is responsible for capturing the payload of an event, current
1759 context variables, the event's ID, and the event's timestamp. LTTng
1760 can append this sub-buffer to a trace file.
1762 An **event rule** is a set of conditions which must all be satisfied for
1763 LTTng to record an occuring event. Events still occur without
1764 satisfying event rules, but LTTng does not record them.
1769 == Components of noch:{LTTng}
1771 The second _T_ in _LTTng_ stands for _toolkit_: it would be wrong
1772 to call LTTng a simple _tool_ since it is composed of multiple
1773 interacting components. This section describes those components,
1774 explains their respective roles, and shows how they connect together to
1775 form the LTTng ecosystem.
1777 The following diagram shows how the most important components of LTTng
1778 interact with user applications, the Linux kernel, and you:
1781 .Control and trace data paths between LTTng components.
1782 image::plumbing.png[]
1784 The LTTng project incorporates:
1786 * **LTTng-tools**: Libraries and command-line interface to
1787 control tracing sessions.
1788 ** <<lttng-sessiond,Session daemon>> (man:lttng-sessiond(8)).
1789 ** <<lttng-consumerd,Consumer daemon>> (man:lttng-consumerd(8)).
1790 ** <<lttng-relayd,Relay daemon>> (man:lttng-relayd(8)).
1791 ** <<liblttng-ctl-lttng,Tracing control library>> (`liblttng-ctl`).
1792 ** <<lttng-cli,Tracing control command-line tool>> (man:lttng(1)).
1793 * **LTTng-UST**: Libraries and Java/Python packages to trace user
1795 ** <<lttng-ust,User space tracing library>> (`liblttng-ust`) and its
1796 headers to instrument and trace any native user application.
1797 ** <<prebuilt-ust-helpers,Preloadable user space tracing helpers>>:
1798 *** `liblttng-ust-libc-wrapper`
1799 *** `liblttng-ust-pthread-wrapper`
1800 *** `liblttng-ust-cyg-profile`
1801 *** `liblttng-ust-cyg-profile-fast`
1802 *** `liblttng-ust-dl`
1803 ** User space tracepoint provider source files generator command-line
1804 tool (man:lttng-gen-tp(1)).
1805 ** <<lttng-ust-agents,LTTng-UST Java agent>> to instrument and trace
1806 Java applications using `java.util.logging` or
1807 Apache log4j 1.2 logging.
1808 ** <<lttng-ust-agents,LTTng-UST Python agent>> to instrument
1809 Python applications using the standard `logging` package.
1810 * **LTTng-modules**: <<lttng-modules,Linux kernel modules>> to trace
1812 ** LTTng kernel tracer module.
1813 ** Tracing ring buffer kernel modules.
1814 ** Probe kernel modules.
1815 ** LTTng logger kernel module.
1819 === Tracing control command-line interface
1822 .The tracing control command-line interface.
1823 image::plumbing-lttng-cli.png[]
1825 The _man:lttng(1) command-line tool_ is the standard user interface to
1826 control LTTng <<tracing-session,tracing sessions>>. The cmd:lttng tool
1827 is part of LTTng-tools.
1829 The cmd:lttng tool is linked with
1830 <<liblttng-ctl-lttng,`liblttng-ctl`>> to communicate with
1831 one or more <<lttng-sessiond,session daemons>> behind the scenes.
1833 The cmd:lttng tool has a Git-like interface:
1837 $ lttng <GENERAL OPTIONS> <COMMAND> <COMMAND OPTIONS>
1840 The <<controlling-tracing,Tracing control>> section explores the
1841 available features of LTTng using the cmd:lttng tool.
1844 [[liblttng-ctl-lttng]]
1845 === Tracing control library
1848 .The tracing control library.
1849 image::plumbing-liblttng-ctl.png[]
1851 The _LTTng control library_, `liblttng-ctl`, is used to communicate
1852 with a <<lttng-sessiond,session daemon>> using a C API that hides the
1853 underlying protocol's details. `liblttng-ctl` is part of LTTng-tools.
1855 The <<lttng-cli,cmd:lttng command-line tool>>
1856 is linked with `liblttng-ctl`.
1858 You can use `liblttng-ctl` in C or $$C++$$ source code by including its
1863 #include <lttng/lttng.h>
1866 Some objects are referenced by name (C string), such as tracing
1867 sessions, but most of them require to create a handle first using
1868 `lttng_create_handle()`.
1870 The best available developer documentation for `liblttng-ctl` is, as of
1871 LTTng{nbsp}{revision}, its installed header files. Every function and
1872 structure is thoroughly documented.
1876 === User space tracing library
1879 .The user space tracing library.
1880 image::plumbing-liblttng-ust.png[]
1882 The _user space tracing library_, `liblttng-ust` (see man:lttng-ust(3)),
1883 is the LTTng user space tracer. It receives commands from a
1884 <<lttng-sessiond,session daemon>>, for example to
1885 enable and disable specific instrumentation points, and writes event
1886 records to ring buffers shared with a
1887 <<lttng-consumerd,consumer daemon>>.
1888 `liblttng-ust` is part of LTTng-UST.
1890 Public C header files are installed beside `liblttng-ust` to
1891 instrument any <<c-application,C or $$C++$$ application>>.
1893 <<lttng-ust-agents,LTTng-UST agents>>, which are regular Java and Python
1894 packages, use their own library providing tracepoints which is
1895 linked with `liblttng-ust`.
1897 An application or library does not have to initialize `liblttng-ust`
1898 manually: its constructor does the necessary tasks to properly register
1899 to a session daemon. The initialization phase also enables the
1900 instrumentation points matching the <<event,event rules>> that you
1904 [[lttng-ust-agents]]
1905 === User space tracing agents
1908 .The user space tracing agents.
1909 image::plumbing-lttng-ust-agents.png[]
1911 The _LTTng-UST Java and Python agents_ are regular Java and Python
1912 packages which add LTTng tracing capabilities to the
1913 native logging frameworks. The LTTng-UST agents are part of LTTng-UST.
1915 In the case of Java, the
1916 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[`java.util.logging`
1917 core logging facilities] and
1918 https://logging.apache.org/log4j/1.2/[Apache log4j 1.2] are supported.
1919 Note that Apache Log4{nbsp}2 is not supported.
1921 In the case of Python, the standard
1922 https://docs.python.org/3/library/logging.html[`logging`] package
1923 is supported. Both Python 2 and Python 3 modules can import the
1924 LTTng-UST Python agent package.
1926 The applications using the LTTng-UST agents are in the
1927 `java.util.logging` (JUL),
1928 log4j, and Python <<domain,tracing domains>>.
1930 Both agents use the same mechanism to trace the log statements. When an
1931 agent is initialized, it creates a log handler that attaches to the root
1932 logger. The agent also registers to a <<lttng-sessiond,session daemon>>.
1933 When the application executes a log statement, it is passed to the
1934 agent's log handler by the root logger. The agent's log handler calls a
1935 native function in a tracepoint provider package shared library linked
1936 with <<lttng-ust,`liblttng-ust`>>, passing the formatted log message and
1937 other fields, like its logger name and its log level. This native
1938 function contains a user space instrumentation point, hence tracing the
1941 The log level condition of an
1942 <<event,event rule>> is considered when tracing
1943 a Java or a Python application, and it's compatible with the standard
1944 JUL, log4j, and Python log levels.
1948 === LTTng kernel modules
1951 .The LTTng kernel modules.
1952 image::plumbing-lttng-modules.png[]
1954 The _LTTng kernel modules_ are a set of Linux kernel modules
1955 which implement the kernel tracer of the LTTng project. The LTTng
1956 kernel modules are part of LTTng-modules.
1958 The LTTng kernel modules include:
1960 * A set of _probe_ modules.
1962 Each module attaches to a specific subsystem
1963 of the Linux kernel using its tracepoint instrument points. There are
1964 also modules to attach to the entry and return points of the Linux
1965 system call functions.
1967 * _Ring buffer_ modules.
1969 A ring buffer implementation is provided as kernel modules. The LTTng
1970 kernel tracer writes to the ring buffer; a
1971 <<lttng-consumerd,consumer daemon>> reads from the ring buffer.
1973 * The _LTTng kernel tracer_ module.
1974 * The _LTTng logger_ module.
1976 The LTTng logger module implements the special path:{/proc/lttng-logger}
1977 file so that any executable can generate LTTng events by opening and
1978 writing to this file.
1980 See <<proc-lttng-logger-abi,LTTng logger>>.
1982 Generally, you do not have to load the LTTng kernel modules manually
1983 (using man:modprobe(8), for example): a root <<lttng-sessiond,session
1984 daemon>> loads the necessary modules when starting. If you have extra
1985 probe modules, you can specify to load them to the session daemon on
1988 The LTTng kernel modules are installed in
1989 +/usr/lib/modules/__release__/extra+ by default, where +__release__+ is
1990 the kernel release (see `uname --kernel-release`).
1997 .The session daemon.
1998 image::plumbing-sessiond.png[]
2000 The _session daemon_, man:lttng-sessiond(8), is a daemon responsible for
2001 managing tracing sessions and for controlling the various components of
2002 LTTng. The session daemon is part of LTTng-tools.
2004 The session daemon sends control requests to and receives control
2007 * The <<lttng-ust,user space tracing library>>.
2009 Any instance of the user space tracing library first registers to
2010 a session daemon. Then, the session daemon can send requests to
2011 this instance, such as:
2014 ** Get the list of tracepoints.
2015 ** Share an <<event,event rule>> so that the user space tracing library
2016 can enable or disable tracepoints. Amongst the possible conditions
2017 of an event rule is a filter expression which `liblttng-ust` evalutes
2018 when an event occurs.
2019 ** Share <<channel,channel>> attributes and ring buffer locations.
2022 The session daemon and the user space tracing library use a Unix
2023 domain socket for their communication.
2025 * The <<lttng-ust-agents,user space tracing agents>>.
2027 Any instance of a user space tracing agent first registers to
2028 a session daemon. Then, the session daemon can send requests to
2029 this instance, such as:
2032 ** Get the list of loggers.
2033 ** Enable or disable a specific logger.
2036 The session daemon and the user space tracing agent use a TCP connection
2037 for their communication.
2039 * The <<lttng-modules,LTTng kernel tracer>>.
2040 * The <<lttng-consumerd,consumer daemon>>.
2042 The session daemon sends requests to the consumer daemon to instruct
2043 it where to send the trace data streams, amongst other information.
2045 * The <<lttng-relayd,relay daemon>>.
2047 The session daemon receives commands from the
2048 <<liblttng-ctl-lttng,tracing control library>>.
2050 The root session daemon loads the appropriate
2051 <<lttng-modules,LTTng kernel modules>> on startup. It also spawns
2052 a <<lttng-consumerd,consumer daemon>> as soon as you create
2053 an <<event,event rule>>.
2055 The session daemon does not send and receive trace data: this is the
2056 role of the <<lttng-consumerd,consumer daemon>> and
2057 <<lttng-relayd,relay daemon>>. It does, however, generate the
2058 http://diamon.org/ctf/[CTF] metadata stream.
2060 Each Unix user can have its own session daemon instance. The
2061 tracing sessions managed by different session daemons are completely
2064 The root user's session daemon is the only one which is
2065 allowed to control the LTTng kernel tracer, and its spawned consumer
2066 daemon is the only one which is allowed to consume trace data from the
2067 LTTng kernel tracer. Note, however, that any Unix user which is a member
2068 of the <<tracing-group,tracing group>> is allowed
2069 to create <<channel,channels>> in the
2070 Linux kernel <<domain,tracing domain>>, and thus to trace the Linux
2073 The <<lttng-cli,cmd:lttng command-line tool>> automatically starts a
2074 session daemon when using its `create` command if none is currently
2075 running. You can also start the session daemon manually.
2082 .The consumer daemon.
2083 image::plumbing-consumerd.png[]
2085 The _consumer daemon_, man:lttng-consumerd(8), is a daemon which shares
2086 ring buffers with user applications or with the LTTng kernel modules to
2087 collect trace data and send it to some location (on disk or to a
2088 <<lttng-relayd,relay daemon>> over the network). The consumer daemon
2089 is part of LTTng-tools.
2091 You do not start a consumer daemon manually: a consumer daemon is always
2092 spawned by a <<lttng-sessiond,session daemon>> as soon as you create an
2093 <<event,event rule>>, that is, before you start tracing. When you kill
2094 its owner session daemon, the consumer daemon also exits because it is
2095 the session daemon's child process. Command-line options of
2096 man:lttng-sessiond(8) target the consumer daemon process.
2098 There are up to two running consumer daemons per Unix user, whereas only
2099 one session daemon can run per user. This is because each process can be
2100 either 32-bit or 64-bit: if the target system runs a mixture of 32-bit
2101 and 64-bit processes, it is more efficient to have separate
2102 corresponding 32-bit and 64-bit consumer daemons. The root user is an
2103 exception: it can have up to _three_ running consumer daemons: 32-bit
2104 and 64-bit instances for its user applications, and one more
2105 reserved for collecting kernel trace data.
2113 image::plumbing-relayd.png[]
2115 The _relay daemon_, man:lttng-relayd(8), is a daemon acting as a bridge
2116 between remote session and consumer daemons, local trace files, and a
2117 remote live trace viewer. The relay daemon is part of LTTng-tools.
2119 The main purpose of the relay daemon is to implement a receiver of
2120 <<sending-trace-data-over-the-network,trace data over the network>>.
2121 This is useful when the target system does not have much file system
2122 space to record trace files locally.
2124 The relay daemon is also a server to which a
2125 <<lttng-live,live trace viewer>> can
2126 connect. The live trace viewer sends requests to the relay daemon to
2127 receive trace data as the target system emits events. The
2128 communication protocol is named _LTTng live_; it is used over TCP
2131 Note that you can start the relay daemon on the target system directly.
2132 This is the setup of choice when the use case is to view events as
2133 the target system emits them without the need of a remote system.
2137 == [[using-lttng]]Instrumentation
2139 There are many examples of tracing and monitoring in our everyday life:
2141 * You have access to real-time and historical weather reports and
2142 forecasts thanks to weather stations installed around the country.
2143 * You know your heart is safe thanks to an electrocardiogram.
2144 * You make sure not to drive your car too fast and to have enough fuel
2145 to reach your destination thanks to gauges visible on your dashboard.
2147 All the previous examples have something in common: they rely on
2148 **instruments**. Without the electrodes attached to the surface of your
2149 body's skin, cardiac monitoring is futile.
2151 LTTng, as a tracer, is no different from those real life examples. If
2152 you're about to trace a software system or, in other words, record its
2153 history of execution, you better have **instrumentation points** in the
2154 subject you're tracing, that is, the actual software.
2156 Various ways were developed to instrument a piece of software for LTTng
2157 tracing. The most straightforward one is to manually place
2158 instrumentation points, called _tracepoints_, in the software's source
2159 code. It is also possible to add instrumentation points dynamically in
2160 the Linux kernel <<domain,tracing domain>>.
2162 If you're only interested in tracing the Linux kernel, your
2163 instrumentation needs are probably already covered by LTTng's built-in
2164 <<lttng-modules,Linux kernel tracepoints>>. You may also wish to trace a
2165 user application which is already instrumented for LTTng tracing.
2166 In such cases, you can skip this whole section and read the topics of
2167 the <<controlling-tracing,Tracing control>> section.
2169 Many methods are available to instrument a piece of software for LTTng
2172 * <<c-application,User space instrumentation for C and $$C++$$
2174 * <<prebuilt-ust-helpers,Prebuilt user space tracing helpers>>.
2175 * <<java-application,User space Java agent>>.
2176 * <<python-application,User space Python agent>>.
2177 * <<proc-lttng-logger-abi,LTTng logger>>.
2178 * <<instrumenting-linux-kernel,LTTng kernel tracepoints>>.
2182 === [[cxx-application]]User space instrumentation for C and $$C++$$ applications
2184 The procedure to instrument a C or $$C++$$ user application with
2185 the <<lttng-ust,LTTng user space tracing library>>, `liblttng-ust`, is:
2187 . <<tracepoint-provider,Create the source files of a tracepoint provider
2189 . <<probing-the-application-source-code,Add tracepoints to
2190 the application's source code>>.
2191 . <<building-tracepoint-providers-and-user-application,Build and link
2192 a tracepoint provider package and the user application>>.
2194 If you need quick, man:printf(3)-like instrumentation, you can skip
2195 those steps and use <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>>
2198 IMPORTANT: You need to <<installing-lttng,install>> LTTng-UST to
2199 instrument a user application with `liblttng-ust`.
2202 [[tracepoint-provider]]
2203 ==== Create the source files of a tracepoint provider package
2205 A _tracepoint provider_ is a set of compiled functions which provide
2206 **tracepoints** to an application, the type of instrumentation point
2207 supported by LTTng-UST. Those functions can emit events with
2208 user-defined fields and serialize those events as event records to one
2209 or more LTTng-UST <<channel,channel>> sub-buffers. The `tracepoint()`
2210 macro, which you <<probing-the-application-source-code,insert in a user
2211 application's source code>>, calls those functions.
2213 A _tracepoint provider package_ is an object file (`.o`) or a shared
2214 library (`.so`) which contains one or more tracepoint providers.
2215 Its source files are:
2217 * One or more <<tpp-header,tracepoint provider header>> (`.h`).
2218 * A <<tpp-source,tracepoint provider package source>> (`.c`).
2220 A tracepoint provider package is dynamically linked with `liblttng-ust`,
2221 the LTTng user space tracer, at run time.
2224 .User application linked with `liblttng-ust` and containing a tracepoint provider.
2225 image::ust-app.png[]
2227 NOTE: If you need quick, man:printf(3)-like instrumentation, you can
2228 skip creating and using a tracepoint provider and use
2229 <<tracef,`tracef()`>> or <<tracelog,`tracelog()`>> instead.
2233 ===== Create a tracepoint provider header file template
2235 A _tracepoint provider header file_ contains the tracepoint
2236 definitions of a tracepoint provider.
2238 To create a tracepoint provider header file:
2240 . Start from this template:
2244 .Tracepoint provider header file template (`.h` file extension).
2246 #undef TRACEPOINT_PROVIDER
2247 #define TRACEPOINT_PROVIDER provider_name
2249 #undef TRACEPOINT_INCLUDE
2250 #define TRACEPOINT_INCLUDE "./tp.h"
2252 #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
2255 #include <lttng/tracepoint.h>
2258 * Use TRACEPOINT_EVENT(), TRACEPOINT_EVENT_CLASS(),
2259 * TRACEPOINT_EVENT_INSTANCE(), and TRACEPOINT_LOGLEVEL() here.
2264 #include <lttng/tracepoint-event.h>
2270 * `provider_name` with the name of your tracepoint provider.
2271 * `"tp.h"` with the name of your tracepoint provider header file.
2273 . Below the `#include <lttng/tracepoint.h>` line, put your
2274 <<defining-tracepoints,tracepoint definitions>>.
2276 Your tracepoint provider name must be unique amongst all the possible
2277 tracepoint provider names used on the same target system. We
2278 suggest to include the name of your project or company in the name,
2279 for example, `org_lttng_my_project_tpp`.
2281 TIP: [[lttng-gen-tp]]You can use the man:lttng-gen-tp(1) tool to create
2282 this boilerplate for you. When using cmd:lttng-gen-tp, all you need to
2283 write are the <<defining-tracepoints,tracepoint definitions>>.
2286 [[defining-tracepoints]]
2287 ===== Create a tracepoint definition
2289 A _tracepoint definition_ defines, for a given tracepoint:
2291 * Its **input arguments**. They are the macro parameters that the
2292 `tracepoint()` macro accepts for this particular tracepoint
2293 in the user application's source code.
2294 * Its **output event fields**. They are the sources of event fields
2295 that form the payload of any event that the execution of the
2296 `tracepoint()` macro emits for this particular tracepoint.
2298 You can create a tracepoint definition by using the
2299 `TRACEPOINT_EVENT()` macro below the `#include <lttng/tracepoint.h>`
2301 <<tpp-header,tracepoint provider header file template>>.
2303 The syntax of the `TRACEPOINT_EVENT()` macro is:
2306 .`TRACEPOINT_EVENT()` macro syntax.
2309 /* Tracepoint provider name */
2312 /* Tracepoint name */
2315 /* Input arguments */
2320 /* Output event fields */
2329 * `provider_name` with your tracepoint provider name.
2330 * `tracepoint_name` with your tracepoint name.
2331 * `arguments` with the <<tpp-def-input-args,input arguments>>.
2332 * `fields` with the <<tpp-def-output-fields,output event field>>
2335 This tracepoint emits events named `provider_name:tracepoint_name`.
2338 .Event name's length limitation
2340 The concatenation of the tracepoint provider name and the
2341 tracepoint name must not exceed **254 characters**. If it does, the
2342 instrumented application compiles and runs, but LTTng throws multiple
2343 warnings and you could experience serious issues.
2346 [[tpp-def-input-args]]The syntax of the `TP_ARGS()` macro is:
2349 .`TP_ARGS()` macro syntax.
2358 * `type` with the C type of the argument.
2359 * `arg_name` with the argument name.
2361 You can repeat `type` and `arg_name` up to 10 times to have
2362 more than one argument.
2364 .`TP_ARGS()` usage with three arguments.
2376 The `TP_ARGS()` and `TP_ARGS(void)` forms are valid to create a
2377 tracepoint definition with no input arguments.
2379 [[tpp-def-output-fields]]The `TP_FIELDS()` macro contains a list of
2380 `ctf_*()` macros. Each `ctf_*()` macro defines one event field. See
2381 man:lttng-ust(3) for a complete description of the available `ctf_*()`
2382 macros. A `ctf_*()` macro specifies the type, size, and byte order of
2385 Each `ctf_*()` macro takes an _argument expression_ parameter. This is a
2386 C expression that the tracer evalutes at the `tracepoint()` macro site
2387 in the application's source code. This expression provides a field's
2388 source of data. The argument expression can include input argument names
2389 listed in the `TP_ARGS()` macro.
2391 Each `ctf_*()` macro also takes a _field name_ parameter. Field names
2392 must be unique within a given tracepoint definition.
2394 Here's a complete tracepoint definition example:
2396 .Tracepoint definition.
2398 The following tracepoint definition defines a tracepoint which takes
2399 three input arguments and has four output event fields.
2403 #include "my-custom-structure.h"
2409 const struct my_custom_structure*, my_custom_structure,
2414 ctf_string(query_field, query)
2415 ctf_float(double, ratio_field, ratio)
2416 ctf_integer(int, recv_size, my_custom_structure->recv_size)
2417 ctf_integer(int, send_size, my_custom_structure->send_size)
2422 You can refer to this tracepoint definition with the `tracepoint()`
2423 macro in your application's source code like this:
2427 tracepoint(my_provider, my_tracepoint,
2428 my_structure, some_ratio, the_query);
2432 NOTE: The LTTng tracer only evaluates tracepoint arguments at run time
2433 if they satisfy an enabled <<event,event rule>>.
2436 [[using-tracepoint-classes]]
2437 ===== Use a tracepoint class
2439 A _tracepoint class_ is a class of tracepoints which share the same
2440 output event field definitions. A _tracepoint instance_ is one
2441 instance of such a defined tracepoint class, with its own tracepoint
2444 The <<defining-tracepoints,`TRACEPOINT_EVENT()` macro>> is actually a
2445 shorthand which defines both a tracepoint class and a tracepoint
2446 instance at the same time.
2448 When you build a tracepoint provider package, the C or $$C++$$ compiler
2449 creates one serialization function for each **tracepoint class**. A
2450 serialization function is responsible for serializing the event fields
2451 of a tracepoint to a sub-buffer when tracing.
2453 For various performance reasons, when your situation requires multiple
2454 tracepoint definitions with different names, but with the same event
2455 fields, we recommend that you manually create a tracepoint class
2456 and instantiate as many tracepoint instances as needed. One positive
2457 effect of such a design, amongst other advantages, is that all
2458 tracepoint instances of the same tracepoint class reuse the same
2459 serialization function, thus reducing
2460 https://en.wikipedia.org/wiki/Cache_pollution[cache pollution].
2462 .Use a tracepoint class and tracepoint instances.
2464 Consider the following three tracepoint definitions:
2476 ctf_integer(int, userid, userid)
2477 ctf_integer(size_t, len, len)
2489 ctf_integer(int, userid, userid)
2490 ctf_integer(size_t, len, len)
2502 ctf_integer(int, userid, userid)
2503 ctf_integer(size_t, len, len)
2508 In this case, we create three tracepoint classes, with one implicit
2509 tracepoint instance for each of them: `get_account`, `get_settings`, and
2510 `get_transaction`. However, they all share the same event field names
2511 and types. Hence three identical, yet independent serialization
2512 functions are created when you build the tracepoint provider package.
2514 A better design choice is to define a single tracepoint class and three
2515 tracepoint instances:
2519 /* The tracepoint class */
2520 TRACEPOINT_EVENT_CLASS(
2521 /* Tracepoint provider name */
2524 /* Tracepoint class name */
2527 /* Input arguments */
2533 /* Output event fields */
2535 ctf_integer(int, userid, userid)
2536 ctf_integer(size_t, len, len)
2540 /* The tracepoint instances */
2541 TRACEPOINT_EVENT_INSTANCE(
2542 /* Tracepoint provider name */
2545 /* Tracepoint class name */
2548 /* Tracepoint name */
2551 /* Input arguments */
2557 TRACEPOINT_EVENT_INSTANCE(
2566 TRACEPOINT_EVENT_INSTANCE(
2579 [[assigning-log-levels]]
2580 ===== Assign a log level to a tracepoint definition
2582 You can assign an optional _log level_ to a
2583 <<defining-tracepoints,tracepoint definition>>.
2585 Assigning different levels of severity to tracepoint definitions can
2586 be useful: when you <<enabling-disabling-events,create an event rule>>,
2587 you can target tracepoints having a log level as severe as a specific
2590 The concept of LTTng-UST log levels is similar to the levels found
2591 in typical logging frameworks:
2593 * In a logging framework, the log level is given by the function
2594 or method name you use at the log statement site: `debug()`,
2595 `info()`, `warn()`, `error()`, and so on.
2596 * In LTTng-UST, you statically assign the log level to a tracepoint
2597 definition; any `tracepoint()` macro invocation which refers to
2598 this definition has this log level.
2600 You can assign a log level to a tracepoint definition with the
2601 `TRACEPOINT_LOGLEVEL()` macro. You must use this macro _after_ the
2602 <<defining-tracepoints,`TRACEPOINT_EVENT()`>> or
2603 <<using-tracepoint-classes,`TRACEPOINT_INSTANCE()`>> macro for a given
2606 The syntax of the `TRACEPOINT_LOGLEVEL()` macro is:
2609 .`TRACEPOINT_LOGLEVEL()` macro syntax.
2611 TRACEPOINT_LOGLEVEL(provider_name, tracepoint_name, log_level)
2616 * `provider_name` with the tracepoint provider name.
2617 * `tracepoint_name` with the tracepoint name.
2618 * `log_level` with the log level to assign to the tracepoint
2619 definition named `tracepoint_name` in the `provider_name`
2620 tracepoint provider.
2622 See man:lttng-ust(3) for a list of available log level names.
2624 .Assign the `TRACE_DEBUG_UNIT` log level to a tracepoint definition.
2628 /* Tracepoint definition */
2637 ctf_integer(int, userid, userid)
2638 ctf_integer(size_t, len, len)
2642 /* Log level assignment */
2643 TRACEPOINT_LOGLEVEL(my_app, get_transaction, TRACE_DEBUG_UNIT)
2649 ===== Create a tracepoint provider package source file
2651 A _tracepoint provider package source file_ is a C source file which
2652 includes a <<tpp-header,tracepoint provider header file>> to expand its
2653 macros into event serialization and other functions.
2655 You can always use the following tracepoint provider package source
2659 .Tracepoint provider package source file template.
2661 #define TRACEPOINT_CREATE_PROBES
2666 Replace `tp.h` with the name of your <<tpp-header,tracepoint provider
2667 header file>> name. You may also include more than one tracepoint
2668 provider header file here to create a tracepoint provider package
2669 holding more than one tracepoint providers.
2672 [[probing-the-application-source-code]]
2673 ==== Add tracepoints to an application's source code
2675 Once you <<tpp-header,create a tracepoint provider header file>>, you
2676 can use the `tracepoint()` macro in your application's
2677 source code to insert the tracepoints that this header
2678 <<defining-tracepoints,defines>>.
2680 The `tracepoint()` macro takes at least two parameters: the tracepoint
2681 provider name and the tracepoint name. The corresponding tracepoint
2682 definition defines the other parameters.
2684 .`tracepoint()` usage.
2686 The following <<defining-tracepoints,tracepoint definition>> defines a
2687 tracepoint which takes two input arguments and has two output event
2691 .Tracepoint provider header file.
2693 #include "my-custom-structure.h"
2700 const char*, cmd_name
2703 ctf_string(cmd_name, cmd_name)
2704 ctf_integer(int, number_of_args, argc)
2709 You can refer to this tracepoint definition with the `tracepoint()`
2710 macro in your application's source code like this:
2713 .Application's source file.
2717 int main(int argc, char* argv[])
2719 tracepoint(my_provider, my_tracepoint, argc, argv[0]);
2725 Note how the application's source code includes
2726 the tracepoint provider header file containing the tracepoint
2727 definitions to use, path:{tp.h}.
2730 .`tracepoint()` usage with a complex tracepoint definition.
2732 Consider this complex tracepoint definition, where multiple event
2733 fields refer to the same input arguments in their argument expression
2737 .Tracepoint provider header file.
2739 /* For `struct stat` */
2740 #include <sys/types.h>
2741 #include <sys/stat.h>
2753 ctf_integer(int, my_constant_field, 23 + 17)
2754 ctf_integer(int, my_int_arg_field, my_int_arg)
2755 ctf_integer(int, my_int_arg_field2, my_int_arg * my_int_arg)
2756 ctf_integer(int, sum4_field, my_str_arg[0] + my_str_arg[1] +
2757 my_str_arg[2] + my_str_arg[3])
2758 ctf_string(my_str_arg_field, my_str_arg)
2759 ctf_integer_hex(off_t, size_field, st->st_size)
2760 ctf_float(double, size_dbl_field, (double) st->st_size)
2761 ctf_sequence_text(char, half_my_str_arg_field, my_str_arg,
2762 size_t, strlen(my_str_arg) / 2)
2767 You can refer to this tracepoint definition with the `tracepoint()`
2768 macro in your application's source code like this:
2771 .Application's source file.
2773 #define TRACEPOINT_DEFINE
2780 stat("/etc/fstab", &s);
2781 tracepoint(my_provider, my_tracepoint, 23, "Hello, World!", &s);
2787 If you look at the event record that LTTng writes when tracing this
2788 program, assuming the file size of path:{/etc/fstab} is 301{nbsp}bytes,
2789 it should look like this:
2791 .Event record fields
2793 |Field's name |Field's value
2794 |`my_constant_field` |40
2795 |`my_int_arg_field` |23
2796 |`my_int_arg_field2` |529
2798 |`my_str_arg_field` |`Hello, World!`
2799 |`size_field` |0x12d
2800 |`size_dbl_field` |301.0
2801 |`half_my_str_arg_field` |`Hello,`
2805 Sometimes, the arguments you pass to `tracepoint()` are expensive to
2806 compute--they use the call stack, for example. To avoid this
2807 computation when the tracepoint is disabled, you can use the
2808 `tracepoint_enabled()` and `do_tracepoint()` macros.
2810 The syntax of the `tracepoint_enabled()` and `do_tracepoint()` macros
2814 .`tracepoint_enabled()` and `do_tracepoint()` macros syntax.
2816 tracepoint_enabled(provider_name, tracepoint_name)
2817 do_tracepoint(provider_name, tracepoint_name, ...)
2822 * `provider_name` with the tracepoint provider name.
2823 * `tracepoint_name` with the tracepoint name.
2825 `tracepoint_enabled()` returns a non-zero value if the tracepoint named
2826 `tracepoint_name` from the provider named `provider_name` is enabled
2829 `do_tracepoint()` is like `tracepoint()`, except that it doesn't check
2830 if the tracepoint is enabled. Using `tracepoint()` with
2831 `tracepoint_enabled()` is dangerous since `tracepoint()` also contains
2832 the `tracepoint_enabled()` check, thus a race condition is
2833 possible in this situation:
2836 .Possible race condition when using `tracepoint_enabled()` with `tracepoint()`.
2838 if (tracepoint_enabled(my_provider, my_tracepoint)) {
2839 stuff = prepare_stuff();
2842 tracepoint(my_provider, my_tracepoint, stuff);
2845 If the tracepoint is enabled after the condition, then `stuff` is not
2846 prepared: the emitted event will either contain wrong data, or the whole
2847 application could crash (segmentation fault, for example).
2849 NOTE: Neither `tracepoint_enabled()` nor `do_tracepoint()` have an
2850 `STAP_PROBEV()` call. If you need it, you must emit
2854 [[building-tracepoint-providers-and-user-application]]
2855 ==== Build and link a tracepoint provider package and an application
2857 Once you have one or more <<tpp-header,tracepoint provider header
2858 files>> and a <<tpp-source,tracepoint provider package source file>>,
2859 you can create the tracepoint provider package by compiling its source
2860 file. From here, multiple build and run scenarios are possible. The
2861 following table shows common application and library configurations
2862 along with the required command lines to achieve them.
2864 In the following diagrams, we use the following file names:
2867 Executable application.
2870 Application's object file.
2873 Tracepoint provider package object file.
2876 Tracepoint provider package archive file.
2879 Tracepoint provider package shared object file.
2882 User library object file.
2885 User library shared object file.
2887 We use the following symbols in the diagrams of table below:
2890 .Symbols used in the build scenario diagrams.
2891 image::ust-sit-symbols.png[]
2893 We assume that path:{.} is part of the env:LD_LIBRARY_PATH environment
2894 variable in the following instructions.
2896 [role="growable ust-scenarios",cols="asciidoc,asciidoc"]
2897 .Common tracepoint provider package scenarios.
2899 |Scenario |Instructions
2902 The instrumented application is statically linked with
2903 the tracepoint provider package object.
2905 image::ust-sit+app-linked-with-tp-o+app-instrumented.png[]
2908 include::../common/ust-sit-step-tp-o.txt[]
2910 To build the instrumented application:
2912 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2917 #define TRACEPOINT_DEFINE
2921 . Compile the application source file:
2930 . Build the application:
2935 $ gcc -o app app.o tpp.o -llttng-ust -ldl
2939 To run the instrumented application:
2941 * Start the application:
2951 The instrumented application is statically linked with the
2952 tracepoint provider package archive file.
2954 image::ust-sit+app-linked-with-tp-a+app-instrumented.png[]
2957 To create the tracepoint provider package archive file:
2959 . Compile the <<tpp-source,tracepoint provider package source file>>:
2968 . Create the tracepoint provider package archive file:
2973 $ ar rcs tpp.a tpp.o
2977 To build the instrumented application:
2979 . In path:{app.c}, before including path:{tpp.h}, add the following line:
2984 #define TRACEPOINT_DEFINE
2988 . Compile the application source file:
2997 . Build the application:
3002 $ gcc -o app app.o tpp.a -llttng-ust -ldl
3006 To run the instrumented application:
3008 * Start the application:
3018 The instrumented application is linked with the tracepoint provider
3019 package shared object.
3021 image::ust-sit+app-linked-with-tp-so+app-instrumented.png[]
3024 include::../common/ust-sit-step-tp-so.txt[]
3026 To build the instrumented application:
3028 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3033 #define TRACEPOINT_DEFINE
3037 . Compile the application source file:
3046 . Build the application:
3051 $ gcc -o app app.o -ldl -L. -ltpp
3055 To run the instrumented application:
3057 * Start the application:
3067 The tracepoint provider package shared object is preloaded before the
3068 instrumented application starts.
3070 image::ust-sit+tp-so-preloaded+app-instrumented.png[]
3073 include::../common/ust-sit-step-tp-so.txt[]
3075 To build the instrumented application:
3077 . In path:{app.c}, before including path:{tpp.h}, add the
3083 #define TRACEPOINT_DEFINE
3084 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3088 . Compile the application source file:
3097 . Build the application:
3102 $ gcc -o app app.o -ldl
3106 To run the instrumented application with tracing support:
3108 * Preload the tracepoint provider package shared object and
3109 start the application:
3114 $ LD_PRELOAD=./libtpp.so ./app
3118 To run the instrumented application without tracing support:
3120 * Start the application:
3130 The instrumented application dynamically loads the tracepoint provider
3131 package shared object.
3133 See the <<dlclose-warning,warning about `dlclose()`>>.
3135 image::ust-sit+app-dlopens-tp-so+app-instrumented.png[]
3138 include::../common/ust-sit-step-tp-so.txt[]
3140 To build the instrumented application:
3142 . In path:{app.c}, before including path:{tpp.h}, add the
3148 #define TRACEPOINT_DEFINE
3149 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3153 . Compile the application source file:
3162 . Build the application:
3167 $ gcc -o app app.o -ldl
3171 To run the instrumented application:
3173 * Start the application:
3183 The application is linked with the instrumented user library.
3185 The instrumented user library is statically linked with the tracepoint
3186 provider package object file.
3188 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-o+lib-instrumented.png[]
3191 include::../common/ust-sit-step-tp-o-fpic.txt[]
3193 To build the instrumented user library:
3195 . In path:{emon.c}, before including path:{tpp.h}, add the
3201 #define TRACEPOINT_DEFINE
3205 . Compile the user library source file:
3210 $ gcc -I. -fpic -c emon.c
3214 . Build the user library shared object:
3219 $ gcc -shared -o libemon.so emon.o tpp.o -llttng-ust -ldl
3223 To build the application:
3225 . Compile the application source file:
3234 . Build the application:
3239 $ gcc -o app app.o -L. -lemon
3243 To run the application:
3245 * Start the application:
3255 The application is linked with the instrumented user library.
3257 The instrumented user library is linked with the tracepoint provider
3258 package shared object.
3260 image::ust-sit+app-linked-with-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3263 include::../common/ust-sit-step-tp-so.txt[]
3265 To build the instrumented user library:
3267 . In path:{emon.c}, before including path:{tpp.h}, add the
3273 #define TRACEPOINT_DEFINE
3277 . Compile the user library source file:
3282 $ gcc -I. -fpic -c emon.c
3286 . Build the user library shared object:
3291 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3295 To build the application:
3297 . Compile the application source file:
3306 . Build the application:
3311 $ gcc -o app app.o -L. -lemon
3315 To run the application:
3317 * Start the application:
3327 The tracepoint provider package shared object is preloaded before the
3330 The application is linked with the instrumented user library.
3332 image::ust-sit+tp-so-preloaded+app-linked-with-lib+lib-instrumented.png[]
3335 include::../common/ust-sit-step-tp-so.txt[]
3337 To build the instrumented user library:
3339 . In path:{emon.c}, before including path:{tpp.h}, add the
3345 #define TRACEPOINT_DEFINE
3346 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3350 . Compile the user library source file:
3355 $ gcc -I. -fpic -c emon.c
3359 . Build the user library shared object:
3364 $ gcc -shared -o libemon.so emon.o -ldl
3368 To build the application:
3370 . Compile the application source file:
3379 . Build the application:
3384 $ gcc -o app app.o -L. -lemon
3388 To run the application with tracing support:
3390 * Preload the tracepoint provider package shared object and
3391 start the application:
3396 $ LD_PRELOAD=./libtpp.so ./app
3400 To run the application without tracing support:
3402 * Start the application:
3412 The application is linked with the instrumented user library.
3414 The instrumented user library dynamically loads the tracepoint provider
3415 package shared object.
3417 See the <<dlclose-warning,warning about `dlclose()`>>.
3419 image::ust-sit+app-linked-with-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3422 include::../common/ust-sit-step-tp-so.txt[]
3424 To build the instrumented user library:
3426 . In path:{emon.c}, before including path:{tpp.h}, add the
3432 #define TRACEPOINT_DEFINE
3433 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3437 . Compile the user library source file:
3442 $ gcc -I. -fpic -c emon.c
3446 . Build the user library shared object:
3451 $ gcc -shared -o libemon.so emon.o -ldl
3455 To build the application:
3457 . Compile the application source file:
3466 . Build the application:
3471 $ gcc -o app app.o -L. -lemon
3475 To run the application:
3477 * Start the application:
3487 The application dynamically loads the instrumented user library.
3489 The instrumented user library is linked with the tracepoint provider
3490 package shared object.
3492 See the <<dlclose-warning,warning about `dlclose()`>>.
3494 image::ust-sit+app-dlopens-lib+lib-linked-with-tp-so+lib-instrumented.png[]
3497 include::../common/ust-sit-step-tp-so.txt[]
3499 To build the instrumented user library:
3501 . In path:{emon.c}, before including path:{tpp.h}, add the
3507 #define TRACEPOINT_DEFINE
3511 . Compile the user library source file:
3516 $ gcc -I. -fpic -c emon.c
3520 . Build the user library shared object:
3525 $ gcc -shared -o libemon.so emon.o -ldl -L. -ltpp
3529 To build the application:
3531 . Compile the application source file:
3540 . Build the application:
3545 $ gcc -o app app.o -ldl -L. -lemon
3549 To run the application:
3551 * Start the application:
3561 The application dynamically loads the instrumented user library.
3563 The instrumented user library dynamically loads the tracepoint provider
3564 package shared object.
3566 See the <<dlclose-warning,warning about `dlclose()`>>.
3568 image::ust-sit+app-dlopens-lib+lib-dlopens-tp-so+lib-instrumented.png[]
3571 include::../common/ust-sit-step-tp-so.txt[]
3573 To build the instrumented user library:
3575 . In path:{emon.c}, before including path:{tpp.h}, add the
3581 #define TRACEPOINT_DEFINE
3582 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3586 . Compile the user library source file:
3591 $ gcc -I. -fpic -c emon.c
3595 . Build the user library shared object:
3600 $ gcc -shared -o libemon.so emon.o -ldl
3604 To build the application:
3606 . Compile the application source file:
3615 . Build the application:
3620 $ gcc -o app app.o -ldl -L. -lemon
3624 To run the application:
3626 * Start the application:
3636 The tracepoint provider package shared object is preloaded before the
3639 The application dynamically loads the instrumented user library.
3641 image::ust-sit+tp-so-preloaded+app-dlopens-lib+lib-instrumented.png[]
3644 include::../common/ust-sit-step-tp-so.txt[]
3646 To build the instrumented user library:
3648 . In path:{emon.c}, before including path:{tpp.h}, add the
3654 #define TRACEPOINT_DEFINE
3655 #define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
3659 . Compile the user library source file:
3664 $ gcc -I. -fpic -c emon.c
3668 . Build the user library shared object:
3673 $ gcc -shared -o libemon.so emon.o -ldl
3677 To build the application:
3679 . Compile the application source file:
3688 . Build the application:
3693 $ gcc -o app app.o -L. -lemon
3697 To run the application with tracing support:
3699 * Preload the tracepoint provider package shared object and
3700 start the application:
3705 $ LD_PRELOAD=./libtpp.so ./app
3709 To run the application without tracing support:
3711 * Start the application:
3721 The application is statically linked with the tracepoint provider
3722 package object file.
3724 The application is linked with the instrumented user library.
3726 image::ust-sit+app-linked-with-tp-o+app-linked-with-lib+lib-instrumented.png[]
3729 include::../common/ust-sit-step-tp-o.txt[]
3731 To build the instrumented user library:
3733 . In path:{emon.c}, before including path:{tpp.h}, add the
3739 #define TRACEPOINT_DEFINE
3743 . Compile the user library source file:
3748 $ gcc -I. -fpic -c emon.c
3752 . Build the user library shared object:
3757 $ gcc -shared -o libemon.so emon.o
3761 To build the application:
3763 . Compile the application source file:
3772 . Build the application:
3777 $ gcc -o app app.o tpp.o -llttng-ust -ldl -L. -lemon
3781 To run the instrumented application:
3783 * Start the application:
3793 The application is statically linked with the tracepoint provider
3794 package object file.
3796 The application dynamically loads the instrumented user library.
3798 image::ust-sit+app-linked-with-tp-o+app-dlopens-lib+lib-instrumented.png[]
3801 include::../common/ust-sit-step-tp-o.txt[]
3803 To build the application:
3805 . In path:{app.c}, before including path:{tpp.h}, add the following line:
3810 #define TRACEPOINT_DEFINE
3814 . Compile the application source file:
3823 . Build the application:
3828 $ gcc -Wl,--export-dynamic -o app app.o tpp.o \
3833 The `--export-dynamic` option passed to the linker is necessary for the
3834 dynamically loaded library to ``see'' the tracepoint symbols defined in
3837 To build the instrumented user library:
3839 . Compile the user library source file:
3844 $ gcc -I. -fpic -c emon.c
3848 . Build the user library shared object:
3853 $ gcc -shared -o libemon.so emon.o
3857 To run the application:
3859 * Start the application:
3871 .Do not use man:dlclose(3) on a tracepoint provider package
3873 Never use man:dlclose(3) on any shared object which:
3875 * Is linked with, statically or dynamically, a tracepoint provider
3877 * Calls man:dlopen(3) itself to dynamically open a tracepoint provider
3878 package shared object.
3880 This is currently considered **unsafe** due to a lack of reference
3881 counting from LTTng-UST to the shared object.
3883 A known workaround (available since glibc 2.2) is to use the
3884 `RTLD_NODELETE` flag when calling man:dlopen(3) initially. This has the
3885 effect of not unloading the loaded shared object, even if man:dlclose(3)
3888 You can also preload the tracepoint provider package shared object with
3889 the env:LD_PRELOAD environment variable to overcome this limitation.
3893 [[using-lttng-ust-with-daemons]]
3894 ===== Use noch:{LTTng-UST} with daemons
3896 If your instrumented application calls man:fork(2), man:clone(2),
3897 or BSD's man:rfork(2), without a following man:exec(3)-family
3898 system call, you must preload the path:{liblttng-ust-fork.so} shared
3899 object when you start the application.
3903 $ LD_PRELOAD=liblttng-ust-fork.so ./my-app
3906 If your tracepoint provider package is
3907 a shared library which you also preload, you must put both
3908 shared objects in env:LD_PRELOAD:
3912 $ LD_PRELOAD=liblttng-ust-fork.so:/path/to/tp.so ./my-app
3918 ===== Use noch:{LTTng-UST} with applications which close file descriptors that don't belong to them
3920 If your instrumented application closes one or more file descriptors
3921 which it did not open itself, you must preload the
3922 path:{liblttng-ust-fd.so} shared object when you start the application:
3926 $ LD_PRELOAD=liblttng-ust-fd.so ./my-app
3929 Typical use cases include closing all the file descriptors after
3930 man:fork(2) or man:rfork(2) and buggy applications doing
3934 [[lttng-ust-pkg-config]]
3935 ===== Use noch:{pkg-config}
3937 On some distributions, LTTng-UST ships with a
3938 https://www.freedesktop.org/wiki/Software/pkg-config/[pkg-config]
3939 metadata file. If this is your case, then you can use cmd:pkg-config to
3940 build an application on the command line:
3944 $ gcc -o my-app my-app.o tp.o $(pkg-config --cflags --libs lttng-ust)
3948 [[instrumenting-32-bit-app-on-64-bit-system]]
3949 ===== [[advanced-instrumenting-techniques]]Build a 32-bit instrumented application for a 64-bit target system
3951 In order to trace a 32-bit application running on a 64-bit system,
3952 LTTng must use a dedicated 32-bit
3953 <<lttng-consumerd,consumer daemon>>.
3955 The following steps show how to build and install a 32-bit consumer
3956 daemon, which is _not_ part of the default 64-bit LTTng build, how to
3957 build and install the 32-bit LTTng-UST libraries, and how to build and
3958 link an instrumented 32-bit application in that context.
3960 To build a 32-bit instrumented application for a 64-bit target system,
3961 assuming you have a fresh target system with no installed Userspace RCU
3964 . Download, build, and install a 32-bit version of Userspace RCU:
3969 $ cd $(mktemp -d) &&
3970 wget http://lttng.org/files/urcu/userspace-rcu-latest-0.9.tar.bz2 &&
3971 tar -xf userspace-rcu-latest-0.9.tar.bz2 &&
3972 cd userspace-rcu-0.9.* &&
3973 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 &&
3975 sudo make install &&
3980 . Using your distribution's package manager, or from source, install
3981 the following 32-bit versions of the following dependencies of
3982 LTTng-tools and LTTng-UST:
3985 * https://sourceforge.net/projects/libuuid/[libuuid]
3986 * http://directory.fsf.org/wiki/Popt[popt]
3987 * http://www.xmlsoft.org/[libxml2]
3990 . Download, build, and install a 32-bit version of the latest
3991 LTTng-UST{nbsp}{revision}:
3996 $ cd $(mktemp -d) &&
3997 wget http://lttng.org/files/lttng-ust/lttng-ust-latest-2.9.tar.bz2 &&
3998 tar -xf lttng-ust-latest-2.9.tar.bz2 &&
3999 cd lttng-ust-2.9.* &&
4000 ./configure --libdir=/usr/local/lib32 \
4001 CFLAGS=-m32 CXXFLAGS=-m32 \
4002 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' &&
4004 sudo make install &&
4011 Depending on your distribution,
4012 32-bit libraries could be installed at a different location than
4013 `/usr/lib32`. For example, Debian is known to install
4014 some 32-bit libraries in `/usr/lib/i386-linux-gnu`.
4016 In this case, make sure to set `LDFLAGS` to all the
4017 relevant 32-bit library paths, for example:
4021 $ LDFLAGS='-L/usr/lib/i386-linux-gnu -L/usr/lib32'
4025 . Download the latest LTTng-tools{nbsp}{revision}, build, and install
4026 the 32-bit consumer daemon:
4031 $ cd $(mktemp -d) &&
4032 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
4033 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
4034 cd lttng-tools-2.9.* &&
4035 ./configure --libdir=/usr/local/lib32 CFLAGS=-m32 CXXFLAGS=-m32 \
4036 LDFLAGS='-L/usr/local/lib32 -L/usr/lib32' \
4037 --disable-bin-lttng --disable-bin-lttng-crash \
4038 --disable-bin-lttng-relayd --disable-bin-lttng-sessiond &&
4040 cd src/bin/lttng-consumerd &&
4041 sudo make install &&
4046 . From your distribution or from source,
4047 <<installing-lttng,install>> the 64-bit versions of
4048 LTTng-UST and Userspace RCU.
4049 . Download, build, and install the 64-bit version of the
4050 latest LTTng-tools{nbsp}{revision}:
4055 $ cd $(mktemp -d) &&
4056 wget http://lttng.org/files/lttng-tools/lttng-tools-latest-2.9.tar.bz2 &&
4057 tar -xf lttng-tools-latest-2.9.tar.bz2 &&
4058 cd lttng-tools-2.9.* &&
4059 ./configure --with-consumerd32-libdir=/usr/local/lib32 \
4060 --with-consumerd32-bin=/usr/local/lib32/lttng/libexec/lttng-consumerd &&
4062 sudo make install &&
4067 . Pass the following options to man:gcc(1), man:g++(1), or man:clang(1)
4068 when linking your 32-bit application:
4071 -m32 -L/usr/lib32 -L/usr/local/lib32 \
4072 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32
4075 For example, let's rebuild the quick start example in
4076 <<tracing-your-own-user-application,Trace a user application>> as an
4077 instrumented 32-bit application:
4082 $ gcc -m32 -c -I. hello-tp.c
4083 $ gcc -m32 -c hello.c
4084 $ gcc -m32 -o hello hello.o hello-tp.o \
4085 -L/usr/lib32 -L/usr/local/lib32 \
4086 -Wl,-rpath,/usr/lib32,-rpath,/usr/local/lib32 \
4091 No special action is required to execute the 32-bit application and
4092 to trace it: use the command-line man:lttng(1) tool as usual.
4099 man:tracef(3) is a small LTTng-UST API designed for quick,
4100 man:printf(3)-like instrumentation without the burden of
4101 <<tracepoint-provider,creating>> and
4102 <<building-tracepoint-providers-and-user-application,building>>
4103 a tracepoint provider package.
4105 To use `tracef()` in your application:
4107 . In the C or C++ source files where you need to use `tracef()`,
4108 include `<lttng/tracef.h>`:
4113 #include <lttng/tracef.h>
4117 . In the application's source code, use `tracef()` like you would use
4125 tracef("my message: %d (%s)", my_integer, my_string);
4131 . Link your application with `liblttng-ust`:
4136 $ gcc -o app app.c -llttng-ust
4140 To trace the events that `tracef()` calls emit:
4142 * <<enabling-disabling-events,Create an event rule>> which matches the
4143 `lttng_ust_tracef:*` event name:
4148 $ lttng enable-event --userspace 'lttng_ust_tracef:*'
4153 .Limitations of `tracef()`
4155 The `tracef()` utility function was developed to make user space tracing
4156 super simple, albeit with notable disadvantages compared to
4157 <<defining-tracepoints,user-defined tracepoints>>:
4159 * All the emitted events have the same tracepoint provider and
4160 tracepoint names, respectively `lttng_ust_tracef` and `event`.
4161 * There is no static type checking.
4162 * The only event record field you actually get, named `msg`, is a string
4163 potentially containing the values you passed to `tracef()`
4164 using your own format string. This also means that you cannot filter
4165 events with a custom expression at run time because there are no
4167 * Since `tracef()` uses the C standard library's man:vasprintf(3)
4168 function behind the scenes to format the strings at run time, its
4169 expected performance is lower than with user-defined tracepoints,
4170 which do not require a conversion to a string.
4172 Taking this into consideration, `tracef()` is useful for some quick
4173 prototyping and debugging, but you should not consider it for any
4174 permanent and serious applicative instrumentation.
4180 ==== Use `tracelog()`
4182 The man:tracelog(3) API is very similar to <<tracef,`tracef()`>>, with
4183 the difference that it accepts an additional log level parameter.
4185 The goal of `tracelog()` is to ease the migration from logging to
4188 To use `tracelog()` in your application:
4190 . In the C or C++ source files where you need to use `tracelog()`,
4191 include `<lttng/tracelog.h>`:
4196 #include <lttng/tracelog.h>
4200 . In the application's source code, use `tracelog()` like you would use
4201 man:printf(3), except for the first parameter which is the log
4209 tracelog(TRACE_WARNING, "my message: %d (%s)",
4210 my_integer, my_string);
4216 See man:lttng-ust(3) for a list of available log level names.
4218 . Link your application with `liblttng-ust`:
4223 $ gcc -o app app.c -llttng-ust
4227 To trace the events that `tracelog()` calls emit with a log level
4228 _as severe as_ a specific log level:
4230 * <<enabling-disabling-events,Create an event rule>> which matches the
4231 `lttng_ust_tracelog:*` event name and a minimum level
4237 $ lttng enable-event --userspace 'lttng_ust_tracelog:*'
4238 --loglevel=TRACE_WARNING
4242 To trace the events that `tracelog()` calls emit with a
4243 _specific log level_:
4245 * Create an event rule which matches the `lttng_ust_tracelog:*`
4246 event name and a specific log level:
4251 $ lttng enable-event --userspace 'lttng_ust_tracelog:*'
4252 --loglevel-only=TRACE_INFO
4257 [[prebuilt-ust-helpers]]
4258 === Prebuilt user space tracing helpers
4260 The LTTng-UST package provides a few helpers in the form or preloadable
4261 shared objects which automatically instrument system functions and
4264 The helper shared objects are normally found in dir:{/usr/lib}. If you
4265 built LTTng-UST <<building-from-source,from source>>, they are probably
4266 located in dir:{/usr/local/lib}.
4268 The installed user space tracing helpers in LTTng-UST{nbsp}{revision}
4271 path:{liblttng-ust-libc-wrapper.so}::
4272 path:{liblttng-ust-pthread-wrapper.so}::
4273 <<liblttng-ust-libc-pthread-wrapper,C{nbsp}standard library
4274 memory and POSIX threads function tracing>>.
4276 path:{liblttng-ust-cyg-profile.so}::
4277 path:{liblttng-ust-cyg-profile-fast.so}::
4278 <<liblttng-ust-cyg-profile,Function entry and exit tracing>>.
4280 path:{liblttng-ust-dl.so}::
4281 <<liblttng-ust-dl,Dynamic linker tracing>>.
4283 To use a user space tracing helper with any user application:
4285 * Preload the helper shared object when you start the application:
4290 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so my-app
4294 You can preload more than one helper:
4299 $ LD_PRELOAD=liblttng-ust-libc-wrapper.so:liblttng-ust-dl.so my-app
4305 [[liblttng-ust-libc-pthread-wrapper]]
4306 ==== Instrument C standard library memory and POSIX threads functions
4308 The path:{liblttng-ust-libc-wrapper.so} and
4309 path:{liblttng-ust-pthread-wrapper.so} helpers
4310 add instrumentation to some C standard library and POSIX
4314 .Functions instrumented by preloading path:{liblttng-ust-libc-wrapper.so}.
4316 |TP provider name |TP name |Instrumented function
4318 .6+|`lttng_ust_libc` |`malloc` |man:malloc(3)
4319 |`calloc` |man:calloc(3)
4320 |`realloc` |man:realloc(3)
4321 |`free` |man:free(3)
4322 |`memalign` |man:memalign(3)
4323 |`posix_memalign` |man:posix_memalign(3)
4327 .Functions instrumented by preloading path:{liblttng-ust-pthread-wrapper.so}.
4329 |TP provider name |TP name |Instrumented function
4331 .4+|`lttng_ust_pthread` |`pthread_mutex_lock_req` |man:pthread_mutex_lock(3p) (request time)
4332 |`pthread_mutex_lock_acq` |man:pthread_mutex_lock(3p) (acquire time)
4333 |`pthread_mutex_trylock` |man:pthread_mutex_trylock(3p)
4334 |`pthread_mutex_unlock` |man:pthread_mutex_unlock(3p)
4337 When you preload the shared object, it replaces the functions listed
4338 in the previous tables by wrappers which contain tracepoints and call
4339 the replaced functions.
4342 [[liblttng-ust-cyg-profile]]
4343 ==== Instrument function entry and exit
4345 The path:{liblttng-ust-cyg-profile*.so} helpers can add instrumentation
4346 to the entry and exit points of functions.
4348 man:gcc(1) and man:clang(1) have an option named
4349 https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html[`-finstrument-functions`]
4350 which generates instrumentation calls for entry and exit to functions.
4351 The LTTng-UST function tracing helpers,
4352 path:{liblttng-ust-cyg-profile.so} and
4353 path:{liblttng-ust-cyg-profile-fast.so}, take advantage of this feature
4354 to add tracepoints to the two generated functions (which contain
4355 `cyg_profile` in their names, hence the helper's name).
4357 To use the LTTng-UST function tracing helper, the source files to
4358 instrument must be built using the `-finstrument-functions` compiler
4361 There are two versions of the LTTng-UST function tracing helper:
4363 * **path:{liblttng-ust-cyg-profile-fast.so}** is a lightweight variant
4364 that you should only use when it can be _guaranteed_ that the
4365 complete event stream is recorded without any lost event record.
4366 Any kind of duplicate information is left out.
4368 Assuming no event record is lost, having only the function addresses on
4369 entry is enough to create a call graph, since an event record always
4370 contains the ID of the CPU that generated it.
4372 You can use a tool like man:addr2line(1) to convert function addresses
4373 back to source file names and line numbers.
4375 * **path:{liblttng-ust-cyg-profile.so}** is a more robust variant
4376 which also works in use cases where event records might get discarded or
4377 not recorded from application startup.
4378 In these cases, the trace analyzer needs more information to be
4379 able to reconstruct the program flow.
4381 See man:lttng-ust-cyg-profile(3) to learn more about the instrumentation
4382 points of this helper.
4384 All the tracepoints that this helper provides have the
4385 log level `TRACE_DEBUG_FUNCTION` (see man:lttng-ust(3)).
4387 TIP: It's sometimes a good idea to limit the number of source files that
4388 you compile with the `-finstrument-functions` option to prevent LTTng
4389 from writing an excessive amount of trace data at run time. When using
4390 man:gcc(1), you can use the
4391 `-finstrument-functions-exclude-function-list` option to avoid
4392 instrument entries and exits of specific function names.
4397 ==== Instrument the dynamic linker
4399 The path:{liblttng-ust-dl.so} helper adds instrumentation to the
4400 man:dlopen(3) and man:dlclose(3) function calls.
4402 See man:lttng-ust-dl(3) to learn more about the instrumentation points
4407 [[java-application]]
4408 === User space Java agent
4410 You can instrument any Java application which uses one of the following
4413 * The https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[**`java.util.logging`**]
4414 (JUL) core logging facilities.
4415 * http://logging.apache.org/log4j/1.2/[**Apache log4j 1.2**], since
4416 LTTng 2.6. Note that Apache Log4j{nbsp}2 is not supported.
4419 .LTTng-UST Java agent imported by a Java application.
4420 image::java-app.png[]
4422 Note that the methods described below are new in LTTng{nbsp}{revision}.
4423 Previous LTTng versions use another technique.
4425 NOTE: We use http://openjdk.java.net/[OpenJDK]{nbsp}8 for development
4426 and https://ci.lttng.org/[continuous integration], thus this version is
4427 directly supported. However, the LTTng-UST Java agent is also tested
4428 with OpenJDK{nbsp}7.
4433 ==== Use the LTTng-UST Java agent for `java.util.logging`
4435 To use the LTTng-UST Java agent in a Java application which uses
4436 `java.util.logging` (JUL):
4438 . In the Java application's source code, import the LTTng-UST
4439 log handler package for `java.util.logging`:
4444 import org.lttng.ust.agent.jul.LttngLogHandler;
4448 . Create an LTTng-UST JUL log handler:
4453 Handler lttngUstLogHandler = new LttngLogHandler();
4457 . Add this handler to the JUL loggers which should emit LTTng events:
4462 Logger myLogger = Logger.getLogger("some-logger");
4464 myLogger.addHandler(lttngUstLogHandler);
4468 . Use `java.util.logging` log statements and configuration as usual.
4469 The loggers with an attached LTTng-UST log handler can emit
4472 . Before exiting the application, remove the LTTng-UST log handler from
4473 the loggers attached to it and call its `close()` method:
4478 myLogger.removeHandler(lttngUstLogHandler);
4479 lttngUstLogHandler.close();
4483 This is not strictly necessary, but it is recommended for a clean
4484 disposal of the handler's resources.
4486 . Include the LTTng-UST Java agent's common and JUL-specific JAR files,
4487 path:{lttng-ust-agent-common.jar} and path:{lttng-ust-agent-jul.jar},
4489 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4490 path] when you build the Java application.
4492 The JAR files are typically located in dir:{/usr/share/java}.
4494 IMPORTANT: The LTTng-UST Java agent must be
4495 <<installing-lttng,installed>> for the logging framework your
4498 .Use the LTTng-UST Java agent for `java.util.logging`.
4503 import java.io.IOException;
4504 import java.util.logging.Handler;
4505 import java.util.logging.Logger;
4506 import org.lttng.ust.agent.jul.LttngLogHandler;
4510 private static final int answer = 42;
4512 public static void main(String[] argv) throws Exception
4515 Logger logger = Logger.getLogger("jello");
4517 // Create an LTTng-UST log handler
4518 Handler lttngUstLogHandler = new LttngLogHandler();
4520 // Add the LTTng-UST log handler to our logger
4521 logger.addHandler(lttngUstLogHandler);
4524 logger.info("some info");
4525 logger.warning("some warning");
4527 logger.finer("finer information; the answer is " + answer);
4529 logger.severe("error!");
4531 // Not mandatory, but cleaner
4532 logger.removeHandler(lttngUstLogHandler);
4533 lttngUstLogHandler.close();
4542 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4545 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4546 <<enabling-disabling-events,create an event rule>> matching the
4547 `jello` JUL logger, and <<basic-tracing-session-control,start tracing>>:
4552 $ lttng enable-event --jul jello
4556 Run the compiled class:
4560 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4563 <<basic-tracing-session-control,Stop tracing>> and inspect the
4573 In the resulting trace, an <<event,event record>> generated by a Java
4574 application using `java.util.logging` is named `lttng_jul:event` and
4575 has the following fields:
4578 Log record's message.
4584 Name of the class in which the log statement was executed.
4587 Name of the method in which the log statement was executed.
4590 Logging time (timestamp in milliseconds).
4593 Log level integer value.
4596 ID of the thread in which the log statement was executed.
4598 You can use the opt:lttng-enable-event(1):--loglevel or
4599 opt:lttng-enable-event(1):--loglevel-only option of the
4600 man:lttng-enable-event(1) command to target a range of JUL log levels
4601 or a specific JUL log level.
4606 ==== Use the LTTng-UST Java agent for Apache log4j
4608 To use the LTTng-UST Java agent in a Java application which uses
4611 . In the Java application's source code, import the LTTng-UST
4612 log appender package for Apache log4j:
4617 import org.lttng.ust.agent.log4j.LttngLogAppender;
4621 . Create an LTTng-UST log4j log appender:
4626 Appender lttngUstLogAppender = new LttngLogAppender();
4630 . Add this appender to the log4j loggers which should emit LTTng events:
4635 Logger myLogger = Logger.getLogger("some-logger");
4637 myLogger.addAppender(lttngUstLogAppender);
4641 . Use Apache log4j log statements and configuration as usual. The
4642 loggers with an attached LTTng-UST log appender can emit LTTng events.
4644 . Before exiting the application, remove the LTTng-UST log appender from
4645 the loggers attached to it and call its `close()` method:
4650 myLogger.removeAppender(lttngUstLogAppender);
4651 lttngUstLogAppender.close();
4655 This is not strictly necessary, but it is recommended for a clean
4656 disposal of the appender's resources.
4658 . Include the LTTng-UST Java agent's common and log4j-specific JAR
4659 files, path:{lttng-ust-agent-common.jar} and
4660 path:{lttng-ust-agent-log4j.jar}, in the
4661 https://docs.oracle.com/javase/tutorial/essential/environment/paths.html[class
4662 path] when you build the Java application.
4664 The JAR files are typically located in dir:{/usr/share/java}.
4666 IMPORTANT: The LTTng-UST Java agent must be
4667 <<installing-lttng,installed>> for the logging framework your
4670 .Use the LTTng-UST Java agent for Apache log4j.
4675 import org.apache.log4j.Appender;
4676 import org.apache.log4j.Logger;
4677 import org.lttng.ust.agent.log4j.LttngLogAppender;
4681 private static final int answer = 42;
4683 public static void main(String[] argv) throws Exception
4686 Logger logger = Logger.getLogger("jello");
4688 // Create an LTTng-UST log appender
4689 Appender lttngUstLogAppender = new LttngLogAppender();
4691 // Add the LTTng-UST log appender to our logger
4692 logger.addAppender(lttngUstLogAppender);
4695 logger.info("some info");
4696 logger.warn("some warning");
4698 logger.debug("debug information; the answer is " + answer);
4700 logger.fatal("error!");
4702 // Not mandatory, but cleaner
4703 logger.removeAppender(lttngUstLogAppender);
4704 lttngUstLogAppender.close();
4710 Build this example (`$LOG4JPATH` is the path to the Apache log4j JAR
4715 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH Test.java
4718 <<creating-destroying-tracing-sessions,Create a tracing session>>,
4719 <<enabling-disabling-events,create an event rule>> matching the
4720 `jello` log4j logger, and <<basic-tracing-session-control,start tracing>>:
4725 $ lttng enable-event --log4j jello
4729 Run the compiled class:
4733 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-log4j.jar:$LOG4JPATH:. Test
4736 <<basic-tracing-session-control,Stop tracing>> and inspect the
4746 In the resulting trace, an <<event,event record>> generated by a Java
4747 application using log4j is named `lttng_log4j:event` and
4748 has the following fields:
4751 Log record's message.
4757 Name of the class in which the log statement was executed.
4760 Name of the method in which the log statement was executed.
4763 Name of the file in which the executed log statement is located.
4766 Line number at which the log statement was executed.
4772 Log level integer value.
4775 Name of the Java thread in which the log statement was executed.
4777 You can use the opt:lttng-enable-event(1):--loglevel or
4778 opt:lttng-enable-event(1):--loglevel-only option of the
4779 man:lttng-enable-event(1) command to target a range of Apache log4j log levels
4780 or a specific log4j log level.
4784 [[java-application-context]]
4785 ==== Provide application-specific context fields in a Java application
4787 A Java application-specific context field is a piece of state provided
4788 by the application which <<adding-context,you can add>>, using the
4789 man:lttng-add-context(1) command, to each <<event,event record>>
4790 produced by the log statements of this application.
4792 For example, a given object might have a current request ID variable.
4793 You can create a context information retriever for this object and
4794 assign a name to this current request ID. You can then, using the
4795 man:lttng-add-context(1) command, add this context field by name to
4796 the JUL or log4j <<channel,channel>>.
4798 To provide application-specific context fields in a Java application:
4800 . In the Java application's source code, import the LTTng-UST
4801 Java agent context classes and interfaces:
4806 import org.lttng.ust.agent.context.ContextInfoManager;
4807 import org.lttng.ust.agent.context.IContextInfoRetriever;
4811 . Create a context information retriever class, that is, a class which
4812 implements the `IContextInfoRetriever` interface:
4817 class MyContextInfoRetriever implements IContextInfoRetriever
4820 public Object retrieveContextInfo(String key)
4822 if (key.equals("intCtx")) {
4824 } else if (key.equals("strContext")) {
4825 return "context value!";
4834 This `retrieveContextInfo()` method is the only member of the
4835 `IContextInfoRetriever` interface. Its role is to return the current
4836 value of a state by name to create a context field. The names of the
4837 context fields and which state variables they return depends on your
4840 All primitive types and objects are supported as context fields.
4841 When `retrieveContextInfo()` returns an object, the context field
4842 serializer calls its `toString()` method to add a string field to
4843 event records. The method can also return `null`, which means that
4844 no context field is available for the required name.
4846 . Register an instance of your context information retriever class to
4847 the context information manager singleton:
4852 IContextInfoRetriever cir = new MyContextInfoRetriever();
4853 ContextInfoManager cim = ContextInfoManager.getInstance();
4854 cim.registerContextInfoRetriever("retrieverName", cir);
4858 . Before exiting the application, remove your context information
4859 retriever from the context information manager singleton:
4864 ContextInfoManager cim = ContextInfoManager.getInstance();
4865 cim.unregisterContextInfoRetriever("retrieverName");
4869 This is not strictly necessary, but it is recommended for a clean
4870 disposal of some manager's resources.
4872 . Build your Java application with LTTng-UST Java agent support as
4873 usual, following the procedure for either the <<jul,JUL>> or
4874 <<log4j,Apache log4j>> framework.
4877 .Provide application-specific context fields in a Java application.
4882 import java.util.logging.Handler;
4883 import java.util.logging.Logger;
4884 import org.lttng.ust.agent.jul.LttngLogHandler;
4885 import org.lttng.ust.agent.context.ContextInfoManager;
4886 import org.lttng.ust.agent.context.IContextInfoRetriever;
4890 // Our context information retriever class
4891 private static class MyContextInfoRetriever
4892 implements IContextInfoRetriever
4895 public Object retrieveContextInfo(String key) {
4896 if (key.equals("intCtx")) {
4898 } else if (key.equals("strContext")) {
4899 return "context value!";
4906 private static final int answer = 42;
4908 public static void main(String args[]) throws Exception
4910 // Get the context information manager instance
4911 ContextInfoManager cim = ContextInfoManager.getInstance();
4913 // Create and register our context information retriever
4914 IContextInfoRetriever cir = new MyContextInfoRetriever();
4915 cim.registerContextInfoRetriever("myRetriever", cir);
4918 Logger logger = Logger.getLogger("jello");
4920 // Create an LTTng-UST log handler
4921 Handler lttngUstLogHandler = new LttngLogHandler();
4923 // Add the LTTng-UST log handler to our logger
4924 logger.addHandler(lttngUstLogHandler);
4927 logger.info("some info");
4928 logger.warning("some warning");
4930 logger.finer("finer information; the answer is " + answer);
4932 logger.severe("error!");
4934 // Not mandatory, but cleaner
4935 logger.removeHandler(lttngUstLogHandler);
4936 lttngUstLogHandler.close();
4937 cim.unregisterContextInfoRetriever("myRetriever");
4946 $ javac -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar Test.java
4949 <<creating-destroying-tracing-sessions,Create a tracing session>>
4950 and <<enabling-disabling-events,create an event rule>> matching the
4956 $ lttng enable-event --jul jello
4959 <<adding-context,Add the application-specific context fields>> to the
4964 $ lttng add-context --jul --type='$app.myRetriever:intCtx'
4965 $ lttng add-context --jul --type='$app.myRetriever:strContext'
4968 <<basic-tracing-session-control,Start tracing>>:
4975 Run the compiled class:
4979 $ java -cp /usr/share/java/jarpath/lttng-ust-agent-common.jar:/usr/share/java/jarpath/lttng-ust-agent-jul.jar:. Test
4982 <<basic-tracing-session-control,Stop tracing>> and inspect the
4994 [[python-application]]
4995 === User space Python agent
4997 You can instrument a Python 2 or Python 3 application which uses the
4998 standard https://docs.python.org/3/library/logging.html[`logging`]
5001 Each log statement emits an LTTng event once the
5002 application module imports the
5003 <<lttng-ust-agents,LTTng-UST Python agent>> package.
5006 .A Python application importing the LTTng-UST Python agent.
5007 image::python-app.png[]
5009 To use the LTTng-UST Python agent:
5011 . In the Python application's source code, import the LTTng-UST Python
5021 The LTTng-UST Python agent automatically adds its logging handler to the
5022 root logger at import time.
5024 Any log statement that the application executes before this import does
5025 not emit an LTTng event.
5027 IMPORTANT: The LTTng-UST Python agent must be
5028 <<installing-lttng,installed>>.
5030 . Use log statements and logging configuration as usual.
5031 Since the LTTng-UST Python agent adds a handler to the _root_
5032 logger, you can trace any log statement from any logger.
5034 .Use the LTTng-UST Python agent.
5045 logging.basicConfig()
5046 logger = logging.getLogger('my-logger')
5049 logger.debug('debug message')
5050 logger.info('info message')
5051 logger.warn('warn message')
5052 logger.error('error message')
5053 logger.critical('critical message')
5057 if __name__ == '__main__':
5061 NOTE: `logging.basicConfig()`, which adds to the root logger a basic
5062 logging handler which prints to the standard error stream, is not
5063 strictly required for LTTng-UST tracing to work, but in versions of
5064 Python preceding 3.2, you could see a warning message which indicates
5065 that no handler exists for the logger `my-logger`.
5067 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5068 <<enabling-disabling-events,create an event rule>> matching the
5069 `my-logger` Python logger, and <<basic-tracing-session-control,start
5075 $ lttng enable-event --python my-logger
5079 Run the Python script:
5086 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5096 In the resulting trace, an <<event,event record>> generated by a Python
5097 application is named `lttng_python:event` and has the following fields:
5100 Logging time (string).
5103 Log record's message.
5109 Name of the function in which the log statement was executed.
5112 Line number at which the log statement was executed.
5115 Log level integer value.
5118 ID of the Python thread in which the log statement was executed.
5121 Name of the Python thread in which the log statement was executed.
5123 You can use the opt:lttng-enable-event(1):--loglevel or
5124 opt:lttng-enable-event(1):--loglevel-only option of the
5125 man:lttng-enable-event(1) command to target a range of Python log levels
5126 or a specific Python log level.
5128 When an application imports the LTTng-UST Python agent, the agent tries
5129 to register to a <<lttng-sessiond,session daemon>>. Note that you must
5130 <<start-sessiond,start the session daemon>> _before_ you run the Python
5131 application. If a session daemon is found, the agent tries to register
5132 to it during 5{nbsp}seconds, after which the application continues
5133 without LTTng tracing support. You can override this timeout value with
5134 the env:LTTNG_UST_PYTHON_REGISTER_TIMEOUT environment variable
5137 If the session daemon stops while a Python application with an imported
5138 LTTng-UST Python agent runs, the agent retries to connect and to
5139 register to a session daemon every 3{nbsp}seconds. You can override this
5140 delay with the env:LTTNG_UST_PYTHON_REGISTER_RETRY_DELAY environment
5145 [[proc-lttng-logger-abi]]
5148 The `lttng-tracer` Linux kernel module, part of
5149 <<lttng-modules,LTTng-modules>>, creates the special LTTng logger file
5150 path:{/proc/lttng-logger} when it's loaded. Any application can write
5151 text data to this file to emit an LTTng event.
5154 .An application writes to the LTTng logger file to emit an LTTng event.
5155 image::lttng-logger.png[]
5157 The LTTng logger is the quickest method--not the most efficient,
5158 however--to add instrumentation to an application. It is designed
5159 mostly to instrument shell scripts:
5163 $ echo "Some message, some $variable" > /proc/lttng-logger
5166 Any event that the LTTng logger emits is named `lttng_logger` and
5167 belongs to the Linux kernel <<domain,tracing domain>>. However, unlike
5168 other instrumentation points in the kernel tracing domain, **any Unix
5169 user** can <<enabling-disabling-events,create an event rule>> which
5170 matches its event name, not only the root user or users in the
5171 <<tracing-group,tracing group>>.
5173 To use the LTTng logger:
5175 * From any application, write text data to the path:{/proc/lttng-logger}
5178 The `msg` field of `lttng_logger` event records contains the
5181 NOTE: The maximum message length of an LTTng logger event is
5182 1024{nbsp}bytes. Writing more than this makes the LTTng logger emit more
5183 than one event to contain the remaining data.
5185 You should not use the LTTng logger to trace a user application which
5186 can be instrumented in a more efficient way, namely:
5188 * <<c-application,C and $$C++$$ applications>>.
5189 * <<java-application,Java applications>>.
5190 * <<python-application,Python applications>>.
5192 .Use the LTTng logger.
5197 echo 'Hello, World!' > /proc/lttng-logger
5199 df --human-readable --print-type / > /proc/lttng-logger
5202 <<creating-destroying-tracing-sessions,Create a tracing session>>,
5203 <<enabling-disabling-events,create an event rule>> matching the
5204 `lttng_logger` Linux kernel tracepoint, and
5205 <<basic-tracing-session-control,start tracing>>:
5210 $ lttng enable-event --kernel lttng_logger
5214 Run the Bash script:
5221 <<basic-tracing-session-control,Stop tracing>> and inspect the recorded
5232 [[instrumenting-linux-kernel]]
5233 === LTTng kernel tracepoints
5235 NOTE: This section shows how to _add_ instrumentation points to the
5236 Linux kernel. The kernel's subsystems are already thoroughly
5237 instrumented at strategic places for LTTng when you
5238 <<installing-lttng,install>> the <<lttng-modules,LTTng-modules>>
5242 There are two methods to instrument the Linux kernel:
5244 . <<linux-add-lttng-layer,Add an LTTng layer>> over an existing ftrace
5245 tracepoint which uses the `TRACE_EVENT()` API.
5247 Choose this if you want to instrumentation a Linux kernel tree with an
5248 instrumentation point compatible with ftrace, perf, and SystemTap.
5250 . Use an <<linux-lttng-tracepoint-event,LTTng-only approach>> to
5251 instrument an out-of-tree kernel module.
5253 Choose this if you don't need ftrace, perf, or SystemTap support.
5257 [[linux-add-lttng-layer]]
5258 ==== [[instrumenting-linux-kernel-itself]][[mainline-trace-event]][[lttng-adaptation-layer]]Add an LTTng layer to an existing ftrace tracepoint
5260 This section shows how to add an LTTng layer to existing ftrace
5261 instrumentation using the `TRACE_EVENT()` API.
5263 This section does not document the `TRACE_EVENT()` macro. You can
5264 read the following articles to learn more about this API:
5266 * http://lwn.net/Articles/379903/[Using the TRACE_EVENT() macro (Part 1)]
5267 * http://lwn.net/Articles/381064/[Using the TRACE_EVENT() macro (Part 2)]
5268 * http://lwn.net/Articles/383362/[Using the TRACE_EVENT() macro (Part 3)]
5270 The following procedure assumes that your ftrace tracepoints are
5271 correctly defined in their own header and that they are created in
5272 one source file using the `CREATE_TRACE_POINTS` definition.
5274 To add an LTTng layer over an existing ftrace tracepoint:
5276 . Make sure the following kernel configuration options are
5282 * `CONFIG_HIGH_RES_TIMERS`
5283 * `CONFIG_TRACEPOINTS`
5286 . Build the Linux source tree with your custom ftrace tracepoints.
5287 . Boot the resulting Linux image on your target system.
5289 Confirm that the tracepoints exist by looking for their names in the
5290 dir:{/sys/kernel/debug/tracing/events/subsys} directory, where `subsys`
5291 is your subsystem's name.
5293 . Get a copy of the latest LTTng-modules{nbsp}{revision}:
5298 $ cd $(mktemp -d) &&
5299 wget http://lttng.org/files/lttng-modules/lttng-modules-latest-2.9.tar.bz2 &&
5300 tar -xf lttng-modules-latest-2.9.tar.bz2 &&
5301 cd lttng-modules-2.9.*
5305 . In dir:{instrumentation/events/lttng-module}, relative to the root
5306 of the LTTng-modules source tree, create a header file named
5307 +__subsys__.h+ for your custom subsystem +__subsys__+ and write your
5308 LTTng-modules tracepoint definitions using the LTTng-modules
5311 Start with this template:
5315 .path:{instrumentation/events/lttng-module/my_subsys.h}
5318 #define TRACE_SYSTEM my_subsys
5320 #if !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
5321 #define _LTTNG_MY_SUBSYS_H
5323 #include "../../../probes/lttng-tracepoint-event.h"
5324 #include <linux/tracepoint.h>
5326 LTTNG_TRACEPOINT_EVENT(
5328 * Format is identical to TRACE_EVENT()'s version for the three
5329 * following macro parameters:
5332 TP_PROTO(int my_int, const char *my_string),
5333 TP_ARGS(my_int, my_string),
5335 /* LTTng-modules specific macros */
5337 ctf_integer(int, my_int_field, my_int)
5338 ctf_string(my_bar_field, my_bar)
5342 #endif /* !defined(_LTTNG_MY_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ) */
5344 #include "../../../probes/define_trace.h"
5348 The entries in the `TP_FIELDS()` section are the list of fields for the
5349 LTTng tracepoint. This is similar to the `TP_STRUCT__entry()` part of
5350 ftrace's `TRACE_EVENT()` macro.
5352 See <<lttng-modules-tp-fields,Tracepoint fields macros>> for a
5353 complete description of the available `ctf_*()` macros.
5355 . Create the LTTng-modules probe's kernel module C source file,
5356 +probes/lttng-probe-__subsys__.c+, where +__subsys__+ is your
5361 .path:{probes/lttng-probe-my-subsys.c}
5363 #include <linux/module.h>
5364 #include "../lttng-tracer.h"
5367 * Build-time verification of mismatch between mainline
5368 * TRACE_EVENT() arguments and the LTTng-modules adaptation
5369 * layer LTTNG_TRACEPOINT_EVENT() arguments.
5371 #include <trace/events/my_subsys.h>
5373 /* Create LTTng tracepoint probes */
5374 #define LTTNG_PACKAGE_BUILD
5375 #define CREATE_TRACE_POINTS
5376 #define TRACE_INCLUDE_PATH ../instrumentation/events/lttng-module
5378 #include "../instrumentation/events/lttng-module/my_subsys.h"
5380 MODULE_LICENSE("GPL and additional rights");
5381 MODULE_AUTHOR("Your name <your-email>");
5382 MODULE_DESCRIPTION("LTTng my_subsys probes");
5383 MODULE_VERSION(__stringify(LTTNG_MODULES_MAJOR_VERSION) "."
5384 __stringify(LTTNG_MODULES_MINOR_VERSION) "."
5385 __stringify(LTTNG_MODULES_PATCHLEVEL_VERSION)
5386 LTTNG_MODULES_EXTRAVERSION);
5390 . Edit path:{probes/KBuild} and add your new kernel module object
5391 next to the existing ones:
5395 .path:{probes/KBuild}
5399 obj-m += lttng-probe-module.o
5400 obj-m += lttng-probe-power.o
5402 obj-m += lttng-probe-my-subsys.o
5408 . Build and install the LTTng kernel modules:
5413 $ make KERNELDIR=/path/to/linux
5414 # make modules_install && depmod -a
5418 Replace `/path/to/linux` with the path to the Linux source tree where
5419 you defined and used tracepoints with ftrace's `TRACE_EVENT()` macro.
5421 Note that you can also use the
5422 <<lttng-tracepoint-event-code,`LTTNG_TRACEPOINT_EVENT_CODE()` macro>>
5423 instead of `LTTNG_TRACEPOINT_EVENT()` to use custom local variables and
5424 C code that need to be executed before the event fields are recorded.
5426 The best way to learn how to use the previous LTTng-modules macros is to
5427 inspect the existing LTTng-modules tracepoint definitions in the
5428 dir:{instrumentation/events/lttng-module} header files. Compare them
5429 with the Linux kernel mainline versions in the
5430 dir:{include/trace/events} directory of the Linux source tree.
5434 [[lttng-tracepoint-event-code]]
5435 ===== Use custom C code to access the data for tracepoint fields
5437 Although we recommended to always use the
5438 <<lttng-adaptation-layer,`LTTNG_TRACEPOINT_EVENT()`>> macro to describe
5439 the arguments and fields of an LTTng-modules tracepoint when possible,
5440 sometimes you need a more complex process to access the data that the
5441 tracer records as event record fields. In other words, you need local
5442 variables and multiple C{nbsp}statements instead of simple
5443 argument-based expressions that you pass to the
5444 <<lttng-modules-tp-fields,`ctf_*()` macros of `TP_FIELDS()`>>.
5446 You can use the `LTTNG_TRACEPOINT_EVENT_CODE()` macro instead of
5447 `LTTNG_TRACEPOINT_EVENT()` to declare custom local variables and define
5448 a block of C{nbsp}code to be executed before LTTng records the fields.
5449 The structure of this macro is:
5452 .`LTTNG_TRACEPOINT_EVENT_CODE()` macro syntax.
5454 LTTNG_TRACEPOINT_EVENT_CODE(
5456 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5457 * version for the following three macro parameters:
5460 TP_PROTO(int my_int, const char *my_string),
5461 TP_ARGS(my_int, my_string),
5463 /* Declarations of custom local variables */
5466 unsigned long b = 0;
5467 const char *name = "(undefined)";
5468 struct my_struct *my_struct;
5472 * Custom code which uses both tracepoint arguments
5473 * (in TP_ARGS()) and local variables (in TP_locvar()).
5475 * Local variables are actually members of a structure pointed
5476 * to by the special variable tp_locvar.
5480 tp_locvar->a = my_int + 17;
5481 tp_locvar->my_struct = get_my_struct_at(tp_locvar->a);
5482 tp_locvar->b = my_struct_compute_b(tp_locvar->my_struct);
5483 tp_locvar->name = my_struct_get_name(tp_locvar->my_struct);
5484 put_my_struct(tp_locvar->my_struct);
5493 * Format identical to the LTTNG_TRACEPOINT_EVENT()
5494 * version for this, except that tp_locvar members can be
5495 * used in the argument expression parameters of
5496 * the ctf_*() macros.
5499 ctf_integer(unsigned long, my_struct_b, tp_locvar->b)
5500 ctf_integer(int, my_struct_a, tp_locvar->a)
5501 ctf_string(my_string_field, my_string)
5502 ctf_string(my_struct_name, tp_locvar->name)
5507 IMPORTANT: The C code defined in `TP_code()` must not have any side
5508 effects when executed. In particular, the code must not allocate
5509 memory or get resources without deallocating this memory or putting
5510 those resources afterwards.
5513 [[instrumenting-linux-kernel-tracing]]
5514 ==== Load and unload a custom probe kernel module
5516 You must load a <<lttng-adaptation-layer,created LTTng-modules probe
5517 kernel module>> in the kernel before it can emit LTTng events.
5519 To load the default probe kernel modules and a custom probe kernel
5522 * Use the opt:lttng-sessiond(8):--extra-kmod-probes option to give extra
5523 probe modules to load when starting a root <<lttng-sessiond,session
5527 .Load the `my_subsys`, `usb`, and the default probe modules.
5531 # lttng-sessiond --extra-kmod-probes=my_subsys,usb
5536 You only need to pass the subsystem name, not the whole kernel module
5539 To load _only_ a given custom probe kernel module:
5541 * Use the opt:lttng-sessiond(8):--kmod-probes option to give the probe
5542 modules to load when starting a root session daemon:
5545 .Load only the `my_subsys` and `usb` probe modules.
5549 # lttng-sessiond --kmod-probes=my_subsys,usb
5554 To confirm that a probe module is loaded:
5561 $ lsmod | grep lttng_probe_usb
5565 To unload the loaded probe modules:
5567 * Kill the session daemon with `SIGTERM`:
5572 # pkill lttng-sessiond
5576 You can also use man:modprobe(8)'s `--remove` option if the session
5577 daemon terminates abnormally.
5580 [[controlling-tracing]]
5583 Once an application or a Linux kernel is
5584 <<instrumenting,instrumented>> for LTTng tracing,
5587 This section is divided in topics on how to use the various
5588 <<plumbing,components of LTTng>>, in particular the <<lttng-cli,cmd:lttng
5589 command-line tool>>, to _control_ the LTTng daemons and tracers.
5591 NOTE: In the following subsections, we refer to an man:lttng(1) command
5592 using its man page name. For example, instead of _Run the `create`
5593 command to..._, we use _Run the man:lttng-create(1) command to..._.
5597 === Start a session daemon
5599 In some situations, you need to run a <<lttng-sessiond,session daemon>>
5600 (man:lttng-sessiond(8)) _before_ you can use the man:lttng(1)
5603 You will see the following error when you run a command while no session
5607 Error: No session daemon is available
5610 The only command that automatically runs a session daemon is
5611 man:lttng-create(1), which you use to
5612 <<creating-destroying-tracing-sessions,create a tracing session>>. While
5613 this is most of the time the first operation that you do, sometimes it's
5614 not. Some examples are:
5616 * <<list-instrumentation-points,List the available instrumentation points>>.
5617 * <<saving-loading-tracing-session,Load a tracing session configuration>>.
5619 [[tracing-group]] Each Unix user must have its own running session
5620 daemon to trace user applications. The session daemon that the root user
5621 starts is the only one allowed to control the LTTng kernel tracer. Users
5622 that are part of the _tracing group_ can control the root session
5623 daemon. The default tracing group name is `tracing`; you can set it to
5624 something else with the opt:lttng-sessiond(8):--group option when you
5625 start the root session daemon.
5627 To start a user session daemon:
5629 * Run man:lttng-sessiond(8):
5634 $ lttng-sessiond --daemonize
5638 To start the root session daemon:
5640 * Run man:lttng-sessiond(8) as the root user:
5645 # lttng-sessiond --daemonize
5649 In both cases, remove the opt:lttng-sessiond(8):--daemonize option to
5650 start the session daemon in foreground.
5652 To stop a session daemon, use man:kill(1) on its process ID (standard
5655 Note that some Linux distributions could manage the LTTng session daemon
5656 as a service. In this case, you should use the service manager to
5657 start, restart, and stop session daemons.
5660 [[creating-destroying-tracing-sessions]]
5661 === Create and destroy a tracing session
5663 Almost all the LTTng control operations happen in the scope of
5664 a <<tracing-session,tracing session>>, which is the dialogue between the
5665 <<lttng-sessiond,session daemon>> and you.
5667 To create a tracing session with a generated name:
5669 * Use the man:lttng-create(1) command:
5678 The created tracing session's name is `auto` followed by the
5681 To create a tracing session with a specific name:
5683 * Use the optional argument of the man:lttng-create(1) command:
5688 $ lttng create my-session
5692 Replace `my-session` with the specific tracing session name.
5694 LTTng appends the creation date to the created tracing session's name.
5696 LTTng writes the traces of a tracing session in
5697 +$LTTNG_HOME/lttng-trace/__name__+ by default, where +__name__+ is the
5698 name of the tracing session. Note that the env:LTTNG_HOME environment
5699 variable defaults to `$HOME` if not set.
5701 To output LTTng traces to a non-default location:
5703 * Use the opt:lttng-create(1):--output option of the man:lttng-create(1) command:
5708 $ lttng create my-session --output=/tmp/some-directory
5712 You may create as many tracing sessions as you wish.
5714 To list all the existing tracing sessions for your Unix user:
5716 * Use the man:lttng-list(1) command:
5725 When you create a tracing session, it is set as the _current tracing
5726 session_. The following man:lttng(1) commands operate on the current
5727 tracing session when you don't specify one:
5729 [role="list-3-cols"]
5746 To change the current tracing session:
5748 * Use the man:lttng-set-session(1) command:
5753 $ lttng set-session new-session
5757 Replace `new-session` by the name of the new current tracing session.
5759 When you are done tracing in a given tracing session, you can destroy
5760 it. This operation frees the resources taken by the tracing session
5761 to destroy; it does not destroy the trace data that LTTng wrote for
5762 this tracing session.
5764 To destroy the current tracing session:
5766 * Use the man:lttng-destroy(1) command:
5776 [[list-instrumentation-points]]
5777 === List the available instrumentation points
5779 The <<lttng-sessiond,session daemon>> can query the running instrumented
5780 user applications and the Linux kernel to get a list of available
5781 instrumentation points. For the Linux kernel <<domain,tracing domain>>,
5782 they are tracepoints and system calls. For the user space tracing
5783 domain, they are tracepoints. For the other tracing domains, they are
5786 To list the available instrumentation points:
5788 * Use the man:lttng-list(1) command with the requested tracing domain's
5792 * opt:lttng-list(1):--kernel: Linux kernel tracepoints (your Unix user
5793 must be a root user, or it must be a member of the
5794 <<tracing-group,tracing group>>).
5795 * opt:lttng-list(1):--kernel with opt:lttng-list(1):--syscall: Linux
5796 kernel system calls (your Unix user must be a root user, or it must be
5797 a member of the tracing group).
5798 * opt:lttng-list(1):--userspace: user space tracepoints.
5799 * opt:lttng-list(1):--jul: `java.util.logging` loggers.
5800 * opt:lttng-list(1):--log4j: Apache log4j loggers.
5801 * opt:lttng-list(1):--python: Python loggers.
5804 .List the available user space tracepoints.
5808 $ lttng list --userspace
5812 .List the available Linux kernel system call tracepoints.
5816 $ lttng list --kernel --syscall
5821 [[enabling-disabling-events]]
5822 === Create and enable an event rule
5824 Once you <<creating-destroying-tracing-sessions,create a tracing
5825 session>>, you can create <<event,event rules>> with the
5826 man:lttng-enable-event(1) command.
5828 You specify each condition with a command-line option. The available
5829 condition options are shown in the following table.
5831 [role="growable",cols="asciidoc,asciidoc,default"]
5832 .Condition command-line options for the man:lttng-enable-event(1) command.
5834 |Option |Description |Applicable tracing domains
5840 . +--probe=__ADDR__+
5841 . +--function=__ADDR__+
5844 Instead of using the default _tracepoint_ instrumentation type, use:
5846 . A Linux system call.
5847 . A Linux https://lwn.net/Articles/132196/[KProbe] (symbol or address).
5848 . The entry and return points of a Linux function (symbol or address).
5852 |First positional argument.
5855 Tracepoint or system call name. In the case of a Linux KProbe or
5856 function, this is a custom name given to the event rule. With the
5857 JUL, log4j, and Python domains, this is a logger name.
5859 With a tracepoint, logger, or system call name, the last character
5860 can be `*` to match anything that remains.
5867 . +--loglevel=__LEVEL__+
5868 . +--loglevel-only=__LEVEL__+
5871 . Match only tracepoints or log statements with a logging level at
5872 least as severe as +__LEVEL__+.
5873 . Match only tracepoints or log statements with a logging level
5874 equal to +__LEVEL__+.
5876 See man:lttng-enable-event(1) for the list of available logging level
5879 |User space, JUL, log4j, and Python.
5881 |+--exclude=__EXCLUSIONS__+
5884 When you use a `*` character at the end of the tracepoint or logger
5885 name (first positional argument), exclude the specific names in the
5886 comma-delimited list +__EXCLUSIONS__+.
5889 User space, JUL, log4j, and Python.
5891 |+--filter=__EXPR__+
5894 Match only events which satisfy the expression +__EXPR__+.
5896 See man:lttng-enable-event(1) to learn more about the syntax of a
5903 You attach an event rule to a <<channel,channel>> on creation. If you do
5904 not specify the channel with the opt:lttng-enable-event(1):--channel
5905 option, and if the event rule to create is the first in its
5906 <<domain,tracing domain>> for a given tracing session, then LTTng
5907 creates a _default channel_ for you. This default channel is reused in
5908 subsequent invocations of the man:lttng-enable-event(1) command for the
5909 same tracing domain.
5911 An event rule is always enabled at creation time.
5913 The following examples show how you can combine the previous
5914 command-line options to create simple to more complex event rules.
5916 .Create an event rule targetting a Linux kernel tracepoint (default channel).
5920 $ lttng enable-event --kernel sched_switch
5924 .Create an event rule matching four Linux kernel system calls (default channel).
5928 $ lttng enable-event --kernel --syscall open,write,read,close
5932 .Create event rules matching tracepoints with filter expressions (default channel).
5936 $ lttng enable-event --kernel sched_switch --filter='prev_comm == "bash"'
5941 $ lttng enable-event --kernel --all \
5942 --filter='$ctx.tid == 1988 || $ctx.tid == 1534'
5947 $ lttng enable-event --jul my_logger \
5948 --filter='$app.retriever:cur_msg_id > 3'
5951 IMPORTANT: Make sure to always quote the filter string when you
5952 use man:lttng(1) from a shell.
5955 .Create an event rule matching any user space tracepoint of a given tracepoint provider with a log level range (default channel).
5959 $ lttng enable-event --userspace my_app:'*' --loglevel=TRACE_INFO
5962 IMPORTANT: Make sure to always quote the wildcard character when you
5963 use man:lttng(1) from a shell.
5966 .Create an event rule matching multiple Python loggers with a wildcard and with exclusions (default channel).
5970 $ lttng enable-event --python my-app.'*' \
5971 --exclude='my-app.module,my-app.hello'
5975 .Create an event rule matching any Apache log4j logger with a specific log level (default channel).
5979 $ lttng enable-event --log4j --all --loglevel-only=LOG4J_WARN
5983 .Create an event rule attached to a specific channel matching a specific user space tracepoint provider and tracepoint.
5987 $ lttng enable-event --userspace my_app:my_tracepoint --channel=my-channel
5991 The event rules of a given channel form a whitelist: as soon as an
5992 emitted event passes one of them, LTTng can record the event. For
5993 example, an event named `my_app:my_tracepoint` emitted from a user space
5994 tracepoint with a `TRACE_ERROR` log level passes both of the following
5999 $ lttng enable-event --userspace my_app:my_tracepoint
6000 $ lttng enable-event --userspace my_app:my_tracepoint \
6001 --loglevel=TRACE_INFO
6004 The second event rule is redundant: the first one includes
6008 [[disable-event-rule]]
6009 === Disable an event rule
6011 To disable an event rule that you <<enabling-disabling-events,created>>
6012 previously, use the man:lttng-disable-event(1) command. This command
6013 disables _all_ the event rules (of a given tracing domain and channel)
6014 which match an instrumentation point. The other conditions are not
6015 supported as of LTTng{nbsp}{revision}.
6017 The LTTng tracer does not record an emitted event which passes
6018 a _disabled_ event rule.
6020 .Disable an event rule matching a Python logger (default channel).
6024 $ lttng disable-event --python my-logger
6028 .Disable an event rule matching all `java.util.logging` loggers (default channel).
6032 $ lttng disable-event --jul '*'
6036 .Disable _all_ the event rules of the default channel.
6038 The opt:lttng-disable-event(1):--all-events option is not, like the
6039 opt:lttng-enable-event(1):--all option of man:lttng-enable-event(1), the
6040 equivalent of the event name `*` (wildcard): it disables _all_ the event
6041 rules of a given channel.
6045 $ lttng disable-event --jul --all-events
6049 NOTE: You cannot delete an event rule once you create it.
6053 === Get the status of a tracing session
6055 To get the status of the current tracing session, that is, its
6056 parameters, its channels, event rules, and their attributes:
6058 * Use the man:lttng-status(1) command:
6068 To get the status of any tracing session:
6070 * Use the man:lttng-list(1) command with the tracing session's name:
6075 $ lttng list my-session
6079 Replace `my-session` with the desired tracing session's name.
6082 [[basic-tracing-session-control]]
6083 === Start and stop a tracing session
6085 Once you <<creating-destroying-tracing-sessions,create a tracing
6087 <<enabling-disabling-events,create one or more event rules>>,
6088 you can start and stop the tracers for this tracing session.
6090 To start tracing in the current tracing session:
6092 * Use the man:lttng-start(1) command:
6101 LTTng is very flexible: you can launch user applications before
6102 or after the you start the tracers. The tracers only record the events
6103 if they pass enabled event rules and if they occur while the tracers are
6106 To stop tracing in the current tracing session:
6108 * Use the man:lttng-stop(1) command:
6117 If there were <<channel-overwrite-mode-vs-discard-mode,lost event
6118 records>> or lost sub-buffers since the last time you ran
6119 man:lttng-start(1), warnings are printed when you run the
6120 man:lttng-stop(1) command.
6123 [[enabling-disabling-channels]]
6124 === Create a channel
6126 Once you create a tracing session, you can create a <<channel,channel>>
6127 with the man:lttng-enable-channel(1) command.
6129 Note that LTTng automatically creates a default channel when, for a
6130 given <<domain,tracing domain>>, no channels exist and you
6131 <<enabling-disabling-events,create>> the first event rule. This default
6132 channel is named `channel0` and its attributes are set to reasonable
6133 values. Therefore, you only need to create a channel when you need
6134 non-default attributes.
6136 You specify each non-default channel attribute with a command-line
6137 option when you use the man:lttng-enable-channel(1) command. The
6138 available command-line options are:
6140 [role="growable",cols="asciidoc,asciidoc"]
6141 .Command-line options for the man:lttng-enable-channel(1) command.
6143 |Option |Description
6149 <<channel-overwrite-mode-vs-discard-mode,event loss mode>> instead of
6150 the default _discard_ mode.
6152 |`--buffers-pid` (user space tracing domain only)
6155 Use the per-process <<channel-buffering-schemes,buffering scheme>>
6156 instead of the default per-user buffering scheme.
6158 |+--subbuf-size=__SIZE__+
6161 Allocate sub-buffers of +__SIZE__+ bytes (power of two), for each CPU,
6162 either for each Unix user (default), or for each instrumented process.
6164 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6166 |+--num-subbuf=__COUNT__+
6169 Allocate +__COUNT__+ sub-buffers (power of two), for each CPU, either
6170 for each Unix user (default), or for each instrumented process.
6172 See <<channel-subbuf-size-vs-subbuf-count,Sub-buffer count and size>>.
6174 |+--tracefile-size=__SIZE__+
6177 Set the maximum size of each trace file that this channel writes within
6178 a stream to +__SIZE__+ bytes instead of no maximum.
6180 See <<tracefile-rotation,Trace file count and size>>.
6182 |+--tracefile-count=__COUNT__+
6185 Limit the number of trace files that this channel creates to
6186 +__COUNT__+ channels instead of no limit.
6188 See <<tracefile-rotation,Trace file count and size>>.
6190 |+--switch-timer=__PERIODUS__+
6193 Set the <<channel-switch-timer,switch timer period>>
6194 to +__PERIODUS__+{nbsp}µs.
6196 |+--read-timer=__PERIODUS__+
6199 Set the <<channel-read-timer,read timer period>>
6200 to +__PERIODUS__+{nbsp}µs.
6202 |+--output=__TYPE__+ (Linux kernel tracing domain only)
6205 Set the channel's output type to +__TYPE__+, either `mmap` or `splice`.
6209 You can only create a channel in the Linux kernel and user space
6210 <<domain,tracing domains>>: other tracing domains have their own channel
6211 created on the fly when <<enabling-disabling-events,creating event
6216 Because of a current LTTng limitation, you must create all channels
6217 _before_ you <<basic-tracing-session-control,start tracing>> in a given
6218 tracing session, that is, before the first time you run
6221 Since LTTng automatically creates a default channel when you use the
6222 man:lttng-enable-event(1) command with a specific tracing domain, you
6223 cannot, for example, create a Linux kernel event rule, start tracing,
6224 and then create a user space event rule, because no user space channel
6225 exists yet and it's too late to create one.
6227 For this reason, make sure to configure your channels properly
6228 before starting the tracers for the first time!
6231 The following examples show how you can combine the previous
6232 command-line options to create simple to more complex channels.
6234 .Create a Linux kernel channel with default attributes.
6238 $ lttng enable-channel --kernel my-channel
6242 .Create a user space channel with 4 sub-buffers or 1{nbsp}MiB each, per CPU, per instrumented process.
6246 $ lttng enable-channel --userspace --num-subbuf=4 --subbuf-size=1M \
6247 --buffers-pid my-channel
6251 .Create a Linux kernel channel which rotates 8 trace files of 4{nbsp}MiB each for each stream
6255 $ lttng enable-channel --kernel --tracefile-count=8 \
6256 --tracefile-size=4194304 my-channel
6260 .Create a user space channel in overwrite (or _flight recorder_) mode.
6264 $ lttng enable-channel --userspace --overwrite my-channel
6268 You can <<enabling-disabling-events,create>> the same event rule in
6269 two different channels:
6273 $ lttng enable-event --userspace --channel=my-channel app:tp
6274 $ lttng enable-event --userspace --channel=other-channel app:tp
6277 If both channels are enabled, when a tracepoint named `app:tp` is
6278 reached, LTTng records two events, one for each channel.
6282 === Disable a channel
6284 To disable a specific channel that you <<enabling-disabling-channels,created>>
6285 previously, use the man:lttng-disable-channel(1) command.
6287 .Disable a specific Linux kernel channel.
6291 $ lttng disable-channel --kernel my-channel
6295 The state of a channel precedes the individual states of event rules
6296 attached to it: event rules which belong to a disabled channel, even if
6297 they are enabled, are also considered disabled.
6301 === Add context fields to a channel
6303 Event record fields in trace files provide important information about
6304 events that occured previously, but sometimes some external context may
6305 help you solve a problem faster. Examples of context fields are:
6307 * The **process ID**, **thread ID**, **process name**, and
6308 **process priority** of the thread in which the event occurs.
6309 * The **hostname** of the system on which the event occurs.
6310 * The current values of many possible **performance counters** using
6312 ** CPU cycles, stalled cycles, idle cycles, and the other cycle types.
6314 ** Branch instructions, misses, and loads.
6316 * Any context defined at the application level (supported for the
6317 JUL and log4j <<domain,tracing domains>>).
6319 To get the full list of available context fields, see
6320 `lttng add-context --list`. Some context fields are reserved for a
6321 specific <<domain,tracing domain>> (Linux kernel or user space).
6323 You add context fields to <<channel,channels>>. All the events
6324 that a channel with added context fields records contain those fields.
6326 To add context fields to one or all the channels of a given tracing
6329 * Use the man:lttng-add-context(1) command.
6331 .Add context fields to all the channels of the current tracing session.
6333 The following command line adds the virtual process identifier and
6334 the per-thread CPU cycles count fields to all the user space channels
6335 of the current tracing session.
6339 $ lttng add-context --userspace --type=vpid --type=perf:thread:cpu-cycles
6343 .Add performance counter context fields by raw ID
6345 See man:lttng-add-context(1) for the exact format of the context field
6346 type, which is partly compatible with the format used in
6351 $ lttng add-context --userspace --type=perf:thread:raw:r0110:test
6352 $ lttng add-context --kernel --type=perf:cpu:raw:r0013c:x86unhalted
6356 .Add a context field to a specific channel.
6358 The following command line adds the thread identifier context field
6359 to the Linux kernel channel named `my-channel` in the current
6364 $ lttng add-context --kernel --channel=my-channel --type=tid
6368 .Add an application-specific context field to a specific channel.
6370 The following command line adds the `cur_msg_id` context field of the
6371 `retriever` context retriever for all the instrumented
6372 <<java-application,Java applications>> recording <<event,event records>>
6373 in the channel named `my-channel`:
6377 $ lttng add-context --kernel --channel=my-channel \
6378 --type='$app:retriever:cur_msg_id'
6381 IMPORTANT: Make sure to always quote the `$` character when you
6382 use man:lttng-add-context(1) from a shell.
6385 NOTE: You cannot remove context fields from a channel once you add it.
6390 === Track process IDs
6392 It's often useful to allow only specific process IDs (PIDs) to emit
6393 events. For example, you may wish to record all the system calls made by
6394 a given process (à la http://linux.die.net/man/1/strace[strace]).
6396 The man:lttng-track(1) and man:lttng-untrack(1) commands serve this
6397 purpose. Both commands operate on a whitelist of process IDs. You _add_
6398 entries to this whitelist with the man:lttng-track(1) command and remove
6399 entries with the man:lttng-untrack(1) command. Any process which has one
6400 of the PIDs in the whitelist is allowed to emit LTTng events which pass
6401 an enabled <<event,event rule>>.
6403 NOTE: The PID tracker tracks the _numeric process IDs_. Should a
6404 process with a given tracked ID exit and another process be given this
6405 ID, then the latter would also be allowed to emit events.
6407 .Track and untrack process IDs.
6409 For the sake of the following example, assume the target system has 16
6413 <<creating-destroying-tracing-sessions,create a tracing session>>,
6414 the whitelist contains all the possible PIDs:
6417 .All PIDs are tracked.
6418 image::track-all.png[]
6420 When the whitelist is full and you use the man:lttng-track(1) command to
6421 specify some PIDs to track, LTTng first clears the whitelist, then it
6422 tracks the specific PIDs. After:
6426 $ lttng track --pid=3,4,7,10,13
6432 .PIDs 3, 4, 7, 10, and 13 are tracked.
6433 image::track-3-4-7-10-13.png[]
6435 You can add more PIDs to the whitelist afterwards:
6439 $ lttng track --pid=1,15,16
6445 .PIDs 1, 15, and 16 are added to the whitelist.
6446 image::track-1-3-4-7-10-13-15-16.png[]
6448 The man:lttng-untrack(1) command removes entries from the PID tracker's
6449 whitelist. Given the previous example, the following command:
6453 $ lttng untrack --pid=3,7,10,13
6456 leads to this whitelist:
6459 .PIDs 3, 7, 10, and 13 are removed from the whitelist.
6460 image::track-1-4-15-16.png[]
6462 LTTng can track all possible PIDs again using the opt:track(1):--all
6467 $ lttng track --pid --all
6470 The result is, again:
6473 .All PIDs are tracked.
6474 image::track-all.png[]
6477 .Track only specific PIDs
6479 A very typical use case with PID tracking is to start with an empty
6480 whitelist, then <<basic-tracing-session-control,start the tracers>>, and
6481 then add PIDs manually while tracers are active. You can accomplish this
6482 by using the opt:lttng-untrack(1):--all option of the
6483 man:lttng-untrack(1) command to clear the whitelist after you
6484 <<creating-destroying-tracing-sessions,create a tracing session>>:
6488 $ lttng untrack --pid --all
6494 .No PIDs are tracked.
6495 image::untrack-all.png[]
6497 If you trace with this whitelist configuration, the tracer records no
6498 events for this <<domain,tracing domain>> because no processes are
6499 tracked. You can use the man:lttng-track(1) command as usual to track
6500 specific PIDs, for example:
6504 $ lttng track --pid=6,11
6510 .PIDs 6 and 11 are tracked.
6511 image::track-6-11.png[]
6516 [[saving-loading-tracing-session]]
6517 === Save and load tracing session configurations
6519 Configuring a <<tracing-session,tracing session>> can be long. Some of
6520 the tasks involved are:
6522 * <<enabling-disabling-channels,Create channels>> with
6523 specific attributes.
6524 * <<adding-context,Add context fields>> to specific channels.
6525 * <<enabling-disabling-events,Create event rules>> with specific log
6526 level and filter conditions.
6528 If you use LTTng to solve real world problems, chances are you have to
6529 record events using the same tracing session setup over and over,
6530 modifying a few variables each time in your instrumented program
6531 or environment. To avoid constant tracing session reconfiguration,
6532 the man:lttng(1) command-line tool can save and load tracing session
6533 configurations to/from XML files.
6535 To save a given tracing session configuration:
6537 * Use the man:lttng-save(1) command:
6542 $ lttng save my-session
6546 Replace `my-session` with the name of the tracing session to save.
6548 LTTng saves tracing session configurations to
6549 dir:{$LTTNG_HOME/.lttng/sessions} by default. Note that the
6550 env:LTTNG_HOME environment variable defaults to `$HOME` if not set. Use
6551 the opt:lttng-save(1):--output-path option to change this destination
6554 LTTng saves all configuration parameters, for example:
6556 * The tracing session name.
6557 * The trace data output path.
6558 * The channels with their state and all their attributes.
6559 * The context fields you added to channels.
6560 * The event rules with their state, log level and filter conditions.
6562 To load a tracing session:
6564 * Use the man:lttng-load(1) command:
6569 $ lttng load my-session
6573 Replace `my-session` with the name of the tracing session to load.
6575 When LTTng loads a configuration, it restores your saved tracing session
6576 as if you just configured it manually.
6578 See man:lttng(1) for the complete list of command-line options. You
6579 can also save and load all many sessions at a time, and decide in which
6580 directory to output the XML files.
6583 [[sending-trace-data-over-the-network]]
6584 === Send trace data over the network
6586 LTTng can send the recorded trace data to a remote system over the
6587 network instead of writing it to the local file system.
6589 To send the trace data over the network:
6591 . On the _remote_ system (which can also be the target system),
6592 start an LTTng <<lttng-relayd,relay daemon>> (man:lttng-relayd(8)):
6601 . On the _target_ system, create a tracing session configured to
6602 send trace data over the network:
6607 $ lttng create my-session --set-url=net://remote-system
6611 Replace `remote-system` by the host name or IP address of the
6612 remote system. See man:lttng-create(1) for the exact URL format.
6614 . On the target system, use the man:lttng(1) command-line tool as usual.
6615 When tracing is active, the target's consumer daemon sends sub-buffers
6616 to the relay daemon running on the remote system instead of flushing
6617 them to the local file system. The relay daemon writes the received
6618 packets to the local file system.
6620 The relay daemon writes trace files to
6621 +$LTTNG_HOME/lttng-traces/__hostname__/__session__+ by default, where
6622 +__hostname__+ is the host name of the target system and +__session__+
6623 is the tracing session name. Note that the env:LTTNG_HOME environment
6624 variable defaults to `$HOME` if not set. Use the
6625 opt:lttng-relayd(8):--output option of man:lttng-relayd(8) to write
6626 trace files to another base directory.
6631 === View events as LTTng emits them (noch:{LTTng} live)
6633 LTTng live is a network protocol implemented by the <<lttng-relayd,relay
6634 daemon>> (man:lttng-relayd(8)) to allow compatible trace viewers to
6635 display events as LTTng emits them on the target system while tracing is
6638 The relay daemon creates a _tee_: it forwards the trace data to both
6639 the local file system and to connected live viewers:
6642 .The relay daemon creates a _tee_, forwarding the trace data to both trace files and a connected live viewer.
6647 . On the _target system_, create a <<tracing-session,tracing session>>
6653 $ lttng create my-session --live
6657 This spawns a local relay daemon.
6659 . Start the live viewer and configure it to connect to the relay
6660 daemon. For example, with http://diamon.org/babeltrace[Babeltrace]:
6665 $ babeltrace --input-format=lttng-live \
6666 net://localhost/host/hostname/my-session
6673 * `hostname` with the host name of the target system.
6674 * `my-session` with the name of the tracing session to view.
6677 . Configure the tracing session as usual with the man:lttng(1)
6678 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6680 You can list the available live tracing sessions with Babeltrace:
6684 $ babeltrace --input-format=lttng-live net://localhost
6687 You can start the relay daemon on another system. In this case, you need
6688 to specify the relay daemon's URL when you create the tracing session
6689 with the opt:lttng-create(1):--set-url option. You also need to replace
6690 `localhost` in the procedure above with the host name of the system on
6691 which the relay daemon is running.
6693 See man:lttng-create(1) and man:lttng-relayd(8) for the complete list of
6694 command-line options.
6698 [[taking-a-snapshot]]
6699 === Take a snapshot of the current sub-buffers of a tracing session
6701 The normal behavior of LTTng is to append full sub-buffers to growing
6702 trace data files. This is ideal to keep a full history of the events
6703 that occurred on the target system, but it can
6704 represent too much data in some situations. For example, you may wish
6705 to trace your application continuously until some critical situation
6706 happens, in which case you only need the latest few recorded
6707 events to perform the desired analysis, not multi-gigabyte trace files.
6709 With the man:lttng-snapshot(1) command, you can take a snapshot of the
6710 current sub-buffers of a given <<tracing-session,tracing session>>.
6711 LTTng can write the snapshot to the local file system or send it over
6716 . Create a tracing session in _snapshot mode_:
6721 $ lttng create my-session --snapshot
6725 The <<channel-overwrite-mode-vs-discard-mode,event loss mode>> of
6726 <<channel,channels>> created in this mode is automatically set to
6727 _overwrite_ (flight recorder mode).
6729 . Configure the tracing session as usual with the man:lttng(1)
6730 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6732 . **Optional**: When you need to take a snapshot,
6733 <<basic-tracing-session-control,stop tracing>>.
6735 You can take a snapshot when the tracers are active, but if you stop
6736 them first, you are sure that the data in the sub-buffers does not
6737 change before you actually take the snapshot.
6744 $ lttng snapshot record --name=my-first-snapshot
6748 LTTng writes the current sub-buffers of all the current tracing
6749 session's channels to trace files on the local file system. Those trace
6750 files have `my-first-snapshot` in their name.
6752 There is no difference between the format of a normal trace file and the
6753 format of a snapshot: viewers of LTTng traces also support LTTng
6756 By default, LTTng writes snapshot files to the path shown by
6757 `lttng snapshot list-output`. You can change this path or decide to send
6758 snapshots over the network using either:
6760 . An output path or URL that you specify when you create the
6762 . An snapshot output path or URL that you add using
6763 `lttng snapshot add-output`
6764 . An output path or URL that you provide directly to the
6765 `lttng snapshot record` command.
6767 Method 3 overrides method 2, which overrides method 1. When you
6768 specify a URL, a relay daemon must listen on a remote system (see
6769 <<sending-trace-data-over-the-network,Send trace data over the network>>).
6774 === Use the machine interface
6776 With any command of the man:lttng(1) command-line tool, you can set the
6777 opt:lttng(1):--mi option to `xml` (before the command name) to get an
6778 XML machine interface output, for example:
6782 $ lttng --mi=xml enable-event --kernel --syscall open
6785 A schema definition (XSD) is
6786 https://github.com/lttng/lttng-tools/blob/stable-2.9/src/common/mi-lttng-3.0.xsd[available]
6787 to ease the integration with external tools as much as possible.
6791 [[metadata-regenerate]]
6792 === Regenerate the metadata of an LTTng trace
6794 An LTTng trace, which is a http://diamon.org/ctf[CTF] trace, has both
6795 data stream files and a metadata file. This metadata file contains,
6796 amongst other things, information about the offset of the clock sources
6797 used to timestamp <<event,event records>> when tracing.
6799 If, once a <<tracing-session,tracing session>> is
6800 <<basic-tracing-session-control,started>>, a major
6801 https://en.wikipedia.org/wiki/Network_Time_Protocol[NTP] correction
6802 happens, the trace's clock offset also needs to be updated. You
6803 can use the `metadata` item of the man:lttng-regenerate(1) command
6806 The main use case of this command is to allow a system to boot with
6807 an incorrect wall time and trace it with LTTng before its wall time
6808 is corrected. Once the system is known to be in a state where its
6809 wall time is correct, it can run `lttng regenerate metadata`.
6811 To regenerate the metadata of an LTTng trace:
6813 * Use the `metadata` item of the man:lttng-regenerate(1) command:
6818 $ lttng regenerate metadata
6824 `lttng regenerate metadata` has the following limitations:
6826 * Tracing session <<creating-destroying-tracing-sessions,created>>
6828 * User space <<channel,channels>>, if any, are using
6829 <<channel-buffering-schemes,per-user buffering>>.
6834 [[regenerate-statedump]]
6835 === Regenerate the state dump of a tracing session
6837 The LTTng kernel and user space tracers generate state dump
6838 <<event,event records>> when the application starts or when you
6839 <<basic-tracing-session-control,start a tracing session>>. An analysis
6840 can use the state dump event records to set an initial state before it
6841 builds the rest of the state from the following event records.
6842 http://tracecompass.org/[Trace Compass] is a notable example of an
6843 application which uses the state dump of an LTTng trace.
6845 When you <<taking-a-snapshot,take a snapshot>>, it's possible that the
6846 state dump event records are not included in the snapshot because they
6847 were recorded to a sub-buffer that has been consumed or overwritten
6850 You can use the `lttng regenerate statedump` command to emit the state
6851 dump event records again.
6853 To regenerate the state dump of the current tracing session, provided
6854 create it in snapshot mode, before you take a snapshot:
6856 . Use the `statedump` item of the man:lttng-regenerate(1) command:
6861 $ lttng regenerate statedump
6865 . <<basic-tracing-session-control,Stop the tracing session>>:
6874 . <<taking-a-snapshot,Take a snapshot>>:
6879 $ lttng snapshot record --name=my-snapshot
6883 Depending on the event throughput, you should run steps 1 and 2
6884 as closely as possible.
6886 NOTE: To record the state dump events, you need to
6887 <<enabling-disabling-events,create event rules>> which enable them.
6888 LTTng-UST state dump tracepoints start with `lttng_ust_statedump:`.
6889 LTTng-modules state dump tracepoints start with `lttng_statedump_`.
6893 [[persistent-memory-file-systems]]
6894 === Record trace data on persistent memory file systems
6896 https://en.wikipedia.org/wiki/Non-volatile_random-access_memory[Non-volatile random-access memory]
6897 (NVRAM) is random-access memory that retains its information when power
6898 is turned off (non-volatile). Systems with such memory can store data
6899 structures in RAM and retrieve them after a reboot, without flushing
6900 to typical _storage_.
6902 Linux supports NVRAM file systems thanks to either
6903 http://pramfs.sourceforge.net/[PRAMFS] or
6904 https://www.kernel.org/doc/Documentation/filesystems/dax.txt[DAX]{nbsp}+{nbsp}http://lkml.iu.edu/hypermail/linux/kernel/1504.1/03463.html[pmem]
6905 (requires Linux 4.1+).
6907 This section does not describe how to operate such file systems;
6908 we assume that you have a working persistent memory file system.
6910 When you create a <<tracing-session,tracing session>>, you can specify
6911 the path of the shared memory holding the sub-buffers. If you specify a
6912 location on an NVRAM file system, then you can retrieve the latest
6913 recorded trace data when the system reboots after a crash.
6915 To record trace data on a persistent memory file system and retrieve the
6916 trace data after a system crash:
6918 . Create a tracing session with a sub-buffer shared memory path located
6919 on an NVRAM file system:
6924 $ lttng create my-session --shm-path=/path/to/shm
6928 . Configure the tracing session as usual with the man:lttng(1)
6929 command-line tool, and <<basic-tracing-session-control,start tracing>>.
6931 . After a system crash, use the man:lttng-crash(1) command-line tool to
6932 view the trace data recorded on the NVRAM file system:
6937 $ lttng-crash /path/to/shm
6941 The binary layout of the ring buffer files is not exactly the same as
6942 the trace files layout. This is why you need to use man:lttng-crash(1)
6943 instead of your preferred trace viewer directly.
6945 To convert the ring buffer files to LTTng trace files:
6947 * Use the opt:lttng-crash(1):--extract option of man:lttng-crash(1):
6952 $ lttng-crash --extract=/path/to/trace /path/to/shm
6960 [[lttng-modules-ref]]
6961 === noch:{LTTng-modules}
6965 [[lttng-tracepoint-enum]]
6966 ==== `LTTNG_TRACEPOINT_ENUM()` usage
6968 Use the `LTTNG_TRACEPOINT_ENUM()` macro to define an enumeration:
6972 LTTNG_TRACEPOINT_ENUM(name, TP_ENUM_VALUES(entries))
6977 * `name` with the name of the enumeration (C identifier, unique
6978 amongst all the defined enumerations).
6979 * `entries` with a list of enumeration entries.
6981 The available enumeration entry macros are:
6983 +ctf_enum_value(__name__, __value__)+::
6984 Entry named +__name__+ mapped to the integral value +__value__+.
6986 +ctf_enum_range(__name__, __begin__, __end__)+::
6987 Entry named +__name__+ mapped to the range of integral values between
6988 +__begin__+ (included) and +__end__+ (included).
6990 +ctf_enum_auto(__name__)+::
6991 Entry named +__name__+ mapped to the integral value following the
6992 last mapping's value.
6994 The last value of a `ctf_enum_value()` entry is its +__value__+
6997 The last value of a `ctf_enum_range()` entry is its +__end__+ parameter.
6999 If `ctf_enum_auto()` is the first entry in the list, its integral
7002 Use the `ctf_enum()` <<lttng-modules-tp-fields,field definition macro>>
7003 to use a defined enumeration as a tracepoint field.
7005 .Define an enumeration with `LTTNG_TRACEPOINT_ENUM()`.
7009 LTTNG_TRACEPOINT_ENUM(
7012 ctf_enum_auto("AUTO: EXPECT 0")
7013 ctf_enum_value("VALUE: 23", 23)
7014 ctf_enum_value("VALUE: 27", 27)
7015 ctf_enum_auto("AUTO: EXPECT 28")
7016 ctf_enum_range("RANGE: 101 TO 303", 101, 303)
7017 ctf_enum_auto("AUTO: EXPECT 304")
7025 [[lttng-modules-tp-fields]]
7026 ==== Tracepoint fields macros (for `TP_FIELDS()`)
7028 [[tp-fast-assign]][[tp-struct-entry]]The available macros to define
7029 tracepoint fields, which must be listed within `TP_FIELDS()` in
7030 `LTTNG_TRACEPOINT_EVENT()`, are:
7032 [role="func-desc growable",cols="asciidoc,asciidoc"]
7033 .Available macros to define LTTng-modules tracepoint fields
7035 |Macro |Description and parameters
7038 +ctf_integer(__t__, __n__, __e__)+
7040 +ctf_integer_nowrite(__t__, __n__, __e__)+
7042 +ctf_user_integer(__t__, __n__, __e__)+
7044 +ctf_user_integer_nowrite(__t__, __n__, __e__)+
7046 Standard integer, displayed in base 10.
7049 Integer C type (`int`, `long`, `size_t`, ...).
7055 Argument expression.
7058 +ctf_integer_hex(__t__, __n__, __e__)+
7060 +ctf_user_integer_hex(__t__, __n__, __e__)+
7062 Standard integer, displayed in base 16.
7071 Argument expression.
7073 |+ctf_integer_oct(__t__, __n__, __e__)+
7075 Standard integer, displayed in base 8.
7084 Argument expression.
7087 +ctf_integer_network(__t__, __n__, __e__)+
7089 +ctf_user_integer_network(__t__, __n__, __e__)+
7091 Integer in network byte order (big-endian), displayed in base 10.
7100 Argument expression.
7103 +ctf_integer_network_hex(__t__, __n__, __e__)+
7105 +ctf_user_integer_network_hex(__t__, __n__, __e__)+
7107 Integer in network byte order, displayed in base 16.
7116 Argument expression.
7119 +ctf_enum(__N__, __t__, __n__, __e__)+
7121 +ctf_enum_nowrite(__N__, __t__, __n__, __e__)+
7123 +ctf_user_enum(__N__, __t__, __n__, __e__)+
7125 +ctf_user_enum_nowrite(__N__, __t__, __n__, __e__)+
7130 Name of a <<lttng-tracepoint-enum,previously defined enumeration>>.
7133 Integer C type (`int`, `long`, `size_t`, ...).
7139 Argument expression.
7142 +ctf_string(__n__, __e__)+
7144 +ctf_string_nowrite(__n__, __e__)+
7146 +ctf_user_string(__n__, __e__)+
7148 +ctf_user_string_nowrite(__n__, __e__)+
7150 Null-terminated string; undefined behavior if +__e__+ is `NULL`.
7156 Argument expression.
7159 +ctf_array(__t__, __n__, __e__, __s__)+
7161 +ctf_array_nowrite(__t__, __n__, __e__, __s__)+
7163 +ctf_user_array(__t__, __n__, __e__, __s__)+
7165 +ctf_user_array_nowrite(__t__, __n__, __e__, __s__)+
7167 Statically-sized array of integers.
7170 Array element C type.
7176 Argument expression.
7182 +ctf_array_bitfield(__t__, __n__, __e__, __s__)+
7184 +ctf_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7186 +ctf_user_array_bitfield(__t__, __n__, __e__, __s__)+
7188 +ctf_user_array_bitfield_nowrite(__t__, __n__, __e__, __s__)+
7190 Statically-sized array of bits.
7192 The type of +__e__+ must be an integer type. +__s__+ is the number
7193 of elements of such type in +__e__+, not the number of bits.
7196 Array element C type.
7202 Argument expression.
7208 +ctf_array_text(__t__, __n__, __e__, __s__)+
7210 +ctf_array_text_nowrite(__t__, __n__, __e__, __s__)+
7212 +ctf_user_array_text(__t__, __n__, __e__, __s__)+
7214 +ctf_user_array_text_nowrite(__t__, __n__, __e__, __s__)+
7216 Statically-sized array, printed as text.
7218 The string does not need to be null-terminated.
7221 Array element C type (always `char`).
7227 Argument expression.
7233 +ctf_sequence(__t__, __n__, __e__, __T__, __E__)+
7235 +ctf_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7237 +ctf_user_sequence(__t__, __n__, __e__, __T__, __E__)+
7239 +ctf_user_sequence_nowrite(__t__, __n__, __e__, __T__, __E__)+
7241 Dynamically-sized array of integers.
7243 The type of +__E__+ must be unsigned.
7246 Array element C type.
7252 Argument expression.
7255 Length expression C type.
7261 +ctf_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7263 +ctf_user_sequence_hex(__t__, __n__, __e__, __T__, __E__)+
7265 Dynamically-sized array of integers, displayed in base 16.
7267 The type of +__E__+ must be unsigned.
7270 Array element C type.
7276 Argument expression.
7279 Length expression C type.
7284 |+ctf_sequence_network(__t__, __n__, __e__, __T__, __E__)+
7286 Dynamically-sized array of integers in network byte order (big-endian),
7287 displayed in base 10.
7289 The type of +__E__+ must be unsigned.
7292 Array element C type.
7298 Argument expression.
7301 Length expression C type.
7307 +ctf_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7309 +ctf_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7311 +ctf_user_sequence_bitfield(__t__, __n__, __e__, __T__, __E__)+
7313 +ctf_user_sequence_bitfield_nowrite(__t__, __n__, __e__, __T__, __E__)+
7315 Dynamically-sized array of bits.
7317 The type of +__e__+ must be an integer type. +__s__+ is the number
7318 of elements of such type in +__e__+, not the number of bits.
7320 The type of +__E__+ must be unsigned.
7323 Array element C type.
7329 Argument expression.
7332 Length expression C type.
7338 +ctf_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7340 +ctf_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7342 +ctf_user_sequence_text(__t__, __n__, __e__, __T__, __E__)+
7344 +ctf_user_sequence_text_nowrite(__t__, __n__, __e__, __T__, __E__)+
7346 Dynamically-sized array, displayed as text.
7348 The string does not need to be null-terminated.
7350 The type of +__E__+ must be unsigned.
7352 The behaviour is undefined if +__e__+ is `NULL`.
7355 Sequence element C type (always `char`).
7361 Argument expression.
7364 Length expression C type.
7370 Use the `_user` versions when the argument expression, `e`, is
7371 a user space address. In the cases of `ctf_user_integer*()` and
7372 `ctf_user_float*()`, `&e` must be a user space address, thus `e` must
7375 The `_nowrite` versions omit themselves from the session trace, but are
7376 otherwise identical. This means the `_nowrite` fields won't be written
7377 in the recorded trace. Their primary purpose is to make some
7378 of the event context available to the
7379 <<enabling-disabling-events,event filters>> without having to
7380 commit the data to sub-buffers.
7386 Terms related to LTTng and to tracing in general:
7389 The http://diamon.org/babeltrace[Babeltrace] project, which includes
7390 the cmd:babeltrace command, some libraries, and Python bindings.
7392 <<channel-buffering-schemes,buffering scheme>>::
7393 A layout of sub-buffers applied to a given channel.
7395 <<channel,channel>>::
7396 An entity which is responsible for a set of ring buffers.
7398 <<event,Event rules>> are always attached to a specific channel.
7401 A reference of time for a tracer.
7403 <<lttng-consumerd,consumer daemon>>::
7404 A process which is responsible for consuming the full sub-buffers
7405 and write them to a file system or send them over the network.
7407 <<channel-overwrite-mode-vs-discard-mode,discard mode>>:: The event loss
7408 mode in which the tracer _discards_ new event records when there's no
7409 sub-buffer space left to store them.
7412 The consequence of the execution of an instrumentation
7413 point, like a tracepoint that you manually place in some source code,
7414 or a Linux kernel KProbe.
7416 An event is said to _occur_ at a specific time. Different actions can
7417 be taken upon the occurrence of an event, like record the event's payload
7420 <<channel-overwrite-mode-vs-discard-mode,event loss mode>>::
7421 The mechanism by which event records of a given channel are lost
7422 (not recorded) when there is no sub-buffer space left to store them.
7424 [[def-event-name]]event name::
7425 The name of an event, which is also the name of the event record.
7426 This is also called the _instrumentation point name_.
7429 A record, in a trace, of the payload of an event which occured.
7431 <<event,event rule>>::
7432 Set of conditions which must be satisfied for one or more occuring
7433 events to be recorded.
7435 `java.util.logging`::
7437 https://docs.oracle.com/javase/7/docs/api/java/util/logging/package-summary.html[core logging facilities].
7439 <<instrumenting,instrumentation>>::
7440 The use of LTTng probes to make a piece of software traceable.
7442 instrumentation point::
7443 A point in the execution path of a piece of software that, when
7444 reached by this execution, can emit an event.
7446 instrumentation point name::
7447 See _<<def-event-name,event name>>_.
7450 A http://logging.apache.org/log4j/1.2/[logging library] for Java
7451 developed by the Apache Software Foundation.
7454 Level of severity of a log statement or user space
7455 instrumentation point.
7458 The _Linux Trace Toolkit: next generation_ project.
7460 <<lttng-cli,cmd:lttng>>::
7461 A command-line tool provided by the LTTng-tools project which you
7462 can use to send and receive control messages to and from a
7466 The https://github.com/lttng/lttng-analyses[LTTng analyses] project,
7467 which is a set of analyzing programs that are used to obtain a
7468 higher level view of an LTTng trace.
7470 cmd:lttng-consumerd::
7471 The name of the consumer daemon program.
7474 A utility provided by the LTTng-tools project which can convert
7475 ring buffer files (usually
7476 <<persistent-memory-file-systems,saved on a persistent memory file system>>)
7479 LTTng Documentation::
7482 <<lttng-live,LTTng live>>::
7483 A communication protocol between the relay daemon and live viewers
7484 which makes it possible to see events "live", as they are received by
7487 <<lttng-modules,LTTng-modules>>::
7488 The https://github.com/lttng/lttng-modules[LTTng-modules] project,
7489 which contains the Linux kernel modules to make the Linux kernel
7490 instrumentation points available for LTTng tracing.
7493 The name of the relay daemon program.
7495 cmd:lttng-sessiond::
7496 The name of the session daemon program.
7499 The https://github.com/lttng/lttng-tools[LTTng-tools] project, which
7500 contains the various programs and libraries used to
7501 <<controlling-tracing,control tracing>>.
7503 <<lttng-ust,LTTng-UST>>::
7504 The https://github.com/lttng/lttng-ust[LTTng-UST] project, which
7505 contains libraries to instrument user applications.
7507 <<lttng-ust-agents,LTTng-UST Java agent>>::
7508 A Java package provided by the LTTng-UST project to allow the
7509 LTTng instrumentation of `java.util.logging` and Apache log4j 1.2
7512 <<lttng-ust-agents,LTTng-UST Python agent>>::
7513 A Python package provided by the LTTng-UST project to allow the
7514 LTTng instrumentation of Python logging statements.
7516 <<channel-overwrite-mode-vs-discard-mode,overwrite mode>>::
7517 The event loss mode in which new event records overwrite older
7518 event records when there's no sub-buffer space left to store them.
7520 <<channel-buffering-schemes,per-process buffering>>::
7521 A buffering scheme in which each instrumented process has its own
7522 sub-buffers for a given user space channel.
7524 <<channel-buffering-schemes,per-user buffering>>::
7525 A buffering scheme in which all the processes of a Unix user share the
7526 same sub-buffer for a given user space channel.
7528 <<lttng-relayd,relay daemon>>::
7529 A process which is responsible for receiving the trace data sent by
7530 a distant consumer daemon.
7533 A set of sub-buffers.
7535 <<lttng-sessiond,session daemon>>::
7536 A process which receives control commands from you and orchestrates
7537 the tracers and various LTTng daemons.
7539 <<taking-a-snapshot,snapshot>>::
7540 A copy of the current data of all the sub-buffers of a given tracing
7541 session, saved as trace files.
7544 One part of an LTTng ring buffer which contains event records.
7547 The time information attached to an event when it is emitted.
7550 A set of files which are the concatenations of one or more
7551 flushed sub-buffers.
7554 The action of recording the events emitted by an application
7555 or by a system, or to initiate such recording by controlling
7559 The http://tracecompass.org[Trace Compass] project and application.
7562 An instrumentation point using the tracepoint mechanism of the Linux
7563 kernel or of LTTng-UST.
7565 tracepoint definition::
7566 The definition of a single tracepoint.
7569 The name of a tracepoint.
7571 tracepoint provider::
7572 A set of functions providing tracepoints to an instrumented user
7575 Not to be confused with a _tracepoint provider package_: many tracepoint
7576 providers can exist within a tracepoint provider package.
7578 tracepoint provider package::
7579 One or more tracepoint providers compiled as an object file or as
7583 A software which records emitted events.
7585 <<domain,tracing domain>>::
7586 A namespace for event sources.
7588 <<tracing-group,tracing group>>::
7589 The Unix group in which a Unix user can be to be allowed to trace the
7592 <<tracing-session,tracing session>>::
7593 A stateful dialogue between you and a <<lttng-sessiond,session
7597 An application running in user space, as opposed to a Linux kernel
7598 module, for example.