[Understanding LTTng](#doc-understanding-lttng).
Before reading this guide, make sure LTTng
-[is installed](#doc-installing-lttng). You will at least need
-LTTng-tools. Also install LTTng-modules for
+[is installed](#doc-installing-lttng). You need LTTng-tools. Also install
+LTTng-modules for
[tracing the Linux kernel](#doc-tracing-the-linux-kernel) and LTTng-UST
for
[tracing your own user space applications](#doc-tracing-your-own-user-application).
When your traces are finally written and complete, the
[Viewing and analyzing your traces](#doc-viewing-and-analyzing-your-traces)
-section of this chapter will help you analyze your tracepoint events to investigate.
+section of this chapter helps you analyze the tracepoint events
+you recorded.
</div>
`my-session` is the tracing session name and could be anything you
-like. `auto` will be used if omitted.
+like. `auto` is used if omitted.
Let's now enable some events for this session:
</pre>
or you might want to simply enable all available kernel events (beware
-that trace files will grow rapidly when doing this):
+that trace files grow rapidly when doing this):
<pre class="term">
sudo lttng enable-event --kernel --all
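+# Or, more selectively, enable a single kernel event by name
+# (sched_switch is one example of an existing kernel tracepoint):
+sudo lttng enable-event --kernel sched_switch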
defined before using them. So before even writing our _Hello world_ program,
we need to define the format of our tracepoint. This is done by writing a
**template file**, with a name usually ending with the `.tp` extension (for **t**race**p**oint),
-which the `lttng-gen-tp` tool (shipped with LTTng-UST) will use to generate
+which the `lttng-gen-tp` tool (shipped with LTTng-UST) uses to generate
an object file (along with a `.c` file) and a header to be included in our application source code.
Here's the whole flow:
</div>
The template file format is a list of tracepoint definitions
-and other optional definition entries which we will skip for
+and other optional definition entries which we skip for
this quickstart. Each tracepoint is defined using the
`TRACEPOINT_EVENT()` macro. For each tracepoint, you must provide:
* a **list of arguments** for the eventual `tracepoint()` call, each item being:
* the argument C type
* the argument name
- * a **list of fields**, which will be the actual fields of the recorded events
- for this tracepoint
+ * a **list of fields**, which correspond to the actual fields of the
+ recorded events for this tracepoint
Here's a simple tracepoint definition example with two arguments: an integer
and a string:
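+~~~ c
+/* A sketch of such a definition (names are illustrative): */
+TRACEPOINT_EVENT(
+    hello_world,
+    my_first_tracepoint,
+    TP_ARGS(
+        int, my_integer_arg,
+        char*, my_string_arg
+    ),
+    TP_FIELDS(
+        ctf_integer(int, my_integer_field, my_integer_arg)
+        ctf_string(my_string_field, my_string_arg)
+    )
+)
+~~~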
lttng-gen-tp hello-tp.tp
</pre>
-The following files will be created next to `hello-tp.tp`:
+The following files are created next to `hello-tp.tp`:
* `hello-tp.c`
* `hello-tp.o`
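+A typical way to build and link the instrumented program (a sketch,
+assuming a `hello.c` which includes the generated `hello-tp.h`):
+<pre class="term">
+gcc -o hello hello.c hello-tp.o -llttng-ust -ldl
+</pre>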
If you followed the
[Tracing the Linux kernel](#doc-tracing-the-linux-kernel) section, the
-following steps will look familiar.
+following steps should look familiar.
First, run the application with a few arguments:
</pre>
Go back to the running `hello` application and press Enter. All `tracepoint()`
-calls will be executed and the program will finally exit.
+calls are executed and the program finally exits.
Stop tracing:
babeltrace ~/lttng-traces/my-session
</pre>
-`babeltrace` will find all traces within the given path recursively and
-output all their events, merging them intelligently.
+`babeltrace` finds all traces within the given path recursively and
+prints all their events, merging them in chronological order.
Listing all the system calls of a Linux kernel trace with their arguments is
easy with `babeltrace` and `grep`:
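+<pre class="term">
+# System call event names start with sys_ (a sketch):
+babeltrace ~/lttng-traces/my-session | grep sys_
+</pre>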
</div>
If you're using Ubuntu, executing the following Bash script
-will install the appropriate dependencies, clone the LTTng
-Git repositories, build the projects, and install them. The sources will
-be cloned into `~/src`. Your user needs to be a sudoer for the install
+installs the appropriate dependencies, clones the LTTng
+Git repositories, builds the projects, and installs them. The sources
+are cloned into `~/src`. Your user needs to be a sudoer for the install
steps to be completed.
~~~ text
id: debian
---
-Debian wheezy (stable) and previous versions are not supported; you will
+Debian wheezy (stable) and previous versions are not supported; you
need to build and install LTTng packages
[from source](#doc-building-from-source) for those.
first need to add an entry to your repository configuration. All LTTng repositories
are available
<a href="http://download.opensuse.org/repositories/devel:/tools:/lttng/" class="ext">here</a>.
-For example, the following commands will add the LTTng repository for
+For example, the following commands add the LTTng repository for
openSUSE 13.1:
<pre class="term">
---
The following steps apply to Ubuntu ≥ 12.04. For
-previous releases, you will need to build and install LTTng
+previous releases, you need to build and install LTTng
[from source](#doc-building-from-source), as no Ubuntu packages were
available before version 12.04.
* **Per-UID buffering**: keep one ring buffer for all processes of
a single user.
-The per-PID buffering scheme will consume more memory than the per-UID
+The per-PID buffering scheme consumes more memory than the per-UID
option if more than one process is instrumented for LTTng-UST. However,
per-PID buffering ensures that one process having a high event
throughput won't fill all the shared sub-buffers, only its own.
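+For example, per-PID buffering may be selected when creating a user
+space channel (a sketch; per-UID buffering is the default):
+<pre class="term">
+lttng enable-channel --userspace --buffers-pid my-channel
+</pre>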
sub-buffer (yellow arc in the following animation) until it is full:
when this happens, the sub-buffer is marked as consumable (red) and
another, _empty_ (white) sub-buffer starts receiving the following
-events. The marked sub-buffer will be consumed eventually by a consumer
+events. The marked sub-buffer is eventually consumed by a consumer
daemon (returns to white).
<script type="text/javascript">
as a new event doesn't find an empty sub-buffer, whereas in discard
mode, only the event that doesn't fit is discarded.
-Also note that a count of lost events will be incremented and saved in
+Also note that a count of lost events is incremented and saved in
the trace itself when an event is lost in discard mode, whereas no
information is kept when a sub-buffer gets overwritten before being
committed.
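+For example, a kernel channel may be created in overwrite mode
+(discard mode being the default); a sketch with an arbitrary channel
+name:
+<pre class="term">
+lttng enable-channel --kernel --overwrite flight-recorder
+</pre>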
to configure sub-buffers for them:
* **High event throughput**: in general, prefer bigger sub-buffers to
- lower the risk of losing events. Having bigger sub-buffers will
- also ensure a lower sub-buffer switching frequency. The number of
- sub-buffers is only meaningful if the channel is in overwrite mode:
- in this case, if a sub-buffer overwrite happens, you will still have
- the other sub-buffers left unaltered.
+ lower the risk of losing events. Having bigger sub-buffers
+ also ensures a lower sub-buffer switching frequency. The number of
+ sub-buffers is only meaningful if the channel is enabled in
+ overwrite mode: in this case, if a sub-buffer overwrite happens, the
+ other sub-buffers are left unaltered.
* **Low event throughput**: in general, prefer smaller sub-buffers
since the risk of losing events is already low. Since events
happen less frequently, the sub-buffer switching frequency should
A _channel_ is a set of events with specific parameters and potential
added context information. Channels have unique names per domain within
a tracing session. A given event is always registered to at least one
-channel; having an enabled event in two channels will produce a trace
-with this event recorded twice everytime it occurs.
+channel; enabling the same event in two channels records
+this event twice every time it occurs.
Channels may be individually enabled or disabled. Occurring events of
-a disabled channel will never make it to recorded events.
+a disabled channel never make it to recorded events.
The fundamental role of a channel is to keep a shared ring buffer, where
events are eventually recorded by the tracer and consumed by a consumer
The whole [Core concepts](#doc-core-concepts) section focuses on the
third definition. An event is always registered to _one or more_
channels and may be enabled or disabled at will per channel. A disabled
-event will never lead to a recorded event, even if its channel
-is enabled.
+event never leads to a recorded event, even if its channel is enabled.
An event (3) is enabled with a few conditions that must _all_ be met
when an event (1) happens in order to generate a recorded event (2):
The other important feature of LTTng's relay daemon is the support of
_LTTng live_. LTTng live is an application protocol to view events as
-they arrive. The relay daemon will still record events in trace files,
-but a _tee_ may be created to inspect incoming events. Using LTTng live
+they arrive. The relay daemon still records events in trace files,
+but a _tee_ lets you inspect incoming events. Using LTTng live
-locally thus requires to run a local relay daemon.
+locally thus requires running a local relay daemon.
lttng-sessiond
</pre>
-This will start the session daemon in foreground. Use
+This starts the session daemon in the foreground. Use
<pre class="term">
lttng-sessiond --daemonize
pkill lttng-sessiond
</pre>
-The default `SIGTERM` signal will terminate it cleanly.
+The default `SIGTERM` signal terminates it cleanly.
Several other options are available and described in
<a href="/man/8/lttng-sessiond" class="ext"><code>lttng-sessiond</code>'s manpage</a>
</pre>
LTTng is very flexible: user space applications may be launched before
-or after the tracers are started. Events will only be recorded if they
-are properly enabled and if they occur while tracers are started.
+or after the tracers are started. Events are only recorded if they
+are properly enabled and if they occur while tracers are active.
A tracing session name may be passed to both the `start` and `stop`
commands to start/stop tracing a session other than the current one.
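+For example, assuming `other-session` exists alongside the current
+one:
+<pre class="term">
+lttng stop other-session
+</pre>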
lttng create my-session
</pre>
-This will create a new tracing session named `my-session` and make it
-the current one. If you don't specify any name (calling only
-`lttng create`), your tracing session will be named `auto`. Traces
+This creates a new tracing session named `my-session` and makes it
+the current one. If you don't specify a name (running only
+`lttng create`), your tracing session is named `auto` followed by the
+current date and time. Traces
are written in <code>~/lttng-traces/<em>session</em>-</code> followed
by the tracing session's creation date/time by default, where
<code><em>session</em></code> is the tracing session name. To save them
lttng destroy my-session
</pre>
-Providing no argument to `lttng destroy` will destroy the current
-tracing session. Destroying a tracing session will stop any tracing
+Providing no argument to `lttng destroy` destroys the current
+tracing session. Destroying a tracing session stops any tracing
running within the latter. Destroying a tracing session frees resources
acquired by the session daemon and tracer side, making sure to flush
all trace data.
`--tracefile-size` and `--tracefile-count`, which respectively limit
-the size of each trace file and the their count for a given channel.
+the size of each trace file and their count for a given channel.
When the number of written trace files reaches its limit for a given
-channel-CPU pair, the next trace file will overwrite the very first
+channel-CPU pair, the next trace file overwrites the very first
one. The following example creates a kernel domain channel with a
maximum of three trace files of 1 MiB each:
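+<pre class="term">
+# A sketch; the channel name is arbitrary:
+lttng enable-channel --kernel --tracefile-count 3 --tracefile-size 1048576 rotating-channel
+</pre>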
lttng enable-channel --kernel my-channel
</pre>
-This will create a kernel domain channel named `my-channel` with
+This creates a kernel domain channel named `my-channel` with
default parameters in the current tracing session.
<div class="tip">
--tracefile-size 1048576 1mib-channel
</pre>
-This will create a user space domain channel named `1mib-channel` in
+This creates a user space domain channel named `1mib-channel` in
-the tracing session named `other-session` that loses new events by
-overwriting previously recorded events (instead of the default mode of
-discarding newer ones) and saves trace files with a maximum size of
+the tracing session named `other-session` that loses the oldest
+recorded events by overwriting them with new ones (instead of the
+default mode of discarding the newest events) and saves trace files
+with a maximum size of
lttng enable-event --userspace --channel other-channel app:tp
</pre>
-If both channels are enabled, the occurring `app:tp` event will
-generate two recorded events, one for each channel.
+If both channels are enabled, the occurring `app:tp` event
+generates two recorded events, one for each channel.
-Disabling a channel is done with the `disable-event` command:
+Disabling an event is done with the `disable-event` command:
it from its channel's whitelist. This is why you cannot disable an event
which wasn't previously enabled.
-A disabled event will not generate any trace data, even if all its
+A disabled event doesn't generate any trace data, even if all its
specified conditions are met.
Events may be enabled and disabled at will, either when LTTng tracers
LTTng live is implemented, in LTTng, solely on the relay daemon side.
As trace data is sent over the network to a relay daemon by a (possibly
-remote) consumer daemon, a _tee_ may be created: trace data will be
-recorded to trace files _as well as_ being transmitted to a
-connected live viewer:
+remote) consumer daemon, a _tee_ is created: trace data is recorded to
+trace files _as well as_ being transmitted to a connected live viewer:
<div class="img img-90">
<object data="/images/docs26/lttng-live-relayd.svg" type="image/svg+xml">
lttng create --live
</pre>
-An optional parameter may be passed to `--live` to set the interval
-of time (in microseconds) between flushes to the network
-(1 second is the default):
+An optional parameter may be passed to `--live` to set the period
+(in microseconds) between flushes to the network
+(1 second is the default). With:
<pre class="term">
lttng create --live 100000
</pre>
-will flush every 100 ms.
+the daemons flush their data every 100 ms.
If no network output is specified to the `create` command, a local
-relay daemon will be spawned. In this very common case, viewing a live
+relay daemon is spawned. In this very common case, viewing a live
trace is easy: enable events and start tracing as usual, then use
`lttng view` to start the default live viewer:
lttng view
</pre>
-The correct arguments will be passed to the live viewer so that it
+The correct arguments are passed to the live viewer so that it
may connect to the local relay daemon and start reading live events.
You may also wish to use a live viewer not running on the target
The `lttng` tool aims at providing a command output as human-readable as
possible. While this output is easy to parse by a human being, machines
-will have a hard time.
+have a hard time doing so.
This is why the `lttng` tool provides the general `--mi` option, which
must specify a machine interface output format. As of the latest
lttng load --input-path /path/to/my-session.lttng
</pre>
-Your saved tracing session will be restored as if you just configured
+Your saved tracing session is restored as if you just configured
it manually.
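+Such a file is produced beforehand by the `save` command; a minimal
+sketch:
+<pre class="term">
+lttng save my-session
+</pre>
+By default, the configuration file is written under
+`~/.lttng/sessions`.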
The relay daemon listens on two different TCP ports: one for control
information and the other for actual trace data.
-Starting the relay daemon on the remote machine is as easy as:
+Starting the relay daemon on the remote machine is easy:
<pre class="term">
lttng-relayd
</pre>
-This will make it listen to its default ports: 5342 for control and
+This makes it listen to its default ports: 5342 for control and
5343 for trace data. The `--control-port` and `--data-port` options may
be used to specify different ports.
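+For example, a sketch binding both ports to arbitrary values (see
+<code>lttng-relayd</code>'s manpage for the exact URL format):
+<pre class="term">
+lttng-relayd --control-port tcp://0.0.0.0:7000 --data-port tcp://0.0.0.0:7001
+</pre>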
</pre>
The URL format is described in the output of `lttng create --help`.
-The above example will use the default ports; the `--ctrl-url` and
+The above example uses the default ports; the `--ctrl-url` and
`--data-url` options may be used to set the control and data URLs
individually.
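+For example, a sketch setting both URLs explicitly (`remotehost` and
+the ports are placeholders):
+<pre class="term">
+lttng create my-session --ctrl-url tcp://remotehost:5342 --data-url tcp://remotehost:5343
+</pre>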
Once this basic setup is completed and the connection is established,
you may use the `lttng` tool on the target machine as usual; everything
-you do will be transparently forwarded to the remote machine if needed.
-For example, a parameter changing the maximum size of trace files will
-have an effect on the distant relay daemon actually writing the trace.
+you do is transparently forwarded to the remote machine if needed.
+For example, a parameter changing the maximum size of trace files
+only has an effect on the distant relay daemon actually writing
+the trace.
or stopped, you may take a snapshot of those sub-buffers.
There is no difference between the format of a normal trace file and the
-format of a snapshot: viewers of LTTng traces will also support LTTng
+format of a snapshot: viewers of LTTng traces also support LTTng
snapshots. By default, snapshots are written to disk, but they may also
be sent over the network.
</pre>
Next, enable channels, events and add context to channels as usual.
-Once a tracing session is created in snapshot mode, channels will be
+Once a tracing session is created in snapshot mode, channels are
forced to use the
[overwrite](#doc-channel-overwrite-mode-vs-discard-mode) mode
(`--overwrite` option of the `enable-channel` command; also called
lttng snapshot record --name my-snapshot
</pre>
-This will record a snapshot named `my-snapshot` of all channels of
+This records a snapshot named `my-snapshot` of all channels of
all domains of the current tracing session. By default, snapshots files
are recorded in the path returned by `lttng snapshot list-output`. You
may change this path or decide to send snapshots over the network
lttng snapshot record --name my-snapshot --max-size 2M
</pre>
-Older recorded events will be discarded in order to respect this
+Older recorded events are discarded in order to respect this
maximum size.
---
Since the host is a 64-bit system, most 32-bit binaries and libraries of
-LTTng-tools are not needed; the host will use their 64-bit counterparts.
+LTTng-tools are not needed; the host uses their 64-bit counterparts.
The required step here is building and installing a 32-bit consumer
daemon.
sudo ldconfig
</pre>
-Henceforth, the 64-bit session daemon will automatically find the
+Henceforth, the 64-bit session daemon automatically finds the
32-bit consumer daemon if required.
-ldl -llttng-ust <strong>-Wl,-rpath,/usr/lib32</strong>
</pre>
-The `-rpath` option, passed to the linker, will make the dynamic loader
+The `-rpath` option, passed to the linker, makes the dynamic loader
check for libraries in `/usr/lib32` before looking in its default paths,
where it should find the 32-bit version of `liblttng-ust`.
Make sure you install all 32-bit versions of LTTng dependencies.
Their names can be found in the `README.md` files of each LTTng package
-source. How to find and install them will vary depending on your target
+source. How to find and install them depends on your target's
Linux distribution. `gcc-multilib` is a common package name for the
-multilib version of GCC, which you will also need.
+multilib version of GCC, which you also need.
The following packages will be built for 32-bit support on a 64-bit
system: <a href="http://urcu.so/" class="ext">Userspace RCU</a>,
sought, loaded and unloaded at runtime using `libdl`.
It has to be noted that, for a variety of reasons, the created shared
-library will be dynamically _loaded_, as opposed to dynamically
+library is dynamically _loaded_, as opposed to dynamically
_linked_. The tracepoint provider shared object is, however, linked
with `liblttng-ust`, so that `liblttng-ust` is guaranteed to be loaded
as soon as the tracepoint provider is. If the tracepoint provider is
</pre>
As previously stated, this tracepoint provider shared object isn't
-linked with the user application: it will be loaded manually. This is
+linked with the user application: it's loaded manually. This is
why the application is built with no mention of this tracepoint
provider, but still needs `libdl`:
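+<pre class="term">
+# A sketch; file names are illustrative:
+gcc -o app app.c -ldl
+</pre>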
This is accomplished by defining `TRACEPOINT_CREATE_PROBES` in a translation
unit and then including the tracepoint provider header file.
When `TRACEPOINT_CREATE_PROBES` is defined, macros used and included by
-the tracepoint provider header will output actual source code needed by any
+the tracepoint provider header produce actual source code needed by any
application using the defined tracepoints. Defining
`TRACEPOINT_CREATE_PROBES` produces code used when registering
tracepoint providers when the tracepoint provider package loads.
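+A minimal sketch of such a translation unit, assuming the tracepoint
+provider header is named `tp.h`:
+~~~ c
+#define TRACEPOINT_CREATE_PROBES
+
+#include "tp.h"
+~~~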
pkg-config --libs lttng-ust
</pre>
-This will return `-llttng-ust -ldl` on Linux systems.
+This prints `-llttng-ust -ldl` on Linux systems.
You may also check the LTTng-UST version using `pkg-config`:
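+<pre class="term">
+pkg-config --modversion lttng-ust
+</pre>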
(see [Tracepoint provider](#doc-tracepoint-provider)). In other words,
always use the same string as the value of `TRACEPOINT_PROVIDER` above.
-The tracepoint name will become the event name once events are recorded
+The tracepoint name becomes the event name once events are recorded
by the LTTng-UST tracer. It must follow the tracepoint provider name
syntax: start with a letter and contain either letters, numbers or
underscores. Two tracepoints under the same provider cannot have the
<div class="tip">
<p><span class="t">Note:</span>The concatenation of the tracepoint
provider name and the tracepoint name cannot exceed 254 characters. If
-it does, the instrumented application will compile and run, but LTTng
-will issue multiple warnings and you could experience serious problems.</p>
+it does, the instrumented application compiles and runs, but LTTng
+issues multiple warnings and you could experience serious problems.</p>
</div>
The list of tracepoint arguments gives this tracepoint its signature:
),
~~~
-Of course, you will need to include appropriate header files before
+Of course, you need to include appropriate header files before
the `TRACEPOINT_EVENT()` macro calls if any argument has a complex type.
`TP_ARGS()` may not be omitted, but may be empty. `TP_ARGS(void)` is
also accepted.
The list of fields is where the fun really begins. The fields defined
-in this list will be the fields of the events generated by the execution
+in this list are the fields of the events generated by the execution
of this tracepoint. Each tracepoint field definition has a C
-_argument expression_ which will be evaluated when the execution reaches
+_argument expression_ which is evaluated when the execution reaches
the tracepoint. Tracepoint arguments _may be_ used freely in those
argument expressions, but they _don't_ have to.
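+For example, a field's argument expression may transform a tracepoint
+argument; a sketch with an illustrative `len` argument:
+~~~ c
+TP_FIELDS(
+    /* Record the length doubled, not the raw argument */
+    ctf_integer(size_t, double_len, len * 2)
+)
+~~~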
`lttng-gen-tp` should suffice in [static linking](#doc-static-linking)
situations. When using it, write a template file containing a list of
-`TRACEPOINT_EVENT()` macro calls. The tool will find the provider names
+`TRACEPOINT_EVENT()` macro calls. The tool finds the provider names
-used and generate the appropriate files which are going to look a lot
+used and generates the appropriate files, which look a lot
like `tp.h` and `tp.c` above.
lttng-gen-tp my-template.tp
</pre>
-`my-template.c`, `my-template.o` and `my-template.h` will be created
+`my-template.c`, `my-template.o` and `my-template.h` are created
in the same directory.
You may specify custom C flags passed to the compiler invoked by
~~~
`TRACEPOINT_PROVIDER` defines the name of the provider to which the
-following tracepoint definitions will belong. It is used internally by
+following tracepoint definitions belong. It is used internally by
LTTng-UST headers and _must_ be defined. Since `TRACEPOINT_PROVIDER`
could have been defined by another header file also included by the same
C source file, the best practice is to undefine it first.
#include <lttng/tracepoint.h>
~~~
-This will also allow the application to use the `tracepoint()` macro.
+This also allows the application to use the `tracepoint()` macro.
Next is a list of `TRACEPOINT_EVENT()` macro calls which create the
-actual tracepoint definitions. We will skip this for the moment and
+actual tracepoint definitions. We skip this for the moment and
come back to how to use `TRACEPOINT_EVENT()`
[in a later section](#doc-defining-tracepoints). Just pay attention to
the first argument: it's always the name of the tracepoint provider
~~~
When `TRACEPOINT_CREATE_PROBES` is defined, the macros used in `tp.h`,
-which is included just after, will actually create the source code for
+which is included just after, actually create the source code for
LTTng-UST probes (global data structures and functions) out of your
tracepoint definitions. How exactly this is done is out of this text's scope.
`TRACEPOINT_CREATE_PROBES` is discussed further
with the same fields layout, the best practice is to manually create
a tracepoint class and instantiate as many tracepoint instances as
needed. One positive effect of such a design, amongst other advantages,
-is that all tracepoint instances of the same tracepoint class will
+is that all tracepoint instances of the same tracepoint class
reuse the same serialization function, thus reducing cache pollution.
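+A compact sketch of the pattern, before the full example below
+(provider, class and instance names are illustrative):
+~~~ c
+TRACEPOINT_EVENT_CLASS(
+    my_app,
+    my_class,
+    TP_ARGS(
+        int, userid,
+        size_t, len
+    ),
+    TP_FIELDS(
+        ctf_integer(int, userid, userid)
+        ctf_integer(size_t, len, len)
+    )
+)
+
+TRACEPOINT_EVENT_INSTANCE(
+    my_app,
+    my_class,
+    get_account,
+    TP_ARGS(
+        int, userid,
+        size_t, len
+    )
+)
+~~~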
As an example, here are three tracepoint definitions as we know them:
The [Controlling tracing](#doc-controlling-tracing) section explains
how to use the `lttng` tool to create and control tracing sessions.
-Although the `lttng` tool will load the appropriate _known_ LTTng kernel
+Although the `lttng` tool loads the appropriate _known_ LTTng kernel
modules when needed (by launching `root`'s session daemon), it won't
load your custom `lttng-probe-hello` module by default. You need to
manually start an LTTng session daemon as `root` and use the
Throughout this section, all file paths are relative to the root of
this tree unless otherwise stated.
-You will need a copy of the LTTng-modules Git repository:
+You need a copy of the LTTng-modules Git repository:
<pre class="term">
git clone git://git.lttng.org/lttng-modules.git
At the top of `driver.c`, we need to include our actual tracepoint
definition and, in this case (one place per subsystem), define
-`CREATE_TRACE_POINTS`, which will create our tracepoint:
+`CREATE_TRACE_POINTS`, which creates our tracepoint:
~~~ c
/* ... */
be that your tracing needs are already appropriately covered by LTTng's
built-in Linux kernel tracepoints and other probes. Or you may be in
possession of a user space application which has already been
-instrumented. In such cases, the work will reside entirely in the design
+instrumented. In such cases, the work resides entirely in the design
and execution of tracing sessions, allowing you to jump to
[Controlling tracing](#doc-controlling-tracing) right now.