Earlier versions of LTTng provided two mechanisms for user-space tracing: simple, system-call-based tracepoints and fast, user-space-buffered tracepoints. During the kernel inclusion phase of LTTng, an extensive rework and modularization of the kernel tracing portion was undertaken. This phase is well under way, and several portions have already been included in the mainline kernel. The reworked kernel tracing infrastructure will shortly thereafter be ported to fast user-space tracing. This fast user-space tracing scheme uses a direct function call to write events into buffers mapped in user-space. It should be an order of magnitude faster than the current DTrace implementation (cf. the DTrace information on the TracingWiki), which uses a breakpoint to perform both dynamic and static user-space tracing. A performance comparison of the function call vs. the int3 approach is available at Markers vs int3 performance comparison (see "Conclusion").
Libmarkers will provide applications with user-space Marker and Tracepoint declarations, so that programmers can insert Markers and Tracepoints in their libraries and applications. User-space Tracepoints and Markers, analogous to Kernel Tracepoints and Markers, define program locations and the arguments provided at those locations. Libmarkers will also provide utility functions to enumerate, activate and deactivate the tracepoints and markers present in the process, and to associate probes with any tracepoint or marker.
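Since the user-space API is still being defined, the following is only a minimal, self-contained sketch of what static instrumentation with a dynamically connectable probe could look like; the marker structure, the DEFINE_MARKER and trace_mark names, and the handle_request example are hypothetical illustrations, not the actual libmarkers interface.

#include <stdarg.h>
#include <stddef.h>

/* Hypothetical illustration only: a marker is a named program location
 * with a format string describing its arguments, a probe function that
 * may be connected at run time, and an enable flag. */
struct marker {
        const char *name;
        const char *format;
        void (*probe)(const struct marker *m, va_list args);
        int enabled;
};

#define DEFINE_MARKER(_name, _format) \
        static struct marker _name = { #_name, _format, NULL, 0 }

/* Firing the marker is a near-zero-cost branch when disabled and a
 * direct function call into the connected probe when enabled. */
static void trace_mark(struct marker *m, ...)
{
        va_list args;

        if (!m->enabled || !m->probe)
                return;
        va_start(args, m);
        m->probe(m, args);
        va_end(args);
}

DEFINE_MARKER(app_request_start, "id %d size %zu");

void handle_request(int id, size_t size)
{
        trace_mark(&app_request_start, id, size);
        /* ... application logic ... */
}

Under these assumptions, enumerating, activating and deactivating markers, and associating probes with them, amounts to walking a table of such structures and updating their probe and enabled fields, which is the kind of utility function libmarkers is expected to provide.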
Libtracing will provide the infrastructure to allocate buffers, define event types, write event metadata and data to the buffers, and get notification when buffers are full. The initial implementation will simply use one set of buffers per process. Subsequent, more optimized versions will allocate one set of buffers per thread; one set of buffers per CPU would be desirable, but user-space programs cannot check or control CPU migration without resorting to costlier bus-locking operations or system calls. The library provides a generic probe for markers which, when connected, generates an event in the buffer each time the marker is encountered.
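As a rough sketch of the simple one-buffer-per-process starting point, the fragment below serializes events (header plus payload) into a process-wide buffer; the tracing_write_event name, the event header fields and the mutex-protected buffer are assumptions made for illustration, whereas the real library would use lock-free ring buffers, per-thread or per-CPU allocation, and consumer notification when a buffer fills.

#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Hypothetical single per-process buffer. */
#define BUF_SIZE (1 << 20)

struct event_header {
        uint64_t timestamp;     /* monotonic time in nanoseconds */
        uint32_t event_id;      /* identifies the event type (metadata) */
        uint32_t payload_size;  /* size of the data following the header */
};

static char buf[BUF_SIZE];
static size_t buf_offset;
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Generic probe body: write one event into the process-wide buffer. */
int tracing_write_event(uint32_t event_id, const void *payload, uint32_t size)
{
        struct event_header hdr;
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        hdr.timestamp = (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        hdr.event_id = event_id;
        hdr.payload_size = size;

        pthread_mutex_lock(&buf_lock);
        if (buf_offset + sizeof(hdr) + size > BUF_SIZE) {
                /* Buffer full: a real library would switch sub-buffers and
                 * notify the consumer; here the event is simply dropped. */
                pthread_mutex_unlock(&buf_lock);
                return -1;
        }
        memcpy(buf + buf_offset, &hdr, sizeof(hdr));
        memcpy(buf + buf_offset + sizeof(hdr), payload, size);
        buf_offset += sizeof(hdr) + size;
        pthread_mutex_unlock(&buf_lock);
        return 0;
}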
Finally, libtracingcontrol opens a connection allowing a separate process (e.g., an LTTng daemon, Eclipse, or GDB) to control tracing within the application. Through this connection, the remote process can:
Tracing of Java applications is planned to be done through a JNI interface. Linking the standard low-level C tracing library into the application through a JNI adaptation class will be required to trace Java events. This has been prototyped in the past; the work is available here for older LTTng versions.
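A rough sketch of such a JNI adaptation is given below as a C native method that forwards a Java-side event to the C tracing library; the TraceAdapter class name, the traceEvent method and the tracing_write_event call are hypothetical and only illustrate the linkage, not the prototype mentioned above.

#include <jni.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical C tracing entry point (see the libtracing sketch above). */
extern int tracing_write_event(uint32_t event_id, const void *payload,
                               uint32_t size);

/*
 * Native implementation of a hypothetical Java method:
 *     class TraceAdapter { static native void traceEvent(String msg); }
 * The Java side loads the shared library with System.loadLibrary() and
 * calls traceEvent() wherever an event should be recorded.
 */
JNIEXPORT void JNICALL
Java_TraceAdapter_traceEvent(JNIEnv *env, jclass cls, jstring msg)
{
        const char *utf = (*env)->GetStringUTFChars(env, msg, NULL);

        (void)cls;
        if (utf == NULL)
                return; /* OutOfMemoryError already thrown by the JVM */
        tracing_write_event(/* event_id */ 1, utf, (uint32_t)strlen(utf) + 1);
        (*env)->ReleaseStringUTFChars(env, msg, utf);
}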
The principle of operation of libtracingcontrol is that when the instrumented application starts, a pipe is opened to allow external tracing control. Asynchronous notification is requested when commands arrive in the pipe, and a signal handler is installed for SIGIO (or a carefully chosen chainable signal number). Every time such a signal is received, the runtime library checks for commands received on the external tracing control pipe. The application may also spontaneously provide information to the remote control process through the pipe. In addition, the tracing control application should be notified when the application exits (to save the content of the buffers if the application is crashing) or forks (to also trace the child if needed). Such notification may be obtained through utrace.
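A minimal sketch of this asynchronous notification setup follows; the FIFO path and the check_control_commands and open_control_pipe names are hypothetical, but the F_SETOWN/O_ASYNC/SIGIO combination is the standard way to receive a signal when data arrives on a descriptor.

#define _GNU_SOURCE     /* for O_ASYNC */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t control_pending;

/* Keep the handler async-signal-safe: just record that the control pipe
 * needs to be checked. */
static void sigio_handler(int signum)
{
        (void)signum;
        control_pending = 1;
}

/* Called from convenient points in the runtime library to process
 * commands received on the control pipe. */
void check_control_commands(int fd)
{
        char cmd[256];
        ssize_t len;

        if (!control_pending)
                return;
        control_pending = 0;
        while ((len = read(fd, cmd, sizeof(cmd))) > 0) {
                /* parse and apply tracing control commands ... */
        }
}

/* Open the external tracing control pipe and request SIGIO when commands
 * arrive on it.  The path is a placeholder; a real library would create a
 * dedicated FIFO or socket per process. */
int open_control_pipe(void)
{
        struct sigaction sa;
        int fd;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = sigio_handler;
        sigaction(SIGIO, &sa, NULL);

        fd = open("/tmp/app-trace-control", O_RDONLY | O_NONBLOCK);
        if (fd < 0)
                return -1;
        fcntl(fd, F_SETOWN, getpid());                     /* deliver SIGIO to us */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);  /* enable async I/O */
        return fd;
}

In this sketch the commands are simply drained from the pipe; the actual library would interpret them, for example to activate markers, connect probes or flush buffers.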
In summary, a user-space application linked with libmarkers may contain static instrumentation, Tracepoints and Markers, just like the kernel with Kernel Markers and Tracepoints. The application can exploit this instrumentation itself, or link with libtracing and have tracing probes connected to each Marker. Other instrumentation mechanisms, like the GCC -finstrument-functions option or hooks inserted by a JIT compiler, can also use libtracing to define and write events to the trace buffers. Finally, libtracingcontrol, analogous to GDB stubs, allows remote monitoring or debugging frameworks to control tracing.
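For instance, when code is compiled with -finstrument-functions, GCC emits calls to the two hooks below at every function entry and exit; the sketch shows how such hooks could forward into the trace buffers, again using the hypothetical tracing_write_event from the libtracing sketch and arbitrarily chosen event ids.

#include <stdint.h>

/* Hypothetical C tracing entry point (see the libtracing sketch above). */
extern int tracing_write_event(uint32_t event_id, const void *payload,
                               uint32_t size);

/* GCC calls these hooks on every function entry/exit when the code is
 * compiled with -finstrument-functions.  They must not themselves be
 * instrumented, hence the attribute. */
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site)
{
        void *payload[2] = { this_fn, call_site };

        tracing_write_event(/* event_id */ 10, payload, sizeof(payload));
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site)
{
        void *payload[2] = { this_fn, call_site };

        tracing_write_event(/* event_id */ 11, payload, sizeof(payload));
}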