LTTV & LTTng roadmap
Here are the roadmaps for LTTV and LTTng development. I use a priority index
for the TODO items:
(1) : very high priority
(10): lowest priority
Dependencies are written between brackets [ ].
The # symbol marks who is currently working on the item.
The % symbol marks who is interested in the realisation of the item.
The $ symbol marks who is contributing funding for the realisation of the item.
LTT Next Generation Roadmap
* TODO (high priority)
(1) LTTng event description: move from tracepoints/markers to Ftrace TRACE_EVENT
declarations. Extend TRACE_EVENT as needed. (a declaration sketch follows this
list)
# Douglas Santos
(1) LTTng ring buffer adaptation for Ftrace.
(1) Ftrace/LTTng trace format standardization.
(1) Extend NO_HZ support for trace streaming to other architectures (need
to add cpu idle notifiers and test; see the x86 sketch after this list).
(1) Make sure ltt-ascii kernel text dump fits well with streaming hooked into
cpu idle.
[Depends on cpu idle notifier port to other architectures]
(1) Support CPUs with scalable frequency using a time-consistent increment and
an approach that scales to SMP. (done for ARM OMAP3 UP only, but the OMAP3
approach should be tested and probably derived into an SMP implementation; a
sketch of the resynchronisation idea follows this list)
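For reference, this is the shape of a TRACE_EVENT declaration that the migration
targets. The event name and fields below are invented for illustration; in-tree
declarations live under include/trace/events/ and are instantiated with
CREATE_TRACE_POINTS.

  #include <linux/tracepoint.h>

  /* Hypothetical event, shown only to illustrate the TRACE_EVENT layout. */
  TRACE_EVENT(lttng_demo_wakeup,

          TP_PROTO(int pid, int prio),

          TP_ARGS(pid, prio),

          /* Binary layout of the event record in the trace buffer. */
          TP_STRUCT__entry(
                  __field(int, pid)
                  __field(int, prio)
          ),

          /* Copy the probe arguments into the record. */
          TP_fast_assign(
                  __entry->pid = pid;
                  __entry->prio = prio;
          ),

          /* Default pretty-printing used by the Ftrace text output. */
          TP_printk("pid=%d prio=%d", __entry->pid, __entry->prio)
  );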
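The cpu idle notifiers mentioned above are, at this point, an x86-only facility.
A rough sketch of how streaming could hook them there; the callback bodies are
only indicative:

  #include <linux/notifier.h>
  #include <asm/idle.h>   /* x86-only: IDLE_START, IDLE_END, idle_notifier_register() */

  static int trace_idle_notify(struct notifier_block *nb,
                               unsigned long action, void *unused)
  {
          switch (action) {
          case IDLE_START:
                  /* CPU goes idle: flush pending subbuffers now so that
                   * streaming does not need to wake the CPU up later. */
                  break;
          case IDLE_END:
                  /* CPU woke up: re-arm the periodic flush timer. */
                  break;
          }
          return NOTIFY_OK;
  }

  static struct notifier_block trace_idle_nb = {
          .notifier_call = trace_idle_notify,
  };

  static int __init trace_idle_init(void)
  {
          idle_notifier_register(&trace_idle_nb);
          return 0;
  }

Porting the item above essentially means providing an equivalent notification
point in the other architectures' idle loops.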
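One way to keep the trace clock monotonic across frequency changes is to fold
the time elapsed at the old rate into a base value from a cpufreq transition
notifier, then switch the cycles-to-ns scale. A UP-only sketch; the counter read
and scale computation are hypothetical placeholders, and an SMP version must
additionally keep the per-CPU bases consistent:

  #include <linux/cpufreq.h>
  #include <linux/seqlock.h>

  /* Illustrative state: time = base_ns + (cycles - base_cycles) * scale. */
  static u64 base_ns, base_cycles;
  static u32 cyc_to_ns_scale;
  static DEFINE_SEQLOCK(clock_lock);

  extern u64 read_cycle_counter(void);          /* hypothetical raw counter read */
  extern u32 compute_scale(unsigned int khz);   /* hypothetical scale derivation */

  u64 trace_clock_ns(void)
  {
          unsigned long seq;
          u64 ns;

          do {
                  seq = read_seqbegin(&clock_lock);
                  ns = base_ns + (read_cycle_counter() - base_cycles) *
                       cyc_to_ns_scale;
          } while (read_seqretry(&clock_lock, seq));
          return ns;
  }

  static int freq_notify(struct notifier_block *nb, unsigned long event,
                         void *data)
  {
          struct cpufreq_freqs *freqs = data;

          if (event == CPUFREQ_PRECHANGE) {
                  u64 now = read_cycle_counter();

                  write_seqlock(&clock_lock);
                  /* Fold time elapsed at the old rate into the base, then
                   * switch scales: the clock value stays continuous. */
                  base_ns += (now - base_cycles) * cyc_to_ns_scale;
                  base_cycles = now;
                  cyc_to_ns_scale = compute_scale(freqs->new);
                  write_sequnlock(&clock_lock);
          }
          return NOTIFY_OK;
  }

  static struct notifier_block freq_nb = { .notifier_call = freq_notify };
  /* registered with cpufreq_register_notifier(&freq_nb, CPUFREQ_TRANSITION_NOTIFIER) */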
* TODO (medium priority)
(3) LTTng trace session (support multiple active traces at once) integration
into Ftrace.
(3) LTTng and Ftrace DebugFS interface merge.
(3) LTTng trace clock time-stamping merge into mainline.
(3) NMI-safe tracing merge into mainline. (the reserve-path sketch after this
list shows where the NMI-safety comes from)
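The NMI-safety in LTTng's buffering scheme comes from reserving buffer space
with a local compare-and-exchange instead of taking locks, so events can be
written even from an NMI handler. A stripped-down sketch of the reserve step;
the buffer layout and helpers are simplified, and the caller is assumed to have
preemption disabled:

  #include <linux/percpu.h>
  #include <asm/local.h>

  struct demo_buf {
          local_t offset;         /* write position, one buffer per CPU */
          char data[4096];
  };

  static DEFINE_PER_CPU(struct demo_buf, demo_buf);

  /*
   * Reserve @len bytes in the current CPU's buffer. An NMI arriving
   * between the read and the cmpxchg merely forces a retry: no lock
   * is ever held, so the NMI handler itself may also reserve space.
   */
  static void *demo_reserve(size_t len)
  {
          struct demo_buf *buf = this_cpu_ptr(&demo_buf);
          long old, new;

          do {
                  old = local_read(&buf->offset);
                  new = old + len;
                  if (new > sizeof(buf->data))
                          return NULL;    /* full: drop the event, never block */
          } while (local_cmpxchg(&buf->offset, old, new) != old);

          return buf->data + old;
  }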
* Nice to have
(3) Bring stack dump in sync with new lttng.
(4) Dump mounts. (to fix)
(4) Add Xen support. (Trace buffer deallocation needs to be fixed)
(4) Integrate NPTL instrumentation (see PTT).
(4) Probe calibration kernel module.
(5) Add boot time tracing support.
(5) Integrate LTTng and lttd with LKCD.
(7) Integrate periodic dumping of perfctr hardware counters.
(8) Integrate SystemTap logging with LTTng.
(8) Integrate periodic dumping of SystemTap-computed information.
(9) Add support for setjmp/longjmp and jump tables instrumentation to
ltt-instrument-functions. (see the note after this list)
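Background for the item above: if ltt-instrument-functions follows the usual
gcc -finstrument-functions mechanism, every function entry and exit is reported
through two compiler-generated hooks. A minimal user-space illustration, with
printf standing in for real event emission:

  /* Build the code under observation with: gcc -finstrument-functions ... */
  #include <stdio.h>

  /* Keep the hooks themselves uninstrumented to avoid infinite recursion. */
  void __cyg_profile_func_enter(void *func, void *call_site)
          __attribute__((no_instrument_function));
  void __cyg_profile_func_exit(void *func, void *call_site)
          __attribute__((no_instrument_function));

  void __cyg_profile_func_enter(void *func, void *call_site)
  {
          printf("enter %p from %p\n", func, call_site);
  }

  void __cyg_profile_func_exit(void *func, void *call_site)
  {
          printf("exit  %p from %p\n", func, call_site);
  }

longjmp is the problem case: it skips the exit hooks of every frame it unwinds,
leaving the enter/exit pairing unbalanced, hence the dedicated support requested
above.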
* Done
- (2009) Port LTTng to ARM OMAP3 with power management and dynamic frequency scaling
support. (Done by Mathieu Desnoyers, funded by Nokia).
- (2009) Improvement of trace streaming power consumption efficiency (NO_HZ
support) (x86 only for now).
- (2009) Periodic flush for trace streaming (Mathieu Desnoyers).
- (2009) ASCII text output from LTTng. (started by Lai Jiangshan (Fujitsu),
completed by Mathieu Desnoyers)
LTTV Roadmap
Note: new feature development is currently done in the Linux Tools Project:
LTTng Integration. Meanwhile, LTTV is maintained as a known-stable viewer.
* Nice to have
(4) Statistics per time window.
(4) Add Xen per physical CPU view.
(4) Add Xen per vcpu view.
(4) Disable plugins when a threshold is reached (i.e. too many processes in the
control flow view): draw, and stop drawing once the threshold is reached. The
global statistics view can inhibit showing the per-process stats.
(4) Add a visual refinement: PID 0 could be named swapper instead of UNNAMED for
cpus > 0.
(4) Add event specific fields support to filter.
(4) Add a periodic event interval view. (useful to verify event periodicity)
(4) Create a graphical per-cpu activity view.
(4) Filter by target process.
(4) Compensate for time spent in probes in LTTV analysis.
(4) Add CPU, network, disk, memory usage histogram. [Per interval statistics]
(4) Add sort by process priority in the control flow view (must also instrument
priority information of the processes).
% Airbus
(5) Add Python scripting hooks.
(5) Add a GUI interface to take a hybrid trace.
(5) Automatically detect traces with too many processes and disable the
problematic operations.
(5) Event sequence detector (inspired by regular expressions; a sketch follows
this list).
(7) Create a hardware counter viewer (low-cost rate counters: L1 cache misses,
page faults, interrupts...). This will be a generalisation of the event rate
view into a view of the evolution of a user-definable event field.
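The event sequence detector could be built as a small state machine advanced by
every incoming event, in the spirit of a regular-expression match. A schematic
sketch; LTTV's real event structures and hook mechanism are richer than this:

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical pattern: report whenever these events occur in order
   * (other events may be interleaved between them). */
  static const char *pattern[] = { "syscall_entry", "page_fault", "syscall_exit" };
  #define PATTERN_LEN (sizeof(pattern) / sizeof(pattern[0]))

  static size_t state;    /* index of the next pattern element to match */

  static void feed_event(const char *event_name)
  {
          if (strcmp(event_name, pattern[state]) == 0) {
                  if (++state == PATTERN_LEN) {
                          printf("sequence matched\n");
                          state = 0;      /* restart detection */
                  }
          }
  }

A full regular-expression flavour would add alternation and repetition on top
of this linear pattern.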
* TO FIX
(10) Add cancel button to LTTV filter GUI window.
(10) Sometimes, in the control flow view, a process with a creation time of 0 is
created in addition to the real process itself. This seems to be caused by the
end of the process's life.
(10) Statistics do not take into account the time spent in the mode present at
the beginning of the trace. Example: real time spent in a system call on behalf
of process 0.
Mathieu Desnoyers