<html>
<body>
<center><big><big>LTTV &amp; LTTng roadmap</big></big></center>
<br>
<br>
Here are the roadmaps for LTTV and LTTng development. I use a priority index
for the TODO items:<br>
(1) : very high priority<br>
(10): lowest priority<br>
<br>
<br>
Dependencies are written between brackets [ ].<br>
The # symbol marks who is currently working on the item.<br>
The % symbol marks who is interested in the realisation of the item.<br>
The $ symbol marks who is contributing funding for the realisation of the item.<br>
<br>
<br>
<big>LTT Next Generation Roadmap</big><br>
<br>
* TODO (high priority)<br>
<BR>
(1) LTTng event description: move from tracepoints/markers to Ftrace TRACE_EVENT
declarations. Extend TRACE_EVENT as needed.<br>
# <A HREF="mailto:douglas.santos@polymtl.ca">Douglas Santos</A><BR>
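For reference, a TRACE_EVENT declaration groups the probe prototype, the binary
record layout, and the pretty-print format in one block. A minimal sketch of
the shape of such a declaration (the event name and fields below are
hypothetical, not existing LTTng instrumentation, and the usual trace header
boilerplate is omitted):<br>

```c
/* Hypothetical example event; not part of the LTTng instrumentation set.
 * The surrounding trace header boilerplate (TRACE_SYSTEM definition,
 * CREATE_TRACE_POINTS) is omitted for brevity. */
TRACE_EVENT(demo_wakeup,

	/* Prototype and arguments of the tracepoint probe. */
	TP_PROTO(struct task_struct *p),
	TP_ARGS(p),

	/* Binary layout of the recorded event. */
	TP_STRUCT__entry(
		__array(char, comm, TASK_COMM_LEN)
		__field(pid_t, pid)
	),

	/* How the fields are filled in at trace time. */
	TP_fast_assign(
		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
		__entry->pid = p->pid;
	),

	/* Human-readable output format. */
	TP_printk("comm=%s pid=%d", __entry->comm, __entry->pid)
);
```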
(1) LTTng ring buffer adaptation for Ftrace.<br>
(1) Ftrace/LTTng trace format standardization.<BR>
(1) Extend NO_HZ support for trace streaming to other architectures (need
to add CPU idle notifiers and test).<br>
(1) Make sure the ltt-ascii kernel text dump fits well with streaming hooked into
CPU idle.<br>
[Depends on cpu idle notifier port to other architectures]<br>
(1) Support CPUs with frequency scaling using a time-consistent increment and
an approach scalable to SMP. (done for ARM OMAP3 UP only, but the OMAP3
approach should be tested and probably derived into an SMP implementation)<br>
<br>
<br>
* TODO (medium priority)<br>
(3) LTTng trace session (support multiple active traces at once) integration
into Ftrace.<br>
(3) LTTng and Ftrace DebugFS interface merge.<br>
(3) LTTng trace clock time-stamping merge into mainline.<br>
(3) NMI-safe tracing merge into mainline.<br>
<br>
<br>
* Nice to have<br>
<br>
(3) Bring stack dump in sync with new lttng.<br>
(4) Dump mounts. (to fix)<br>
(4) Add Xen support. (Trace buffer deallocation needs to be fixed)<br>
(4) Integrate NPTL instrumentation (see
<A HREF="http://nptltracetool.sourceforge.net/">PTT</A>).<br>
(4) Probe calibration kernel module.<br>
(5) Add boot time tracing support.<br>
(5) Integrate LTTng and lttd with LKCD.<br>
(7) Integrate periodic dump of perfctr hardware counters.<br>
(8) Integrate SystemTap logging with LTTng.<br>
(8) Integrate periodic dump of SystemTap computed information.<br>
(9) Add support for setjmp/longjmp and jump table instrumentation to
ltt-instrument-functions.<br>
<br>
<br>
* Done<br>
<br>
- (2009) Port LTTng to ARM OMAP3 with power management and dynamic frequency scaling
support. (Done by Mathieu Desnoyers, funded by Nokia).<br>
- (2009) Improvement of trace streaming power consumption efficiency (NO_HZ
support) (x86 only for now).<br>
- (2009) Periodic flush for trace streaming (Mathieu Desnoyers).<br>
- (2009) ASCII text output from LTTng. (started by Lai Jiangshan (Fujitsu),
completed by Mathieu Desnoyers)<br>
<br>
<br>
<big>LTTV Roadmap</big><br>
<br>
Note: new feature development is currently done in the Linux Tools Project:
LTTng Integration. Meanwhile, LTTV is maintained as a known-stable viewer.<br>
<br>
<br>
* Nice to have<br>
<br>
(4) Statistics per time window.<br>
(4) Add Xen per physical CPU view.<br>
(4) Add Xen per vcpu view.<br>
(4) Disable plugins when a threshold is reached (e.g. too many processes in the
control flow view). Draw and, when the threshold is reached, stop drawing. The
global statistics view can inhibit showing the per-process stats.<br>
(4) Add a visual artifact: PID 0 could be named swapper instead of UNNAMED for
CPUs &gt; 0.<br>
(4) Add event-specific field support to the filter.<br>
(4) Add a periodic event interval view. (useful to verify event periodicity)<br>
(4) Create a graphical per-CPU activity view.<br>
(4) Filter by target process.<br>
(4) Compensate for time spent in probes in LTTV analysis.<br>
(4) Add CPU, network, disk, and memory usage histograms. [Per interval statistics]<br>
(4) Add sort by process priority in the control flow view (must also instrument
the priority information of the processes).<br>
% Airbus<br>
(5) Add Python scripting hooks.<br>
(5) Add a GUI interface to take a hybrid trace.<br>
(5) Automatically detect traces with too many processes and disable faulty operations.<br>
(5) Event sequence detector (inspired by regular expressions).<br>
(7) Create a hardware counter viewer (low-cost rate counters: L1 cache misses,
page faults, interrupts...). This will be a generalisation of the event rate
view into a view of the evolution of a user-definable event field.<br>
<br>
* TO FIX<br>
<br>
(10) Add cancel button to LTTV filter GUI window.<br>
(10) Sometimes, in the control flow view, a process with 0 creation time is
created in addition to the real process itself. Seems to be caused by the end
of the process's life.<br>
(10) Statistics do not take into account the time spent in the mode present at
the beginning of the trace. Example: real time spent in a system call on behalf
of process 0.<br>
<br>
<br>
Mathieu Desnoyers<br>


</body>
</html>