
Redesign of the GUI detailed event list

Mathieu Desnoyers 08/2005

The basic problem with this list is that it uses the number of events, not the
time, as its vertical axis (for the Y axis scrollbar).

Seeking in the traces is done by time. We have no clue about the number of
events between two times without doing preparsing.

If we want to fully reuse textDump, it's better if we depend upon state
computation. It would be good to make the viewer work with this information
missing, though.

textDump's print_field should be put in a lttv/lttv core file, so we can use it
as is in the detailed event list module without depending upon batchAnalysis.


* With preparsing only :

The problem then becomes simpler :

We can precompute the event number while doing state computation and save it
periodically with saved states. We can then use the event number in the trace
as the scrollbar value, which, when scrolled, would result in a search by
event number in the saved states.
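
As a rough sketch, the scroll handler could binary search the saved states by
event count (struct saved_state and its fields below are hypothetical names,
not the actual lttv state saving structures) :

#include <stddef.h>

/* Hypothetical snapshot saved periodically during state computation. */
struct saved_state {
        unsigned long event_count;      /* events counted up to this snapshot */
        /* ... saved trace position and state data would go here ... */
};

/* Binary search for the last snapshot whose event_count is <= target.
 * states[] is sorted by event_count and starts at event 0; returns the
 * index of the snapshot to restore. */
static size_t find_saved_state(const struct saved_state *states,
                               size_t nb_states, unsigned long target)
{
        size_t low = 0, high = nb_states;

        while (high - low > 1) {
                size_t mid = low + (high - low) / 2;

                if (states[mid].event_count <= target)
                        low = mid;
                else
                        high = mid;
        }
        return low;
}

From the snapshot found, the remaining (target - event_count) events are
replayed forward, which is the cost estimated below.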

How much time would it take to seek back to the wanted position from the last
saved state ?

compudj@dijkstra:~/local/bin$ ./lttv -m batchtest -1 -2 -t
/home/compudj/traces/200MB
** Message: Processing trace while counting events (12447572 events in 14.0173
seconds)
** Message: Processing trace while updating state (9.46535 seconds)

9.46535 s / 12447572 events * 50000 events = 0.038 s

38 ms latency shouldn't be too noticeable by a user when scrolling.

(note : while counting events, batchtest also verifies time flow integrity and
gets the position for each event (not optimal); that's why it takes 14 s)

As an optimisation, we could use a backing text buffer (an array of strings),
where we would save the 50000 computed events between two consecutive saved
states.

Memory required : 50000 * 20 bytes/event = 1MB

This seems ok, but costly. It would be better, in fact, not to depend on the
saved states interval for such a value : we could keep a 1000-event array, for
instance (for a 20KB cost, which is much better).

The backing text buffer would, by itself, make sure it holds a sufficient
number of events so that a scroll up/down of one page can be answered
directly. That implies that a scroll up/down would first update the shown
fields, and only afterward make the backing buffer resync its events in the
background. In the case where the events are not directly available, it would
have to update the buffer in the foreground and only then show the requested
events.
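
As a rough sketch, this backing buffer could be a fixed-size ring of formatted
event strings (all names below are hypothetical, none of them come from the
actual lttv code) :

#include <stddef.h>

#define BACKING_BUF_SIZE 1000           /* ~20KB at 20 bytes/event */

/* Ring of formatted event lines. */
struct backing_buffer {
        char *lines[BACKING_BUF_SIZE];
        size_t start;                   /* index of the oldest entry */
        size_t count;                   /* number of valid entries */
        unsigned long first_event_no;   /* event number of the oldest entry */
};

/* Return the formatted line for event number 'no', or NULL when it is
 * outside the buffered window : the caller must then refill the buffer
 * in the foreground before redrawing. */
static const char *backing_buffer_get(const struct backing_buffer *buf,
                                      unsigned long no)
{
        if (no < buf->first_event_no ||
            no - buf->first_event_no >= buf->count)
                return NULL;
        return buf->lines[(buf->start + (no - buf->first_event_no))
                          % BACKING_BUF_SIZE];
}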


Important note : this design doesn't support filtering of events, which is
an important downside.


* If we want the viewer to be able to show information without preparsing :

This is the hardest form the problem can take. We have to seek by time (even
the scrollbar must seek by time), but increment/decrement event by event when
using the scrollbar up/down, page up/page down. Let's call these "far scroll"
and "near scroll", respectively.

A far scroll must resync the trace to the time requested by the scrollbar
value.

A near scroll must sync the trace to a time prior to the requested event, show
the requested events, and then sync the scrollbar value (without updating the
events) to the shown event time.
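
A minimal sketch of the dispatch between the two scroll kinds (the enum and
all three callbacks are hypothetical, not part of the real lttvwindow API) :

typedef double lttv_time;

enum scroll_kind {
        SCROLL_NEAR,    /* arrow keys, page up/page down : step by events */
        SCROLL_FAR,     /* scrollbar thumb dragged : resync by time */
};

/* Assumed viewer callbacks, implemented elsewhere : stepping returns the
 * time of the first event actually shown. */
extern void event_list_seek_time(lttv_time time);
extern lttv_time event_list_step_events(long nb_events);
extern void scrollbar_set_time(lttv_time time);     /* no event update */

static void handle_scroll(enum scroll_kind kind, lttv_time scrollbar_time,
                          long nb_events)
{
        if (kind == SCROLL_FAR) {
                /* Far scroll : resync the trace to the requested time. */
                event_list_seek_time(scrollbar_time);
        } else {
                /* Near scroll : step by events first... */
                lttv_time shown = event_list_step_events(nb_events);

                /* ...then resync the scrollbar to the shown event time,
                 * without triggering another event update. */
                scrollbar_set_time(shown);
        }
}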

* seek n events backward

We have no information about how far back we must request events in the
trace :

The algorithm would look like :

seek_n_events_backward(current time, current position, time_offset, filter)
Returns : a TracesetPosition
- If the current time < beginning of trace, it means we cannot get any more
  events; inform the requester that a list of fewer than n events is ready.
- Else, request a read starting time_offset before the current time, calling
  the per event hook, and calling the after_traceset hook when finished. The
  end position would be the position of the current first event.

per_event_hook
- if filter returns true
  - Append the traceset position to a list of maximum size n, removing the
    oldest entries when the list overflows.

after_traceset_hook
- if the list has fewer than n entries, invoke a subsequent
  seek_n_events_backward iteration to complete the list. The new time_offset
  is the last time_offset used, multiplied by 2. (This can be done by tail
  recursion, if we want to split this operation into multiple segments, or by
  an iterative algorithm : seek_n_events_backward would be a while() calling
  its own process_traceset_middle(). An iterative sketch follows this list.)
- if the list has a size of n, it's complete : call the viewer
  get_print_events hook.
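
An iterative sketch of the doubling search (read_events_between and the types
are stand-ins for the real tracecontext calls; for simplicity, it re-reads the
whole enlarged window instead of stopping at the previous first event as
described above; it assumes n > 0 and a non-zero initial time_offset) :

#include <stdbool.h>
#include <stddef.h>

typedef double lttv_time;
typedef long traceset_position;

/* Stand-in : deliver every event in [start, end) to 'hook'. */
extern void read_events_between(lttv_time start, lttv_time end,
                                void (*hook)(traceset_position, void *),
                                void *hook_data);

/* Ring keeping the positions of the last n matching events. */
struct backward_ctx {
        traceset_position *positions;   /* array of n entries */
        size_t n;                       /* wanted number of events */
        size_t count;                   /* matching events seen so far */
        bool (*filter)(traceset_position);
};

static void per_event_hook(traceset_position pos, void *data)
{
        struct backward_ctx *ctx = data;

        if (ctx->filter && !ctx->filter(pos))
                return;
        /* List of maximum size n : the oldest entries fall out. */
        ctx->positions[ctx->count % ctx->n] = pos;
        ctx->count++;
}

/* Double time_offset until n matching events are found or the trace
 * beginning is reached. Returns how many events were collected (may be
 * fewer than n near the trace start). */
size_t seek_n_events_backward(lttv_time current_time, lttv_time trace_start,
                              lttv_time time_offset, struct backward_ctx *ctx)
{
        lttv_time start;

        do {
                start = current_time - time_offset;
                if (start < trace_start)
                        start = trace_start;
                ctx->count = 0;  /* re-read the enlarged window from scratch */
                read_events_between(start, current_time, per_event_hook, ctx);
                time_offset *= 2;  /* not enough events : double the window */
        } while (ctx->count < ctx->n && start > trace_start);

        return ctx->count < ctx->n ? ctx->count : ctx->n;
}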


* seek n events forward

seek_n_events_forward(current position, filter)
- Simple : seek to the current position, then request a read of the trace
  calling an event counting hook (the count starts at 0).

event_counting_hook
- if filter returns true
  - increment the event count.
- if the event count > the requested count, inform that the current position
  is the wanted position. Return TRUE, so the read will stop.
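
A sketch of the forward case, against the same kind of hypothetical API
(read_events_from is a stand-in; it must call the hook on each event until
the hook returns TRUE) :

#include <stdbool.h>

typedef long traceset_position;

/* Stand-in : read events from 'start', calling 'hook' on each event
 * until the hook returns true. */
extern void read_events_from(traceset_position start,
                             bool (*hook)(traceset_position, void *),
                             void *hook_data);

struct forward_ctx {
        unsigned long count;        /* matching events seen so far */
        unsigned long requested;    /* n */
        traceset_position found;    /* wanted position, once reached */
        bool (*filter)(traceset_position);
};

/* Returns TRUE to stop the read once n matching events were skipped. */
static bool event_counting_hook(traceset_position pos, void *data)
{
        struct forward_ctx *ctx = data;

        if (ctx->filter && !ctx->filter(pos))
                return false;
        if (++ctx->count > ctx->requested) {
                ctx->found = pos;  /* the current position is the wanted one */
                return true;       /* stop the read */
        }
        return false;
}

traceset_position seek_n_events_forward(traceset_position current,
                                        unsigned long n,
                                        bool (*filter)(traceset_position))
{
        struct forward_ctx ctx = { 0, n, current, filter };

        read_events_from(current, event_counting_hook, &ctx);
        return ctx.found;
}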


* Printing events

get_print_events
- seek to the position at the beginning of the list. The end position is the
  current one (not the one in the list : the one currently shown). Call an
  events request between these positions, printing the fields to the strings
  shown in the viewer.
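
A sketch of that request (read_events_between_positions and this print_field
signature are stand-ins; the real print_field, once moved into the lttv core
as suggested above, would take the actual event and field types) :

#include <stddef.h>

typedef long traceset_position;

/* Stand-ins for the real tracecontext / textDump calls. */
extern void read_events_between_positions(traceset_position start,
                                          traceset_position end,
                                          void (*hook)(traceset_position,
                                                       void *),
                                          void *hook_data);
extern void print_field(traceset_position pos, char *buf, size_t len);

#define LINE_LEN 256

struct print_ctx {
        char (*lines)[LINE_LEN];    /* one formatted string per event */
        size_t next;
};

static void print_event_hook(traceset_position pos, void *data)
{
        struct print_ctx *ctx = data;

        /* Format the event's fields into the string the viewer shows. */
        print_field(pos, ctx->lines[ctx->next++], LINE_LEN);
}

/* Fill 'lines' with the events between the first position in the list
 * and the currently shown one. */
void get_print_events(traceset_position list_begin,
                      traceset_position current_shown,
                      char (*lines)[LINE_LEN])
{
        struct print_ctx ctx = { lines, 0 };

        read_events_between_positions(list_begin, current_shown,
                                      print_event_hook, &ctx);
}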



seek_n_events backward and forward seem to be interesting algorithms that
should be implemented in the tracecontext library. With those helpers, it
would become simpler to implement a detailed event list that does not depend
on state computation.
