Mathieu Desnoyers 17-05-2004
This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.
First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.
Then, a framework where the event requests are handled by the main window with
fine granularity will be described. We will then discuss its advantages and
drawbacks compared to the first framework.
1. (Current) Boundaryless event reading
Currently, viewers request events in a time interval from the main window. They
also specify a (soft) maximum number of events to be delivered. In fact, the
number of events to read only gives a stop point: from there, only events with
the same timestamp will still be delivered.
Viewers register hooks themselves in the traceset context. When merging read
requests in the main window, all hooks registered by viewers will be called for
the union of all the read requests, because the main window has no control over
the hooks registered by each viewer.
The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not duplicate a read of the same time
interval: filtering by viewers would then be very hard. So, in order to achieve
this, time requests are sorted by start time, and process_traceset is called for
each time request. We keep the last event time between each read: if the start
time of the next read is lower than the time already reached, we continue
reading from the current position.
We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we cannot do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.
The viewers have to filter the events delivered by traceset reading, because
another viewer may have asked for a totally (or partially) different time
interval.

Weaknesses
- process_middle does not guarantee the number of events read
First of all, a viewer that requests events to process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct call
to traceset_middle for a specific number of events will deliver _at least_ that
quantity of events, plus the ones that have the same timestamp as the last one
delivered.
Viewer writers will have to deal with many border effects caused by the
particularities of the reading. They will be required to select the information
they need from their input by filtering.
- Lack of encapsulation and difficulty of testing
The viewer's writer will have to take into account all the border effects
caused by the interaction with other modules. This means that even if a viewer
works well alone or with another viewer, it is possible that new bugs arise when
a new viewer comes around. So, even if a perfect testbench works well for a
viewer, it does not confirm that no new bug will arise when another viewer is
loaded at the same moment, asking for different time intervals.
- Duplication of work
Time-based filters and event counters will have to be implemented on the
viewer's side, which is a duplication of the functionality that would normally
be expected from the tracecontext API.
- Lack of control over the data input
As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower-level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.
All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for a different
time interval, the detailed events list will have to filter out all the events
delivered originally to the control flow view. This case can occur quite often.

Strengths
- Simple concatenation of time intervals at the main window level.
Having the opportunity of delivering more events than necessary to the viewers
means that we can concatenate time intervals and numbers of events requested
fairly easily. On the other hand, it is hard to determine whether some specific
cases will behave incorrectly, and in-depth testing is impossible.
- No duplication of the tracecontext API
Viewers deal directly with the tracecontext API for registering hooks, removing
a layer of encapsulation.
2. (Proposed) Strict boundaries event reading
The idea behind this method is to provide to the viewers exactly the events
they requested: no more, no less.
It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.
It also means that the lttvwindow API will have to deal with the viewers'
hooks. Viewers will not be allowed to add them directly in the context; they
will give them to the lttvwindow API, along with the time interval or the
position and number of events. The lttvwindow API will then take care of adding
and removing hooks for the different time intervals requested. That means that
hook insertion and removal will be done between each traceset processing, based
on the time intervals and event positions related to each hook. We must
therefore provide a simple interface for passing hooks between the viewers and
the main window, making them easier to manage from the main window. A
modification to the LttvHooks type solves this problem.
Added to the lttvwindow API :

void lttvwindow_events_request(Tab *tab,
                               const EventsRequest *events_request);

void lttvwindow_events_request_remove_all(Tab *tab,
                                          gconstpointer viewer);
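As a rough illustration of how a viewer could use this pair of functions, here
is a sketch where the LTTV types are replaced by minimal stand-ins: the Tab
argument and the g_idle scheduling are omitted, and the tab's pending-request
list is modelled as a static array. Only the owner-based bookkeeping is shown;
the real functions do much more.

```c
#include <assert.h>
#include <stddef.h>

/* Reduced stand-in for EventsRequest: just the fields this sketch needs. */
typedef struct EventsRequest {
    void *owner;     /* Owner of the request (the viewer) */
    int   stop_flag; /* Continue: FALSE, Stop: TRUE */
} EventsRequest;

#define MAX_REQUESTS 16
static EventsRequest pending[MAX_REQUESTS];
static int n_pending = 0;

/* Queue a request; the real API would also register the g_idle servicing
 * function if none is registered yet. */
void lttvwindow_events_request(const EventsRequest *req)
{
    if (n_pending < MAX_REQUESTS)
        pending[n_pending++] = *req;
}

/* Remove every pending request whose "owner" field matches the viewer. */
void lttvwindow_events_request_remove_all(const void *viewer)
{
    int i = 0;
    while (i < n_pending) {
        if (pending[i].owner == viewer)
            pending[i] = pending[--n_pending]; /* swap-remove */
        else
            i++;
    }
}

int lttvwindow_pending_count(void) { return n_pending; }
```

A viewer whose time interval changes would call the remove-all function with
its own pointer as owner, then queue a fresh request for the new interval.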
Internal function :

- lttvwindow_process_pending_requests
Events Requests Removal
A new API function will be necessary to let viewers remove all the events
requests they have made previously. By allowing this, no more out-of-bound
requests will be serviced: a viewer that sees its time interval change before
the first servicing is completed can clear its previous events requests and
make a new one for the new interval needed, considering the finished chunks as
completed.
It is also very useful for dealing with the viewer destruction case: the viewer
just has to remove its events requests from the main window before it gets
destroyed.
Permitted GTK Events Between Chunks
All GTK events will be enabled between chunks. A viewer could ask for a long
computation that has no impact on the display: in that case, it is necessary to
keep the graphical interface active. While a processing is in progress, the
whole graphical interface must remain enabled.
We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request and any GTK event.
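The chunked control flow might look like the following sketch. To keep it
self-contained, gtk_events_pending() and gtk_main_iteration() are replaced by
injected callbacks, so the loop's shape can be shown without a running GTK main
loop; the names and the chunk model are illustrative, not the real
implementation.

```c
#include <assert.h>

/* Stand-ins for gtk_events_pending() / gtk_main_iteration(). */
typedef int  (*events_pending_fn)(void);
typedef void (*dispatch_event_fn)(void);

/* Process one bounded chunk at a time; between chunks, let a pending GTK
 * event run and stop, so the interface stays responsive.  Returns the
 * number of chunks processed before completion or interruption. */
int service_requests(int chunks_left,
                     events_pending_fn pending,
                     dispatch_event_fn dispatch)
{
    int processed = 0;
    while (chunks_left > 0) {
        /* ... process one chunk of CHUNK_NUM_EVENTS events ... */
        chunks_left--;
        processed++;
        if (pending()) { /* a GTK event arrived between chunks */
            dispatch();  /* execute it before going on */
            break;       /* the algorithm restarts afterwards */
        }
    }
    return processed;
}

/* Test helpers: a fake event source that fires once, after N checks. */
static int fake_pending_after = -1;
static int fake_pending(void) { return fake_pending_after-- == 0; }
static void fake_dispatch(void) { }
```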
Background Computation Request
A background computation has a trace scope, and is therefore not linked to a
main window. It is not detailed in this document; see
requests_servicing_schedulers.txt.
A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.
A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.
If a tab change occurs, we still want to do background processing. The events
requests must be stored in a list located in the same scope as the traceset
context. Right now, this is tab scope. All functions called from the request
servicing function must _not_ use the current_tab concept, as it may change.
The idle function must then take a tab, and not the main window, as a
parameter.
If a tab is removed, its associated idle events requests servicing function
must also be removed.
It now looks a lot more useful to give a Tab* to the viewer instead of a
MainWindow*, as all the information needed by the viewer is located at the tab
level. It will diminish the dependence upon the current tab concept.
Idle function (lttvwindow_process_pending_requests)
The idle function must return FALSE to be removed from the idle functions when
no more events requests are pending; otherwise, it returns TRUE. It will
service requests until none are left.
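The idle-function contract described above can be sketched as follows, with
the pending work reduced to a simple counter standing in for the events
requests lists; the function name and the counter are illustrative only.

```c
#include <assert.h>

/* GLib keeps calling an idle function while it returns TRUE and
 * unregisters it when it returns FALSE. */
#define TRUE  1
#define FALSE 0
typedef int gboolean;

static int requests_pending = 0;

gboolean process_pending_requests_sketch(void)
{
    if (requests_pending == 0)
        return FALSE;   /* nothing left: removed from the idle functions */
    /* ... service requests until a GTK event interrupts, or done ... */
    requests_pending--; /* one request completed during this pass */
    return TRUE;        /* call us again on the next idle */
}
```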
The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate them. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset.
- lttvwindow_events_request
It adds an EventsRequest struct to the list of pending events requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.
typedef struct _EventsRequest {
  gpointer owner;           /* Owner of the request */
  gpointer viewer_data;     /* Unset : NULL */
  gboolean servicing;       /* service in progress: TRUE */
  LttTime start_time;       /* Unset : { G_MAXUINT, G_MAXUINT } */
  LttvTracesetContextPosition *start_position;  /* Unset : NULL */
  gboolean stop_flag;       /* Continue: FALSE, Stop: TRUE */
  LttTime end_time;         /* Unset : { G_MAXUINT, G_MAXUINT } */
  guint num_events;         /* Unset : G_MAXUINT */
  LttvTracesetContextPosition *end_position;    /* Unset : NULL */
  LttvHooks *before_chunk_traceset;   /* Unset : NULL */
  LttvHooks *before_chunk_trace;      /* Unset : NULL */
  LttvHooks *before_chunk_tracefile;  /* Unset : NULL */
  LttvHooks *event;                   /* Unset : NULL */
  LttvHooksById *event_by_id;         /* Unset : NULL */
  LttvHooks *after_chunk_tracefile;   /* Unset : NULL */
  LttvHooks *after_chunk_trace;       /* Unset : NULL */
  LttvHooks *after_chunk_traceset;    /* Unset : NULL */
  LttvHooks *before_request;          /* Unset : NULL */
  LttvHooks *after_request;           /* Unset : NULL */
} EventsRequest;
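The "Unset" sentinels in the comments above are what let the main window tell
which boundary fields a viewer actually provided: a time request leaves the
positions NULL, a position request leaves the times at { G_MAXUINT, G_MAXUINT }.
Here is a sketch of that convention with reduced stand-ins for the LTTV types
(the helper names are illustrative, not part of the API):

```c
#include <assert.h>
#include <limits.h>

#define G_MAXUINT UINT_MAX

/* Reduced stand-in for LttTime. */
typedef struct { unsigned tv_sec, tv_nsec; } LttTime;

static const LttTime ltt_time_infinite = { G_MAXUINT, G_MAXUINT };

/* An "unset" time is the { G_MAXUINT, G_MAXUINT } sentinel. */
static int time_is_unset(LttTime t)
{
    return t.tv_sec == G_MAXUINT && t.tv_nsec == G_MAXUINT;
}

/* Only the boundary fields of EventsRequest. */
typedef struct {
    LttTime  start_time;  /* Unset : { G_MAXUINT, G_MAXUINT } */
    LttTime  end_time;    /* Unset : { G_MAXUINT, G_MAXUINT } */
    unsigned num_events;  /* Unset : G_MAXUINT */
} EventsRequestBounds;

/* A pure time-interval request: [start_s, end_s) seconds, the event
 * count deliberately left unset. */
static EventsRequestBounds make_time_request(unsigned start_s, unsigned end_s)
{
    EventsRequestBounds req;
    req.start_time.tv_sec = start_s; req.start_time.tv_nsec = 0;
    req.end_time.tv_sec   = end_s;   req.end_time.tv_nsec   = 0;
    req.num_events        = G_MAXUINT; /* unset: no count limit */
    return req;
}
```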
- lttvwindow_events_request_remove_all
It removes from the pool all the events requests whose "owner" field matches
the owner pointer given as argument.

It calls the traceset/trace/tracefile end hooks for each removed request that
is currently being serviced.
- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time intervals and
position requests. It does so with the following algorithm organizing the
process traceset calls. Here is a detailed description of the way it works :
- Revised Events Requests Servicing Algorithm (v2)
The reads are split into chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If this happens, we want to start the
algorithm over from the beginning. The after traceset/trace/tracefile hooks are
called after each chunk, and the before traceset/trace/tracefile hooks are
called when the request processing resumes. Before and after request hooks are
called respectively before and after the request processing.
Data structures necessary :

List of requests added to context : list_in
List of requests not added to context : list_out

Initial state :
list_in : empty
list_out : many events requests

0.2 Seek traces positions to current context position.
A. While (list_in !empty or list_out !empty)
  1. If list_in is empty (need a seek)
    1.1 Add requests to list_in
      1.1.1 Find all time requests with lowest start time in list_out (ltime)
      1.1.2 Find all position requests with lowest position in list_out (lpos)
      1.1.3 If lpos.start time < ltime
        - Add lpos to list_in, remove them from list_out
      1.1.4 Else, (lpos.start time >= ltime)
        - Add ltime to list_in, remove them from list_out
    1.2 Seek
      1.2.1 If the first request in list_in is a time request
        - If first req in list_in start time != current time
          - seek to that time
      1.2.2 Else, the first request in list_in is a position request
        - If first req in list_in pos != current pos
          - seek to that position
    1.3 Add hooks and call before request for all list_in members
      1.3.1 If !servicing
        - begin request hooks called
        - servicing = TRUE
      1.3.2 call before chunk
      1.3.3 events hooks added
  2. Else, list_in is not empty, we continue a read
    2.0 For each req of list_in
      - Call before chunk
      - events hooks added
    2.1 For each req of list_out
      - if req.start time == current context time
           or req.start position == current position
        - Add to list_in, remove from list_out
        - If !servicing
          - Call before request
          - servicing = TRUE
        - Call before chunk
        - events hooks added
  3. Find end criteria
    3.1 End time
      3.1.1 Find lowest end time in list_in
      3.1.2 Find lowest start time in list_out (>= than current time*)
            * To eliminate lower priority requests (not used)
      3.1.3 Use lowest of both as end time
    3.2 Number of events
      3.2.1 Find lowest number of events in list_in
      3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
    3.3 End position
      3.3.1 Find lowest end position in list_in
      3.3.2 Find lowest start position in list_out (>= than current position)
      3.3.3 Use lowest of both as end position
  4. Call process traceset middle
    4.1 Call process traceset middle (use end criteria found in 3)
        * note : an end criterion can also be a viewer's hook returning TRUE
  5. After process traceset middle
    - if current context time > traceset.end time
      - For each req in list_in
        - Remove events hooks for req
        - Call end chunk for req
        - Call end request for req
        - remove req from list_in
    5.1 For each req in list_in
      - Call end chunk for req
      - Remove events hooks for req
      - If req.num_events == 0
           or current context time >= req.end time
           or req.end pos == current pos
           or req.stop_flag == TRUE
        - Call end request for req
        - remove req from list_in
  If GTK Event pending : break A loop
B. When interrupted between chunks
  1. for each request in list_in
    1.1 Use current position as start position
    1.2 Remove start time
    1.3 Move from list_in to list_out
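Step B above can be sketched as follows, with the requests reduced to the two
fields the step touches and the lists modelled as plain arrays; the function
and field names are illustrative, not from the real implementation.

```c
#include <assert.h>

/* Reduced request: only its start criterion matters here. */
typedef struct {
    int  has_start_time;  /* nonzero when a start time is set */
    long start_position;  /* -1 when unset */
} Req;

/* When a GTK event interrupts the servicing between chunks, every request
 * still being read is re-queued with the current position as its new
 * start point, so the next idle pass resumes exactly where this one
 * stopped.  Moves all *n_in requests from list_in to list_out. */
void interrupt_between_chunks(Req *list_in, int *n_in,
                              Req *list_out, int *n_out,
                              long current_position)
{
    int i;
    for (i = 0; i < *n_in; i++) {
        list_in[i].start_position = current_position; /* step 1.1 */
        list_in[i].has_start_time = 0;                /* step 1.2 */
        list_out[(*n_out)++] = list_in[i];            /* step 1.3 */
    }
    *n_in = 0;
}
```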
End criteria for process traceset middle :
If a criterion is reached, the event is out of boundaries and we return.
  Current time >= End time
  Event count > Number of events
  Current position >= End position
  Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time.
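The per-event test implied by the criteria above might be sketched like this,
with times and positions reduced to plain integers; the struct and function
names are illustrative stand-ins, not the real process traceset middle code.

```c
#include <assert.h>

/* Snapshot of the reading state against the end criteria of a chunk. */
typedef struct {
    long     current_time,     end_time;
    unsigned event_count,      num_events;
    long     current_position, end_position;
    int      last_hook_ret;    /* last hook list returned TRUE */
} MiddleState;

/* Returns nonzero when the current event is out of boundaries and the
 * read must return.  Note the >= comparisons on time and position: a
 * request starting exactly where another one ends must not see the
 * boundary event twice. */
int end_criterion_reached(const MiddleState *s)
{
    return s->current_time     >= s->end_time
        || s->event_count      >  s->num_events
        || s->current_position >= s->end_position
        || s->last_hook_ret;
}
```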
Strengths

- Removes the need for filtering of information supplied to the viewers.

- Viewers have a better control on their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
  reading.