Added to the lttvwindow API :
-- lttvwindow_events_request
-( MainWindow *main_win,
- LttTime start_time,
- LttvTracesetPosition start_position,
- LttTime end_time,
- guint num_events,
- LttvTracesetPosition end_position,
- LttvHooksById before_traceset,
- LttvHooksById before_trace,
- LttvHooksById before_tracefile,
- LttvHooksById middle,
- LttvHooksById after_tracefile,
- LttvHooksById after_trace,
- LttvHooksById after_traceset)
+void lttvwindow_events_request
+( Tab *tab,
+ const EventsRequest *events_request);
+
+void lttvwindow_events_request_remove_all
+( Tab *tab,
+ gconstpointer viewer);
Internal functions :
- lttvwindow_process_pending_requests
+Events Requests Removal
+
+A new API function is necessary to let viewers remove all the events requests
+they have previously made. This prevents out-of-bounds requests from being
+serviced : a viewer whose time interval changes before the first servicing is
+completed can clear its previous events requests and make a new one for the
+new interval, considering the finished chunks as completed areas.
+
+It is also very useful for dealing with the viewer destruction case : the viewer
+just has to remove its events requests from the main window before it gets
+destroyed.
+
+
+Permitted GTK Events Between Chunks
+
+All GTK events will be enabled between chunks, because background processing
+and high priority requests are handled as the same case : while background
+processing is in progress, the whole graphical interface must stay enabled.
+
+We needed to deal with the coherence of background processing and diverse GTK
+events anyway. This algorithm provides a generalized way to deal with any type
+of request and any GTK event.
+
+
+Background Computation Request
+
+Two types of background computation can be requested by a viewer : state
+computation (main window scope) or viewer-specific background computation.
+
+A background computation request is made via lttvwindow_events_request, with
+the priority field set to a low priority.
+
+In the case of a background computation whose viewer pointer field is set to
+NULL, a lttvwindow_events_request_remove_all done on a viewer pointer will not
+affect the state computation, as no viewer pointer was passed in the initial
+request. This is the expected result. The background processings that call a
+viewer's hooks, however, will be removed.
+
+
+A New "Redraw" Button
+
+It will be used to redraw the viewers entirely. It is useful to restart the
+servicing after a "stop" action.
+
+A New "Continue" Button
+
+It will tell the viewers to send requests for damaged areas. It is useful to
+complete the servicing after a "stop" action.
+
+
+
+Tab change
+
+If a tab change occurs, we still want to do background processing. Events
+requests must be stored in a list located in the same scope as the traceset
+context. Right now, this is tab scope. All functions called from the request
+servicing function must _not_ use the current_tab concept, as it may change.
+The idle function must therefore take a tab, not the main window, as
+parameter.
+
+If a tab is removed, its associated idle events requests servicing function must
+also be removed.
+
+It now looks a lot more useful to give a Tab* to the viewer instead of a
+MainWindow*, as all the information needed by the viewer is located at the tab
+level. It also diminishes the dependence on the current tab concept.
+
+
+
+Idle function (lttvwindow_process_pending_requests)
+
+The idle function must return FALSE to be removed from the idle functions when
+no more events requests are pending. Otherwise, it returns TRUE. It will
+service requests until no requests are left.
+
+
Implementation
- lttvwindow_events_request
-It adds the EventsRequest struct to the array of time requests pending and
-registers a pending request for the next g_idle if none is registered.
+It adds an EventsRequest struct to the list of pending events requests and
+registers a pending request for the next g_idle if none is registered. The
+viewer can access this structure during the read as its hook_data. Only the
+stop_flag can be changed by the viewer through the event hooks.
+
+typedef guint LttvEventsRequestPrio;
typedef struct _EventsRequest {
- LttTime start_time,
- LttvTracesetPosition start_position,
- LttTime end_time,
- guint num_events,
- LttvTracesetPosition end_position,
- LttvHooksById before_traceset,
- LttvHooksById before_trace,
- LttvHooksById before_tracefile,
- LttvHooksById middle,
- LttvHooksById after_tracefile,
- LttvHooksById after_trace,
- LttvHooksById after_traceset)
+ gpointer owner; /* Owner of the request */
+ gpointer viewer_data; /* Unset : NULL */
+ gboolean servicing; /* service in progress: TRUE */
+ LttvEventsRequestPrio prio; /* Ev. Req. priority */
+ LttTime start_time;/* Unset : { G_MAXUINT, G_MAXUINT }*/
+ LttvTracesetContextPosition *start_position; /* Unset : NULL */
+ gboolean stop_flag; /* Continue:TRUE Stop:FALSE */
+ LttTime end_time;/* Unset : { G_MAXUINT, G_MAXUINT } */
+ guint num_events; /* Unset : G_MAXUINT */
+ LttvTracesetContextPosition *end_position; /* Unset : NULL */
+ LttvHooks *before_traceset; /* Unset : NULL */
+ LttvHooks *before_trace; /* Unset : NULL */
+ LttvHooks *before_tracefile;/* Unset : NULL */
+ LttvHooks *event; /* Unset : NULL */
+ LttvHooksById *event_by_id; /* Unset : NULL */
+ LttvHooks *after_tracefile; /* Unset : NULL */
+ LttvHooks *after_trace; /* Unset : NULL */
+ LttvHooks *after_traceset; /* Unset : NULL */
+ LttvHooks *before_request; /* Unset : NULL */
+  LttvHooks *after_request;  /* Unset : NULL */
} EventsRequest;
+- lttvwindow_events_request_remove_all
+
+It removes from the pool all the events requests that have their "owner" field
+matching the owner pointer given as argument.
+
+It calls the traceset/trace/tracefile end hooks for each removed request that
+is currently being serviced.
+
- lttvwindow_process_pending_requests
calls. Here is the detailed description of the way it works :
-- Events Requests Servicing Algorithm
+
+- Revised Events Requests Servicing Algorithm (v2)
+
+The reads are split into chunks. After a chunk is over, we want to check if
+there is a GTK event pending and execute it. It can add or remove events
+requests from the events requests list. If that happens, we want to restart
+the algorithm from the beginning. The after traceset/trace/tracefile hooks are
+called after each chunk, and the before traceset/trace/tracefile hooks are
+called when request processing resumes. Before and after request hooks are
+called respectively before and after the request processing.
+
Data structures necessary :
list_out : many events requests
-While list_in !empty and list_out !empty
+// NOT A. While (list_in !empty or list_out !empty) and !GTK Event pending
+
+We do this once, go back to GTK, then get called again...
+We remove the g_idle function when list_in and list_out are empty
+
1. If list_in is empty (need a seek)
1.1 Add requests to list_in
- 1.1.1 Find all time requests with the lowest start time in list_out
- (ltime)
- 1.1.2 Find all position requests with the lowest position in list_out
- (lpos)
+ 1.1.1 Find all time requests with lowest start time in list_out (ltime)
+ 1.1.2 Find all position requests with lowest position in list_out (lpos)
1.1.3 If lpos.start time < ltime
- Add lpos to list_in, remove them from list_out
1.1.4 Else, (lpos.start time >= ltime)
- Add ltime to list_in, remove them from list_out
1.2 Seek
1.2.1 If first request in list_in is a time request
- 1.2.1.1 Seek to that time
+ - If first req in list_in start time != current time
+ - Seek to that time
1.2.2 Else, the first request in list_in is a position request
- 1.2.2.1 Seek to that position
- 1.3 Call begin for all list_in members
- (1.3.1 begin hooks called)
- (1.3.2 middle hooks added)
+ - If first req in list_in pos != current pos
+ - seek to that position
+ 1.3 Add hooks and call before request for all list_in members
+ 1.3.1 If !servicing
+ - begin request hooks called
+ - servicing = TRUE
+ 1.3.2 call before_traceset
+ 1.3.3 events hooks added
2. Else, list_in is not empty, we continue a read
2.1 For each req of list_out
- - if req.start time == current time
+ - if req.start time == current context time
- Add to list_in, remove from list_out
- - Call begin
+ - If !servicing
+ - Call begin request
+ - servicing = TRUE
+ - Call before_traceset
+ - events hooks added
- if req.start position == current position
- Add to list_in, remove from list_out
- - Call begin
+ - If !servicing
+ - Call begin request
+ - servicing = TRUE
+ - Call before_traceset
+ - events hooks added
3. Find end criterions
3.1 End time
3.1.1 Find lowest end time in list_in
- 3.1.2 Find lowest start time in list_out
+ 3.1.2 Find lowest start time in list_out (>= than current time*)
+ * To eliminate lower prio requests (not used)
3.1.3 Use lowest of both as end time
3.2 Number of events
3.2.1 Find lowest number of events in list_in
+ 3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
3.3 End position
3.3.1 Find lowest end position in list_in
- 3.3.2 Find lowest start position in list_out
+ 3.3.2 Find lowest start position in list_out (>= than current
+ position)
3.3.3 Use lowest of both as end position
4. Call process traceset middle
4.1 Call process traceset middle (Use end criterion found in 3)
+ * note : end criterion can also be viewer's hook returning TRUE
5. After process traceset middle
+ - if current context time > traceset.end time
+ - For each req in list_in
+ - Remove events hooks for req
+ - Call end traceset for req
+ - Call end request for req
+ - remove req from list_in
5.1 For each req in list_in
- req.num -= count
- if req.num == 0
- - Call end for req
+ - Remove events hooks for req
+ - Call end traceset for req
+ - Call end request for req
- remove req from list_in
- - if req.end time == current time
- - Call end for req
+ - if current context time > req.end time
+ - Remove events hooks for req
+ - Call end traceset for req
+ - Call end request for req
- remove req from list_in
- if req.end pos == current pos
- - Call end for req
+ - Remove events hooks for req
+ - Call end traceset for req
+ - Call end request for req
- remove req from list_in
+ - if req.stop_flag == TRUE
+ - Remove events hooks for req
+ - Call end traceset for req
+ - Call end request for req
+ - remove req from list_in
+
+B. When interrupted
+ 1. for each request in list_in
+ 1.1. Use current position as start position
+ 1.2. Remove start time
+ 1.3. Call after_traceset
+ 1.4. Remove event hooks
+ 1.5. Put it back in list_out
Notes :
End criterions for process traceset middle :
If the criterion is reached, event is out of boundaries and we return.
-Current time > End time
+Current time >= End time
Event count > Number of events
Current position >= End position
+Last hook list called returned TRUE
The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.
+We only keep one saved state in memory. If, for example, a low priority
+servicing is interrupted and a high priority one is serviced, the low priority
+one will use the saved state to resume where it was instead of seeking to the
+time. In the very specific case where a low priority servicing is interrupted,
+and then a high priority servicing on top of it is also interrupted, the low
+priority one will lose its state and will have to seek back. This should not
+occur often. The solution would be to save one state per priority.
-Weaknesses
-- None (nearly?) :)
+
+Weaknesses
+
+- ?
+
Strengths
- Removes the need for filtering of information supplied to the viewers.
- Solves all the weaknesses identified in the current boundaryless traceset
reading.
+
+- Background processing available.
+