Linux Trace Toolkit

Mathieu Desnoyers 17-05-2004


This document explains how the lttvwindow API could process the event requests
of the viewers, merging event requests and hook lists to benefit from the fact
that process_traceset can call multiple hooks for the same event.

First, we will explain the detailed process of event delivery in the current
framework. We will then study its strengths and weaknesses.

Second, we will describe a framework where the main window handles the events
requests with fine granularity. We will then discuss its advantages and
drawbacks compared to the first framework.


1. (Current) Boundaryless event reading

Currently, viewers request events in a time interval from the main window. They
also specify a loose maximum number of events to be delivered. In fact, the
number of events to read only gives a stop point: from there on, only events
with the same timestamp will be delivered.

Viewers register hooks themselves in the traceset context. When merging read
requests in the main window, all hooks registered by viewers will be called for
the union of all the read requests, because the main window has no control over
hook registration.

The main window calls process_traceset on its own for all the intervals
requested by all the viewers. It must not read the same time interval twice:
such duplicates would be very hard for the viewers to filter out. So, to
achieve this, time requests are sorted by start time, and process_traceset is
called for each time request. We keep the last event time between reads: if
the start time of the next read is lower than the time already reached, we
continue the reading from the current position.
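The seek-or-continue decision described above can be sketched in C. This is an
illustrative sketch only, not the real lttvwindow code: TimeRequest and
count_seeks are hypothetical names, and times are flattened to doubles instead
of LttTime.

```c
#include <assert.h>
#include <stddef.h>

/* A time-interval request, reduced to the two fields the merge needs. */
typedef struct { double start; double end; } TimeRequest;

/* Given requests already sorted by start time, returns how many seeks are
   needed to service them all: a seek happens only when the next request
   starts after the time already reached by reading; otherwise reading
   simply continues from the current position. */
static int count_seeks(const TimeRequest *reqs, size_t n)
{
  int seeks = 0;
  double reached = -1.0;          /* time reached by reading so far */
  for (size_t i = 0; i < n; i++) {
    if (reqs[i].start > reached)  /* gap: must seek to the start time */
      seeks++;
    if (reqs[i].end > reached)    /* reading extends the covered span */
      reached = reqs[i].end;
  }
  return seeks;
}
```

With overlapping requests {0,5} and {3,8}, the second read continues from the
position reached at time 5 without seeking; a disjoint request {9,10} then
needs a second seek.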

We deal with requests for a specific number of events (infinite end time) by
guaranteeing that, starting from the start time of the request, at least that
number of events will be read. As we can't do this efficiently without
interacting very closely with process_traceset, we always read the specified
number of events starting from the current position when we answer a request
based on the number of events.

The viewers have to filter the events delivered by traceset reading, because
the read may have been requested by another viewer for a totally (or partially)
different time interval.


Weaknesses

- process_middle does not guarantee the number of events read

First of all, a viewer that requests events from process_traceset has no
guarantee that it will get exactly what it asked for. For example, a direct
call to traceset_middle for a specific number of events will deliver _at
least_ that quantity of events, plus the ones that have the same timestamp as
the last one.
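The "at least n events" delivery rule can be sketched as follows. The helper
deliver_at_least is hypothetical, not part of the LTTV API; timestamps are
flattened to plain integers for simplicity.

```c
#include <stddef.h>

/* Returns the number of events actually delivered when n events are
   requested: the n requested events, plus every following event that
   shares the timestamp of the last one delivered. */
static size_t deliver_at_least(const unsigned long *ts, size_t total, size_t n)
{
  size_t i = (n < total) ? n : total;   /* deliver the requested n events */
  /* ...then keep delivering while the timestamp equals the last one read */
  while (i > 0 && i < total && ts[i] == ts[i - 1])
    i++;
  return i;                             /* number of events delivered */
}
```

For timestamps {1, 2, 3, 3, 3, 4} and a request for 3 events, 5 events are
delivered: the viewer must be prepared for this overshoot.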

- Border effects

Viewer writers will have to deal with a lot of border effects caused by the
particularities of the reading. They will be required to select the
information they need from their input by filtering.

- Lack of encapsulation and difficulty of testing

The viewer writer will have to take into account all the border effects caused
by the interaction with other modules. This means that even if a viewer works
well alone or with another viewer, it is possible that new bugs arise when a
new viewer comes around. So, even if a perfect testbench works well for a
viewer, it does not confirm that no new bug will arise when another viewer is
loaded at the same moment, asking for different time intervals.


- Duplication of work

Time-based filters and event counters will have to be implemented on the
viewer's side, which duplicates functionality that would normally be expected
from the tracecontext API.

- Lack of control over the data input

As we expect module writers to prefer to be as close as possible to the raw
data, making them interact with a lower level library that gives them a data
input they can only control by further filtering is not appropriate. We should
expect some reluctance from them about using this API because of this lack of
control over the input.

- Speed cost

All hooks of all viewers will be called for all the time intervals. So, if we
have a detailed events list and a control flow view, each asking for a
different time interval, the detailed events list will have to filter all the
events originally delivered to the control flow view. This case can occur
quite often.



Strengths

- Simple concatenation of time intervals at the main window level.

Being allowed to deliver more events than necessary to the viewers means that
we can concatenate time intervals and number-of-events requests fairly easily.
On the other hand, it is hard to determine whether some specific cases will go
wrong, as in-depth testing is impossible.

- No duplication of the tracecontext API

Viewers deal directly with the tracecontext API for registering hooks,
removing a layer of encapsulation.




2. (Proposed) Strict boundaries event reading

The idea behind this method is to provide exactly the events requested by the
viewers to them, no more, no less.

It uses the new API for process traceset suggested in the document
process_traceset_strict_boundaries.txt.

It also means that the lttvwindow API will have to deal with the viewers'
hooks. Viewers will not be allowed to add them directly in the context. They
will give them to the lttvwindow API, along with the time interval or the
position and number of events. The lttvwindow API will have to take care of
adding and removing hooks for the different time intervals requested. That
means that hook insertion and removal will be done between traceset
processings, based on the time intervals and event positions related to each
hook. We must therefore provide a simple interface for hook passing between
the viewers and the main window, making the hooks easier to manage from the
main window. A modification to the LttvHooks type solves this problem.


Architecture

Added to the lttvwindow API:


void lttvwindow_events_request
               (Tab                 *tab,
                const EventsRequest *events_request);

void lttvwindow_events_request_remove_all
               (Tab           *tab,
                gconstpointer  viewer);


Internal functions:

- lttvwindow_process_pending_requests


Events Requests Removal

A new API function will be necessary to let viewers remove all the events
requests they have made previously. By allowing this, no more out-of-bound
requests will be serviced: a viewer that sees its time interval changed before
the first servicing is completed can clear its previous events requests and
make a new one for the new interval needed, considering the finished chunks as
completed areas.

It is also very useful for dealing with the viewer destruction case: the
viewer just has to remove its events requests from the main window before it
gets destroyed.


Permitted GTK Events Between Chunks

All GTK events will be enabled between chunks. This is because background
processing and a high priority request are handled as the same case: while
background processing is in progress, the whole graphical interface must stay
enabled.

We needed to deal with the coherence of background processing and diverse GTK
events anyway. This algorithm provides a generalized way to deal with any type
of request and any GTK event.


Background Computation Request

The types of background computation that can be requested by a viewer: state
computation (main window scope) or viewer-specific background computation.

A background computation request is made via lttvwindow_events_request, with
the priority field set to a low priority.

In the case of a background computation with the viewer pointer field set to
NULL, a lttvwindow_events_request_remove_all done on the viewer pointer will
not affect the state computation, as no viewer pointer will have been passed
in the initial request. This is the expected result. Background processings
that call a viewer's hooks, however, will be removed.


A New "Redraw" Button

It will be used to redraw the viewers entirely. It is useful to restart the
servicing after a "stop" action.

A New "Continue" Button

It will tell the viewers to send requests for damaged areas. It is useful to
complete the servicing after a "stop" action.



Tab change

If a tab change occurs, we still want to do background processing.
Events requests must be stored in a list located in the same scope as the
traceset context. Right now, this is tab scope. All functions called from the
request servicing function must _not_ use the current_tab concept, as it may
change. The idle function must then take a tab, and not the main window, as
parameter.

If a tab is removed, its associated idle events requests servicing function
must also be removed.

It now looks a lot more useful to give a Tab* to the viewer instead of a
MainWindow*, as all the information needed by the viewer is located at the tab
level. It will diminish the dependence upon the current tab concept.


Idle function (lttvwindow_process_pending_requests)

The idle function must return FALSE, so that it is removed from the idle
functions, when no more events requests are pending. Otherwise, it returns
TRUE. It will service requests until none is left.
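The return-value contract of the idle function can be sketched as follows.
This is a self-contained mock, not the real implementation: the Tab contents,
service_one_chunk, and the gboolean typedef are local stand-ins for their
GLib/lttvwindow counterparts.

```c
typedef int gboolean;          /* stand-in for the GLib type */
#define TRUE  1
#define FALSE 0

/* Minimal mock of a tab holding its pending events requests. */
typedef struct { int n_pending; } Tab;

/* Stand-in for servicing one chunk of one or more requests; here it
   simply marks one request as completed. */
static void service_one_chunk(Tab *tab) { tab->n_pending--; }

/* Sketch of the g_idle callback contract: returning TRUE keeps the idle
   source installed for another call, FALSE removes it once no events
   request is pending. */
static gboolean lttvwindow_process_pending_requests(Tab *tab)
{
  if (tab->n_pending == 0)
    return FALSE;              /* no request left: remove idle function */
  service_one_chunk(tab);      /* service requests chunk by chunk */
  return TRUE;                 /* called again at the next idle period */
}
```

Servicing one chunk per call and then returning to the main loop is what lets
pending GTK events run between chunks, as described earlier.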



Implementation


- Type LttvHooks

see hook_prio.txt

The viewers will just have to pass hooks to the main window through this type,
using the hook.h interface to manipulate it. Then, the main window will add
them to and remove them from the context to deliver exactly the events
requested by each viewer through process traceset.


- lttvwindow_events_request

It adds an EventsRequest struct to the list of pending events requests and
registers a pending request for the next g_idle if none is registered. The
viewer can access this structure during the read as its hook_data. Only the
stop_flag can be changed by the viewer through the event hooks.

typedef guint LttvEventsRequestPrio;

typedef struct _EventsRequest {
  gpointer owner;               /* Owner of the request             */
  gpointer viewer_data;         /* Unset : NULL                     */
  gboolean servicing;           /* service in progress: TRUE        */
  LttvEventsRequestPrio prio;   /* Ev. Req. priority                */
  LttTime start_time;           /* Unset : { G_MAXUINT, G_MAXUINT } */
  LttvTracesetContextPosition *start_position; /* Unset : NULL      */
  gboolean stop_flag;           /* Continue: FALSE  Stop: TRUE      */
  LttTime end_time;             /* Unset : { G_MAXUINT, G_MAXUINT } */
  guint num_events;             /* Unset : G_MAXUINT                */
  LttvTracesetContextPosition *end_position;   /* Unset : NULL      */
  LttvHooks *before_traceset;   /* Unset : NULL                     */
  LttvHooks *before_trace;      /* Unset : NULL                     */
  LttvHooks *before_tracefile;  /* Unset : NULL                     */
  LttvHooks *event;             /* Unset : NULL                     */
  LttvHooksById *event_by_id;   /* Unset : NULL                     */
  LttvHooks *after_tracefile;   /* Unset : NULL                     */
  LttvHooks *after_trace;       /* Unset : NULL                     */
  LttvHooks *after_traceset;    /* Unset : NULL                     */
  LttvHooks *before_request;    /* Unset : NULL                     */
  LttvHooks *after_request;     /* Unset : NULL                     */
} EventsRequest;
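Filling a request could look like the following sketch. It uses trimmed-down
local stand-ins for the LTTV types (LttTime, LttvHooks, and G_MAXUINT are
redefined here), so it only demonstrates the unset-sentinel convention, not
the real API.

```c
#include <limits.h>
#include <stddef.h>

/* Local stand-ins for the real GLib/LTTV types. */
typedef unsigned int guint;
typedef int gboolean;
typedef void *gpointer;
#define G_MAXUINT UINT_MAX
typedef struct { guint tv_sec; guint tv_nsec; } LttTime;
typedef struct LttvHooks LttvHooks;   /* opaque here */

/* Trimmed EventsRequest, keeping only the fields this sketch sets. */
typedef struct {
  gpointer owner;
  LttTime start_time;   /* Unset : { G_MAXUINT, G_MAXUINT } */
  LttTime end_time;     /* Unset : { G_MAXUINT, G_MAXUINT } */
  guint num_events;     /* Unset : G_MAXUINT */
  LttvHooks *event;     /* Unset : NULL */
  gboolean stop_flag;
} EventsRequestSketch;

/* A pure time-interval request: both times are set, and the event count
   is left unset so the end time alone decides when it is completed. */
static EventsRequestSketch make_time_request(gpointer owner,
                                             LttTime start, LttTime end)
{
  EventsRequestSketch req;
  req.owner = owner;
  req.start_time = start;
  req.end_time = end;
  req.num_events = G_MAXUINT;   /* unset: no event-count criterion */
  req.event = NULL;             /* event hooks filled in by the viewer */
  req.stop_flag = 0;            /* servicing continues until end_time */
  return req;
}
```

The unset sentinels matter because the servicing algorithm takes the minimum
over all end criteria: an unset criterion must never be the minimum.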

- lttvwindow_events_request_remove_all

It removes from the pool all the events requests that have their "owner" field
matching the owner pointer given as argument.

It calls the traceset/trace/tracefile end hooks for each removed request that
is currently being serviced.
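The owner-matching removal can be sketched over a plain array standing in for
the real pending-requests pool. All names here are illustrative, not the
lttvwindow implementation, and the end-hook calls are omitted.

```c
#include <stddef.h>

/* A pending request, reduced to the field the removal needs. */
typedef struct { void *owner; } Req;

/* Removes every request whose "owner" field matches the given pointer;
   returns the new number of pending requests. Requests from other
   owners are compacted toward the front of the pool. */
static size_t remove_all_by_owner(Req *pool, size_t n, const void *owner)
{
  size_t kept = 0;
  for (size_t i = 0; i < n; i++)
    if (pool[i].owner != owner)
      pool[kept++] = pool[i];   /* keep requests from other owners */
  return kept;
}
```

Keying removal on an opaque owner pointer is what lets a viewer clear all its
requests in one call before destruction, without tracking them individually.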


- lttvwindow_process_pending_requests

This internal function gets called by g_idle, taking care of the pending
requests. It is responsible for the concatenation of time interval and
position requests. It does so with the following algorithm organizing the
process traceset calls. Here is a detailed description of the way it works:



- Revised Events Requests Servicing Algorithm (v2)

The reads are split in chunks. After a chunk is over, we want to check if
there is a GTK event pending and execute it. It can add or remove events
requests from the events requests list. If that happens, we want to start the
algorithm over from the beginning. The after traceset/trace/tracefile hooks
are called after each chunk, and the before traceset/trace/tracefile hooks are
called when the request processing resumes. Before and after request hooks are
called respectively before and after the request processing.


Data structures necessary:

List of requests added to context     : list_in
List of requests not added to context : list_out

Initial state:

list_in  : empty
list_out : many events requests


// NOT A. While (list_in !empty or list_out !empty) and !GTK Event pending

We do this once, go back to GTK, then get called again...
We remove the g_idle function when list_in and list_out are empty.

1. If list_in is empty (need a seek)
  1.1 Add requests to list_in
    1.1.1 Find all time requests with lowest start time in list_out (ltime)
    1.1.2 Find all position requests with lowest position in list_out (lpos)
    1.1.3 If lpos.start time < ltime
      - Add lpos to list_in, remove them from list_out
    1.1.4 Else (lpos.start time >= ltime)
      - Add ltime to list_in, remove them from list_out
  1.2 Seek
    1.2.1 If the first request in list_in is a time request
      - If first req in list_in start time != current time
        - Seek to that time
    1.2.2 Else, the first request in list_in is a position request
      - If first req in list_in pos != current pos
        - Seek to that position
  1.3 Add hooks and call before request for all list_in members
    1.3.1 If !servicing
      - begin request hooks called
      - servicing = TRUE
    1.3.2 Call before_traceset
    1.3.3 events hooks added
2. Else, list_in is not empty, we continue a read
  2.1 For each req of list_out
    - if req.start time == current context time
      - Add to list_in, remove from list_out
      - If !servicing
        - Call begin request
        - servicing = TRUE
      - Call before_traceset
      - events hooks added
    - if req.start position == current position
      - Add to list_in, remove from list_out
      - If !servicing
        - Call begin request
        - servicing = TRUE
      - Call before_traceset
      - events hooks added

3. Find end criteria
  3.1 End time
    3.1.1 Find lowest end time in list_in
    3.1.2 Find lowest start time in list_out (>= current time*)
          * To eliminate lower prio requests (not used)
    3.1.3 Use the lowest of both as end time
  3.2 Number of events
    3.2.1 Find lowest number of events in list_in
    3.2.2 Use min(CHUNK_NUM_EVENTS, min num events in list_in) as num_events
  3.3 End position
    3.3.1 Find lowest end position in list_in
    3.3.2 Find lowest start position in list_out (>= current position)
    3.3.3 Use the lowest of both as end position

4. Call process traceset middle
  4.1 Call process traceset middle (use end criteria found in 3)
      * note : an end criterion can also be a viewer's hook returning TRUE
5. After process traceset middle
  - if current context time > traceset.end time
    - For each req in list_in
      - Remove events hooks for req
      - Call end traceset for req
      - Call end request for req
      - remove req from list_in
  5.1 For each req in list_in
    - req.num -= count
    - if req.num == 0
      - Remove events hooks for req
      - Call end traceset for req
      - Call end request for req
      - remove req from list_in
    - if current context time > req.end time
      - Remove events hooks for req
      - Call end traceset for req
      - Call end request for req
      - remove req from list_in
    - if req.end pos == current pos
      - Remove events hooks for req
      - Call end traceset for req
      - Call end request for req
      - remove req from list_in
    - if req.stop_flag == TRUE
      - Remove events hooks for req
      - Call end traceset for req
      - Call end request for req
      - remove req from list_in

B. When interrupted
  1. For each request in list_in
    1.1 Use current position as start position
    1.2 Remove start time
    1.3 Call after_traceset
    1.4 Remove event hooks
    1.5 Put it back in list_out
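Step 3 of the algorithm above (finding the end criteria of a chunk) can be
sketched in C. The sketch is illustrative: request structs are reduced to the
fields step 3 reads, times are plain integers standing in for LttTime, and
the CHUNK_NUM_EVENTS value is a made-up placeholder, not the real constant.

```c
#include <stddef.h>

#define CHUNK_NUM_EVENTS 200U   /* illustrative chunk size, not the real one */

/* A request reduced to the fields the end-criteria computation needs. */
typedef struct {
  unsigned long start_time;
  unsigned long end_time;
  unsigned int  num_events;
} ReqSketch;

/* Step 3.1: the chunk ends at the lowest end time in list_in, or at the
   lowest start time in list_out not yet passed, whichever comes first
   (so a newly starting request can join list_in at a chunk boundary). */
static unsigned long chunk_end_time(const ReqSketch *in, size_t n_in,
                                    const ReqSketch *out, size_t n_out,
                                    unsigned long current_time)
{
  unsigned long end = (unsigned long)-1;
  for (size_t i = 0; i < n_in; i++)
    if (in[i].end_time < end) end = in[i].end_time;
  for (size_t i = 0; i < n_out; i++)
    if (out[i].start_time >= current_time && out[i].start_time < end)
      end = out[i].start_time;
  return end;
}

/* Step 3.2: never read more than one chunk's worth of events, and no more
   than the smallest remaining event count among the active requests. */
static unsigned int chunk_num_events(const ReqSketch *in, size_t n_in)
{
  unsigned int num = CHUNK_NUM_EVENTS;
  for (size_t i = 0; i < n_in; i++)
    if (in[i].num_events < num) num = in[i].num_events;
  return num;
}
```

process traceset middle then stops at whichever of these criteria (or the end
position, or a hook returning TRUE) is reached first.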



Notes:
End criteria for process traceset middle:
If a criterion is reached, the event is out of boundaries and we return.
  Current time >= End time
  Event count > Number of events
  Current position >= End position
  Last hook list called returned TRUE

The >= for position is necessary to ensure consistency between start time
requests and position requests that happen to be at the exact same start time
and position.

We only keep one saved state in memory. If, for example, a low priority
servicing is interrupted and a high priority one is serviced, the low priority
one will use the saved state to resume where it was instead of seeking to the
time. In the very specific case where a low priority servicing is interrupted,
and then a high priority servicing on top of it is also interrupted, the low
priority one will lose its state and will have to seek back. This should not
occur often. The solution would be to save one state per priority.





Weaknesses

- ?

Strengths

- Removes the need for filtering the information supplied to the viewers.

- Viewers have better control over their data input.

- Solves all the weaknesses identified in the current boundaryless traceset
reading.

- Background processing available.