From: Michael Jeanson
Date: Wed, 20 Sep 2017 16:12:40 +0000 (-0400)
Subject: Fix: update writeback instrumentation for kernel 4.14
X-Git-Tag: v2.11.0-rc1~106
X-Git-Url: https://git.lttng.org./?a=commitdiff_plain;h=7ceeb15de1454896381ca45f68151211de6eff6c;p=lttng-modules.git

Fix: update writeback instrumentation for kernel 4.14

See upstream commits:

  commit 11fb998986a72aa7e997d96d63d52582a01228c5
  Author: Mel Gorman
  Date:   Thu Jul 28 15:46:20 2016 -0700

    mm: move most file-based accounting to the node

    There are now a number of accounting oddities such as mapped file
    pages being accounted for on the node while the total number of file
    pages are accounted on the zone.  This can be coped with to some
    extent but it's confusing so this patch moves the relevant file-based
    accounting.

    Due to throttling logic in the page allocator for reliable OOM
    detection, it is still necessary to track dirty and writeback pages
    on a per-zone basis.

  commit c4a25635b60d08853a3e4eaae3ab34419a36cfa2
  Author: Mel Gorman
  Date:   Thu Jul 28 15:46:23 2016 -0700

    mm: move vmscan writes and file write accounting to the node

    As reclaim is now node-based, it follows that page write activity due
    to page reclaim should also be accounted for on the node.  For
    consistency, also account page writes and page dirtying on a per-node
    basis.

    After this patch, there are a few remaining zone counters that may
    appear strange but are fine.  NUMA stats are still per-zone as this is
    a user-space interface that tools consume.  NR_MLOCK, NR_SLAB_*,
    NR_PAGETABLE, NR_KERNEL_STACK and NR_BOUNCE are all allocations that
    potentially pin low memory and cannot trivially be reclaimed on
    demand.  This information is still useful for debugging a page
    allocation failure warning.

Signed-off-by: Michael Jeanson
Signed-off-by: Mathieu Desnoyers
---

diff --git a/instrumentation/events/lttng-module/writeback.h b/instrumentation/events/lttng-module/writeback.h
index 6006c294..c472b335 100644
--- a/instrumentation/events/lttng-module/writeback.h
+++ b/instrumentation/events/lttng-module/writeback.h
@@ -400,6 +400,31 @@ LTTNG_TRACEPOINT_EVENT(writeback_queue_io,
 	)
 )
 
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0))
+LTTNG_TRACEPOINT_EVENT_MAP(global_dirty_state,
+
+	writeback_global_dirty_state,
+
+	TP_PROTO(unsigned long background_thresh,
+		unsigned long dirty_thresh
+	),
+
+	TP_ARGS(background_thresh,
+		dirty_thresh
+	),
+
+	TP_FIELDS(
+		ctf_integer(unsigned long, nr_dirty, global_node_page_state(NR_FILE_DIRTY))
+		ctf_integer(unsigned long, nr_writeback, global_node_page_state(NR_WRITEBACK))
+		ctf_integer(unsigned long, nr_unstable, global_node_page_state(NR_UNSTABLE_NFS))
+		ctf_integer(unsigned long, nr_dirtied, global_node_page_state(NR_DIRTIED))
+		ctf_integer(unsigned long, nr_written, global_node_page_state(NR_WRITTEN))
+		ctf_integer(unsigned long, background_thresh, background_thresh)
+		ctf_integer(unsigned long, dirty_thresh, dirty_thresh)
+		ctf_integer(unsigned long, dirty_limit, global_dirty_limit)
+	)
+)
+#else
 LTTNG_TRACEPOINT_EVENT_MAP(global_dirty_state,
 
 	writeback_global_dirty_state,
@@ -424,6 +449,7 @@ LTTNG_TRACEPOINT_EVENT_MAP(global_dirty_state,
 	)
 )
 #endif
+#endif
 
 #if (LINUX_VERSION_CODE >= KERNEL_VERSION(3,2,0))
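
Note: for readers unfamiliar with the accessor change, here is a minimal
sketch (not part of the patch) of the kernel-version split the diff above
applies when reading the global dirty/writeback counters.  The helper name
lttng_global_dirty_counters() is hypothetical; LINUX_VERSION_CODE,
KERNEL_VERSION(), global_node_page_state(), global_page_state() and the
NR_* items are existing kernel symbols, global_page_state() being the
per-zone accessor used by the pre-4.14 event definition.

	/*
	 * Minimal sketch: read the global dirty/writeback counters using
	 * the per-node accessor on 4.14+ and the per-zone accessor before.
	 */
	#include <linux/version.h>
	#include <linux/vmstat.h>	/* global_node_page_state() / global_page_state() */

	static inline void lttng_global_dirty_counters(unsigned long *nr_dirty,
			unsigned long *nr_writeback)
	{
	#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0))
		/* 4.14+: file-backed dirty/writeback accounting is per-node. */
		*nr_dirty = global_node_page_state(NR_FILE_DIRTY);
		*nr_writeback = global_node_page_state(NR_WRITEBACK);
	#else
		/* Older kernels: the counters are read through the per-zone API. */
		*nr_dirty = global_page_state(NR_FILE_DIRTY);
		*nr_writeback = global_page_state(NR_WRITEBACK);
	#endif
	}

The patch itself keeps two complete tracepoint definitions behind the
#if/#else rather than factoring out a helper, so the event layout emitted
on pre-4.14 kernels is left untouched.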