Introduce the lttng_get_clid helper to always check for NULL pointer
when getting the client id. While not always strictly needed depending
on the tracepoint callsite, prefer robustness of instrumentation and
always check for NULL rather than play whack-a-mole.
Within include/linux/sunrpc/clnt.h:struct rpc_clnt, the cl_clid field
is an unsigned integer, which is the type expected by the tracepoint
signature.
However, looking into net/sunrpc/clnt.c:rpc_alloc_clid(), its allocation
considers negative signed integers as errors.
Therefore, in order to properly show "-1" in the trace output (rather
than UINT_MAX) when called with a NULL task->tk_client, move to a
signed integer as backing type for the client_id field.
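A minimal sketch of what such a helper could look like (struct rpc_clnt
comes from include/linux/sunrpc/clnt.h; the helper body is an assumption
based on the description above):

static inline int lttng_get_clid(const struct rpc_task *task)
{
        struct rpc_clnt *tk_client = task->tk_client;

        if (!tk_client)
                return -1;
        /*
         * The cl_clid field is unsigned, but rpc_alloc_clid() considers
         * negative values as errors, so casting a valid client id to
         * int cannot overflow.
         */
        return (int) tk_client->cl_clid;
}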
btrfs: use fs_info for btrfs_handle_em_exist tracepoint
We really want to know to which filesystem the extent map events belong,
but as it cannot be reached from the extent_map pointers, we need to
pass it down the callchain.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
So far we have reserved only a relatively high fixed amount of revoke
credits for each transaction. We over-reserved by a large amount for most
cases, but when freeing large directories or files with data journalling,
the fixed amount is not enough. In fact the worst case estimate is
inconveniently large (maximum extent size) for freeing of one extent.
We fix this by doing a proper estimate of the number of blocks that need
to be revoked when removing blocks from the inode due to truncate or
hole punching, and otherwise reserve just a small amount of revoke
credits for each transaction to accommodate freeing of an xattr block or
so.
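As an illustration of the kind of estimate involved: one revoke record is
needed per freed block, and records are packed into revoke descriptor
blocks, so the reservation is a simple round-up (illustrative helper, not
the exact ext4/jbd2 API):

static unsigned long revoke_credits_for_freeing(unsigned long nr_blocks,
                                                unsigned long records_per_block)
{
        /* One descriptor block holds records_per_block revoke records. */
        return DIV_ROUND_UP(nr_blocks, records_per_block);
}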
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The type name is misleading: a single entry is named 'cache' although the
term normally refers to a collection of objects. Rename that everywhere.
Also, the identifier was quite long, making function prototypes harder to
format.
btrfs: add dedicated members for start and length of a block group
The on-disk format of block group item makes use of the key that stores
the offset and length. This is further used in the code, although this
makes things harder to understand. The key is also packed so the
offset/length is not properly aligned as u64.
Add start (key.objectid) and length (key.offset) members to block group
and remove the embedded key. When the item is searched or written, a
local variable for key is used.
For unknown reasons, the member 'used' in the block group struct is
stored in the b-tree item and accessed everywhere using the special
accessor helper. Let's unify it and make it a regular member and only
update the item before writing it to the tree.
The item is still being used for flags and chunk_objectid, there's some
duplication until the item is removed in following patches.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
y2038: itimer: change implementation to timespec64
There is no 64-bit version of getitimer/setitimer since that is not
actually needed. However, the implementation is built around the
deprecated 'struct timeval' type.
Change the code to use timespec64 internally to reduce the dependencies
on timeval and associated helper functions.
Minor adjustments in the code are needed to make the native and compat
version work the same way, and to keep the range check working after
the conversion.
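For illustration, the conversion of the user-facing struct itimerval into
the internal timespec64-based representation has roughly this shape (a
sketch close to, but not necessarily identical with, the resulting
helper):

static int get_itimerval(struct itimerspec64 *o,
                         const struct itimerval __user *i)
{
        struct itimerval v;

        if (copy_from_user(&v, i, sizeof(struct itimerval)))
                return -EFAULT;

        /* Keep the range check working after the conversion. */
        if (!timeval_valid(&v.it_value) || !timeval_valid(&v.it_interval))
                return -EINVAL;

        o->it_interval.tv_sec = v.it_interval.tv_sec;
        o->it_interval.tv_nsec = v.it_interval.tv_usec * NSEC_PER_USEC;
        o->it_value.tv_sec = v.it_value.tv_sec;
        o->it_value.tv_nsec = v.it_value.tv_usec * NSEC_PER_USEC;
        return 0;
}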
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: lttng-tracepoint module notifier should return NOTIFY_OK
Module notifiers should return NOTIFY_OK on success rather than the
value 0. The return value 0 does not seem to have any ill side-effects
in the notifier chain caller, but it is preferable to respect the API
requirements in case this changes in the future.
Notifiers can encapsulate a negative errno value with
notifier_from_errno(), but this is not needed by the LTTng tracepoint
notifier.
The approach taken in this notifier is to just print a console warning
on error, because tracing failure should not prevent loading a module.
So we definitely do not want to stop notifier iteration. Returning
an error without stopping iteration is not really that useful, because
only the return value of the last callback is returned to the notifier
chain caller.
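A simplified sketch of the shape this gives the notifier (the helper
names are illustrative):

static int lttng_tracepoint_notify(struct notifier_block *self,
                unsigned long val, void *data)
{
        struct tp_module *tp_mod = data;
        int ret = 0;

        switch (val) {
        case MODULE_STATE_COMING:
                ret = lttng_tracepoint_coming(tp_mod);  /* illustrative */
                break;
        case MODULE_STATE_GOING:
                ret = lttng_tracepoint_going(tp_mod);   /* illustrative */
                break;
        default:
                break;
        }
        if (ret)
                printk(KERN_WARNING "LTTng: tracepoint notifier error\n");
        /* Tracing failure should not stop notifier iteration. */
        return NOTIFY_OK;
}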
Fix: Don't print ring-buffer's records count when it is not used
The teardown of a ring buffer causes a number of diagnostic messages
to be printed using printk. One of those contains the "records
count", which is only updated when lttng-modules is built with
LTTNG_RING_BUFFER_COUNT_EVENTS defined.
Move the "records count" printing to a different function and stub it
out when LTTNG_RING_BUFFER_COUNT_EVENTS is not defined
(default configuration).
This eliminates messages of the following form from the dmesg output
when an LTTng session is torn down.
[...] ring buffer relay-discard, cpu 0: 0 records written, 0 records overrun
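The stub has roughly this shape (simplified from the ring buffer
frontend; v_read() is the ring buffer's counter accessor):

#ifdef LTTNG_RING_BUFFER_COUNT_EVENTS
static
void lib_ring_buffer_print_records_count(struct channel *chan,
                struct lib_ring_buffer *buf, int cpu)
{
        const struct lib_ring_buffer_config *config = &chan->backend.config;

        printk(KERN_DEBUG "ring buffer %s, cpu %d: %lu records written, "
                "%lu records overrun\n", chan->backend.name, cpu,
                v_read(config, &buf->records_count),
                v_read(config, &buf->records_overrun));
}
#else
static
void lib_ring_buffer_print_records_count(struct channel *chan,
                struct lib_ring_buffer *buf, int cpu)
{
        /* Records count is not maintained; print nothing. */
}
#endif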
Fix: do not set quiescent state on channel destroy
Setting the quiescent state to true for each stream at channel
destruction is not useful: there are no readers left anyway at
that stage.
The perceived side-effect of setting this quiescent state on
destroy is that the metadata stream ends up with an empty last
packet (due to the flush_empty performed when setting the quiescent
state) which is never consumed. This shows up in the lttng-modules
error reporting.
Fix: ring_buffer_frontend.c: init read timer with uninitialized flags
For the config->alloc RING_BUFFER_ALLOC_GLOBAL (metadata channel), the
read timer flags argument is uninitialized.
Found by Coverity:
CID 1401114 (#1 of 1): Uninitialized scalar variable (UNINIT)
6. uninit_use_in_call: Using uninitialized value flags when calling init_timer_key.
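The fix amounts to giving 'flags' a defined value on every path, along
these lines (a sketch of the pattern, not the exact frontend code):

unsigned int flags = 0;

if (config->alloc == RING_BUFFER_ALLOC_PER_CPU)
        flags = TIMER_PINNED;

timer_setup(&buf->read_timer, read_timer_cb, flags);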
random: only read from /dev/random after its pool has received 128 bits
Immediately after boot, we allow reads from /dev/random before its
entropy pool has been fully initialized. Fix this so that reads are not
allowed until the blocking pool has received 128 bits.
We do this by repurposing the initialized flag in the entropy pool
struct, and use the initialized flag in the blocking pool to indicate
whether it is safe to pull from the blocking pool.
To do this, we needed to rework when we decide to push entropy from the
input pool to the blocking pool, since the initialized flag for the
input pool was used for this purpose. To simplify things, we no
longer use the initialized flag for that purpose, nor do we use the
entropy_total field any more.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm: move recent_rotated pages calculation to shrink_inactive_list()
Patch series "mm: Generalize putback functions".
putback_inactive_pages() and move_active_pages_to_lru() are almost
similar, so this patchset merges them in a single function.
This patch (of 4):
The patch moves the calculation from putback_inactive_pages() to
shrink_inactive_list(). This makes putback_inactive_pages() look more
similar to move_active_pages_to_lru().
To do that, we account activated pages in reclaim_stat::nr_activate.
Since a page may change its LRU type from anon to file cache inside
shrink_page_list() (see ClearPageSwapBacked()), we have to account pages
for both types. So, nr_activate becomes an array.
Previously we used nr_activate to account PGACTIVATE events, but now we
account them in the pgactivate variable (since they are about the number
of pages in general, not about the sum of hpage_nr_pages).
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm/vmscan: drop may_writepage and classzone_idx from direct reclaim begin template
There are three tracepoints using this template, which are
mm_vmscan_direct_reclaim_begin,
mm_vmscan_memcg_reclaim_begin,
mm_vmscan_memcg_softlimit_reclaim_begin.
Regarding mm_vmscan_direct_reclaim_begin,
sc.may_writepage is !laptop_mode, which is a static setting, and
reclaim_idx is derived from gfp_mask, which is already shown in this
tracepoint.
Regarding mm_vmscan_memcg_reclaim_begin,
may_writepage is !laptop_mode too, and reclaim_idx is (MAX_NR_ZONES-1),
which are both static values.
mm_vmscan_memcg_softlimit_reclaim_begin is the same as
mm_vmscan_memcg_reclaim_begin.
So we can drop them all.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Timers are added to the timer wheel off by one. This is required in
case a timer is queued directly before incrementing jiffies to prevent
early timer expiry.
When reading a timer trace and relying only on the expiry time of the timer
in the timer_start trace point and on the now in the timer_expire_entry
trace point, it seems that the timer fires late. With the current
timer_expire_entry trace point information, only now=jiffies is printed,
but not the value of base->clk. This makes it impossible to draw a
conclusion about the index of base->clk and makes it impossible to examine
timer problems without additional trace points.
Therefore add the base->clk value to the timer_expire_entry trace
point, to be able to calculate the index the timer base was located at
while collecting expired timers.
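The modified trace point then looks approximately like this (abridged):

TRACE_EVENT(timer_expire_entry,

        TP_PROTO(struct timer_list *timer, unsigned long baseclk),

        TP_ARGS(timer, baseclk),

        TP_STRUCT__entry(
                __field(void *, timer)
                __field(unsigned long, now)
                __field(void *, function)
                __field(unsigned long, baseclk)
        ),

        TP_fast_assign(
                __entry->timer = timer;
                __entry->now = jiffies;
                __entry->function = timer->function;
                __entry->baseclk = baseclk;
        ),

        TP_printk("timer=%p function=%ps now=%lu baseclk=%lu",
                  __entry->timer, __entry->function,
                  __entry->now, __entry->baseclk)
);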
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: bitfield: shift undefined/implementation defined behaviors
bitfield.h uses the left shift operator with a left operand which
may be negative. The C99 standard states that shifting a negative
value is undefined.
When building with -Wshift-negative-value, we get this gcc warning:
In file included from /home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h:44:0,
from /home/smarchi/src/babeltrace/ctfser/ctfser.c:42:
/home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h: In function ‘bt_ctfser_write_unsigned_int’:
/home/smarchi/src/babeltrace/include/babeltrace/bitfield-internal.h:116:24: error: left shift of negative value [-Werror=shift-negative-value]
mask = ~((~(type) 0) << (__start % ts)); \
^
/home/smarchi/src/babeltrace/include/babeltrace/bitfield-internal.h:222:2: note: in expansion of macro ‘_bt_bitfield_write_le’
_bt_bitfield_write_le(ptr, type, _start, _length, _v)
^~~~~~~~~~~~~~~~~~~~~
/home/smarchi/src/babeltrace/include/babeltrace/ctfser-internal.h:418:3: note: in expansion of macro ‘bt_bitfield_write_le’
bt_bitfield_write_le(mmap_align_addr(ctfser->base_mma) +
^~~~~~~~~~~~~~~~~~~~
This boils down to the fact that the expression ~((uint8_t)0) has type
"signed int", which is used as an operand of the left shift. This is due
to the integer promotion rules of C99 (6.3.1.1):
If an int can represent all values of the original type, the value is
converted to an int; otherwise, it is converted to an unsigned int.
These are called the integer promotions. All other types are unchanged
by the integer promotions.
We also need to cast the result explicitly into the left hand
side type to deal with:
warning: large integer implicitly truncated to unsigned type [-Woverflow]
The C99 standard states that a right shift has implementation-defined
behavior when shifting a signed negative value. Add a preprocessor check
that the compiler provides the expected behavior, else provide an
alternative implementation which guarantees the intended behavior.
A preprocessor check is also added to ensure that the compiler
representation for signed values is two's complement, which is expected
by this header.
Document that this header strictly respects the C99 standard, with
the exception of its use of __typeof__.
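The promotion pitfall and its fix can be illustrated as follows (an
illustrative macro, not the actual bitfield.h code):

#include <stdint.h>

/*
 * ~((uint8_t) 0) promotes to a signed int with value -1, so shifting it
 * left is undefined behavior. Casting back to the unsigned target type
 * before the shift keeps the shifted operand non-negative, and casting
 * the result silences -Woverflow on the assignment.
 */
#define MAKE_MASK(type, start, ts) \
        ((type) ~((type) (((type) ~(type) 0) << ((start) % (ts)))))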
Cleanup: bitfield.h: move to kernel style SPDX license identifiers
The SPDX identifier is a legally binding shorthand, which can be used
instead of the full boilerplate text. According to kernel documentation
it has to be inserted on the first or second line of a file.
Fix: timestamp_end field should include all events within sub-buffer
Fix for timestamp_end not including all events within sub-buffer. This
happens if a thread is preempted/interrupted for a long time between
reserve and commit (e.g. in the middle of a packet), which causes the
timestamp used for timestamp_end field of the packet header to be lower
than the timestamp of the last events in the buffer (those following the
event that was preempted/interrupted between reserve and commit).
The fix involves sampling the timestamp when doing the last space
reservation in a sub-buffer (which necessarily happens before doing the
delivery after its last commit). Save this timestamp temporarily in a
per-sub-buffer control area (we have exclusive access to that area until
we increment the commit counter).
Then, that timestamp value will be read when delivering the sub-buffer,
whichever event or switch happens to be the last to increment the commit
counter to perform delivery. The timestamp value can be read without
worrying about concurrent access, because at that point sub-buffer
delivery has exclusive access to the sub-buffer.
This ensures the timestamp_end value is always larger or equal to the
timestamp of the last event, always below or equal the timestamp_begin
of the following packet, and always below or equal the timestamp of the
first event in the following packet.
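In sketch form, with names that only approximate the ring buffer
structures:

/*
 * Producer side, while reserving the last slot of a sub-buffer: the
 * ts_end slot is exclusively ours until we increment the commit counter.
 */
buf->commit_hot[subbuf_index(offset, chan)].ts_end = ctx->tsc;

/*
 * Delivery side: whichever event or switch increments the commit counter
 * to the full sub-buffer size performs delivery, and can read ts_end
 * without worrying about concurrent access.
 */
header->ctx.timestamp_end = buf->commit_hot[subbuf_index(offset, chan)].ts_end;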
syscalls: Remove start and number from syscall_get_arguments() args
At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number of system calls to get. When looking at
all the users of this function, I discovered that all instances pass in only
0 and 6 for these arguments. Instead of having this function handle
different cases that are never used, simply rewrite it to return the first 6
arguments of a system call.
This should help out the performance of tracing system calls by ptrace,
ftrace and perf.
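After the change, the interface is simply:

/* Fetch the first 6 arguments of the system call @task is executing. */
void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
                           unsigned long *args);

/* Typical caller: */
unsigned long args[6];

syscall_get_arguments(current, regs, args);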
Fix: don't access packet header for stream_id and stream_instance_id getters
The stream ID and stream instance ID are invariant for a stream, so
there is no point reading them from the packet header currently owned by
the consumer (between get/put subbuf).
Actually, the consumer tries to access the stream_id from the live timer
when sending a live beacon without getting the reader subbuffer first.
Doing so is racy against producers. In typical live scenarios
(non-overwrite channels), the producers will always write the same
stream id and stream instance id values at the same header offsets,
which will "work", except for the initial state of an empty buffer:
the value "0" will be returned (erroneously).
For the less frequently used scenario of a live session with "overwrite"
channels, this will trigger WARN_ON safety nets in libringbuffer. This
safety net triggers a kernel OOPS report and disables tracing for that
channel.
In the case where a ring buffer does not have any data ready, it makes
no sense to try to get a subbuffer for reading anyway, so the approach
was broken.
So return the stream id and stream instance id from the internal
data structures rather than reading them from the ring buffer.
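A sketch of the resulting getter, close to the shape of the client code
(names may differ):

static int client_stream_id(const struct lib_ring_buffer_config *config,
                struct lib_ring_buffer *buf, uint64_t *stream_id)
{
        struct channel *chan = buf->backend.chan;
        struct lttng_channel *lttng_chan = channel_get_private(chan);

        /* Invariant for the stream: no need to touch the packet header. */
        *stream_id = lttng_chan->id;
        return 0;
}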
Michael Jeanson [Mon, 18 Mar 2019 20:20:36 +0000 (16:20 -0400)]
Fix: atomic_long_add_unless() returns a boolean
Because of a documentation error in older kernels, it was assumed that
atomic_long_add_unless would return the old value, but the
implementation actually returns a boolean.
Also set the missing error code in 'ret' and compare against the
maximum value of the right type.
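The correct usage pattern is boolean (sketch; 'v', 'nr' and 'limit' are
illustrative):

long ret = 0;

/* Adds 'nr' to 'v' unless v == limit; returns true if the add happened. */
if (!atomic_long_add_unless(&v, nr, limit))
        ret = -EBUSY;   /* 'v' already at 'limit', nothing was added */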
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Revert "KVM: MMU: show mmu_valid_gen in shadow page related tracepoints"
...as part of removing x86 KVM's fast invalidate mechanism, i.e. this
is one part of a revert of all patches from the series that introduced
the mechanism[1].
Al Viro pointed out that since there is only one pipe buffer type to which
new data can be appended, it isn't necessary to have a ->can_merge field in
struct pipe_buf_operations; we can just check for a magic type.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
rcu: Remove wrapper definitions for obsolete RCU update functions
None of synchronize_rcu_bh, synchronize_rcu_bh_expedited, call_rcu_bh,
rcu_barrier_bh, synchronize_sched, synchronize_sched_expedited,
call_rcu_sched, rcu_barrier_sched, get_state_synchronize_sched, and
cond_synchronize_sched are actually used. This commit therefore removes
their trivial wrapper-function definitions.
Page fault handlers are supposed to return VM_FAULT codes, but some
drivers/file systems mistakenly return error numbers. Now that all
drivers/file systems have been converted to use the vm_fault_t return
type, change the type definition to no longer be compatible with 'int'.
By making it an unsigned int, the function prototype becomes
incompatible with a function which returns int. Sparse will detect any
attempts to return a value which is not a VM_FAULT code.
VM_FAULT_SET_HINDEX and VM_FAULT_GET_HINDEX values are changed to avoid
conflict with other VM_FAULT codes.
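The resulting definition is approximately:

/*
 * __bitwise makes sparse treat vm_fault_t as a distinct type, so
 * returning a plain int is flagged.
 */
typedef unsigned int __bitwise vm_fault_t;

/* VM_FAULT codes then carry the annotation, e.g.: */
#define VM_FAULT_OOM    ((__force vm_fault_t)0x000001)
#define VM_FAULT_SIGBUS ((__force vm_fault_t)0x000002)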
Fix: extra-version-git.sh redirect stderr to /dev/null
Running make in a git repo that does not contain any tag prints:
fatal: No names found, cannot describe anything.
in the make and make clean outputs.
It's fine to have no tag name available (extra-version-git.sh will
return the value 0), but we should not print an error in the make
output. Redirect this error to /dev/null.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Suggested-by: Michael Jeanson <mjeanson@efficios.com>
ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE
The arm compiler internally interprets an inline assembly label
as an unsigned long value, not a pointer. As a result, under
CONFIG_FORTIFY_SOURCE, the address of a label has a size of 4 bytes,
which was tripping the runtime checks. Instead, we can just cast the label
(as done with the size calculations earlier).
Reported-by: William Cohen <wcohen@redhat.com> Fixes: 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions") Cc: stable@vger.kernel.org Acked-by: Laura Abbott <labbott@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: William Cohen <wcohen@redhat.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
It was introduced in the 4.20 cycle.
It was also backported to the 4.19 and 4.14 branches.
This issue is fixed upstream by [1]; the fix is present in the 5.0
kernel release.
btrfs: Remove fsid/metadata_fsid fields from btrfs_info
Currently btrfs_fs_info structure contains a copy of the
fsid/metadata_uuid fields. Same values are also contained in the
btrfs_fs_devices structure which fs_info has a reference to. Let's
reduce duplication by removing the fields from fs_info and always refer
to the ones in fs_devices. No functional changes.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
of the user address range verification function since we got rid of the
old racy i386-only code to walk page tables by hand.
It existed because the original 80386 would not honor the write protect
bit when in kernel mode, so you had to do COW by hand before doing any
user access. But we haven't supported that in a long time, and these
days the 'type' argument is a purely historical artifact.
A discussion about extending 'user_access_begin()' to do the range
checking resulted in this patch, because there is no way we're going to
move the old VERIFY_xyz interface to that model. And it's best done at
the end of the merge window when I've done most of my merges, so let's
just get this done once and for all.
This patch was mostly done with a sed-script, with manual fix-ups for
the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
There were a couple of notable cases:
- csky still had the old "verify_area()" name as an alias.
- the iter_iov code had magical hardcoded knowledge of the actual
values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
really used it)
- microblaze used the type argument for a debug printout
but other than those oddities this should be a total no-op patch.
I tried to fix up all architectures, did fairly extensive grepping for
access_ok() uses, and the changes are trivial, but I may have missed
something. Any missed conversion should be trivially fixable, though.
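The conversion itself is mechanical:

/* Before: the first argument was a purely historical artifact. */
if (!access_ok(VERIFY_WRITE, buf, count))
        return -EFAULT;

/* After: */
if (!access_ok(buf, count))
        return -EFAULT;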
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
ext4: adjust reserved cluster count when removing extents
Modify ext4_ext_remove_space() and the code it calls to correct the
reserved cluster count for pending reservations (delayed allocated
clusters shared with allocated blocks) when a block range is removed
from the extent tree. Pending reservations may be found for the clusters
at the ends of written or unwritten extents when a block range is removed.
If a physical cluster at the end of an extent is freed, it's necessary
to increment the reserved cluster count to maintain correct accounting
if the corresponding logical cluster is shared with at least one
delayed and unwritten extent as found in the extents status tree.
Add a new function, ext4_rereserve_cluster(), to reapply a reservation
on a delayed allocated cluster sharing blocks with a freed allocated
cluster. To avoid ENOSPC on reservation, a flag is applied to
ext4_free_blocks() to briefly defer updating the freeclusters counter
when an allocated cluster is freed. This prevents another thread
from allocating the freed block before the reservation can be reapplied.
Redefine the partial cluster object as a struct to carry more state
information and to clarify the code using it.
Adjust the conditional code structure in ext4_ext_remove_space to
reduce the indentation level in the main body of the code to improve
readability.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
There are no more users of SEND_SIG_FORCED so it may be safely removed.
Remove the definition of SEND_SIG_FORCED, its use in is_si_special,
its use in TP_STORE_SIGINFO, and its use in __send_signal, as without
any users the uses of SEND_SIG_FORCED are now unnecessary.
This makes the code simpler and easier to understand and use. Users of
signal sending functions no longer need to ask themselves whether they
need to use SEND_SIG_FORCED.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
signal: Distinguish between kernel_siginfo and siginfo
Linus recently observed that if we did not worry about the padding
member in struct siginfo it is only about 48 bytes, and 48 bytes is
much nicer than 128 bytes for allocating on the stack and copying
around in the kernel.
The obvious thing of only adding the padding when userspace is
including siginfo.h won't work as there are sigframe definitions in
the kernel that embed struct siginfo.
So split siginfo in two: kernel_siginfo and siginfo, keeping the
traditional name for the userspace definition, while the version that
is used internally by the kernel and ultimately will not be padded to
128 bytes is called kernel_siginfo.
The definition of struct kernel_siginfo I have put in include/signal_types.h
A set of buildtime checks has been added to verify the two structures have
the same field offsets.
To make it easy to verify the change kernel_siginfo retains the same
size as siginfo. The reduction in size comes in a following change.
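The buildtime checks boil down to offset comparisons of this shape (a
sketch; the selection of fields is illustrative):

static inline void siginfo_buildtime_checks(void)
{
        BUILD_BUG_ON(sizeof(struct siginfo) != sizeof(struct kernel_siginfo));

#define CHECK_OFFSET(field) \
        BUILD_BUG_ON(offsetof(siginfo_t, field) != \
                     offsetof(kernel_siginfo_t, field))
        CHECK_OFFSET(si_pid);
        CHECK_OFFSET(si_uid);
        CHECK_OFFSET(si_addr);
#undef CHECK_OFFSET
}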
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Upstream Linux commit 46e0c9be20 introduces relative references in the
struct tracepoint array of pointers.
Up to (including) v4.19-rc7, the upstream kernel has a type mismatch bug
that allows it to pass an out-of-bounds end of array to the module
coming/going notifiers.
The fix for upstream Linux is to introduce a new type: tracepoint_ptr_t,
which can be used to adequately iterate on the array. It is introduced
prior to v4.19 as commit 9c0be3f6b5d77 "tracepoint: Fix tracepoint array
element size mismatch".
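The upstream type and its accessor look approximately like this:

#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
typedef const int tracepoint_ptr_t;
#else
typedef struct tracepoint * const tracepoint_ptr_t;
#endif

static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
{
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
        /* Relative reference: convert the stored offset to a pointer. */
        return offset_to_ptr(p);
#else
        return *p;
#endif
}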
* si_mem_available() was added in kernel 4.6 with commit d02bd27.
* {set, clear}_current_oom_origin() were added in kernel 3.8 with commit
e1e12d2f.
Solution
========
Add wrappers around these functions such that older kernels will build
with these functions defined as NOP or trivial return value.
wrapper_check_enough_free_pages() uses the si_mem_available() kernel
function to compute whether the number of pages requested, passed as a
parameter, is smaller than the number of pages available on the machine.
If the si_mem_available() kernel function is unavailable, we always
return true.
The wrapper_set_current_oom_origin() function wraps the
set_current_oom_origin() kernel function when it is available.
If set_current_oom_origin() is unavailable, the wrapper is empty.
The wrapper_clear_current_oom_origin() function wraps the
clear_current_oom_origin() kernel function when it is available.
If clear_current_oom_origin() is unavailable, the wrapper is empty.
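A sketch of the first wrapper (the oom-origin wrappers follow the same
version-gated pattern):

#include <linux/version.h>
#include <linux/mm.h>

#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0))
static inline
bool wrapper_check_enough_free_pages(unsigned long num_pages)
{
        return num_pages <= (unsigned long) si_mem_available();
}
#else
static inline
bool wrapper_check_enough_free_pages(unsigned long num_pages)
{
        /* Without si_mem_available(), assume there is enough memory. */
        return true;
}
#endif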
Drawbacks
=========
None.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Prevent allocation of buffers if exceeding available memory
Issue
=====
The running system can be rendered unusable by creating a channel
buffers larger than the available memory of the system, resulting in
random processes being killed by the OOM-killer.
These simple commands trigger the crash on my laptop with 15G of RAM:
lttng create
lttng enable-channel -k --subbuf-size=16G --num-subbuf=1 chan0
Note that the subbuf-size * num-subbuf is larger than the physical
memory.
Solution
========
Get an estimate of the number of available pages and return ENOMEM if
there are not enough pages to cover the needs of the caller. Also, mark
the calling user thread as the first target for the OOM killer in case
the estimate of available pages was wrong.
This greatly reduces the attack surface of this issue as well as reducing
its potential impact.
This approach is inspired by the one taken by the Linux kernel
trace ring buffer[1].
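In sketch form, the guard at buffer allocation time (names illustrative):

if (!wrapper_check_enough_free_pages(nr_pages_total))
        return -ENOMEM;

/* If the estimate was wrong, make this thread the OOM killer's
 * first target rather than a random process. */
wrapper_set_current_oom_origin();
ret = allocate_buffers(chan, nr_pages_total);
wrapper_clear_current_oom_origin();
if (ret)
        return ret;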
Drawback
========
This approach is imperfect because it's based on an estimate.
rcu: Convert rcu_grace_period tracepoint to gp_seq
This commit makes the rcu_grace_period tracepoint use gp_seq instead
of ->gpnum or ->completed. It also introduces a "cpuofl-bgp" string to
less obscurely indicate when a CPU has gone offline while a grace period
is waiting on it.
net: expose sk wmem in sock_exceed_buf_limit tracepoint
Currently trace_sock_exceed_buf_limit() only shows rmem info,
but the wmem limit may also be hit.
So expose wmem info in this tracepoint as well.
Regarding memcg, I think it is better to introduce a new tracepoint (if
that is needed), i.e. trace_memcg_limit_hit, rather than show memcg info
in trace_sock_exceed_buf_limit.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Jonathan Rajotte [Wed, 19 Sep 2018 21:48:49 +0000 (17:48 -0400)]
Fix: access migrate_disable field directly
For stable real time kernels > 4.9, the __migrate_disabled utility symbol
is not always exported. This can result in linking problems at build time
and runtime, preventing the loading of the tracer.
The problem was reported to the RT community. [1] [2]
A solution is to access the field directly instead of using the
utility wrapper.
It is important to note that the field is now available for other
configurations than CONFIG_PREEMPT_RT_FULL. For now, we choose to
expose the migratable context only for configurations where
CONFIG_PREEMPT_RT_FULL is set.
Based on the configuration dependency of the kernels, selecting
CONFIG_PREEMPT_RT_FULL ensures the presence of the migrate_disable
field.
CPU hotplug handles teardown on failure to complete adding an instance
of CPU hotplug. Trying to remove after a failed "add" on that instance
triggers a NULL pointer dereference OOPS.
Michael Jeanson [Thu, 9 Aug 2018 15:56:56 +0000 (11:56 -0400)]
Fix: adjust SLE version ranges to build with SP2 and SP3
The early kernel versions of SuSE 12 SP3 overlap with the range from the
later SP2 kernels but are from a different source trees. This patch adds
specific ranges for the SP3 kernels that overlap and allows compatibility
with both SP2 and SP3 kernels.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Thu, 9 Aug 2018 15:56:55 +0000 (11:56 -0400)]
Fix: Allow alphanumeric characters in SLE version
Allow alphanumeric characters in the long version string before
extracting specific version numbers. This prevents failure in detecting
a SuSE kernel when the version string was customized by the end user.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
btrfs: trace: Remove unnecessary fs_info parameter for btrfs__reserve_extent event class
fs_info can be extracted from btrfs_block_group_cache, and every
btrfs_block_group_cache is created by btrfs_create_block_group_cache()
with fs_info initialized, so there is no need to worry about NULL pointer
dereference.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
After a change to the snd_jack structure, the 'name' member
is no longer available in all configurations, which results in a
build failure in the tracing code:
include/trace/events/asoc.h: In function 'trace_event_raw_event_snd_soc_jack_report':
include/trace/events/asoc.h:240:32: error: 'struct snd_jack' has no member named 'name'
The name field is normally initialized from the card shortname and
the jack "id" field.
This changes the tracing output to just contain the 'id' by
itself, which slightly changes the output format but avoids the
build error and is hopefully still enough to see what is going on.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The snd_soc_dapm_input_path and snd_soc_dapm_output_path trace events are
identical except for the direction. Instead of having two events, have a
single one with a field that contains the direction.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The ASoC framework is in the process of migrating all IO operations to regmap.
regmap has its own more sophisticated tracing infrastructure for IO operations,
which means that the ASoC level IO tracing becomes redundant, hence this patch
removes them. There are still a handful of ASoC drivers left that do not use
regmap yet, but hopefully the removal of the ASoC IO tracing will be an
additional incentive to switch to regmap.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
BUILD_BUG_ON(): fix it and a couple of bogus uses of it
gcc permitting variable length arrays makes the current construct used for
BUILD_BUG_ON() useless, as that doesn't produce any diagnostic if the
controlling expression isn't really constant. Instead, this patch makes
it so that a bit field gets used here. Consequently, those uses where the
condition isn't really constant now also need fixing.
Note that in the gfp.h, kmemcheck.h, and virtio_config.h cases
MAYBE_BUILD_BUG_ON() really just serves documentation purposes - even if
the expression is compile time constant (__builtin_constant_p() yields
true), the array is still deemed of variable length by gcc, and hence the
whole expression doesn't have the intended effect.
BUILD_BUG_ON used to use the optimizer to do code elimination or fail
at link time; it was changed to use first the size of a negative array (a
nicer compile time error), then (in 8c87df457cb58fe75b9b893007917cf8095660a0) a bitfield.
This forced us to change some non-constant cases to MAYBE_BUILD_BUG_ON();
as Jan points out in that commit, it didn't work as intended anyway.
bitfields: needs a literal constant at parse time, and can't be put under
"if (__builtin_constant_p(x))" for example.
negative array: can handle anything, but if the compiler can't tell it's
a constant, silently has no effect.
link time: breaks link if the compiler can't determine the value, but the
linker output is not usually as informative as a compiler error.
If we use the negative-array-size method *and* the link time trick,
we get the ability to use BUILD_BUG_ON() under __builtin_constant_p()
branches, and maximal ability for the compiler to detect errors at
build time.
We also document it thoroughly.
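The combined construct is essentially:

extern int __build_bug_on_failed;
#define BUILD_BUG_ON(condition)                                 \
        do {                                                    \
                ((void)sizeof(char[1 - 2*!!(condition)]));      \
                if (condition) __build_bug_on_failed = 1;       \
        } while(0)

The negative array size gives a readable compile-time error whenever the
condition is provably constant, and the reference to the undefined
__build_bug_on_failed breaks the link when it is not.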
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: pid tracker should track "tgid" for noargs probes
The "pid" notion exposed by LTTng translates to the "tgid" notion in the
Linux kernel. Therefore using "current->pid" as argument to the PID
tracker actually ends up behaving as a "tid" tracker, which matches
neither the intent nor the user-space tracer behavior.
The probes taking arguments were fixed by a prior commit, but it missed
probes without arguments.
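A sketch of the change in the noargs probes, with illustrative names for
the tracker lookup:

/* Before: current->pid is the thread id, so this behaved as a "tid"
 * tracker. */
tracked = lttng_id_tracker_lookup(&session->pid_tracker, current->pid);

/* After: use the thread group id, which is what LTTng exposes as "pid". */
tracked = lttng_id_tracker_lookup(&session->pid_tracker,
                                  task_tgid_nr(current));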
Clean up: struct rpc_task carries a pointer to a struct rpc_clnt,
and in fact task->tk_client is always what is passed into trace
points that are already passing @task.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm, vmscan, tracing: use pointer to reclaim_stat struct in trace event
The trace event trace_mm_vmscan_lru_shrink_inactive() currently has 12
parameters! Seven of them are from the reclaim_stat structure. This
structure is currently local to mm/vmscan.c. By moving it to the global
vmstat.h header, we can also reference it from the vmscan tracepoints.
Moving it also brings down the overhead of passing so many arguments
to the trace event. In the future, we may limit the number of arguments
that a trace event may pass (ideally just 6, but more realistically it
may be 8).
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm, page_alloc: wakeup kcompactd even if kswapd cannot free more memory
Kswapd will not wake up if per-zone watermarks are not failing or if too
many previous attempts at background reclaim have failed.
This can be true if there is a lot of free memory available. For
high-order allocations, kswapd is responsible for waking up kcompactd for
background compaction. If the zone is not below its watermarks or
reclaim has recently failed (lots of free memory, nothing left to
reclaim), kcompactd does not get woken up.
When __GFP_DIRECT_RECLAIM is not allowed, allow kcompactd to still be
woken up even if kswapd will not reclaim. This allows high-order
allocations, such as thp, to still trigger background compaction even
when the zone has an abundance of free memory.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Lars Persson [Sun, 11 Mar 2018 14:02:43 +0000 (15:02 +0100)]
Fix: do not use CONFIG_HOTPLUG_CPU for the new hotplug API
Kernel configurations without CONFIG_HOTPLUG_CPU throw an unknown
symbol error when attempting to insert the lttng-trace module:
lttng_tracer: Unknown symbol lttng_hp_prepare (err 0)
lttng_tracer: Unknown symbol lttng_hp_online (err 0)
This was caused by lttng-events and lttng-context-perf-counter not
agreeing on which preprocessor condition should guard the use of
the hotplug API. In fact the API is available also on kernels built
without CONFIG_HOTPLUG_CPU.
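In sketch form, the guard becomes a kernel-version check rather than a
config check (the 4.10 cutoff and callback names are assumptions for
illustration; lttng_hp_online is one of the symbols from the error
messages above):

#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0))
/* State machine API exists even without CONFIG_HOTPLUG_CPU. */
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
                "lttng:events:online", lttng_hp_online, NULL);
#else
/* legacy CPU notifier implementation */
#endif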
Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Julien Desfossez [Fri, 23 Feb 2018 16:37:10 +0000 (11:37 -0500)]
Create a memory pool for temporary tracepoint probes storage
This memory pool is created when the lttng-tracer module is loaded. It
allocates 4 buffers of 4k on each CPU. These buffers are designed to
allow tracepoint probes to temporarily store data that does not fit on
the stack (during the code_pre and code_post phases). The memory is
freed when the lttng-tracer module is unloaded.
This removes the need for dynamic allocation during the execution of
tracepoint probes, which does not behave well on PREEMPT_RT kernel, even
when invoked with the GFP_ATOMIC | GFP_NOWAIT flags.
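A hypothetical sketch of the pool layout described above:

#include <linux/percpu.h>

#define NR_TP_BUF_PER_CPU       4
#define TP_BUF_SIZE             4096

struct tp_buf {
        bool in_use;
        char data[TP_BUF_SIZE];
};

struct tp_pool {
        struct tp_buf buf[NR_TP_BUF_PER_CPU];
};

static struct tp_pool __percpu *tp_mempool;

static int __init tp_mempool_init(void)
{
        /* 4 buffers of 4k per CPU, allocated at module load time. */
        tp_mempool = alloc_percpu(struct tp_pool);
        return tp_mempool ? 0 : -ENOMEM;
}

static void __exit tp_mempool_destroy(void)
{
        free_percpu(tp_mempool);
}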
Michael Jeanson [Wed, 21 Feb 2018 21:36:17 +0000 (16:36 -0500)]
Fix: use proper pid_ns in the process statedump
The pid_ns we currently use from the nsproxy struct is not the task's
pid_ns but the one that children of this task will use.
As stated in include/linux/nsproxy.h :
The pid namespace is an exception -- it's accessed using
task_active_pid_ns. The pid namespace here is the
namespace that children will use.
While it will be the same most of the time, the current code will report
incorrect information in some situations. Using task_active_pid_ns() also
has the side effect of simplifying the code and removing kernel version
checks.
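The change boils down to (sketch; the nsproxy field name varies across
kernel versions):

/* Before: the namespace that children of p will use. */
pid_ns = p->nsproxy ? p->nsproxy->pid_ns_for_children : NULL;

/* After: the task's own pid namespace, with no version checks. */
pid_ns = task_active_pid_ns(p);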
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Tue, 20 Feb 2018 17:10:05 +0000 (12:10 -0500)]
Update: kvm instrumentation for fedora 4.14.13-300
Starting from 4.14.13-300, the fedora kernel backports a kvm instrumentation
change introduced in 4.15 which affects the prototype of the kvm_mmio event.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long
Because the ->dynticks_nesting field now only contains the process-based
nesting level instead of a value encoding both the process nesting level
and the irq "nesting" level, we no longer need a long long, even on
32-bit systems. This commit therefore changes both the ->dynticks_nesting
and ->dynticks_nmi_nesting fields to long.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Previously we were using the ratio of the number of lru pages scanned to
the number of eligible lru pages to determine the number of slab objects
to scan. The problem with this is that these two things have nothing to
do with each other, so in slab heavy work loads where there is little to
no page cache we can end up with the pages scanned being a very low
number. This means that we reclaim next to no slab pages and waste a
lot of time reclaiming small amounts of space.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>