Fix: timestamp_end field should include all events within sub-buffer
Fix for timestamp_end not including all events within sub-buffer. This
happens if a thread is preempted/interrupted for a long time between
reserve and commit (e.g. in the middle of a packet), which causes the
timestamp used for timestamp_end field of the packet header to be lower
than the timestamp of the last events in the buffer (those following the
event that was preempted/interrupted between reserve and commit).
The fix involves sampling the timestamp when doing the last space
reservation in a sub-buffer (which necessarily happens before doing the
delivery after its last commit). Save this timestamp temporarily in a
per-sub-buffer control area (we have exclusive access to that area until
we increment the commit counter).
Then, that timestamp value will be read when delivering the sub-buffer,
whichever event or switch happens to be the last to increment the commit
counter to perform delivery. The timestamp value can be read without
worrying about concurrent access, because at that point sub-buffer
delivery has exclusive access to the sub-buffer.
This ensures the timestamp_end value is always larger than or equal to
the timestamp of the last event, always below or equal to the
timestamp_begin of the following packet, and always below or equal to
the timestamp of the first event in the following packet.
syscalls: Remove start and number from syscall_get_arguments() args
At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number of arguments to get. When looking at
all the users of this function, I discovered that all instances pass in only
0 and 6 for these arguments. Instead of having this function handle
different cases that are never used, simply rewrite it to return the first 6
arguments of a system call.
This should help out the performance of tracing system calls by ptrace,
ftrace and perf.
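The prototype change, roughly (per the asm-generic syscall.h interface):

    /* before: caller supplied a starting index and a count */
    void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
                               unsigned int i, unsigned int n,
                               unsigned long *args);

    /* after: always copies the first 6 arguments into args[0..5] */
    void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
                               unsigned long *args);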
Fix: don't access packet header for stream_id and stream_instance_id getters
The stream ID and stream instance ID are invariant for a stream, so
there is no point reading them from the packet header currently owned by
the consumer (between get/put subbuf).
Actually, the consumer tries to access the stream_id from the live timer
when sending a live beacon without getting the reader subbuffer first.
Doing so is racy against producers. In typical live scenarios
(non-overwrite channels), the producers will always write the same
stream id and stream instance id values at the same header offsets,
which will "work", except for the initial state of an empty buffer:
the value "0" will be returned (erroneously).
For the less frequently used scenario of a live session with "overwrite"
channels, this will trigger WARN_ON safety nets in libringbuffer. This
safety net triggers a kernel OOPS report and disables tracing for that
channel.
In the case where a ring buffer does not have any data ready, it makes
no sense to try to get a subbuffer for reading anyway, so the approach
was broken.
So return the stream id and stream instance id from the internal
data structures rather than reading it from the ring buffer.
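A minimal sketch of the resulting getters (illustrative structure and
names, not the exact lttng-modules code):

    struct lttng_stream_state {
            u64 stream_id;          /* set at stream creation, invariant */
            u64 stream_instance_id;
    };

    /* Safe to call from the live timer: reads internal state only, so
     * no get/put subbuf is needed and there is no race against
     * producers. */
    static u64 get_stream_id(const struct lttng_stream_state *s)
    {
            return s->stream_id;
    }

    static u64 get_stream_instance_id(const struct lttng_stream_state *s)
    {
            return s->stream_instance_id;
    }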
Michael Jeanson [Mon, 18 Mar 2019 20:20:36 +0000 (16:20 -0400)]
Fix: atomic_long_add_unless() returns a boolean
Because of a documentation error in older kernels, it was assumed that
atomic_long_add_unless would return the old value, but the
implementation actually returns a boolean.
Also add the missing error code in 'ret' and compare against the max
value of the right type.
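The usage difference, as a sketch (the -EBUSY error code here is
illustrative):

    static int try_get_ref(atomic_long_t *v)
    {
            int ret = 0;

            /* atomic_long_add_unless() returns true iff the add was
             * performed; it does NOT return the old value. */
            if (!atomic_long_add_unless(v, 1, LONG_MAX))
                    ret = -EBUSY;   /* illustrative error code */
            return ret;
    }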
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Revert "KVM: MMU: show mmu_valid_gen in shadow page related tracepoints"
...as part of removing x86 KVM's fast invalidate mechanism, i.e. this
is one part of reverting all patches from the series that introduced the
mechanism[1].
Al Viro pointed out that since there is only one pipe buffer type to which
new data can be appended, it isn't necessary to have a ->can_merge field in
struct pipe_buf_operations; we can just check for a magic type instead.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
rcu: Remove wrapper definitions for obsolete RCU update functions
None of synchronize_rcu_bh, synchronize_rcu_bh_expedited, call_rcu_bh,
rcu_barrier_bh, synchronize_sched, synchronize_sched_expedited,
call_rcu_sched, rcu_barrier_sched, get_state_synchronize_sched, and
cond_synchronize_sched are actually used. This commit therefore removes
their trivial wrapper-function definitions.
Page fault handlers are supposed to return VM_FAULT codes, but some
drivers/file systems mistakenly return error numbers. Now that all
drivers/file systems have been converted to use the vm_fault_t return
type, change the type definition to no longer be compatible with 'int'.
By making it an unsigned int, the function prototype becomes
incompatible with a function which returns int. Sparse will detect any
attempts to return a value which is not a VM_FAULT code.
VM_FAULT_SET_HINDEX and VM_FAULT_GET_HINDEX values are changed to avoid
conflict with other VM_FAULT codes.
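The resulting definition is roughly:

    typedef unsigned int __bitwise vm_fault_t;

    /* sparse now rejects the common bug of returning an errno: */
    static vm_fault_t broken_fault(struct vm_fault *vmf)
    {
            return -ENOMEM;         /* wrong: should be VM_FAULT_OOM */
    }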
Fix: extra-version-git.sh redirect stderr to /dev/null
Running make in a git repo that does not contain any tag prints:
fatal: No names found, cannot describe anything.
in the make and make clean outputs.
It's fine to have no tag name available (extra-version-git.sh will
return the value 0), but we should not print an error in the make
output. Redirect this error to /dev/null.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Suggested-by: Michael Jeanson <mjeanson@efficios.com>
ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE
The arm compiler internally interprets an inline assembly label
as an unsigned long value, not a pointer. As a result, under
CONFIG_FORTIFY_SOURCE, the address of a label has a size of 4 bytes,
which was tripping the runtime checks. Instead, we can just cast the label
(as done with the size calculations earlier).
Reported-by: William Cohen <wcohen@redhat.com>
Fixes: 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions")
Cc: stable@vger.kernel.org
Acked-by: Laura Abbott <labbott@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: William Cohen <wcohen@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
It was introduced in the 4.20 cycle.
It was also backported to the 4.19 and 4.14 branches.
This issue is fixed upstream by [1] and is present in the 5.0 kernel
release.
btrfs: Remove fsid/metadata_fsid fields from btrfs_fs_info
Currently the btrfs_fs_info structure contains a copy of the
fsid/metadata_uuid fields. The same values are also contained in the
btrfs_fs_devices structure which fs_info has a reference to. Let's
reduce duplication by removing the fields from fs_info and always refer
to the ones in fs_devices. No functional changes.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
of the user address range verification function since we got rid of the
old racy i386-only code to walk page tables by hand.
It existed because the original 80386 would not honor the write protect
bit when in kernel mode, so you had to do COW by hand before doing any
user access. But we haven't supported that in a long time, and these
days the 'type' argument is a purely historical artifact.
A discussion about extending 'user_access_begin()' to do the range
checking resulted in this patch, because there is no way we're going to
move the old VERIFY_xyz interface to that model. And it's best done at
the end of the merge window when I've done most of my merges, so let's
just get this done once and for all.
This patch was mostly done with a sed-script, with manual fix-ups for
the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
There were a couple of notable cases:
- csky still had the old "verify_area()" name as an alias.
- the iter_iov code had magical hardcoded knowledge of the actual
values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
really used it)
- microblaze used the type argument for a debug printout
but other than those oddities this should be a total no-op patch.
I tried to fix up all architectures, did fairly extensive grepping for
access_ok() uses, and the changes are trivial, but I may have missed
something. Any missed conversion should be trivially fixable, though.
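The trivial form of the conversion:

    /* before */
    if (!access_ok(VERIFY_READ, uptr, size))
            return -EFAULT;

    /* after: the historical 'type' argument is simply dropped */
    if (!access_ok(uptr, size))
            return -EFAULT;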
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
ext4: adjust reserved cluster count when removing extents
Modify ext4_ext_remove_space() and the code it calls to correct the
reserved cluster count for pending reservations (delayed allocated
clusters shared with allocated blocks) when a block range is removed
from the extent tree. Pending reservations may be found for the clusters
at the ends of written or unwritten extents when a block range is removed.
If a physical cluster at the end of an extent is freed, it's necessary
to increment the reserved cluster count to maintain correct accounting
if the corresponding logical cluster is shared with at least one
delayed and unwritten extent as found in the extents status tree.
Add a new function, ext4_rereserve_cluster(), to reapply a reservation
on a delayed allocated cluster sharing blocks with a freed allocated
cluster. To avoid ENOSPC on reservation, a flag is applied to
ext4_free_blocks() to briefly defer updating the freeclusters counter
when an allocated cluster is freed. This prevents another thread
from allocating the freed block before the reservation can be reapplied.
Redefine the partial cluster object as a struct to carry more state
information and to clarify the code using it.
Adjust the conditional code structure in ext4_ext_remove_space to
reduce the indentation level in the main body of the code to improve
readability.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
There are no more users of SEND_SIG_FORCED so it may be safely removed.
Remove the definition of SEND_SIG_FORCED, its use in is_si_special,
its use in TP_STORE_SIGINFO, and its use in __send_signal, as without
any users these uses are now unnecessary.
This makes the code simpler and easier to understand and use. Users of
signal sending functions now no longer need to ask themselves whether
they need to use SEND_SIG_FORCED.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
signal: Distinguish between kernel_siginfo and siginfo
Linus recently observed that if we did not worry about the padding
member in struct siginfo it is only about 48 bytes, and 48 bytes is
much nicer than 128 bytes for allocating on the stack and copying
around in the kernel.
The obvious thing of only adding the padding when userspace is
including siginfo.h won't work as there are sigframe definitions in
the kernel that embed struct siginfo.
So split siginfo in two: kernel_siginfo and siginfo, keeping the
traditional name for the userspace definition, while the version used
internally by the kernel, which will ultimately not be padded to
128 bytes, is called kernel_siginfo.
The definition of struct kernel_siginfo I have put in include/signal_types.h.
A set of build-time checks has been added to verify the two structures have
the same field offsets.
To make it easy to verify the change kernel_siginfo retains the same
size as siginfo. The reduction in size comes in a following change.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Upstream Linux commit 46e0c9be20 introduces relative references in the
struct tracepoint array of pointers.
Up to and including v4.19-rc7, the upstream kernel has a type mismatch
bug that allows it to pass an out-of-bounds end-of-array pointer to the
modules coming/going notifiers.
The fix for upstream Linux is to introduce a new type: tracepoint_ptr_t,
which can be used to adequately iterate on the array. It is introduced
prior to v4.19 as commit 9c0be3f6b5d77 "tracepoint: Fix tracepoint array
element size mismatch".
* si_mem_available() was added in kernel 4.6 with commit d02bd27.
* {set, clear}_current_oom_origin() were added in kernel 3.8 with commit e1e12d2f
Solution
========
Add wrappers around these functions such that older kernels will build
with these functions defined as NOP or trivial return value.
wrapper_check_enough_free_pages() uses the si_mem_available() kernel
function to compute whether the number of pages requested, passed as a
parameter, is smaller than the number of pages available on the machine.
If the si_mem_available() kernel function is unavailable, we always
return true.
wrapper_set_current_oom_origin() function wraps the
set_current_oom_origin() kernel function when it is available.
If set_current_oom_origin() is unavailable the wrapper is empty.
wrapper_clear_current_oom_origin() function wraps the
clear_current_oom_origin() kernel function when it is available.
If clear_current_oom_origin() is unavailable the wrapper is empty.
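A sketch of the version-guarded wrappers (simplified; the exact layout
and naming in the lttng-modules wrapper/ headers may differ):

    #include <linux/version.h>
    #include <linux/mm.h>
    #include <linux/oom.h>

    #if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0))
    static inline bool wrapper_check_enough_free_pages(unsigned long num_pages)
    {
            return num_pages <= (unsigned long) si_mem_available();
    }
    #else
    static inline bool wrapper_check_enough_free_pages(unsigned long num_pages)
    {
            return true;    /* no si_mem_available(): trivial return value */
    }
    #endif

    #if (LINUX_VERSION_CODE >= KERNEL_VERSION(3,8,0))
    static inline void wrapper_set_current_oom_origin(void)
    {
            set_current_oom_origin();
    }
    static inline void wrapper_clear_current_oom_origin(void)
    {
            clear_current_oom_origin();
    }
    #else
    static inline void wrapper_set_current_oom_origin(void) { }
    static inline void wrapper_clear_current_oom_origin(void) { }
    #endif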
Drawbacks
=========
None.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Prevent allocation of buffers if exceeding available memory
Issue
=====
The running system can be rendered unusable by creating a channel
buffers larger than the available memory of the system, resulting in
random processes being killed by the OOM-killer.
These simple commands trigger the crash on my 15G of RAM laptop:
  lttng create
  lttng enable-channel -k --subbuf-size=16G --num-subbuf=1 chan0
Note that the subbuf-size * num-subbuf is larger than the physical
memory.
Solution
========
Get an estimate of the number of available pages and return ENOMEM if
there are not enough pages to cover the needs of the caller. Also, mark
the calling user thread as the first target for the OOM killer in case
the estimate of available pages was wrong.
This greatly reduces the attack surface of this issue as well as reducing
its potential impact.
This approach is inspired by the one taken by the Linux kernel
trace ring buffer[1].
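The allocation path then looks roughly like this (do_alloc() is a
hypothetical stand-in for the actual buffer allocation):

    static int alloc_channel_buffers(size_t subbuf_size, size_t num_subbuf)
    {
            unsigned long num_pages = (subbuf_size * num_subbuf) >> PAGE_SHIFT;
            int ret;

            if (!wrapper_check_enough_free_pages(num_pages))
                    return -ENOMEM;

            /* The estimate may be wrong: volunteer this thread to the
             * OOM killer so a bad guess kills the requesting process
             * rather than a random one. */
            wrapper_set_current_oom_origin();
            ret = do_alloc(subbuf_size, num_subbuf); /* hypothetical helper */
            wrapper_clear_current_oom_origin();
            return ret;
    }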
Drawback
========
This approach is imperfect because it's based on an estimate.
rcu: Convert rcu_grace_period tracepoint to gp_seq
This commit makes the rcu_grace_period tracepoint use gp_seq instead
of ->gpnum or ->completed. It also introduces a "cpuofl-bgp" string to
less obscurely indicate when a CPU has gone offline while a grace period
is waiting on it.
net: expose sk wmem in sock_exceed_buf_limit tracepoint
Currently trace_sock_exceed_buf_limit() only shows rmem info,
but the wmem limit may also be hit.
So expose wmem info in this tracepoint as well.
Regarding memcg, I think it is better to introduce a new tracepoint (if
that is needed), i.e. trace_memcg_limit_hit, rather than showing memcg
info in trace_sock_exceed_buf_limit.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Jonathan Rajotte [Wed, 19 Sep 2018 21:48:49 +0000 (17:48 -0400)]
Fix: access migrate_disable field directly
For stable real-time kernels > 4.9, the __migrate_disabled utility symbol
is not always exported. This can result in linking problems at build time
and at runtime, preventing the loading of the tracer.
The problem was reported to the RT community. [1] [2]
A solution is to access the field directly instead of using the
utility wrapper.
It is important to note that the field is now available for other
configurations than CONFIG_PREEMPT_RT_FULL. For now, we choose to
expose the migratable context only for configurations where
CONFIG_PREEMPT_RT_FULL is set.
Based on the configuration dependency of the kernels, selecting
CONFIG_PREEMPT_RT_FULL ensures the presence of the migrate_disable
field.
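A sketch of the direct access (the migrate_disable field is provided by
the RT patch set; the guard matches the configuration choice above):

    #ifdef CONFIG_PREEMPT_RT_FULL
    /* __migrate_disabled() is not reliably exported on stable-rt > 4.9,
     * so read the task_struct field directly instead of calling the
     * utility wrapper. */
    static inline int lttng_migrate_disabled(struct task_struct *p)
    {
            return p->migrate_disable;
    }
    #endif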
CPU hotplug handles teardown on failure to complete adding an instance
of CPU hotplug. Trying to remove after a failed "add" on that instance
triggers a NULL pointer dereference OOPS.
Michael Jeanson [Thu, 9 Aug 2018 15:56:56 +0000 (11:56 -0400)]
Fix: adjust SLE version ranges to build with SP2 and SP3
The early kernel versions of SuSE 12 SP3 overlap with the range of the
later SP2 kernels but are from a different source tree. This patch adds
specific ranges for the SP3 kernels that overlap, allowing compatibility
with both SP2 and SP3 kernels.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Thu, 9 Aug 2018 15:56:55 +0000 (11:56 -0400)]
Fix: Allow alphanumeric characters in SLE version
Allow alphanumeric characters in the long version string before
extracting specific version numbers. This prevents a failure to detect
a SuSE kernel when the version string has been customized by the end user.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
btrfs: trace: Remove unnecessary fs_info parameter for btrfs__reserve_extent event class
fs_info can be extracted from btrfs_block_group_cache, and all
btrfs_block_group_cache instances are created by
btrfs_create_block_group_cache() with fs_info initialized, so there is
no need to worry about NULL pointer dereference.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
After a change to the snd_jack structure, the 'name' member
is no longer available in all configurations, which results in a
build failure in the tracing code:
  include/trace/events/asoc.h: In function 'trace_event_raw_event_snd_soc_jack_report':
  include/trace/events/asoc.h:240:32: error: 'struct snd_jack' has no member named 'name'
The name field is normally initialized from the card shortname and
the jack "id" field.
This changes the tracing output to just contain the 'id' by
itself, which slightly changes the output format but avoids the
link error and is hopefully still enough to see what is going on.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The snd_soc_dapm_input_path and snd_soc_dapm_output_path trace events are
identical except for the direction. Instead of having two events, have a
single one with a field that contains the direction.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The ASoC framework is in the process of migrating all IO operations to regmap.
regmap has its own more sophisticated tracing infrastructure for IO operations,
which means that the ASoC level IO tracing becomes redundant, hence this patch
removes them. There are still a handful of ASoC drivers left that do not use
regmap yet, but hopefully the removal of the ASoC IO tracing will be an
additional incentive to switch to regmap.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
BUILD_BUG_ON(): fix it and a couple of bogus uses of it
gcc permitting variable length arrays makes the current construct used for
BUILD_BUG_ON() useless, as that doesn't produce any diagnostic if the
controlling expression isn't really constant. Instead, this patch makes
it use a bit field. Consequently, those uses where the
condition isn't really constant now also need fixing.
Note that in the gfp.h, kmemcheck.h, and virtio_config.h cases
MAYBE_BUILD_BUG_ON() really just serves documentation purposes - even if
the expression is compile time constant (__builtin_constant_p() yields
true), the array is still deemed of variable length by gcc, and hence the
whole expression doesn't have the intended effect.
BUILD_BUG_ON used to use the optimizer to do code elimination or fail
at link time; it was changed first to use the size of a negative array (a
nicer compile-time error), then (in 8c87df457cb58fe75b9b893007917cf8095660a0)
to a bitfield.
This forced us to change some non-constant cases to MAYBE_BUILD_BUG_ON();
as Jan points out in that commit, it didn't work as intended anyway.
- bitfields: needs a literal constant at parse time, and can't be put under
  "if (__builtin_constant_p(x))" for example.
- negative array: can handle anything, but if the compiler can't tell it's
  a constant, silently has no effect.
- link time: breaks the link if the compiler can't determine the value, but
  the linker output is not usually as informative as a compiler error.
If we use the negative-array-size method *and* the link time trick,
we get the ability to use BUILD_BUG_ON() under __builtin_constant_p()
branches, and maximal ability for the compiler to detect errors at
build time.
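The combined construct then looks roughly like this (simplified):

    /* Never defined anywhere: if the compiler cannot prove 'condition'
     * false at build time, the reference below breaks the final link. */
    extern int __build_bug_on_failed;

    #define BUILD_BUG_ON(condition)                                 \
            do {                                                    \
                    ((void)sizeof(char[1 - 2*!!(condition)]));      \
                    if (condition) __build_bug_on_failed = 1;       \
            } while (0)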
We also document it thoroughly.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: pid tracker should track "pgid" for noargs probes
The "pid" notion exposed by LTTng translates to the "pgid" notion in the
Linux kernel. Therefore using "current->pid" as argument to the PID
tracker actually ends up behaving as a "tid" tracker, which matches
neither the intent nor the user-space tracer behavior.
The probes taking arguments were fixed by a prior commit, but it missed
probes without arguments.
Clean up: struct rpc_task carries a pointer to a struct rpc_clnt,
and in fact task->tk_client is always what is passed into trace
points that are already passing @task.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm, vmscan, tracing: use pointer to reclaim_stat struct in trace event
The trace event trace_mm_vmscan_lru_shrink_inactive() currently has 12
parameters! Seven of them are from the reclaim_stat structure. This
structure is currently local to mm/vmscan.c. By moving it to the global
vmstat.h header, we can also reference it from the vmscan tracepoints.
Moving it also brings down the overhead of passing so many arguments
to the trace event. In the future, we may limit the number of arguments
that a trace event may pass (ideally just 6, but more realistically it
may be 8).
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm, page_alloc: wakeup kcompactd even if kswapd cannot free more memory
Kswapd will not wakeup if per-zone watermarks are not failing or if too
many previous attempts at background reclaim have failed.
This can be true if there is a lot of free memory available. For high-
order allocations, kswapd is responsible for waking up kcompactd for
background compaction. If the zone is not below its watermarks or
reclaim has recently failed (lots of free memory, nothing left to
reclaim), kcompactd does not get woken up.
When __GFP_DIRECT_RECLAIM is not allowed, allow kcompactd to still be
woken up even if kswapd will not reclaim. This allows high-order
allocations, such as thp, to still trigger background compaction even
when the zone has an abundance of free memory.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Lars Persson [Sun, 11 Mar 2018 14:02:43 +0000 (15:02 +0100)]
Fix: do not use CONFIG_HOTPLUG_CPU for the new hotplug API
Kernel configurations without CONFIG_HOTPLUG_CPU throw an unknown
symbol error when attempting to insert the lttng-trace module:
lttng_tracer: Unknown symbol lttng_hp_prepare (err 0)
lttng_tracer: Unknown symbol lttng_hp_online (err 0)
This was caused by lttng-events and lttng-context-perf-counter not
agreeing on which preprocessor condition should guard the use of
the hotplug API. In fact, the API is also available on kernels built
without CONFIG_HOTPLUG_CPU.
Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Julien Desfossez [Fri, 23 Feb 2018 16:37:10 +0000 (11:37 -0500)]
Create a memory pool for temporary tracepoint probes storage
This memory pool is created when the lttng-tracer module is loaded. It
allocates 4 buffers of 4k on each CPU. These buffers are designed to
allow tracepoint probes to temporarily store data that does not fit on
the stack (during the code_pre and code_post phases). The memory is
freed when the lttng-tracer module is unloaded.
This removes the need for dynamic allocation during the execution of
tracepoint probes, which does not behave well on PREEMPT_RT kernels, even
when invoked with the GFP_ATOMIC | GFP_NOWAIT flags.
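A rough sketch of such a pool (illustrative names and a simplified
allocation scheme):

    #define TP_MEMPOOL_NR_BUF   4
    #define TP_MEMPOOL_BUF_SIZE 4096

    struct tp_bufs {
            void *buf[TP_MEMPOOL_NR_BUF];
    };

    static DEFINE_PER_CPU(struct tp_bufs, tp_bufs);

    /* Allocated once at module load, so probes never hit the allocator. */
    static int __init tp_mempool_init(void)
    {
            int cpu, i;

            for_each_possible_cpu(cpu) {
                    struct tp_bufs *p = per_cpu_ptr(&tp_bufs, cpu);

                    for (i = 0; i < TP_MEMPOOL_NR_BUF; i++) {
                            p->buf[i] = kzalloc_node(TP_MEMPOOL_BUF_SIZE,
                                            GFP_KERNEL, cpu_to_node(cpu));
                            if (!p->buf[i])
                                    return -ENOMEM; /* caller unwinds */
                    }
            }
            return 0;
    }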
Michael Jeanson [Wed, 21 Feb 2018 21:36:17 +0000 (16:36 -0500)]
Fix: use proper pid_ns in the process statedump
The pid_ns we currently use from the nsproxy struct is not the task's
pid_ns but the one that children of this task will use.
As stated in include/linux/nsproxy.h:
    The pid namespace is an exception -- it's accessed using
    task_active_pid_ns. The pid namespace here is the
    namespace that children will use.
While it will be the same most of the time, the current code reports
incorrect information in some situations. Using task_active_pid_ns() also
has the side effect of simplifying the code and removing kernel version
checks.
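The change amounts to the following (the nsproxy field was named pid_ns
before kernel 3.11 and pid_ns_for_children afterwards, hence the version
checks the fix removes):

    struct pid_namespace *pid_ns;

    /* before: the namespace that *children* of the task will use */
    pid_ns = task->nsproxy->pid_ns_for_children;

    /* after: the task's own pid namespace, version-independent */
    pid_ns = task_active_pid_ns(task);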
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Tue, 20 Feb 2018 17:10:05 +0000 (12:10 -0500)]
Update: kvm instrumentation for fedora 4.14.13-300
Starting from 4.14.13-300, the fedora kernel backports a kvm instrumentation
change introduced in 4.15 which affects the prototype of the kvm_mmio event.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
rcu: Shrink ->dynticks_{nmi_,}nesting from long long to long
Because the ->dynticks_nesting field now only contains the process-based
nesting level instead of a value encoding both the process nesting level
and the irq "nesting" level, we no longer need a long long, even on
32-bit systems. This commit therefore changes both the ->dynticks_nesting
and ->dynticks_nmi_nesting fields to long.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Previously we were using the ratio of the number of lru pages scanned to
the number of eligible lru pages to determine the number of slab objects
to scan. The problem with this is that these two things have nothing to
do with each other, so in slab-heavy workloads where there is little to
no page cache we can end up with the pages scanned being a very low
number. This means that we reclaim next to no slab pages and waste a
lot of time reclaiming small amounts of space.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
tracing/hrtimer: Print the hrtimer mode in the 'hrtimer_start' tracepoint
The 'hrtimer_start' tracepoint lacks the mode information. The mode is
important because consecutive starts can switch from ABS to REL or from
PINNED to non-PINNED.
This patch is a part of the series to define wb_domain which
represents a domain that wb's (bdi_writeback's) belong to and are
measured against each other in. This will enable IO backpressure
propagation for cgroup writeback.
global_dirty_limit exists to regulate the global dirty threshold which
is a property of the wb_domain. This patch moves hard_dirty_limit,
dirty_lock, and update_time into wb_domain.
This is pure reorganization and doesn't introduce any behavioral
changes.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: network instrumentation handling of corrupted TCP headers
A malformed packet may contain a valid IPv4/IPv6 header, but an
inconsistent TCP header. As a result, the trace contains a fully
formed IPv4/IPv6 header, including the "protocol" or "nexthdr"
fields indicating TCP, but no following TCP header.
This scenario leads to an unreadable CTF trace, because the
trace viewer expects a TCP header, but instead gets the next
event.
Therefore, using the IP header fields as selector for the
transport layer variant is not the right approach: introduce
our own selector field, which allows us to properly deal with this
corner case.
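A sketch of the idea (enum values illustrative):

    /* Written by the probe itself after validating the headers, rather
     * than inferred from the (possibly lying) protocol/nexthdr field. */
    enum transport_header_type {
            TH_NONE = 0,    /* no usable transport header follows */
            TH_TCP,
    };

The CTF variant for the transport header is then selected by this field,
so a corrupted TCP header yields TH_NONE and the viewer does not expect
TCP fields that are not there.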
Update: kvm instrumentation for 4.14.14+, 4.9.77+, 4.4.112+
Starting from 4.14.14, 4.9.77, and 4.4.112, the 4.14, 4.9, and 4.4
stable kernel branches backport a kvm instrumentation change introduced
in 4.15 which affects the prototype of the kvm_mmio event.
This shows exactly how btrfs processes the delayed refs onto disk,
which is very helpful for understanding the delayed ref mechanism and
debugging related bugs.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Tue, 9 Jan 2018 20:43:19 +0000 (15:43 -0500)]
Fix: debian kernel version parsing
The debian version script only worked for ckt kernels, and that was fine
until now because we only had checks for those versions in the code.
ckt (Canonical Kernel Team) kernels were used for a while during the jessie
cycle; their versioning is a bit different. They track the upstream vanilla
stable updates but they don't update the minor version number, instead adding
an additional -cktX. They were all 3.16.7-cktX and after a while the version
switched back to upstream style at 3.16.36.
Knowing that, we can compare regular debian and ckt kernel versions
using this scheme:
MAJOR.PATCHLEVEL.SUBLEVEL.CKT.DEBABI.DEBPATCH
and setting CKT to zero for non-ckt kernels.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Update: kvm instrumentation for 3.16.52 and 3.2.97
Starting from 3.16.52 and 3.2.97, the 3.16 and 3.2 stable kernel
branches backport a kvm instrumentation change introduced in 4.15 which
affects the prototype of the kvm_mmio event.
locking/atomics, mm: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE()
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't currently harmful.
However, for some features it is necessary to instrument reads and
writes separately, which is not possible with ACCESS_ONCE(). This
distinction is critical to correct operation.
It's possible to transform the bulk of kernel code using the Coccinelle
script below. However, this doesn't handle comments, leaving references
to ACCESS_ONCE() instances which have been removed. As a preparatory
step, this patch converts the mm code and comments to use
{READ,WRITE}_ONCE() consistently.
----
virtual patch
@ depends on patch @
expression E1, E2;
@@
- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)
@ depends on patch @
expression E;
@@
- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
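The net effect on a typical call site:

    /* before */
    val = ACCESS_ONCE(p->counter);
    ACCESS_ONCE(p->flag) = 1;

    /* after */
    val = READ_ONCE(p->counter);
    WRITE_ONCE(p->flag, 1);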
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
timer: Prepare to change timer callback argument type
Modern kernel callback systems pass the structure associated with a
given callback to the callback function. The timer callback remains one
of the legacy cases where an arbitrary unsigned long argument continues
to be passed as the callback argument. This has several problems:
- This bloats the timer_list structure with a normally redundant
.data field.
- No type checking is being performed, forcing callbacks to do
explicit type casts of the unsigned long argument into the object
that was passed, rather than using container_of(), as done in most
of the other callback infrastructure.
- Neighboring buffer overflows can overwrite both the .function and
the .data field, providing attackers with a way to elevate from a buffer
overflow into a simplistic ROP-like mechanism that allows calling
arbitrary functions with a controlled first argument.
- For future Control Flow Integrity work, this creates a unique function
prototype for timer callbacks, instead of allowing them to continue to
be clustered with other void functions that take a single unsigned long
argument.
This adds a new timer initialization API, which will ultimately replace
the existing setup_timer(), setup_{deferrable,pinned,etc}_timer() family,
named timer_setup() (to mirror hrtimer_setup(), making instances of its
use much easier to grep for).
In order to support the migration of existing timers into the new
callback arguments, timer_setup() casts its arguments to the existing
legacy types, and explicitly passes the timer pointer as the legacy
data argument. Once all setup_*timer() callers have been replaced with
timer_setup(), the casts can be removed, and the data argument can be
dropped with the timer expiration code changed to just pass the timer
to the callback directly.
Since the regular pattern of using container_of() during local variable
declaration repeats the need for the variable type declaration
to be included, this adds a helper modeled after other from_*()
helpers that wrap container_of(), named from_timer(). This helper uses
typeof(*variable), removing the type redundancy and minimizing the need
for line wraps in forthcoming conversions from "unsigned long data" to
"struct timer_list *" in the timer callbacks.
Finally, in order to support the handful of timer users that perform
open-coded assignments of the .function (and .data) fields, provide
cast macros (TIMER_FUNC_TYPE and TIMER_DATA_TYPE) that can be used
temporarily. Once conversion has been completed, these can be globally
trivially removed.
This converts all remaining cases of the old setup_timer() API into using
timer_setup(), where the callback argument is the structure already
holding the struct timer_list. These should have no behavioral changes,
since they just change which pointer is passed into the callback with
the same available pointers after conversion. It handles the following
examples, in addition to some other variations.
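A representative conversion (illustrative driver code):

    struct my_dev {
            struct timer_list poll_timer;
    };

    /* before: callback receives an opaque unsigned long */
    static void poll_fn_old(unsigned long data)
    {
            struct my_dev *dev = (struct my_dev *)data;
            /* ... use dev ... */
    }
    /* setup_timer(&dev->poll_timer, poll_fn_old, (unsigned long)dev); */

    /* after: callback receives the timer pointer itself */
    static void poll_fn(struct timer_list *t)
    {
            struct my_dev *dev = from_timer(dev, t, poll_timer);
            /* ... use dev ... */
    }
    /* timer_setup(&dev->poll_timer, poll_fn, 0); */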
The static function __vmalloc_node is not visible through KALLSYMS_ALL on
at least some kernels, which leads to a call through a NULL function
pointer when trying to allocate LTTng buffer memory under memory
fragmentation conditions (kmalloc_node failure).
Use __vmalloc_node_range instead, and check that the returned pointer
is non-NULL to ensure this type of failure does not happen in any
condition.
Fall back to __vmalloc(), even though it is not NUMA-aware, in case
we fail to find __vmalloc_node_range, and print an explicit warning
to the user console about the need to enable KALLSYMS_ALL.
This affects kernels < 4.12. Later kernels provide kvmalloc(), which
we use.
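A sketch of the fallback logic, assuming the symbol is resolved through
kallsyms at module load (signatures as on ~4.x kernels):

    /* Resolved with kallsyms_lookup_name() at module init; NULL when
     * CONFIG_KALLSYMS_ALL is disabled. */
    static void *(*vmalloc_node_range_sym)(unsigned long size,
                    unsigned long align, unsigned long start,
                    unsigned long end, gfp_t gfp_mask, pgprot_t prot,
                    unsigned long vm_flags, int node, const void *caller);

    static void *lttng_vmalloc_node(unsigned long size, int node)
    {
            if (vmalloc_node_range_sym)
                    return vmalloc_node_range_sym(size, 1,
                                    VMALLOC_START, VMALLOC_END,
                                    GFP_KERNEL, PAGE_KERNEL, 0, node,
                                    __builtin_return_address(0));
            /* Not NUMA-aware, but always available. */
            printk_once(KERN_WARNING "LTTng: enable CONFIG_KALLSYMS_ALL for NUMA-aware buffer allocation\n");
            return __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);
    }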
Comparing a signed return value against an unsigned nr_pages performs
the comparison as "unsigned", and therefore mistakenly considers
get_user_pages_fast() errors as success.
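The bug pattern, roughly (partial-pin handling illustrative):

    static int pin_pages(unsigned long start, unsigned long nr_pages,
                         struct page **pages)
    {
            /* returns a negative errno or the number of pages pinned */
            long ret = get_user_pages_fast(start, nr_pages, 0, pages);

            /* Broken form: "if (ret < nr_pages)" promotes ret to
             * unsigned, so a negative errno becomes a huge value and
             * the error is treated as success. Check signed first: */
            if (ret < 0)
                    return ret;
            if ((unsigned long)ret < nr_pages)
                    return -EFAULT;
            return 0;
    }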
By passing an invalid pointer to write() to the /proc/lttng-logger
interface, unprivileged user-space processes can trigger a kernel OOPS.
There are now a number of accounting oddities such as mapped file pages
being accounted for on the node while the total number of file pages are
accounted on the zone. This can be coped with to some extent but it's
confusing, so this patch moves the relevant file-based accounting. Due to
throttling logic in the page allocator for reliable OOM detection, it is
still necessary to track dirty and writeback pages on a per-zone basis.
mm: move vmscan writes and file write accounting to the node
As reclaim is now node-based, it follows that page write activity due to
page reclaim should also be accounted for on the node. For consistency,
also account page writes and page dirtying on a per-node basis.
After this patch, there are a few remaining zone counters that may appear
strange but are fine. NUMA stats are still per-zone as this is a
user-space interface that tools consume. NR_MLOCK, NR_SLAB_*,
NR_PAGETABLE, NR_KERNEL_STACK and NR_BOUNCE are all allocations that
potentially pin low memory and cannot trivially be reclaimed on demand.
This information is still useful for debugging a page allocation failure
warning.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>