Mathieu Desnoyers [Mon, 3 Feb 2020 19:49:16 +0000 (14:49 -0500)]
lib ring buffer: move subbuffer_consume_record into LTTNG_RING_BUFFER_COUNT_EVENTS ifdef
When event accounting is disabled, counting of event records consumed by
the iterator should be disabled as well, otherwise it triggers
CHAN_WARN_ON() because the accounting of events produced is not
performed.
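A minimal sketch of the resulting guard (the helper and struct names follow the lib ring buffer code; the wrapper function itself is illustrative):

    static inline
    void lttng_iter_consume_record(const struct lib_ring_buffer_config *config,
                                   struct lib_ring_buffer *buf)
    {
    #ifdef LTTNG_RING_BUFFER_COUNT_EVENTS
            /* Only account a consumed record when produced events are counted
             * too, so the CHAN_WARN_ON() consistency check cannot fire
             * spuriously when accounting is compiled out. */
            subbuffer_consume_record(config, &buf->backend);
    #endif
    }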
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Id8b9e657ee420886b409be1f05ef08a0807fefdc
Mathieu Desnoyers [Tue, 4 Feb 2020 20:44:55 +0000 (15:44 -0500)]
lib ring buffer iterator: introduce lib_ring_buffer_put_current_record
Ensure that the current subbuffer is put after client code has read the
payload of the current record.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Id2173ea67213f7ef8e7395b49c5aa8fff0aefffc
Mathieu Desnoyers [Mon, 3 Feb 2020 19:06:26 +0000 (14:06 -0500)]
Introduce event notifier lib ring buffer client
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I89da147ee956f5759c49bd992bf33fe760d79591
Mathieu Desnoyers [Mon, 3 Feb 2020 19:17:53 +0000 (14:17 -0500)]
lttng_abi_create_stream_fd: expect fd name as parameter
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ic9711863b58307d3ed6cc782efc78e4f59345950
Mathieu Desnoyers [Mon, 3 Feb 2020 19:09:12 +0000 (14:09 -0500)]
LTTng ring buffer clients: expect void pointer as private data to create channel
Triggers will create a channel without using the lttng_channel objects,
so allow any type of private data.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I0725616c84e401c9fcbf00a405a2e2d0f1078979
Mathieu Desnoyers [Thu, 23 Jan 2020 21:02:27 +0000 (16:02 -0500)]
lib ring buffer: use irq_work for wakeup by writer
Using irq_work (like perf does) allows using an interrupt handler
firing soon after the instrumentation execution to issue the wakeups.
This allows the RING_BUFFER_WAKEUP_BY_WRITER ring buffer configuration
to be entirely lock-free, which allows using it in NMI context for
general tracing purposes.
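A minimal sketch of the scheme, assuming a hypothetical buffer structure holding the waitqueue and the irq_work member (struct, field and function names are illustrative):

    #include <linux/irq_work.h>
    #include <linux/wait.h>

    struct rb_buffer {
            wait_queue_head_t read_wait;
            struct irq_work wakeup_pending;
    };

    static void rb_wakeup_handler(struct irq_work *work)
    {
            struct rb_buffer *buf = container_of(work, struct rb_buffer, wakeup_pending);

            /* Runs in IRQ context shortly after the queueing site returns. */
            wake_up_interruptible(&buf->read_wait);
    }

    /* At buffer creation: init_irq_work(&buf->wakeup_pending, rb_wakeup_handler); */
    /* From the writer, even in NMI context, without taking any lock:             */
    /*         irq_work_queue(&buf->wakeup_pending);                              */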
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I842ff15736f53d1283cf953804d803f70779652b
Francis Deslauriers [Fri, 17 Jan 2020 23:17:02 +0000 (18:17 -0500)]
Rename `lttng_event_{get,put}()` to `lttng_event_desc_{get,put}()`
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I99a8b4cdf191555c28da5a38a1e65661421fd7fc
Francis Deslauriers [Wed, 18 Dec 2019 22:10:32 +0000 (17:10 -0500)]
Cleanup: extract function to borrow hashlist bucket
This is going to be reused by the trigger system.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ie6d032374c3991d0a75ad4737e7f082fbc1a74b1
Francis Deslauriers [Tue, 7 Jan 2020 16:00:55 +0000 (11:00 -0500)]
Decouple `struct lttng_event` from filter code
The filter infrastructure will be used by event notifiers and decoupling
this will allow for massive code reuse.
Of all `struct lttng_event`'s fields, filter code needs:
1. The `const struct lttng_event_desc *desc` field,
2. The `struct cds_list_head bytecode_runtime_head` list.
These fields are used to do the tracepoint field relocation
(`apply_field_reloc()` and `specialize_event_payload_lookup()`).
Considering that only these two fields are needed, we can pass them
directly to these functions.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: If569b7d315700660aa84241d112668f2451b715a
Francis Deslauriers [Wed, 18 Dec 2019 22:00:37 +0000 (17:00 -0500)]
Rename `lttng_create_*_if_missing()` in anticipation of event notifiers
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I58a799f992a53215ff04896b783e7ebe31965b7c
Francis Deslauriers [Thu, 5 Dec 2019 20:29:26 +0000 (15:29 -0500)]
Extract event enabler fields to specialized struct
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I356d9b91c6e20c288ca931a4d449a54b67f3937c
Francis Deslauriers [Wed, 18 Dec 2019 21:40:49 +0000 (16:40 -0500)]
Docs: explain why unused `lttng_enabler::ctx` is kept around
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: If2c6b9203ea324bb1ff4051b0a705e7303dbf3a6
Francis Deslauriers [Thu, 5 Dec 2019 19:37:57 +0000 (14:37 -0500)]
Rename `enum lttng_enabler_type` to `_format_type`
This will avoid confusion between the different types of enablers
(event notifier enablers and event enablers).
- Enabler format types describe the way the event name matching is done.
- Enabler types will describe the type of enablers (event
notifier vs event)
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ic71d05159c5f244d0b1ad74f9c0ee6247fcdfbbb
Jonathan Rajotte [Thu, 21 May 2020 13:45:25 +0000 (09:45 -0400)]
Test: add signed value and enum for testings of event notifier capture
Signed-off-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I4be725e3ed1e2f94420f4cdcf5ab6ac7962e2464
Francis Deslauriers [Wed, 30 Sep 2020 18:27:26 +0000 (14:27 -0400)]
Cleanup: remove usage of enum in ABI structures
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I3730f7c0341028b25231c368166ee6e5fd74fa5d
Mathieu Desnoyers [Wed, 21 Oct 2020 16:24:40 +0000 (12:24 -0400)]
Fix: type mismatch in clone instrumentation
The data and metadata types should all agree to use "unsigned long",
else it triggers babeltrace trace parsing errors.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Geneviève Bastien [Wed, 1 Apr 2020 18:31:49 +0000 (14:31 -0400)]
syscalls: Make clone()'s `flags` field a 2 enum struct.
The clone system call has a flags field, whose values are defined in
uapi/linux/sched.h file. This field is now a struct made of 2
enumerations to make the values more readable/meaningful.
The `flags` field has two parts:
1. exit signal: the least significant byte of the `unsigned long` is
the signal the kernel needs to send to the parent process on child
exit,
2. clone options: the remaining bytes of the `unsigned long` are used as a
bitwise flag for the clone options.
Those 2-in-1 fields should be printed using two different CTF fields.
Here's an example babeltrace output of the clone system call:
syscall_entry_clone: { cpu_id = 2 }, { flags = { exit_signal = ( "SIGCHLD" : container = 0x11 ), options = ( "CLONE_CHILD_CLEARTID" | "CLONE_CHILD_SETTID" : container = 0x12000 ) }
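A hedged sketch of how the two parts can be split out of the single `unsigned long` (masks follow the exit-signal/options layout described above):

    unsigned long clone_flags = /* value received by the syscall */ 0;
    unsigned long exit_signal = clone_flags & 0xffUL;   /* e.g. SIGCHLD = 0x11 */
    unsigned long options     = clone_flags & ~0xffUL;  /* ORed CLONE_* option bits */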
Change-Id: Ic375b59fb3b6564f036e1af24d66c0c7069b47d6
Signed-off-by: Geneviève Bastien <gbastien@versatic.net>
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 5 Oct 2020 19:31:42 +0000 (15:31 -0400)]
fix: strncpy equals destination size warning
Some versions of GCC when called with -Wstringop-truncation will warn
when doing a copy of the same size as the destination buffer with
strncpy :
‘strncpy’ specified bound 256 equals destination size [-Werror=stringop-truncation]
Since we unconditionally write '\0' in the last byte, reduce the copy
size by one.
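A small before/after illustration of the pattern (the buffer name is illustrative):

    char name[256];

    /* Before: bound equals sizeof(name), which trips -Wstringop-truncation. */
    /* strncpy(name, src, sizeof(name)); */

    /* After: copy one byte less and keep the unconditional NUL termination. */
    strncpy(name, src, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';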
Change-Id: Idb907c9550817a06fc0dffc489740f63d440e7d4
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Michael Jeanson [Tue, 6 Oct 2020 14:29:33 +0000 (10:29 -0400)]
Set version to 2.13-pre
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I4124a7d9d9c2f7a36816b7e498ffd37ae27da604
Mathieu Desnoyers [Mon, 5 Oct 2020 16:01:37 +0000 (12:01 -0400)]
Cleanup: lttng-syscalls: silence warning about uninitialized bitmap variable
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 2 Oct 2020 17:03:34 +0000 (13:03 -0400)]
Add 'kernel_read' wrapper for kernels < v4.14
See upstream commit:
commit bdd1d2d3d251c65b74ac4493e08db18971c09240
Author: Christoph Hellwig <hch@lst.de>
Date: Fri Sep 1 17:39:13 2017 +0200
fs: fix kernel_read prototype
Use proper ssize_t and size_t types for the return value and count
argument, move the offset last and make it an in/out argument like
all other read/write helpers, and make the buf argument a void pointer
to get rid of lots of casts in the callers.
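A hedged sketch of such a compat wrapper (the wrapper name is illustrative; the pre-v4.14 prototype took the offset by value and returned int):

    #include <linux/fs.h>
    #include <linux/version.h>

    static inline ssize_t lttng_kernel_read(struct file *file, void *buf,
                                            size_t count, loff_t *pos)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 0)
            return kernel_read(file, buf, count, pos);
    #else
            ssize_t ret = kernel_read(file, *pos, buf, count);

            if (ret > 0)
                    *pos += ret;
            return ret;
    #endif
    }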
Change-Id: I825c3fcbcc17e9b46e2a661fadc66b52a94eb2da
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Thu, 24 Sep 2020 19:38:35 +0000 (15:38 -0400)]
fix: Use 'kernel_read' to read from procfs
Use the 'kernel_read' helper to read files in procfs; it has been present
in the kernel since the 2.6 series and does the right thing both on
kernels that require the set_fs dance and on newer ones which don't.
Change-Id: I1a53fda379e0bb9acc79331626925bbdba63d727
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 25 Sep 2020 20:05:00 +0000 (16:05 -0400)]
fix: don't allow userspace copy to read kernel memory
This patch fixes a security issue which allows the root user to read
arbitrary kernel memory. Considering the security model used in LTTng
userspace tooling for kernel tracing, this bug also allows members of
the 'tracing' group to read arbitrary kernel memory.
Calls to __copy_from_user_inatomic() were wrongly enclosed in
set_fs(KERNEL_DS), defeating the access_ok() checks and allowing reads
from kernel memory when a kernel address is provided.
Remove all set_fs() calls around __copy_from_user_inatomic().
As a side effect this will allow us to support v5.10 which should remove
set_fs().
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I35e4562c835217352c012ed96a7b8f93e941381e
Michael Jeanson [Fri, 25 Sep 2020 15:23:58 +0000 (11:23 -0400)]
fix: Add a 1MB limit to lttng_strlen_user_inatomic
The previous implementation was unbounded which could result in long
loops with preemption turned off.
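A hedged sketch of the bounded scan (names and the exact return convention are illustrative):

    #define LTTNG_USER_STRLEN_MAX (1UL << 20)   /* 1 MB bound */

    static unsigned long bounded_user_strlen(const char __user *str)
    {
            unsigned long len;

            for (len = 0; len < LTTNG_USER_STRLEN_MAX; len++) {
                    char c;

                    if (__copy_from_user_inatomic(&c, str + len, 1))
                            return 0;       /* fault */
                    if (c == '\0')
                            return len + 1; /* count includes the terminating NUL */
            }
            return 0;                       /* limit reached: treat as error */
    }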
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I85afcd879258735bb2e7502f6016fcb2d3974cf7
Michael Jeanson [Wed, 23 Sep 2020 18:42:18 +0000 (14:42 -0400)]
fix: Adjust ranges for Ubuntu 4.15.0-119 kernel
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ie32f70f810c8fc756fbd31ab129aeb35500790f7
Michael Jeanson [Wed, 16 Sep 2020 19:16:17 +0000 (15:16 -0400)]
fix: Adjust ranges for Ubuntu HWE 5.0 kernels
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I36f2c3485dcc6ccb74ea86a7ce66fcb1662d060b
Mathieu Desnoyers [Tue, 28 Jan 2020 21:02:44 +0000 (16:02 -0500)]
Fix: system call filter table
The system call filter table has effectively been unused for a long
time due to a system call name prefix mismatch. This means the overhead of
selective system call tracing was larger than it should have been, because
the event payload preparation would be done for all system calls as soon
as a single system call was traced.
However, fixing this underlying issue unearths several issues that crept
in unnoticed when the "enabler" concept was introduced (after the original
implementation of the system call filter table).
Here is a list of the issues which are resolved here:
- Split lttng_syscalls_unregister into an unregister and destroy
function, thus waiting for a grace period (and therefore quiescence
of the users) after unregistering the system call tracepoints before
freeing the system call filter data structures. This effectively fixes
a use-after-free.
- The state for enabling "all" system calls vs enabling specific system
calls (and sequences of enable-disable) was incorrect with respect to
the "enablers" semantic. This is solved by always tracking the
bitmap of enabled system calls, and keeping this bitmap even when
enabling all system calls. The sc_filter is now always allocated
before system call tracing is registered to tracepoints, which means
it does not need to be RCU dereferenced anymore.
Padding fields in the ABI are reserved to select whether to:
- Trace either native or compat system call (or both, which is the
behavior currently implemented),
- Trace either system call entry or exit (or both, which is the
behavior currently implemented),
- Select the system call to trace by name (behavior currently
implemented) or by system call number.
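A hedged sketch of the bitmap tracking mentioned above (array sizing and helper names are illustrative):

    #include <linux/bitmap.h>

    static DECLARE_BITMAP(sc_filter, NR_syscalls);

    /* Enabler for one specific system call:             */
    /*         set_bit(syscall_nr, sc_filter);           */
    /* Enabler for "all" system calls keeps the bitmap:  */
    /*         bitmap_fill(sc_filter, NR_syscalls);      */

    /* Fast check on system call entry/exit: */
    static inline bool syscall_is_traced(unsigned int syscall_nr)
    {
            return syscall_nr < NR_syscalls && test_bit(syscall_nr, sc_filter);
    }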
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 4 Sep 2020 15:52:51 +0000 (11:52 -0400)]
fix: version ranges for ext4_discard_preallocations and writeback_queue_io
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Id4fa53cb2e713cbda651e1a75deed91013115592
Michael Jeanson [Mon, 31 Aug 2020 18:16:01 +0000 (14:16 -0400)]
fix: writeback: Fix sync livelock due to b_dirty_time processing (v5.9)
See upstream commit:
commit f9cae926f35e8230330f28c7b743ad088611a8de
Author: Jan Kara <jack@suse.cz>
Date: Fri May 29 16:08:58 2020 +0200
writeback: Fix sync livelock due to b_dirty_time processing
When we are processing writeback for sync(2), move_expired_inodes()
didn't set any inode expiry value (older_than_this). This can result in
writeback never completing if there's steady stream of inodes added to
b_dirty_time list as writeback rechecks dirty lists after each writeback
round whether there's more work to be done. Fix the problem by using
sync(2) start time as inode expiry value when processing b_dirty_time
list similarly as for ordinarily dirtied inodes. This requires some
refactoring of older_than_this handling which simplifies the code
noticeably as a bonus.
Change-Id: I8b894b13ccc14d9b8983ee4c2810a927c319560b
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 31 Aug 2020 15:41:38 +0000 (11:41 -0400)]
fix: writeback: Drop I_DIRTY_TIME_EXPIRE (v5.9)
See upstream commit:
commit 5fcd57505c002efc5823a7355e21f48dd02d5a51
Author: Jan Kara <jack@suse.cz>
Date: Fri May 29 16:24:43 2020 +0200
writeback: Drop I_DIRTY_TIME_EXPIRE
The only use of I_DIRTY_TIME_EXPIRE is to detect in
__writeback_single_inode() that inode got there because flush worker
decided it's time to writeback the dirty inode time stamps (either
because we are syncing or because of age). However we can detect this
directly in __writeback_single_inode() and there's no need for the
strange propagation with I_DIRTY_TIME_EXPIRE flag.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I92e37c2ff3ec36d431e8f9de5c8e37c5a2da55ea
Michael Jeanson [Tue, 25 Aug 2020 14:56:29 +0000 (10:56 -0400)]
fix: removal of [smp_]read_barrier_depends (v5.9)
See upstream commit:
commit 76ebbe78f7390aee075a7f3768af197ded1bdfbb
Author: Will Deacon <will@kernel.org>
Date: Tue Oct 24 11:22:47 2017 +0100
locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()
In preparation for the removal of lockless_dereference(), which is the
same as READ_ONCE() on all architectures other than Alpha, add an
implicit smp_read_barrier_depends() to READ_ONCE() so that it can be
used to head dependency chains on all architectures.
Change-Id: Ife8880bd9378dca2972da8838f40fc35ccdfaaac
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 24 Aug 2020 19:37:50 +0000 (15:37 -0400)]
fix: ext4: indicate via a block bitmap read is prefetched… (v5.9)
See upstream commit:
commit ab74c7b23f3770935016e3eb3ecdf1e42b73efaa
Author: Theodore Ts'o <tytso@mit.edu>
Date: Wed Jul 15 11:48:55 2020 -0400
ext4: indicate via a block bitmap read is prefetched via a tracepoint
Modify the ext4_read_block_bitmap_load tracepoint so that it tells us
whether a block bitmap is being prefetched.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I0e5e2c5b8004223d0928235c092449ee16a940e1
Michael Jeanson [Mon, 24 Aug 2020 19:26:04 +0000 (15:26 -0400)]
fix: ext4: limit the length of per-inode prealloc list (v5.9)
See upstream commit:
commit 27bc446e2def38db3244a6eb4bb1d6312936610a
Author: brookxu <brookxu.cn@gmail.com>
Date: Mon Aug 17 15:36:15 2020 +0800
ext4: limit the length of per-inode prealloc list
In the scenario of writing sparse files, the per-inode prealloc list may
be very long, resulting in high overhead for ext4_mb_use_preallocated().
To circumvent this problem, we limit the maximum length of per-inode
prealloc list to 512 and allow users to modify it.
After patching, we observed that the sys ratio of cpu has dropped, and
the system throughput has increased significantly. We created a process
to write the sparse file, and the running time of the process on the
fixed kernel was significantly reduced, as follows:
Running time on unfixed kernel:
[root@TENCENT64 ~]# time taskset 0x01 ./sparse /data1/sparce.dat
real 0m2.051s
user 0m0.008s
sys 0m2.026s
Running time on fixed kernel:
[root@TENCENT64 ~]# time taskset 0x01 ./sparse /data1/sparce.dat
real 0m0.471s
user 0m0.004s
sys 0m0.395s
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I5169cb24853d4da32e2862a6626f1f058689b053
Michael Jeanson [Mon, 10 Aug 2020 15:36:03 +0000 (11:36 -0400)]
fix: KVM: x86/mmu: Make kvm_mmu_page definition and accessor internal-only (v5.9)
commit 985ab2780164698ec6e7d73fad523d50449261dd
Author: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Mon Jun 22 13:20:32 2020 -0700
KVM: x86/mmu: Make kvm_mmu_page definition and accessor internal-only
Make 'struct kvm_mmu_page' MMU-only, nothing outside of the MMU should
be poking into the gory details of shadow pages.
Change-Id: Ia5c1b9c49c2b00dad1d5b17c50c3dc730dafda20
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 10 Aug 2020 15:22:05 +0000 (11:22 -0400)]
fix: Move mmutrace.h into the mmu/ sub-directory (v5.9)
commit 33e3042dac6bcc33b80835f7d7b502b1d74c457c
Author: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Mon Jun 22 13:20:29 2020 -0700
KVM: x86/mmu: Move mmu_audit.c and mmutrace.h into the mmu/ sub-directory
Move mmu_audit.c and mmutrace.h under mmu/ where they belong.
Change-Id: I582525ccca34e1e3bd62870364108a7d3e9df2e4
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Wed, 12 Aug 2020 20:58:26 +0000 (16:58 -0400)]
Namespace all logging statements
Add the 'LTTng:' prefix to all our logging statements to easily
distinguish them from other kernel messages.
Change-Id: I90fb4f4c75ce195734ec82946827bcf78e03429a
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Beniamin Sandu [Thu, 13 Aug 2020 13:24:39 +0000 (16:24 +0300)]
Kconfig: fix dependency issue when building in-tree without CONFIG_FTRACE
When building in-tree, one can disable CONFIG_FTRACE in the kernel
config, which leaves CONFIG_TRACEPOINTS selected by the LTTng modules
but generates a lot of linker errors like the ones below, because other
required infrastructure is left out, e.g.:
trace.c:(.text+0xd86b): undefined reference to `trace_event_buffer_reserve'
ld: trace.c:(.text+0xd8de): undefined reference to `trace_event_buffer_commit'
ld: trace.c:(.text+0xd926): undefined reference to `event_triggers_call'
ld: trace.c:(.text+0xd942): undefined reference to `trace_event_ignore_this_pid'
ld: net/mac80211/trace.o: in function `trace_event_raw_event_drv_tdls_cancel_channel_switch':
It appears to be caused by the fact that TRACE_EVENT macros in the Linux
kernel depend on the Ftrace ring buffer as soon as CONFIG_TRACEPOINTS is
enabled.
Steps to reproduce:
- Get a clone of an upstream stable kernel and use scripts/built-in.sh on it
- Configure a standard x86-64 build, enable built-in LTTNG but disable
CONFIG_FTRACE from Kernel Hacking-->Tracers using menuconfig
- Build will fail at linking stage
Signed-off-by: Beniamin Sandu <beniaminsandu@gmail.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Francis Deslauriers [Thu, 6 Aug 2020 15:03:00 +0000 (11:03 -0400)]
Fix: mmap enum flags build failures
Some of the mmap option flags are not available on all architectures and
are defined to zero by include/linux/mman.h. This is probably done as a
way to no-op the use of these flags on configurations that don't support
them.
To fix this, only define these flags in our enumeration if they are
defined and non-zero.
Also, the MAP_HUGE_{2MB,1GB} labels were mistakenly named
MAP_HUGETLB_{2MB,1GB}.
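A hedged sketch of the conditional enumeration entries described above (the ctf_enum_value() form follows the existing probe macros; the chosen flags are just examples):

    #if defined(MAP_LOCKED) && MAP_LOCKED != 0
            ctf_enum_value("MAP_LOCKED", MAP_LOCKED)
    #endif
    #if defined(MAP_UNINITIALIZED) && MAP_UNINITIALIZED != 0
            ctf_enum_value("MAP_UNINITIALIZED", MAP_UNINITIALIZED)
    #endif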
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I778a52a0da9da6e04231a52c7f68a22d122dfb83
Francis Deslauriers [Fri, 5 Jun 2020 15:38:14 +0000 (11:38 -0400)]
syscalls: Make mmap()'s fields `prot` and `flags` enums
The `prot` field is a simple CTF enumeration.
The `flags` field is a CTF struct of 2 CTF enumerations (`type` and
`options`). This is needed to express the two parts of this integer
flag. The 4 least significant bits of the integer are reserved to
express the type of the mapping (MAP_SHARED=0x1, MAP_PRIVATE=0x2, and
MAP_SHARED_VALIDATE=0x3).
The remaining 28 bits are used to specify optional configurations on the
mapping. As opposed to the type part, the options part is a bit flag
field where all values are powers of 2. This part can be expressed as
ORed bit flag values.
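As with the clone() flags above, a hedged sketch of the split (masks follow the 4-bit type / remaining-bits options layout just described):

    unsigned long map_flags = /* value received by the syscall */ 0;
    unsigned long map_type  = map_flags & 0x0fUL;   /* MAP_SHARED, MAP_PRIVATE, MAP_SHARED_VALIDATE */
    unsigned long map_opts  = map_flags & ~0x0fUL;  /* ORed optional MAP_* bits, all powers of 2 */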
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I5ae78754b5863b31d9a3ba1b1173502e1ae284d3
Francis Deslauriers [Fri, 5 Jun 2020 22:42:54 +0000 (18:42 -0400)]
x86: add error code enum to pagefault tracepoints
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ia939eccd1a918958f6a281595e447f33da2d64f7
Michael Jeanson [Mon, 20 Jul 2020 14:48:02 +0000 (10:48 -0400)]
Fix: TAINT_UNSAFE_SMP renamed to TAINT_CPU_OUT_OF_SPEC in v3.15
See upstream commit:
commit 8c90487cdc64847b4fdd812ab3047f426fec4d13
Author: Dave Jones <davej@redhat.com>
Date: Wed Feb 26 10:49:49 2014 -0500
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose
the flag to encompass a wider range of pushing the CPU beyond its
warranty.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I3e91df01bfbfaa6fab4e3904e59317022a9ec0f8
Francis Deslauriers [Tue, 18 Feb 2020 16:30:54 +0000 (11:30 -0500)]
module_load: change `taints` field to `ctf_enum`
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I67b5aad0bd2bc43e06a5708f0f5e1fea56f31436
Mathieu Desnoyers [Mon, 13 Jul 2020 18:59:33 +0000 (14:59 -0400)]
Fix: Lock metadata cache on session destroy
commit 92143b2c5656 ("Fix: metadata stream leak, missing list removal and locking")
missed taking a lock protecting the metadata stream list iteration on
session destroy. This opens a race window between iteration and item
removal/free which triggers a kernel OOPS.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Fri, 10 Jul 2020 15:15:40 +0000 (11:15 -0400)]
Fix: metadata stream leak, missing list removal and locking
The metadata stream is part of a list of metadata streams in the
metadata cache. Its addition to the list should be protected by
the metadata cache lock. It needs to be paired with protection
of list iteration with the same lock.
Removal from the list is entirely missing, and should be added
to lttng_metadata_ring_buffer_release (with proper locking).
This missing list removal was probably not causing issues because the
metadata stream structure was leaked: a kfree() is missing from
lttng_metadata_ring_buffer_release as well.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Fri, 10 Jul 2020 14:51:26 +0000 (10:51 -0400)]
Fix: coherent state not changed atomically with metadata written
commit 122c63cb4310 ("Fix: Implement RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK")
introduces a new ioctl which returns a flag indicating whether the
metadata is in consistent state at the end of the sub-buffer.
That commit is meant to address metadata consistency issues observable
in live sessions.
However, the "consistent" state is false as soon as a producer is
active (between an outermost metadata_begin/end pair). Unfortunately,
if the last "RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK" operation is
done between the last metadata printf and "end" of the transaction, the
last consistency state will be false, and the consumer daemon will never
send metadata to the relay daemon. This in turn causes a live viewer to
wait for metadata endlessly.
This issue can be reproduced by running lttng-tools:
tests/regression/tools/live/test_kernel
as root in a loop.
We observe two things:
1) the poll operation blocks when there is no more metadata to send,
which means there is no way to unblock when the consistency state
changes back to "true" without producing additional metadata,
2) Even if (1) was fixed, the expectation from an ABI perspective is
that the "coherent" state is only populated when
RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK succeeds. Therefore,
there is no way to let user-space know about the coherency transition
unless additional metadata is generated.
Fixing this requires holding the metadata cache lock across the entire
production of a coherent metadata transaction. This simpler scheme is
possible because the metadata is generated in a reallocated memory area
and not directly into a ring buffer anymore. This was not the case in
earlier lttng-modules versions, when the metadata was generated directly
into a ring buffer, which explains why this simpler scheme was not
implemented.
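A heavily hedged sketch of the locking scheme (the cache and lock member names are assumptions; lttng_metadata_printf() is the existing metadata emission helper):

    int ret;

    /* Hold the cache lock across the whole coherent transaction
     * instead of taking it around each individual printf. */
    mutex_lock(&session->metadata_cache->lock);          /* member names assumed */
    ret = lttng_metadata_printf(session, "event {\n");
    if (!ret)
            ret = lttng_metadata_printf(session, "};\n\n");
    mutex_unlock(&session->metadata_cache->lock);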
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Tue, 7 Jul 2020 18:18:37 +0000 (14:18 -0400)]
fix: include module.h for EXPORT_SYMBOL_GPL
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ic337e1eb375791ace08560555dd02b37cbefcf25
Michael Jeanson [Tue, 7 Jul 2020 17:50:15 +0000 (13:50 -0400)]
fix: __lttng_vmalloc_node_range const caller introduced in v3.6
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ib13cf03b5ab11830a8732318a12713720cf1b3e3
Michael Jeanson [Tue, 7 Jul 2020 18:07:01 +0000 (14:07 -0400)]
fix: version range for overflow_callback
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I1b8f1d59552a1723d3f4ed74780a2b57d13d0e52
Michael Jeanson [Tue, 7 Jul 2020 17:00:10 +0000 (13:00 -0400)]
fix: global_dirty_limit was introduced in v3.1
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Id97dbb2d0181a45c45cfed36c4be8753cabac283
Michael Jeanson [Tue, 7 Jul 2020 16:21:54 +0000 (12:21 -0400)]
fix: wrapper_uprobe_unregister is a void function
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ib4438da02aac3defd1245324d1b48f400f806d58
Michael Jeanson [Tue, 7 Jul 2020 15:58:03 +0000 (11:58 -0400)]
fix: prior to v4.0, __vmalloc_node_range had no vm_flags param
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ib476e32d109298d9ca3e6b6ab7ac8f63c50fb09f
Michael Jeanson [Tue, 7 Jul 2020 15:15:39 +0000 (11:15 -0400)]
fix: vmalloc on v5.8 without KALLSYMS
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ic945dad92e78a5bc2895a969a10c527e1349decf
Michael Jeanson [Thu, 14 May 2020 17:47:35 +0000 (13:47 -0400)]
Detect missing symbols used with kallsyms_lookup at compile time
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I19a9a31c386196899517899d861fe63611272139
Michael Jeanson [Wed, 12 Feb 2020 21:23:41 +0000 (16:23 -0500)]
Add time namespace context
Add a context for the new time namespace introduced in v5.6.
Change-Id: Ic3393f65702b80c87670bb21049ee2a19413111d
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Thu, 2 Jul 2020 16:06:42 +0000 (12:06 -0400)]
Use exported symbol bdevname() instead of disk_name()
bdevname() is a simple wrapper over disk_name() but has the honor to be
exported. Using it removes the need for a kallsym wrapper.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ic2b2233c4db7826175c68edea69751ddcb17a5e6
Michael Jeanson [Fri, 3 Jul 2020 14:46:12 +0000 (10:46 -0400)]
Add git-review config
Add .gitreview for contributors wishing to use gerrit for patch
reviews.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I663e66a433ddb645f580c4b9f885db9c3a08e02f
Michael Jeanson [Thu, 2 Jul 2020 15:21:42 +0000 (11:21 -0400)]
fix: mm: remove vmalloc_sync_(un)mappings() (v5.8)
See upstream commit:
commit 73f693c3a705756032c2863bfb37570276902d7d
Author: Joerg Roedel <jroedel@suse.de>
Date: Mon Jun 1 21:52:36 2020 -0700
mm: remove vmalloc_sync_(un)mappings()
These functions are not needed anymore because the vmalloc and ioremap
mappings are now synchronized when they are created or torn down.
Remove all callers and function definitions.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ifdefa35b25b4906cde407360e608b77e47cc3808
Mathieu Desnoyers [Tue, 30 Jun 2020 18:29:01 +0000 (14:29 -0400)]
Update design document
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Tue, 30 Jun 2020 18:24:29 +0000 (14:24 -0400)]
Add lttng-modules design document
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Tue, 30 Jun 2020 14:41:37 +0000 (10:41 -0400)]
Fix: callstack: initialize nested sequence length field name
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Tue, 30 Jun 2020 14:29:19 +0000 (10:29 -0400)]
Fix: callstack: NULL pointer dereference: length field also need fdata
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 29 Jun 2020 23:52:08 +0000 (19:52 -0400)]
Fix: callstack context memory corruption
commit ceabb767180e ("tracepoint: Refactor representation of nested types")
introduces two context fields for callstack contexts. Keeping a pointer
to the first field is not valid when adding the second context field to
the array, because the array is reallocated.
Fix this by introducing new context APIs which operate on indexes rather
than pointers:
- lttng_append_context_index,
- lttng_get_context_field_from_index,
- lttng_remove_context_field_index.
Add a NULL check to lttng_find_context so it can be used before adding
the first context.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 15 Jun 2020 15:12:24 +0000 (11:12 -0400)]
fix: mm/writeback: discard NR_UNSTABLE_NFS, use NR_WRITEBACK (v5.8)
See upstream commit:
commit 8d92890bd6b8502d6aee4b37430ae6444ade7a8c
Author: NeilBrown <neilb@suse.de>
Date: Mon Jun 1 21:48:21 2020 -0700
mm/writeback: discard NR_UNSTABLE_NFS, use NR_WRITEBACK instead
After an NFS page has been written it is considered "unstable" until a
COMMIT request succeeds. If the COMMIT fails, the page will be
re-written.
These "unstable" pages are currently accounted as "reclaimable", either
in WB_RECLAIMABLE, or in NR_UNSTABLE_NFS which is included in a
'reclaimable' count. This might have made sense when sending the COMMIT
required a separate action by the VFS/MM (e.g. releasepage() used to
send a COMMIT). However now that all writes generated by ->writepages()
will automatically be followed by a COMMIT (since commit 919e3bd9a875
("NFS: Ensure we commit after writeback is complete")) it makes more
sense to treat them as writeback pages.
So this patch removes NR_UNSTABLE_NFS and accounts unstable pages in
NR_WRITEBACK and WB_WRITEBACK.
A particular effect of this change is that when
wb_check_background_flush() calls wb_over_bg_threshold(), the latter
will report 'true' a lot less often as the 'unstable' pages are no
longer considered 'dirty' (as there is nothing that writeback can do
about them anyway).
Currently wb_check_background_flush() will trigger writeback to NFS even
when there are relatively few dirty pages (if there are lots of unstable
pages), this can result in small writes going to the server (10s of
Kilobytes rather than a Megabyte) which hurts throughput. With this
patch, there are fewer writes which are each larger on average.
Where the NR_UNSTABLE_NFS count was included in statistics
virtual-files, the entry is retained, but the value is hard-coded as
zero. static trace points and warning printks which mentioned this
counter no longer report it.
Change-Id: I18080ca62bc6c1cd7d6da4cb27cc1521fbdca5e1
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 15 Jun 2020 15:06:13 +0000 (11:06 -0400)]
fix: block: remove the error argument to the block_bio_complete (v5.8)
See upstream commit:
commit d24de76af836260a99ca2ba281a937bd5bc55591
Author: Christoph Hellwig <hch@lst.de>
Date: Wed Jun 3 07:14:43 2020 +0200
block: remove the error argument to the block_bio_complete tracepoint
The status can be trivially derived from the bio itself. That also avoid
callers like NVMe to incorrectly pass a blk_status_t instead of the errno,
and the overhead of translating the blk_status_t to the errno in the I/O
completion fast path when no tracing is enabled.
Fixes: 35fe0d12c8a3 ("nvme: trace bio completion")
Change-Id: I8d1463184d79bfab418a1755bfc6a0200170fff3
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Mon, 15 Jun 2020 14:51:41 +0000 (10:51 -0400)]
fix: pipe_buf_operations rework (v5.8)
See upstream commits:
commit c928f642c29a5ffb02e16f2430b42b876dde69de
Author: Christoph Hellwig <hch@lst.de>
Date: Wed May 20 17:58:16 2020 +0200
fs: rename pipe_buf ->steal to ->try_steal
And replace the arcane return value convention with a simple bool
where true means success and false means failure.
[AV: braino fix folded in]
commit b8d9e7f2411b0744df2ec33e80d7698180fef21a
Author: Christoph Hellwig <hch@lst.de>
Date: Wed May 20 17:58:15 2020 +0200
fs: make the pipe_buf_operations ->confirm operation optional
Just return 0 for success if it is not present.
commit 76887c256744740d6121af9bc4aa787712a1f694
Author: Christoph Hellwig <hch@lst.de>
Date: Wed May 20 17:58:14 2020 +0200
fs: make the pipe_buf_operations ->steal operation optional
Just return 1 for failure if it is not present.
Change-Id: Ic185632202470db1eb5b012e95e793ff2cb26be7
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Ruiqiang Hao [Tue, 26 May 2020 03:36:17 +0000 (03:36 +0000)]
Fix: syscalls: Ignore fcntl cmd specific to 32-bit in 64-bit only config
When CONFIG_64BIT is defined and CONFIG_COMPAT is not defined, the fcntl commands
"F_GETLK64", "F_SETLK64" and "F_SETLKW64" should be ignored.
Signed-off-by: Ruiqiang Hao <Ruiqiang.Hao@windriver.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Fri, 24 Apr 2020 19:49:42 +0000 (15:49 -0400)]
Fix: Implement RING_BUFFER_GET_NEXT_SUBBUF_METADATA_CHECK
Get next metadata subbuffer, returning a flag indicating whether the
metadata is guaranteed to be in a consistent state at the end of this
sub-buffer (can be parsed).
This can be used by the consumer to know whether the metadata can be
parsed at the end of this sub-buffer, which is useful to distinguish
between errors and incomplete metadata in live tracing.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Fri, 15 May 2020 19:12:53 +0000 (15:12 -0400)]
fix: vmalloc_sync_mappings was backported to v5.5.12
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ie554d9c956afc2a8e114fe41e4b3c225d8af40a1
Stefan Bader [Mon, 18 May 2020 14:03:16 +0000 (16:03 +0200)]
Update: Additional kernel ranges for vmalloc_sync_mappings
Some Ubuntu kernels cannot be directly mapped to an upstream stable
version. Define distro specific ranges for those (4.15, 5.0, 5.3).
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Ovidiu Panait [Thu, 14 May 2020 11:27:17 +0000 (14:27 +0300)]
Update: Use vmalloc_sync_mappings for stable kernels
Starting from v5.4.28/v5.2.37/v4.19.113/v4.14.175/v4.9.218/v4.4.218, stable
kernel branches backported v5.6 upstream commit [1], causing the following
warnings:
...
[ 483.242037] LTTng: vmalloc_sync_all symbol lookup failed.
[ 483.257056] Page fault handler and NMI tracing might trigger faults.
...
Extend check for vmalloc_sync_mappings for stable kernels as well.
[1] https://github.com/torvalds/linux/commit/763802b53a427ed3cbd419dbba255c414fdd9e7c
[ Edit: minor coding style fix by Mathieu Desnoyers. ]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Ovidiu Panait [Thu, 14 May 2020 10:05:24 +0000 (13:05 +0300)]
Fix: Use vmalloc_sync_mappings on kernel 5.6 as well
Upstream commit [1], that got rid of vmalloc_sync_all and introduced
vmalloc_sync_mappings, is a v5.6 commit:
$ git tag --contains 763802b53a427ed3cbd419dbba255c414fdd9e7c
v5.6
v5.6-rc7
v5.7-rc1
v5.7-rc2
v5.7-rc3
Extend the LINUX_VERSION_CODE check to v5.6 to fix the following warnings:
...
[ 483.242037] LTTng: vmalloc_sync_all symbol lookup failed.
[ 483.257056] Page fault handler and NMI tracing might trigger faults.
...
[1] https://github.com/torvalds/linux/commit/763802b53a427ed3cbd419dbba255c414fdd9e7c
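A hedged sketch of the version-gated call only (the real wrapper resolves these unexported symbols through kallsyms; names are illustrative):

    #include <linux/version.h>
    #include <linux/vmalloc.h>

    static inline void wrapper_vmalloc_sync(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 6, 0)
            vmalloc_sync_mappings();
    #else
            vmalloc_sync_all();
    #endif
    }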
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Francis Deslauriers [Tue, 12 May 2020 19:11:05 +0000 (15:11 -0400)]
Cleanup: remove unsupported `ctf_float()` macros
Tracing floats is not supported for the kernel tracer. Disallow building
kernel probes with those fields, rather than silently ignoring them.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I1cf9751df96d2af3b54f725797bd20d7b05f2b38
Francis Deslauriers [Tue, 12 May 2020 15:48:20 +0000 (11:48 -0400)]
Cleanup: have interpreter functions return _DISCARD instead of 0
It's easier to understand the meaning of the zero return value of these
functions using the enum. It makes it obvious.
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I1df8f704fa9f6768f413c12c3c1de61a94b3aff8
Francis Deslauriers [Mon, 11 May 2020 19:04:43 +0000 (15:04 -0400)]
Cleanup: bytecode: typo: "s16" -> "u16"
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I68901ca2d89d08f2cb69853816e0214c588aa7f8
Mathieu Desnoyers [Thu, 7 May 2020 14:51:03 +0000 (10:51 -0400)]
Cleanup: Rename patches.i to patches.h
This generated header file contains a list of patches applied to the
lttng-modules tree. Based on the C99 specification, ".i" files are not
supposed to be preprocessed, although this header file is expected to be
preprocessed.
Rename it from ".i" to ".h" to convey that it is a C header meant to be
preprocessed.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 18:08:22 +0000 (14:08 -0400)]
Cleanup: Move all source files to src/
This includes *.c, lib/*/*.c, probes/*.c, wrapper/*.c.
Adapt Makefile and Kbuild accordingly. Introduce src/Kbuild.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Wed, 6 May 2020 18:26:01 +0000 (14:26 -0400)]
Cleanup: Move patches.i to include/generated/
Move patches.i from /extra_version to include/generated/ so we
can include it without using relative path includes.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I87927a372ffeb244f3c097c9bb80eeca7d9f99eb
Mathieu Desnoyers [Wed, 6 May 2020 17:44:57 +0000 (13:44 -0400)]
Cleanup: Move lttng-modules instrumentation headers
The directory hierarchy "instrumentation/events/lttng-module/" only
exists for historical reasons and is not needed anymore. Move all
its contents into "instrumentation/events/".
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 17:39:19 +0000 (13:39 -0400)]
Cleanup: Remove toplevel directory from include search path
Now that all include files are moved to include/ (except for those
meant to be included with #include "...h"), we can remove the toplevel
directory from the include search path.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 17:38:49 +0000 (13:38 -0400)]
Cleanup: Move blacklist/ headers to include/blacklist/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 17:35:50 +0000 (13:35 -0400)]
Cleanup: Move wrapper/ headers to include/wrapper/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 17:34:11 +0000 (13:34 -0400)]
Cleanup: Move instrumentation/ headers to include/instrumentation/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 17:15:13 +0000 (13:15 -0400)]
Cleanup: Remove deprecated TODO file
All relevant items that were left were moved to
https://bugs.lttng.org/projects/lttng-modules "Feature".
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Wed, 6 May 2020 15:11:29 +0000 (11:11 -0400)]
fix: add missing guid_t type to wrapper
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I0de39c24a7925b580fabbdaa12dbe05c43cfcd98
Michael Jeanson [Wed, 6 May 2020 15:03:32 +0000 (11:03 -0400)]
Fix: missing wrapper rename to wrapper_vmalloc_sync_mappings
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Idf7082a980c5a604bfef5c69906678b5083a9bbf
Mathieu Desnoyers [Wed, 6 May 2020 14:18:46 +0000 (10:18 -0400)]
Cleanup: Move headers from toplevel to include/lttng/
- Remove extra "lttng-" from filename (now implied by the path).
- Adapt includes accordingly.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 13:45:16 +0000 (09:45 -0400)]
Cleanup: Move headers from probes/ to include/lttng/
- Remove extra "lttng-" from filename (now implied by the path).
- Adapt includes accordingly.
- Adapt lttng-syscalls-generate-headers.sh header generation script
accordingly.
- Remove probes/lttng.h, include its PARAMS() define in the two
user headers.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 13:36:45 +0000 (09:36 -0400)]
Cleanup: Move headers from lib/ to include/lttng/
Adapt includes accordingly.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Wed, 6 May 2020 13:21:00 +0000 (09:21 -0400)]
Cleanup: Move lib/ringbuffer/ headers to include/ringbuffer/
Remove the <wrapper/ringbuffer/...> proxy include files, and add the
include/ directory to the preprocessor include search path.
Adapt all includes accordingly.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 13 Apr 2020 18:38:51 +0000 (14:38 -0400)]
Fix: wrapper random documentation
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Tue, 5 May 2020 17:38:31 +0000 (13:38 -0400)]
Update for kernel 5.7: use vmalloc_sync_mappings on kernels >= 5.7
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 4 May 2020 19:00:53 +0000 (15:00 -0400)]
Unbreak LTTng for kernel 5.7
Linux commit 0bd476e6c67190b5eb7b6e105c8db8ff61103281 ("kallsyms:
unexport kallsyms_lookup_name() and kallsyms_on_each_symbol()") breaks
LTTng-modules by removing symbols used by the LTTng-modules out-of-tree
tracer.
I pointed this out when the change was originally considered before the
5.7 merge window. This generated some discussion but it did not lead to
any concrete proposal to fix the issue. [1]
The commit has been merged in the 5.7 merge window. At that point, as
maintainer of LTTng, I immediately raised a flag about this issue,
proposing an alternative approach to solve this: expose the few symbols
needed by LTTng to GPL modules. This was NACKed on the ground that the
Linux kernel cannot export GPL symbols when there are no in-tree
users. [2]
Steven Rostedt has shown interest in merging LTTng-modules upstream.
LTTng-modules being LGPL, this is very much doable. I have prepared a
tree of LTTng-modules "for upstreaming" and sent it to him privately so
he can review it. Even if in an ideal scenario LTTng-modules is merged
for the following merge window, it leaves LTTng-modules broken on the
5.7 kernel.
In order to ensure that the LTTng-modules kernel tracer continues working
for my end users on kernels 5.7 onwards, as a very last resort, it is
with great reluctance that I created this fix for LTTng modules. It
basically uses kprobes to look up the kallsyms_lookup_name symbol, and
continues using kallsyms_lookup_name as before.
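A hedged sketch of the kprobe-based lookup (function and variable names are illustrative):

    #include <linux/kprobes.h>

    static unsigned long (*lttng_kallsyms_lookup_name)(const char *name);

    static int wrapper_resolve_kallsyms_lookup_name(void)
    {
            struct kprobe kp = {
                    .symbol_name = "kallsyms_lookup_name",
            };
            int ret;

            ret = register_kprobe(&kp);
            if (ret)
                    return ret;
            /* kp.addr now holds the address of kallsyms_lookup_name(). */
            lttng_kallsyms_lookup_name = (void *) kp.addr;
            unregister_kprobe(&kp);
            return 0;
    }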
Link: https://lore.kernel.org/r/20200302192811.n6o5645rsib44vco@localhost
Link: https://lore.kernel.org/r/20200409193543.18115-1-mathieu.desnoyers@efficios.com
Link: https://lwn.net/Articles/817988/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Will Deacon <will@kernel.org>
CC: akpm@linux-foundation.org
CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CC: Masami Hiramatsu <mhiramat@kernel.org>
CC: rostedt@goodmis.org
CC: Alexei Starovoitov <ast@kernel.org>
Mathieu Desnoyers [Mon, 4 May 2020 18:52:13 +0000 (14:52 -0400)]
Move lttng wrappers into own module
Currently, we only pull the wrapper symbols into a single sub-module,
either:
lttng-tracer.o:
- wrapper/random.o
- wrapper/trace-clock.o
- wrapper/page_alloc.o
or
lttng-statedump.o:
- wrapper/irqdesc.o
- wrapper/fdtable.o
Because lttng-tracer depends on lttng-statedump, we cannot just put all
wrappers into lttng-tracer.o, because it would create a circular
dependency. This will be an issue if we introduce common wrappers which
are used in both lttng-tracer.o and in lttng-statedump.o.
Introduce a new lttng-wrapper.o to contain all wrapper symbols for all
lttng modules.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 13 Apr 2020 16:16:43 +0000 (12:16 -0400)]
Introduce lttng_guid_gen wrapper for kernels >= 5.7.0
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 13 Apr 2020 15:44:23 +0000 (11:44 -0400)]
instrumentation: update x86 kvm instrumentation for kernel >= 5.7.0
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 13 Apr 2020 15:38:48 +0000 (11:38 -0400)]
instrumentation: update mm_vmscan for kernel >= 5.7.0
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Francis Deslauriers [Fri, 17 Apr 2020 14:01:40 +0000 (10:01 -0400)]
filter: bytecode already in the list should go before
Background
==========
This `seqnum` (sequence number) feature is currently unused. It was
designed so that the session daemon could tell the tracer the order in
which the bytecode should be run.
Issue
=====
The current implementation of the session daemon doesn't use this
feature, so there is only ever a single bytecode to execute per callsite.
During work on an upcoming feature that uses this `seqnum`, it became
useful, and it was realized that the current bytecode linking code would
reverse the order in which the bytecodes were executed when all bytecodes
have the same `seqnum` value.
This is due to the fact that the `cds_list_for_each_entry_reverse` loops
until it finds a `seqnum` smaller than the new one.
So if all bytecodes have the same `seqnum`, the new bytecode will be
added at the beginning of the list.
This is not technically a problem since it's the session daemon's job to
set the sequence number if it wants a particular ordering. Even
considering that, we found it counterintuitive that new bytecodes are
added at the beginning of the list in those cases.
Solution
========
This commit makes it so that on equality, the insertion is done after
the existing bytecodes.
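A hedged sketch of the insertion policy (struct and field names are illustrative; the reverse list walk mirrors the one described above):

    struct lttng_bc_runtime {
            struct cds_list_head node;
            u64 seqnum;
    };

    static void bc_list_insert(struct cds_list_head *head, struct lttng_bc_runtime *new)
    {
            struct lttng_bc_runtime *iter;

            /* Walk from the tail: the first entry with seqnum <= new->seqnum is
             * where "new" goes after, so equal seqnums keep submission order. */
            cds_list_for_each_entry_reverse(iter, head, node) {
                    if (iter->seqnum <= new->seqnum) {
                            cds_list_add(&new->node, &iter->node);
                            return;
                    }
            }
            cds_list_add(&new->node, head);     /* strictly smallest: insert at head */
    }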
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I784887e3e6085f9344a2bb429d4f0d30586ebc57
Mathieu Desnoyers [Tue, 7 Apr 2020 17:07:54 +0000 (13:07 -0400)]
tracepoint: Refactor representation of nested types
Refactor enumeration, sequence, array, structure, and variant types.
Implement internal data structures to support nested types.
All probe providers using ctf_enum(), ctf_array*() and ctf_sequence*()
are switched to this new internal type representation.
Each of sequence, array, struct and variant gains an "alignment" property,
a feature that was needed in lttng-modules to express the alignment of an
array or sequence of bits.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Mathieu Desnoyers [Mon, 6 Apr 2020 16:00:47 +0000 (12:00 -0400)]
wrapper/compiler.h: Implement __LTTNG_COMPOUND_LITERAL
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Michael Jeanson [Tue, 31 Mar 2020 18:29:29 +0000 (14:29 -0400)]
Update to SPDX v3.0 identifiers
The short forms of GPL-2.0 and LGPL-2.1 were deprecated in favour of the
clearer GPL-2.0-only and GPL-2.0-or-later in the SPDX license list v3.0.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: If2337f5c67a2548d7f25043e67006211213cbe3e