Support is divided into two main areas:
- reading VPD pages and setting sdev request_queue limits
- supporting the WRITE ATOMIC (16) command and tracing
The relevant block limits VPD page needs to be read to allow the block layer
request_queue atomic write limits to be set. These VPD page limits are
described in sbc4r22 section 6.6.4 - Block limits VPD page.
There are five limits of interest:
- MAXIMUM ATOMIC TRANSFER LENGTH
- ATOMIC ALIGNMENT
- ATOMIC TRANSFER LENGTH GRANULARITY
- MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY
- MAXIMUM ATOMIC BOUNDARY SIZE
MAXIMUM ATOMIC TRANSFER LENGTH is the maximum length for a WRITE ATOMIC
(16) command. It will not be greater than the device MAXIMUM TRANSFER
LENGTH.
ATOMIC ALIGNMENT and ATOMIC TRANSFER LENGTH GRANULARITY are the minimum
alignment and length values for an atomic write in terms of logical blocks.
Unlike NVMe, SCSI does not specify an LBA space boundary, but does specify
a per-IO boundary granularity. The maximum boundary size is specified in
MAXIMUM ATOMIC BOUNDARY SIZE. When used, this boundary value is set in the
WRITE ATOMIC (16) ATOMIC BOUNDARY field - layout for the WRITE_ATOMIC_16
command can be found in sbc4r22 section 5.48. This boundary value is the
granularity size at which the device may atomically write the data. A value
of zero in WRITE ATOMIC (16) ATOMIC BOUNDARY field means that all data must
be atomically written together.
MAXIMUM ATOMIC TRANSFER LENGTH WITH BOUNDARY is the maximum atomic write
length if a non-zero boundary value is set.
For atomic write support, the WRITE ATOMIC (16) boundary is not of much
interest, as the block layer expects each request submitted to be executed
atomically. However, the SCSI spec does leave itself open to a quirky
scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero, yet MAXIMUM ATOMIC
TRANSFER LENGTH WITH BOUNDARY and MAXIMUM ATOMIC BOUNDARY SIZE are both
non-zero. This case will be supported.
To set the block layer request_queue atomic write capabilities, sanitize
the VPD page limits and set limits as follows (see the sketch after this
list):
- atomic_write_unit_min is derived from granularity and alignment values.
If no granularity value is set, use the physical block size
- atomic_write_unit_max is derived from MAXIMUM ATOMIC TRANSFER LENGTH. In
the scenario where MAXIMUM ATOMIC TRANSFER LENGTH is zero and boundary
limits are non-zero, use MAXIMUM ATOMIC BOUNDARY SIZE for
atomic_write_unit_max. New flag scsi_disk.use_atomic_write_boundary is
set for this scenario.
- atomic_write_boundary_bytes is set to zero always
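A minimal sketch of that sanitization, assuming illustrative helper and
member names (sd_config_atomic_write() and the sdkp->max_atomic* fields
are placeholders, not the literal sd.c code; only the three
atomic_write_* limits are named by the text above):

```c
static void sd_config_atomic_write(struct scsi_disk *sdkp,
                                   struct queue_limits *lim)
{
        unsigned int lbs = sdkp->device->sector_size;
        unsigned int unit_min, unit_max;

        if (sdkp->max_atomic) {
                unit_max = sdkp->max_atomic;
                sdkp->use_atomic_write_boundary = false;
        } else if (sdkp->max_atomic_with_boundary &&
                   sdkp->max_atomic_boundary) {
                /* The quirky all-boundary case described above. */
                unit_max = sdkp->max_atomic_boundary;
                sdkp->use_atomic_write_boundary = true;
        } else {
                return; /* no atomic write support advertised */
        }

        /* Minimum comes from granularity/alignment; fall back to the
         * physical block size when no granularity is reported. */
        if (sdkp->atomic_granularity)
                unit_min = max(sdkp->atomic_granularity,
                               sdkp->atomic_alignment);
        else
                unit_min = sdkp->physical_block_size / lbs;

        lim->atomic_write_unit_min = unit_min * lbs;
        lim->atomic_write_unit_max = unit_max * lbs;
        lim->atomic_write_boundary_bytes = 0; /* always zero */
}
```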
SCSI also supports a WRITE ATOMIC (32) command, which is for devices with
type 2 protection enabled. This is not going to be supported for now, so
check for
T10_PI_TYPE2_PROTECTION when setting any request_queue limits.
To handle an atomic write request, add support for WRITE ATOMIC (16)
command in the handler sd_setup_atomic_cmnd(). The use_atomic_write_boundary
flag is checked here when encoding the ATOMIC BOUNDARY field.
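A hedged sketch of the CDB encoding (the 0x9c opcode and field offsets
follow sbc4r22 section 5.48; the helper name is illustrative):

```c
static void setup_write_atomic_16(u8 *cdb, u64 lba, u16 nr_blocks,
                                  bool use_boundary)
{
        cdb[0] = 0x9c;                  /* WRITE ATOMIC (16) */
        put_unaligned_be64(lba, &cdb[2]);
        /* A zero ATOMIC BOUNDARY means all data is written atomically
         * together; when use_atomic_write_boundary is set, encode the
         * whole transfer length as a single boundary granule. */
        put_unaligned_be16(use_boundary ? nr_blocks : 0, &cdb[10]);
        put_unaligned_be16(nr_blocks, &cdb[12]);
}
```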
Trace info is also added for the WRITE_ATOMIC_16 command.
Change-Id: Ie072002fe2184615c72531ac081a324ef18cfb03
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The member extent_map::block_start can be calculated from
extent_map::disk_bytenr + extent_map::offset for regular extents.
Otherwise it is just extent_map::disk_bytenr.
This is already validated by validate_extent_map(). Now we can
remove the member.
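A sketch of the equivalent helper (upstream adds one along these lines;
the exact name and sentinel check may differ):

```c
static inline u64 extent_map_block_start(const struct extent_map *em)
{
        /* Holes and inline extents carry special disk_bytenr values. */
        if (em->disk_bytenr < EXTENT_MAP_LAST_BYTE)
                return em->disk_bytenr + em->offset;
        return em->disk_bytenr;
}
```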
However there is a special case in btrfs_create_dio_extent(): for
NOCOW/PREALLOC ordered extents we cannot directly use the resulting
btrfs_file_extent, as btrfs_split_ordered_extent() cannot handle them
yet.
So for that call site, we pass file_extent->disk_bytenr +
file_extent->num_bytes as disk_bytenr for the ordered extent, and 0 for
offset.
Change-Id: I2e3245bb0d1f5263e902659aa05848d5e231909b
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The extent_map::block_len is either extent_map::len (non-compressed
extent) or extent_map::disk_num_bytes (compressed extent).
Since we already have sanity checks to do the cross-checks between the
new and old members, we can drop the old extent_map::block_len now.
Most call sites can manually select extent_map::len or
extent_map::disk_num_bytes, since most if not all of them have checked
if the extent is compressed.
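Illustrative only, how a call site picks the replacement value (the
extent_map_is_compressed() predicate name is assumed here):

```c
static inline u64 em_block_len(const struct extent_map *em)
{
        return extent_map_is_compressed(em) ? em->disk_num_bytes : em->len;
}
```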
Change-Id: Ib03fc685b4e876bf4e53afdd28ca9826342a0e4e
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Since we have extent_map::offset, the old extent_map::orig_start is just
extent_map::start - extent_map::offset for non-hole/inline extents.
And since the new extent_map::offset is already verified by
validate_extent_map() while the old orig_start is not, let's just remove
the old member from all call sites.
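For reference, what the removed member amounted to (hypothetical helper
name):

```c
static inline u64 extent_map_orig_start(const struct extent_map *em)
{
        return em->start - em->offset; /* non-hole/inline extents only */
}
```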
Change-Id: I025a30d49b3e3ddc37d7846acc191ebbdf2ff19e
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
ext4: make ext4_da_reserve_space() reserve multi-clusters
Add an 'nr_resv' parameter to ext4_da_reserve_space(), which indicates the
number of clusters to reserve, and make it reserve multiple clusters at a
time.
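A hedged sketch of the change, reduced to the essential counter updates
(the real function also checks free cluster counts and handles ENOSPC):

```c
static int ext4_da_reserve_space(struct inode *inode, int nr_resv)
{
        struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
        struct ext4_inode_info *ei = EXT4_I(inode);
        int ret;

        /* Charge quota for nr_resv clusters instead of a single one. */
        ret = dquot_reserve_block(inode, EXT4_C2B(sbi, nr_resv));
        if (ret)
                return ret;

        spin_lock(&ei->i_block_reservation_lock);
        ei->i_reserved_data_blocks += nr_resv;
        spin_unlock(&ei->i_block_reservation_lock);
        return 0;
}
```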
Change-Id: Ib1ce8c3023d53a6d22ec444a435fdb3c871f64c5
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
skb does not include enough information to find out receiving
sockets/services and netns/containers on packet drops. In theory
skb->dev tells about netns, but it can get cleared/reused, e.g. by TCP
stack for OOO packet lookup. Similarly, skb->sk often identifies a local
sender, and tells nothing about a receiver.
Allow passing an extra receiving socket to the tracepoint to improve
the visibility on receiving drops.
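A sketch of the extended tracepoint prototype, with the field list
trimmed to the new argument (rx_sk may be NULL when no receiving socket
is known):

```c
TRACE_EVENT(kfree_skb,
        TP_PROTO(struct sk_buff *skb, void *location,
                 enum skb_drop_reason reason, struct sock *rx_sk),
        TP_ARGS(skb, location, reason, rx_sk),
        TP_STRUCT__entry(
                __field(void *, skbaddr)
                __field(void *, rx_skaddr)
        ),
        TP_fast_assign(
                __entry->skbaddr = skb;
                __entry->rx_skaddr = rx_sk;
        ),
        TP_printk("skbaddr=%p rx_skaddr=%p",
                  __entry->skbaddr, __entry->rx_skaddr)
);
```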
Change-Id: I33c8ce1a48006456f198ab1592f733b55be01016
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: event notifier: set eval_capture to false for kprobe and uprobe
When trying to capture fields for kprobe and uprobe event notifications,
the capture code will end up dereferencing NULL pointers. Prevent
execution of the capture code in those cases.
Also remove the confusing comment about checking if a fd exists. I
could not find one instance in the entire kernel that still matches
the description or the reason for the name fcheck.
The need for better names became apparent in the last round of
discussion of this set of changes[1].
Michael Jeanson [Wed, 29 May 2024 19:02:15 +0000 (15:02 -0400)]
Warn and return on fd overflowing the fdt
The fdt should only grow, and iterate_fd() holds the file_lock, which
should ensure the fdt does not change while the lock is taken; but be
cautious and check anyway.
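The defensive check, roughly as described (the callback name is
illustrative):

```c
static int lttng_iterate_fd_cb(const void *p, struct file *file,
                               unsigned int fd)
{
        struct fdtable *fdt = files_fdtable(current->files);

        /* Should not happen while file_lock is held, but be cautious. */
        if (WARN_ON_ONCE(fd >= fdt->max_fds))
                return 0;
        /* ... per-fd work ... */
        return 0;
}
```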
Change-Id: Icd6a3263026734cbe3f296f6087f79add4148a8f
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
net: udp: add IP/port data to the tracepoint udp/udp_fail_queue_rcv_skb
The udp_fail_queue_rcv_skb() tracepoint lacks any details on the source
and destination IP/port whereas this information can be critical in case
of UDP/syslog.
Change-Id: I0c337c5817b0a120298cbf5088d60671d9625b0d
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
btrfs: move ->parent and ->ref_root into btrfs_delayed_ref_node
These two members are shared by both the tree refs and data refs, so
move them into btrfs_delayed_ref_node proper. This allows us to greatly
simplify the comparison code, as the shared refs always only sort on
parent, and the non shared refs always sort first on ref_root, and then
only data refs sort on their specific fields.
Change-Id: Ib7c92cc4bb8d674ac66ccfa25c03476f7adaaf90
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Now that all of the delayed ref information is in the delayed ref node,
drastically simplify the delayed ref tracepoints by simply passing in
the btrfs_delayed_ref_node and populating the tracepoints with the
values from the structure itself.
Change-Id: Ic90bc23d6aa558baec33adc33b4d21e052e83375
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The canary __canary__get_pfnblock_flags_mask has never done its job of
detecting changes to the prototype of get_pfnblock_flags_mask because
it was actually calling the wrapper: the wrapper/page_alloc.h header maps
get_pfnblock_flags_mask to wrapper_get_pfnblock_flags_mask.
Unfortunately, this wrapper is included by page_alloc.c only _after_ the
linux/pageblock-flags.h header is included, which means the
get_pfnblock_flags_mask prototype does _not_ have the wrapper prefix,
which prevents it from being useful for any kind of type validation.
This has been detected by a compiler warning stating that
wrapper_get_pfnblock_flags_mask() does not have a prior declaration.
Move the wrapper/page_alloc.h include _before_ including
pageblock-flags.h. This ensures the declaration has the wrapper_ prefix,
and therefore the compiler compares the declaration with the definition
of wrapper_get_pfnblock_flags_mask within page_alloc.c. The canary
function can be removed because it is redundant with this type check.
With this proper type check in place, we notice the following two
changes upstream:
commit 535b81e209219 ("mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()")
introduced in v5.9 removes the end_bitidx argument.
commit ca891f41c4c79 ("mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype")
introduced in v5.14 adds a const qualifier to the struct page pointer.
Adapt the code to match the evolution of this prototype.
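The upstream prototype after both changes (v5.14 and later):

```c
unsigned long get_pfnblock_flags_mask(const struct page *page,
                                      unsigned long pfn,
                                      unsigned long mask);
```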
Upstream commit f8c7511db009d ("block: make block_class constant")
makes the block_class const. Reflect this change in the lttng-modules
canary function.
Timers are added to the timer wheel off by one. This is required in
case a timer is queued directly before incrementing jiffies to prevent
early timer expiry.
When reading a timer trace and relying only on the expiry time of the timer
in the timer_start trace point and on the now in the timer_expiry_entry
trace point, it seems that the timer fires late. With the current
timer_expiry_entry trace point information, only now=jiffies is printed but
not the value of base->clk. This makes it impossible to draw a conclusion
about the index of base->clk and to examine timer problems
without additional trace points.
Therefore add the base->clk value to the timer_expire_entry trace
point, to be able to calculate the index at which the timer base is
located while collecting expired timers.
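A trimmed sketch of the extended tracepoint (upstream passes base->clk
in as a new baseclk argument; fields beyond the relevant ones are
omitted):

```c
TRACE_EVENT(timer_expire_entry,
        TP_PROTO(struct timer_list *timer, unsigned long baseclk),
        TP_ARGS(timer, baseclk),
        TP_STRUCT__entry(
                __field(void *, timer)
                __field(unsigned long, now)
                __field(unsigned long, baseclk)
        ),
        TP_fast_assign(
                __entry->timer = timer;
                __entry->now = jiffies;
                __entry->baseclk = baseclk;
        ),
        TP_printk("timer=%p now=%lu baseclk=%lu",
                  __entry->timer, __entry->now, __entry->baseclk)
);
```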
Change-Id: I2ebdbb637db0966ff51f45bf66916a59a496b50c
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
mm: compaction: update the cc->nr_migratepages when allocating or freeing the freepages
Currently we will use 'cc->nr_freepages >= cc->nr_migratepages' comparison
to ensure that enough freepages are isolated in isolate_freepages(),
however it just decreases the cc->nr_freepages without updating
cc->nr_migratepages in compaction_alloc(), which will waste more CPU
cycles and cause too many freepages to be isolated.
So we should also update the cc->nr_migratepages when allocating or
freeing the freepages to avoid isolating excess freepages. And I can see
fewer free pages are scanned and isolated when running thpcompact on my
Arm64 server.
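A simplified sketch of the pairing described above (not the verbatim
diff; refill of the free list is omitted):

```c
static struct folio *compaction_alloc(struct folio *src, unsigned long data)
{
        struct compact_control *cc = (struct compact_control *)data;
        struct folio *dst = list_first_entry_or_null(&cc->freepages,
                                                     struct folio, lru);

        if (!dst)
                return NULL;
        list_del(&dst->lru);
        cc->nr_freepages--;
        cc->nr_migratepages--;  /* new: keep both counters in sync */
        return dst;
}
```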
There are clearly multiple calls, one per component, but they cannot be
discriminated from each other.
Change the ftrace events to also print the component name, to make it clear
which part of the code is involved. This requires changing the passed value
from a struct snd_soc_card, where the DAPM context is not known, to a
struct snd_soc_dapm_context where it is obviously known but the card
pointer is also available.
Kienan Stewart [Fri, 22 Mar 2024 13:28:08 +0000 (09:28 -0400)]
Fix: build kvm probe on EL 8.4+
The lower value of the EL range, 240.15.1, corresponds to the first
import of EL r8 kernels into Rocky Linux's kernel staging repo.
The change may have been introduced in an earlier RHEL 8 kernel,
prior to the history of imports into Rocky.
Change-Id: Icefe472d43e28cc09746e9e046b12299609ebab1
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Kienan Stewart [Fri, 22 Mar 2024 13:55:55 +0000 (09:55 -0400)]
Fix: support ext4_journal_start on EL 8.4+
The lower value of the EL range, 240.15.1, corresponds to the first
import of EL r8 kernels into Rocky Linux's kernel staging repo.
The change may have been introduced in an earlier RHEL 8 kernel,
prior to the history of imports into Rocky.
Change-Id: Ibec02b382478bee33947d079f33835823827f4c5
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Kienan Stewart [Fri, 8 Mar 2024 17:47:06 +0000 (12:47 -0500)]
Fix: Correct minimum version in jbd2 SLE kernel range
This range was introduced in commit b49650509ff072d37ec112cf45a5f14f382c9a31;
however, the range is wrong and worked because the kernel versions
(e.g. `5.14.21-150400.24.100-default`) were evaluated to values
greater than `LTTNG_SLE_KERNEL_RANGE(5,14,21,24,46,1)`.
As a result, builds of lttng-modules against older versions of SLE
kernels failed.
Change-Id: I23d97d84a23c7b24e957fe943932d6aefbe1b409
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Kienan Stewart [Fri, 8 Mar 2024 16:26:02 +0000 (11:26 -0500)]
Fix: Handle recent SLE major version codes
Starting in early 2022, the SLE linux version codes changed from the
previous style `5.3.18-59.40.1` to a new convention in which the major
version is a compound number consisting of the major release version,
the service pack version, and the auxiliary version (currently unused
from my understanding) similar to the following `5.3.18-150300.59.43.1`[1].
The newer values used in the SLE major version cause the integer
value to "overflow" the expected number of digits and the comparisons
may fail. The `LTTNG_SLE_KERNEL_VERSION` macro also multiplies the
`LTTNG_KERNEL_VERSION` by `100000000ULL` which doesn't work in all
situations, as the resulting value is too large to be stored fully in
an `unsigned long long`.
Example of previous results:
```
// Example range comparison. True or false depending on the value of
// `LTTNG_SLE_VERSION_CODE` and `LTTNG_LINUX_VERSION_CODE`.
LTTNG_SLE_KERNEL_RANGE(5,15,21,150400,24,46, 5,15,0,0,0,0);
```
`LTTNG_KERNEL_VERSION` packs the kernel version into a 32-bit integer;
however, using that type of packing on the SLE kernel version will not
work well.
In this patch, the SLE version is packed into a 64-bit integer
with 48 bits for the major version and 8 bits for each of the minor and
patch versions.
As a result of packing the SLE version into a 64-bit integer,
it is not possible to coherently combine an `LTTNG_KERNEL_VERSION` and
an `LTTNG_SLE_KERNEL_VERSION`. Doing so would require an integer
larger than 64-bits. Therefore, the `LTTNG_SLE_KERNEL_RANGE` macro has
been adjusted to perform the range comparisons using the two values
separately. The usage of the `LTTNG_SLE_KERNEL_RANGE` remains
unchanged, as `LTTNG_SLE_VERSION` is only used inside that macro.
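A sketch of the packing described above; the bit widths follow the text
(48/8/8) and the macro name here is illustrative:

```c
/* e.g. 5.3.18-150300.59.43.1 -> LTTNG_SLE_VERSION(150300, 59, 43) */
#define LTTNG_SLE_VERSION(major, minor, patch) \
        (((unsigned long long)(major) << 16) | \
         ((unsigned long long)(minor) << 8) | \
         (unsigned long long)(patch))
```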
Using the adjusted macros:
```
// Example range comparison. True or false depending on the value of
// `LTTNG_SLE_VERSION_CODE` and `LTTNG_LINUX_VERSION_CODE`.
LTTNG_SLE_KERNEL_RANGE(5,15,21,150400,24,46, 5,15,0,0,0,0);
```
It's possible that future releases of SLE kernels have minor or patch
values that exceed 255 (SLE15SP1 has a release using `197`, for example),
requiring an adjustment to use more bits for those fields when
packing into a 64-bit integer.
The scheme of multiplying an `LTTNG_KERNEL_VERSION` by a large value
is used for other distributions. RHEL in particular uses
`100000000ULL`, which could lead to overflow issues with certain
comparisons similar to the previous behaviour of
`LTTNG_SLE_KERNEL_VERSION(5,15,0,0,0,0);`.
Martin Hicks [Fri, 26 Jan 2024 17:18:33 +0000 (12:18 -0500)]
Compile fixes for RHEL 9.3 kernels
The ranges were build tested on RHEL9.2 (5.14.0-284.11.1), RHEL9.3
(5.14.0-362.8.1) and RHEL8.9 (4.18.0-513.11.1).
This disables the kmem and compaction modules. I don't believe getting
these to compile will be easy, as the required struct declarations are
in vmlinux.h, and haven't been moved into mm/internal.h and mm/slab.h in
the RHEL sources.
Change-Id: I999c593d6850e2327f6e9df8432a4ea2325a7cea
Signed-off-by: Martin Hicks <martin@sr-research.com>
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
btrfs: use the flags of an extent map to identify the compression type
Currently, in struct extent_map, we use an unsigned int (32 bits) to
identify the compression type of an extent and an unsigned long (64 bits
on a 64 bits platform, 32 bits otherwise) for flags. We are only using
6 different flags, so an unsigned long is excessive and we can use flags
to identify the compression type instead of using a dedicated 32 bits
field.
We can easily have tens or hundreds of thousands (or more) of extent maps
on busy and large filesystems, especially with compression enabled or many
or large files with tons of small extents. So it's convenient to have the
extent_map structure as small as possible in order to use less memory.
So remove the compression type field from struct extent_map, use flags
to identify the compression type and shorten the flags field from an
unsigned long to a u32. This saves 8 bytes (on 64 bits platforms) and
reduces the size of the structure from 136 bytes down to 128 bytes, using
now only two cache lines, and increases the number of extent maps we can
have per 4K page from 30 to 32. By using a u32 for the flags instead of
an unsigned long, we no longer use test_bit(), set_bit() and clear_bit(),
but that level of atomicity is not needed as most flags are never cleared
once set (before adding an extent map to the tree), and the ones that can
be cleared or set after an extent map is added to the tree are always
performed while holding the write lock on the extent map tree, while the
reader holds a lock on the tree or tests for a flag that never changes
once the extent map is in the tree (such as compression flags).
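A sketch of the idea (the flag names are modelled on the btrfs ones,
but the exact bit assignments here are illustrative):

```c
enum {
        EXTENT_FLAG_PINNED,
        EXTENT_FLAG_COMPRESS_ZLIB,
        EXTENT_FLAG_COMPRESS_LZO,
        EXTENT_FLAG_COMPRESS_ZSTD,
};

static inline int extent_map_compression(const struct extent_map *em)
{
        if (em->flags & (1U << EXTENT_FLAG_COMPRESS_ZLIB))
                return BTRFS_COMPRESS_ZLIB;
        if (em->flags & (1U << EXTENT_FLAG_COMPRESS_LZO))
                return BTRFS_COMPRESS_LZO;
        if (em->flags & (1U << EXTENT_FLAG_COMPRESS_ZSTD))
                return BTRFS_COMPRESS_ZSTD;
        return BTRFS_COMPRESS_NONE;
}
```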
Change-Id: I95402d43f064c016b423b48652e4968d3db9b8a9
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
btrfs: use a dedicated data structure for chunk maps
Currently we abuse the extent_map structure for two purposes:
1) To actually represent extents for inodes;
2) To represent chunk mappings.
This is odd and has several disadvantages:
1) To create a chunk map, we need to do two memory allocations: one for
an extent_map structure and another one for a map_lookup structure, so
more potential for an allocation failure and more complicated code to
manage and link two structures;
2) For a chunk map we actually only use 3 fields (24 bytes) of the
respective extent map structure: the 'start' field to have the logical
start address of the chunk, the 'len' field to have the chunk's size,
and the 'orig_block_len' field to contain the chunk's stripe size.
Besides wasting memory, it's also odd and not intuitive at all to
have the stripe size in a field named 'orig_block_len'.
We are also using 'block_len' of the extent_map structure to contain
the chunk size, so we have 2 fields for the same value, 'len' and
'block_len', which is pointless;
3) When an extent map is associated to a chunk mapping, we set the bit
EXTENT_FLAG_FS_MAPPING on its flags and then make its member named
'map_lookup' point to the associated map_lookup structure. This means
that for an extent map associated to an inode extent, we are not using
this 'map_lookup' pointer, so wasting 8 bytes (on a 64 bits platform);
4) Extent maps associated to a chunk mapping are never merged or split so
it's pointless to use the existing extent map infrastructure.
So add a dedicated data structure named 'btrfs_chunk_map' to represent
chunk mappings, this is basically the existing map_lookup structure with
some extra fields:
1) 'start' to contain the chunk logical address;
2) 'chunk_len' to contain the chunk's length;
3) 'stripe_size' for the stripe size;
4) 'rb_node' for insertion into a rb tree;
5) 'refs' for reference counting.
This way we do a single memory allocation for chunk mappings and we don't
waste memory for them with unused/unnecessary fields from an extent_map.
We also save 8 bytes from the extent_map structure by removing the
'map_lookup' pointer, so the size of struct extent_map is reduced from
144 bytes down to 136 bytes, and we can now have 30 extent maps per 4K
page instead of 28.
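A sketch of the new structure as enumerated above (remaining map_lookup
fields trimmed):

```c
struct btrfs_chunk_map {
        struct rb_node rb_node;
        refcount_t refs;
        u64 start;              /* chunk logical address */
        u64 chunk_len;
        u64 stripe_size;
        /* ... type, num_stripes, stripes[], etc. from map_lookup ... */
};
```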
Change-Id: Ie52b5ac83df4bc6abeb84d958c4f5d24ae0d8c75
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
With all the users of strlcpy() removed[1] from the kernel, remove the
API, self-tests, and other references. Leave mentions in Documentation
(about its deprecation), and in checkpatch.pl (to help migrate host-only
tools/ usage). Long live strscpy().
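A typical conversion from the removed API looks like this (strscpy()
NUL-terminates and returns -E2BIG on truncation):

```c
char buf[16];
ssize_t len;

/* Before: strlcpy(buf, name, sizeof(buf)); */
len = strscpy(buf, name, sizeof(buf));
if (len == -E2BIG)
        pr_warn("name truncated\n");
```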
For starting a timer, the timer is enqueued into a bucket of the timer
wheel. The bucket expiry is the de facto expiry of the timer but it is not
equal to the timer expiry because of increasing granularity when the bucket
is in a higher level of the wheel. To be able to figure out in a trace whether a
timer expired in time or not, the bucket expiry time is required as well.
Add bucket expiry time to the timer_start tracepoint and thereby simplify
the arguments.
Change-Id: I4868092765745b1efd0c48f13c0b837f2007dcb6
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
sched: Remove vruntime from trace_sched_stat_runtime()
Tracing the runtime delta makes sense, observer can sum over time.
Tracing the absolute vruntime makes less sense, inconsistent:
absolute-vs-delta, but also vruntime delta can be computed from
runtime delta.
Removing the vruntime thing also makes the two tracepoint sites
identical, allowing the code to be unified in a later patch.
Change-Id: I24ebb4e06dbb646a1af75ac62b74f3821ff197de
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
When the Intel IBT feature is enabled, a CPU supporting this feature
validates that all indirect jumps/calls land on an ENDBR64 instruction.
The kernel seals functions which are not meant to be called indirectly,
which means that calling functions indirectly from their address fetched
using kallsyms or kprobes triggers a crash.
Use the MSR_IA32_S_CET CET_ENDBR_EN MSR bit to temporarily disable ENDBR
validation around indirect calls to kernel functions. Considering that
the main purpose of this feature is to prevent ROP-style attacks,
disabling the ENDBR validation temporarily around the call from a kernel
module does not affect the ROP protection.
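A hedged sketch of the toggling (MSR and bit names are the msr-index.h
ones; the real code also checks for X86_FEATURE_IBT and runs with
preemption disabled):

```c
static u64 lttng_ibt_disable(void)
{
        u64 msr;

        rdmsrl(MSR_IA32_S_CET, msr);
        wrmsrl(MSR_IA32_S_CET, msr & ~CET_ENDBR_EN);
        return msr;
}

static void lttng_ibt_restore(u64 msr)
{
        wrmsrl(MSR_IA32_S_CET, msr);
}
```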
The task_prio() function has been implemented as "return p->prio -
MAX_RT_PRIO;" since at least kernel v3.0, so inline it into
lttng-modules rather than using kallsyms to call the kernel
implementation.
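The inlined equivalent, matching the kernel body quoted above (the
wrapper name is illustrative):

```c
static inline int lttng_task_prio(struct task_struct *p)
{
        return p->prio - MAX_RT_PRIO;
}
```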
A missing call to wrapper_task_prio_init() causes the function pointer
for task_prio to stay NULL, which triggers an OOPS when trying to use the
prio context.
In recent discussions around some performance improvements in the file
handling area we discussed switching the file cache to rely on
SLAB_TYPESAFE_BY_RCU which allows us to get rid of call_rcu() based
freeing for files completely. This is a pretty sensitive change overall
but it might actually be worth doing.
The main downside is the subtlety. The other one is that we should
really wait for Jann's patch to land that enables KASAN to handle
SLAB_TYPESAFE_BY_RCU UAFs. Currently it doesn't but a patch for this
exists.
With SLAB_TYPESAFE_BY_RCU objects may be freed and reused multiple times
which requires a few changes. So it isn't sufficient anymore to just
acquire a reference to the file in question under rcu using
atomic_long_inc_not_zero() since the file might have already been
recycled and someone else might have bumped the reference.
In other words, callers might see reference count bumps from newer
users. For this reason it is necessary to verify that the pointer is the
same before and after the reference count increment. This pattern can be
seen in get_file_rcu() and __files_get_rcu().
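A simplified sketch of that check-after-increment pattern (caller holds
rcu_read_lock(); this is not the literal get_file_rcu() body):

```c
static struct file *get_file_rcu_sketch(struct file __rcu **f)
{
        for (;;) {
                struct file *file = rcu_dereference(*f);

                if (!file)
                        return NULL;
                if (!atomic_long_inc_not_zero(&file->f_count))
                        continue;       /* being freed: reload and retry */
                /* The slot may have been recycled: re-check the pointer. */
                if (file == rcu_dereference(*f))
                        return file;
                fput(file);
        }
}
```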
In addition, it isn't possible to access or check fields in struct file
without first acquiring a reference on it. Not doing that was always
very dodgy and it was only usable for non-pointer data in struct file.
With SLAB_TYPESAFE_BY_RCU it is necessary that callers first acquire a
reference under rcu or they must hold the files_lock of the fdtable.
Failing to do either one of these is a bug.
Thanks to Jann for pointing out that we need to ensure memory ordering
between reallocations and pointer check by ensuring that all subsequent
loads have a dependency on the second load in get_file_rcu() and
providing a fixup that was folded into this patch.
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Change-Id: Iba3663f19a54820afd31a8eeec24b3b5d4b06589
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
This isolate_mode_t flag is effectively unused since 89f6c88a6ab4 ("mm:
__isolate_lru_page_prepare() in isolate_migratepages_block()") as
sc->may_unmap is now checked directly (and only node_reclaim has a mode
that sets it to 0). The last remaining place is mm_vmscan_lru_isolate
tracepoint for the isolate_mode parameter. That one was mainly used to
indicate the active/inactive mode, which the trace-vmscan-postprocess.pl
script consumed, but that got silently broken. After fixing the script by
the previous patch, it does not need the isolate_mode anymore. So just
remove the parameter and with that the whole ISOLATE_UNMAPPED flag.
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: Ie7346886d926a1a9d20bcb1570c587c5e943a1c3
`struct load_op` has a trailing 0-length array `data` member that is
used to refer, in the context of BYTECODE_OP_LOAD_STAR_GLOB_STRING, to
an immediate string operand that follows it.
During the validation of a filtering bytecode, strnlen is properly used
to determine the size of the immediate string operand, with a `maxlen`
parameter that is used to ensure the string operand is contained within
the bytecode (see lttng-bytecode-validator.c:434).
However, recent KSPP-related changes have enabled additional overrun
checks when statically-sized and flexible arrays are used. Those are
enabled when the kernel is built with CONFIG_UBSAN_BOUNDS and/or
CONFIG_FORTIFY_SOURCE configured.
The KBUILD CFLAGS now contain `-fstrict-flex-arrays=3`, which is
recognized by gcc 13+[1] and allows proper coverage of dynamically sized
trailing arrays when those configuration options are used.
With those validations in place, the kernel assumes that the `data`
array is truly of length 0 and it BUGs to warn of an invalid access.
The commit linked above contains a number of links explaining the
rationale for transitioning uses of the trailing zero-length arrays (a
gcc extension) to C99 flexible array members (FAM).
This was discussed at this year's GNU Cauldron [2].
Solution
--------
Uses of zero-length arrays (`foo[0]`) are replaced by flexible array
members (`foo[]`). The only cases that are left untouched are those
where the zero-length array is used to indicate the end of a
structure (i.e. it doesn't indicate that a variable number of elements
follow), see the `metadata_packet_header`, `metadata_record_header`,
`event_notifier_packet_header`, and `event_notifier_record_header`
structures.
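The shape of the conversion, using the struct named above (other
members elided):

```c
struct load_op {
        /* ... */
        char data[];    /* was: char data[0]; */
} __attribute__((__packed__));
```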
It may be desirable to use the new `counted_by` attribute for some of
those in the future (`lttng_kernel_abi_filter_bytecode`,
`lttng_kernel_abi_capture_bytecode`, and `bytecode_runtime`) [3].
Note
----
While this is tagged as a memory handling 'fix', it has no security
implication as far as I can tell. The accesses that are flagged by the
new validations were valid.
This merely allows the runtime validations to understand the memory
layout properly.
Explicitly include mmu.h in spte.h instead of relying on the "parent" to
include mmu.h. spte.h references a variety of macros and variables that
are defined/declared in mmu.h, and so including spte.h before (or instead
of) mmu.h will result in build errors, e.g.
Signed-off-by: Kienan Stewart <kstewart@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Change-Id: I5c3fc87d3b006cefbcca198e6e15868a342cb8dd
Michael Jeanson [Fri, 18 Aug 2023 15:28:30 +0000 (11:28 -0400)]
fix: built-in lttng with kernel >= v6.1
In kernel v6.1 the list of subdirectories was moved from Makefile to
Kbuild. Adjust our built-in.sh script to detect this change and use the
appropriate file to graft ourselves onto the kernel build system.
btrfs: pass find_free_extent_ctl to allocator tracepoints
The allocator tracepoints currently have a pile of values from ffe_ctl.
In modifying the allocator and adding more tracepoints, I found myself
adding to the already long argument list of the tracepoints. It makes it
a lot simpler to just send in the ffe_ctl itself.
Change-Id: Iab4132a9d3df3a6369591a50fb75374b1e399fa4
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
uuid: Decouple guid_t and uuid_le types and respective macros
The guid_t type and respective macros are being used internally only.
The uuid_le type has its users outside the kernel. Decouple these types and
macros, and make guid_t completely internal type to the kernel.
Change-Id: I8644fd139b0630e9cf18886b84e33bffab1e5abd
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
vm_flags are among VMA attributes which affect decisions like VMA merging
and splitting. Therefore all vm_flags modifications are performed after
taking exclusive mmap_lock to prevent vm_flags updates racing with such
operations. Introduce modifier functions for vm_flags to be used whenever
flags are updated. This way we can better check and control correct
locking behavior during these updates.
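A sketch of the modifier idea, close to the upstream helpers minus the
ACCESS_PRIVATE plumbing:

```c
static inline void vm_flags_set(struct vm_area_struct *vma,
                                vm_flags_t flags)
{
        mmap_assert_write_locked(vma->vm_mm);
        vma->vm_flags |= flags;
}

static inline void vm_flags_clear(struct vm_area_struct *vma,
                                  vm_flags_t flags)
{
        mmap_assert_write_locked(vma->vm_mm);
        vma->vm_flags &= ~flags;
}
```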
Change-Id: I2cf662420d9d7748e5e310d3ea4bac98ba7d7f94
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The print format error was found when using ftrace event:
<...>-1406 [000] .... 23599442.895823: jbd2_end_commit: dev 252,8 transaction -1866216965 sync 0 head -1866217368
<...>-1406 [000] .... 23599442.896299: jbd2_start_commit: dev 252,8 transaction -1866216964 sync 0
Use the correct print format for transaction, head and tid.
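Since tid_t is an unsigned int, the fix amounts to switching the format
specifiers (a trimmed sketch, not the full TRACE_EVENT):

```c
TP_printk("dev %d,%d transaction %u sync %d",
          MAJOR(__entry->dev), MINOR(__entry->dev),
          __entry->tid, __entry->sync_commit)
```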
Change-Id: Ieee3d39ed1f2515e096e87d18b5ea8f921c54bd0
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The print format error was found when using ftrace event:
<...>-1406 [000] .... 23599442.895823: jbd2_end_commit: dev 252,8 transaction -1866216965 sync 0 head -1866217368
<...>-1406 [000] .... 23599442.896299: jbd2_start_commit: dev 252,8 transaction -1866216964 sync 0
Use the correct print format for transaction, head and tid.
Change-Id: I7601f5cbb86495c2607be7b11e02724c90b3ebf9
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
This is a large patch, but because they're all macros it's impossible to
split up. Simply copy all of the item accessors in ctree.h and paste
them in accessors.h, and then update any files to include the header so
everything compiles.
Change-Id: I1f0876dd8b7a8687f6802b60c3e3baabd017cc52
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The print format error was found when using ftrace event:
<...>-1406 [000] .... 23599442.895823: jbd2_end_commit: dev 252,8 transaction -1866216965 sync 0 head -1866217368
<...>-1406 [000] .... 23599442.896299: jbd2_start_commit: dev 252,8 transaction -1866216964 sync 0
Use the correct print format for transaction, head and tid.
Change-Id: Ic053f0e0c1e24ebc75bae51d07696aaa5e1c0094
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
x86 x32 system calls are not supported by LTTng. They are currently not
traced simply because their system call number is beyond the range of
NR_compat_syscalls.
However, this mostly happens by accident rather than by design.
Enforce this with an explicit check for in_x32_syscall(), which clearly
documents that those are not supported.
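A hedged sketch of where such a guard sits in the syscall probes (the
function name is illustrative):

```c
static void syscall_entry_probe(void *__data, struct pt_regs *regs, long id)
{
        if (in_x32_syscall())
                return; /* x32 ABI is explicitly unsupported */
        /* ... regular compat/native syscall dispatch ... */
}
```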
mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using
Drop kmem_alloc event class, and define kmalloc and kmem_cache_alloc
using TRACE_EVENT() macro.
And then this patch does:
- Do not pass pointer to struct kmem_cache to trace_kmalloc.
gfp flag is enough to know if it's accounted or not.
- Avoid dereferencing s->object_size and s->size when not using kmem_cache_alloc event.
- Avoid dereferencing s->name when not using the kmem_cache_free event.
- Adjust s->size to SLOB_UNITS(s->size) * SLOB_UNIT in SLOB
Change-Id: Icd7925731ed4a737699c3746cb7bb7760a4e8009
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: handle integer capture page faults as skip field
Now that we have the appropriate save/restore position mechanism for
error handling in place, we can handle page faults on integer
copy-from-user by skipping the offending captured field entirely rather
than relying on an arbitrary 0 value.
Fix: honor "user" attribute for array/sequence of user integers
The macro _lttng_kernel_static_type_integer_from_type() should map to
_lttng_kernel_static_type_integer() to pass the "_user" attribute.
Otherwise, userspace fields such as pipe2's system call fildes field (a
ctf_user_array()) can trigger NULL pointer exceptions and read arbitrary
kernel memory if the pipe2 system call receives a bogus pointer as input
while filtering/capture is accessing this field.
He Zhe [Tue, 27 Sep 2022 07:59:42 +0000 (15:59 +0800)]
wrapper: powerpc64: fix kernel crash caused by do_get_kallsyms
Kernel crashes on powerpc64 ABIv2 as follow when lttng_tracer initializes,
since do_get_kallsyms in lttng_wrapper fails to return a proper address of
kallsyms_lookup_name.
root@qemuppc64:~# lttng create trace_session --live -U net://127.0.0.1
Spawning a session daemon
lttng_kretprobes: loading out-of-tree module taints kernel.
BUG: Unable to handle kernel data access on read at 0xfffffffffffffff8
Faulting instruction address: 0xc0000000001f6fd0
Oops: Kernel access of bad area, sig: 11 [#1]
<snip>
NIP [c0000000001f6fd0] module_kallsyms_lookup_name+0xf0/0x180
LR [c0000000001f6f28] module_kallsyms_lookup_name+0x48/0x180
Call Trace:
module_kallsyms_lookup_name+0x34/0x180 (unreliable)
kallsyms_lookup_name+0x258/0x2b0
wrapper_kallsyms_lookup_name+0x4c/0xd0 [lttng_wrapper]
wrapper_get_pfnblock_flags_mask_init+0x28/0x60 [lttng_wrapper]
lttng_events_init+0x40/0x344 [lttng_tracer]
do_one_initcall+0x78/0x340
do_init_module+0x6c/0x2f0
__do_sys_finit_module+0xd0/0x120
system_call_exception+0x194/0x2f0
system_call_vectored_common+0xe8/0x278
<snip>
do_get_kallsyms makes use of register_kprobe and in turn kprobe_lookup_name
to get the address of the kernel function kallsyms_lookup_name. In case of
PPC64_ELF_ABI_v2, when kprobes are placed at function entry,
kprobe_lookup_name adjusts the global entry point of the function returned
by kallsyms_lookup_name to the local entry point(at some fixed offset of
global one). This adjustment is all for kprobes to be able to work properly.
Global and local entry point are defined in powerpc64 ABIv2.
When the local entry point is given, some instructions at the beginning of
the function are skipped and thus causes the above kernel crash. We just
want to make a simple function call which needs global entry point.
This patch adds 4 bytes which is the length of one instruction to
kallsyms_lookup_name so that it will not trigger the global to local
adjustment, and then subtracts 4 bytes from the returned address. See the
following kernel change for more details.
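A hedged sketch of the described +4/-4 dance (not the referenced kernel
change; field names follow the kprobes API):

```c
static unsigned long do_get_kallsyms(void)
{
        struct kprobe kp = { .symbol_name = "kallsyms_lookup_name" };
        unsigned long addr;

#ifdef PPC64_ELF_ABI_v2
        kp.offset = 4;  /* avoid the global->local entry point fixup */
#endif
        if (register_kprobe(&kp))
                return 0;
        addr = (unsigned long)kp.addr;
        unregister_kprobe(&kp);
#ifdef PPC64_ELF_ABI_v2
        addr -= 4;      /* back to the global entry point */
#endif
        return addr;
}
```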