Commit a9ff648cc "Implement file-backed ring buffer" changes the order
of the backend fields with respect to the frontend per-subbuffer
commit_counters_hot and commit_counters_cold arrays, but does not
change that order in the initial pass that calculates the space
needed. This discrepancy is an issue for the field alignment
calculation.
Let's analyse the situation. If the incorrect position of the
alignment calculation leads to more space being reserved than the
actual allocations require, the user perceives no ill effect. However,
if the calculated space is smaller than the actual allocations, ring
buffer (and thus channel) creation fails.
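As a minimal illustration (pad_to() below is a hypothetical stand-in
for the offset_align() padding computation, not the actual helper):
the padding needed to reach an alignment boundary depends on the
offset at which it is computed, so accounting for fields on the wrong
side of an alignment point changes the computed total.

#include <stddef.h>

/* Hypothetical helper: bytes of padding needed to align "offset" to "align". */
static size_t pad_to(size_t offset, size_t align)
{
        return (align - (offset % align)) % align;
}

/*
 * E.g. pad_to(100, 64) == 28, but pad_to(130, 64) == 62: moving 30
 * bytes of fields across the alignment point changes the padding, and
 * therefore the total computed for the same set of fields.
 */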
The fields misplaced in the size calculation (in officially released
versions) are:
* struct commit_counters_hot, aligned on CAA_CACHE_LINE_SIZE,
* struct commit_counters_cold, aligned on CAA_CACHE_LINE_SIZE.
Those are placed after the backend fields in the size calculation,
whereas they should come before them:
* struct lttng_ust_lib_ring_buffer_backend_pages_shmp, aligned on the
natural alignment of ssize_t,
* alignment on page size,
* struct lttng_ust_lib_ring_buffer_backend_pages, aligned on the
natural alignment of ssize_t,
* struct lttng_ust_lib_ring_buffer_backend_subbuffer, aligned on the
natural alignment of unsigned long,
* struct lttng_ust_lib_ring_buffer_backend_counts, aligned on the
natural alignment of uint64_t.
The largest alignment is the page-size alignment within the backend
fields. For channels configured with specific ranges of sub-buffer
counts, the commit counter arrays reach dimensions for which the
page-size alignment padding computed by the space calculation is
smaller than the padding required by the actual allocations. The
space calculation then under-estimates the size needed, the space
allocation fails, and channel creation fails.
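For instance (made-up sizes, assuming a 4096-byte page size; the real
sizes depend on the structure definitions and on the sub-buffer
count): if the fields preceding the page-size alignment add up to
4000 bytes and the misplaced commit counter arrays add up to 200
bytes, the two orderings give:

#include <stdio.h>

int main(void)
{
        const size_t page = 4096;
        const size_t before_page_align = 4000; /* fields before the page-size alignment (made up) */
        const size_t counters = 200;           /* hot + cold commit counter arrays (made up) */

        /* Actual layout: the commit counters sit before the page-size alignment. */
        size_t allocated = before_page_align + counters;
        allocated += (page - allocated % page) % page;  /* 4200 -> 8192 */

        /* Released size calculation: counters accounted for after the alignment. */
        size_t reserved = before_page_align;
        reserved += (page - reserved % page) % page;    /* 4000 -> 4096 */
        reserved += counters;                           /* 4296 */

        /* reserved (4296) < allocated (8192): the reserved area is too small. */
        printf("allocated %zu, reserved %zu\n", allocated, reserved);
        return 0;
}

Moving the commit counter (and sampled timestamp end) accounting
before the backend fields, as done below, makes the space calculation
match the allocation order.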
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
/* Per-cpu buffer size: control (prior to backend) */
shmsize = offset_align(shmsize, __alignof__(struct lttng_ust_lib_ring_buffer));
shmsize += sizeof(struct lttng_ust_lib_ring_buffer);
+ shmsize += offset_align(shmsize, __alignof__(struct commit_counters_hot));
+ shmsize += sizeof(struct commit_counters_hot) * num_subbuf;
+ shmsize += offset_align(shmsize, __alignof__(struct commit_counters_cold));
+ shmsize += sizeof(struct commit_counters_cold) * num_subbuf;
+ /* Sampled timestamp end */
+ shmsize += offset_align(shmsize, __alignof__(uint64_t));
+ shmsize += sizeof(uint64_t) * num_subbuf;
/* Per-cpu buffer size: backend */
/* num_subbuf + 1 is the worse case */
shmsize += sizeof(struct lttng_ust_lib_ring_buffer_backend_subbuffer) * num_subbuf;
shmsize += offset_align(shmsize, __alignof__(struct lttng_ust_lib_ring_buffer_backend_counts));
shmsize += sizeof(struct lttng_ust_lib_ring_buffer_backend_counts) * num_subbuf;
- /* Per-cpu buffer size: control (after backend) */
- shmsize += offset_align(shmsize, __alignof__(struct commit_counters_hot));
- shmsize += sizeof(struct commit_counters_hot) * num_subbuf;
- shmsize += offset_align(shmsize, __alignof__(struct commit_counters_cold));
- shmsize += sizeof(struct commit_counters_cold) * num_subbuf;
- /* Sampled timestamp end */
- shmsize += offset_align(shmsize, __alignof__(uint64_t));
- shmsize += sizeof(uint64_t) * num_subbuf;
if (config->alloc == RING_BUFFER_ALLOC_PER_CPU) {
struct lttng_ust_lib_ring_buffer *buf;