From: Mathieu Desnoyers
Date: Fri, 10 May 2019 15:51:10 +0000 (-0400)
Subject: Fix: alignment of ring buffer shm space reservation
X-Git-Tag: v2.11.0-rc3~5
X-Git-Url: https://git.lttng.org./?a=commitdiff_plain;h=9cce4078365a1508d79983a5c746d71582e8ba02;p=lttng-ust.git

Fix: alignment of ring buffer shm space reservation

commit a9ff648cc "Implement file-backed ring buffer" changed the order
of the backend fields with respect to the frontend per-sub-buffer
commit_counters_hot and commit_counters_cold arrays, but did not change
that order in the initial pass that calculates the space needed. This
discrepancy matters for the field alignment calculation.

Let's analyse the situation. If the misplaced alignment calculation
reserves more space than the actual allocations need, the user
perceives no ill effect. However, if the calculated space is less than
what the allocations need, ring buffer (and thus channel) creation
fails.

The fields that are misplaced in the size calculation (in officially
released versions) are:

* struct commit_counters_hot, aligned on CAA_CACHE_LINE_SIZE,
* struct commit_counters_cold, aligned on CAA_CACHE_LINE_SIZE.

They are placed after (but should be placed before) the backend fields:

* struct lttng_ust_lib_ring_buffer_backend_pages_shmp, aligned on the
  natural alignment of ssize_t,
* alignment on page size,
* struct lttng_ust_lib_ring_buffer_backend_pages, aligned on the
  natural alignment of ssize_t,
* struct lttng_ust_lib_ring_buffer_backend_subbuffer, aligned on the
  natural alignment of unsigned long,
* struct lttng_ust_lib_ring_buffer_backend_counts, aligned on the
  natural alignment of uint64_t.

The largest alignment is the page-size alignment in the backend fields.
For a channel configured within specific ranges of sub-buffer count,
the commit counters arrays reach dimensions for which the page-size
padding computed in the space calculation is smaller than the padding
actually required by the allocation layout. The space allocation then
fails, which in turn makes channel creation fail. A minimal standalone
sketch of this ordering effect follows the diff below.
Signed-off-by: Mathieu Desnoyers
---

diff --git a/libringbuffer/ring_buffer_backend.c b/libringbuffer/ring_buffer_backend.c
index 431b8eae..2dea1db5 100644
--- a/libringbuffer/ring_buffer_backend.c
+++ b/libringbuffer/ring_buffer_backend.c
@@ -322,6 +322,13 @@ int channel_backend_init(struct channel_backend *chanb,
 	/* Per-cpu buffer size: control (prior to backend) */
 	shmsize = offset_align(shmsize, __alignof__(struct lttng_ust_lib_ring_buffer));
 	shmsize += sizeof(struct lttng_ust_lib_ring_buffer);
+	shmsize += offset_align(shmsize, __alignof__(struct commit_counters_hot));
+	shmsize += sizeof(struct commit_counters_hot) * num_subbuf;
+	shmsize += offset_align(shmsize, __alignof__(struct commit_counters_cold));
+	shmsize += sizeof(struct commit_counters_cold) * num_subbuf;
+	/* Sampled timestamp end */
+	shmsize += offset_align(shmsize, __alignof__(uint64_t));
+	shmsize += sizeof(uint64_t) * num_subbuf;
 
 	/* Per-cpu buffer size: backend */
 	/* num_subbuf + 1 is the worse case */
@@ -336,14 +343,6 @@ int channel_backend_init(struct channel_backend *chanb,
 	shmsize += sizeof(struct lttng_ust_lib_ring_buffer_backend_subbuffer) * num_subbuf;
 	shmsize += offset_align(shmsize, __alignof__(struct lttng_ust_lib_ring_buffer_backend_counts));
 	shmsize += sizeof(struct lttng_ust_lib_ring_buffer_backend_counts) * num_subbuf;
-	/* Per-cpu buffer size: control (after backend) */
-	shmsize += offset_align(shmsize, __alignof__(struct commit_counters_hot));
-	shmsize += sizeof(struct commit_counters_hot) * num_subbuf;
-	shmsize += offset_align(shmsize, __alignof__(struct commit_counters_cold));
-	shmsize += sizeof(struct commit_counters_cold) * num_subbuf;
-	/* Sampled timestamp end */
-	shmsize += offset_align(shmsize, __alignof__(uint64_t));
-	shmsize += sizeof(uint64_t) * num_subbuf;
 
 	if (config->alloc == RING_BUFFER_ALLOC_PER_CPU) {
 		struct lttng_ust_lib_ring_buffer *buf;
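
The program below is a minimal standalone sketch (an editor's illustration, not lttng-ust code) of why the accumulation order matters. The structure sizes (frontend_size, backend_size, counters_size), the CACHE_LINE_DEMO and PAGE_SIZE_DEMO constants, and the align_pad() helper are all made-up stand-ins; only the padding arithmetic mirrors what offset_align() computes in channel_backend_init(). It compares the size obtained when the commit counters are accounted for after the page-aligned backend fields (the buggy estimate) against the size obtained when they are accounted for before them (the actual layout), and reports the sub-buffer counts for which the estimate comes up short.

/*
 * Standalone sketch of the shm size ordering issue.  All sizes are
 * hypothetical stand-ins for the real structure sizes; only the
 * alignment arithmetic mirrors offset_align().
 */
#include <stddef.h>
#include <stdio.h>

#define CACHE_LINE_DEMO	64UL	/* stand-in for CAA_CACHE_LINE_SIZE */
#define PAGE_SIZE_DEMO	4096UL	/* stand-in for the runtime page size */

/* Padding needed to align `offset` on `align` (a power of two). */
static size_t align_pad(size_t offset, size_t align)
{
	return (align - offset) & (align - 1);
}

/* Hypothetical sizes of the frontend and backend bookkeeping. */
static const size_t frontend_size = 200;	/* frontend struct (made up) */
static const size_t backend_size  = 512;	/* backend arrays after the page alignment (made up) */
static const size_t counters_size = 2 * CACHE_LINE_DEMO; /* hot + cold, per sub-buffer */

/* Size computed with the commit counters placed *after* the backend (the buggy estimate). */
static size_t size_counters_last(size_t num_subbuf)
{
	size_t sz = frontend_size;

	sz += align_pad(sz, PAGE_SIZE_DEMO);	/* backend page alignment */
	sz += backend_size;
	sz += align_pad(sz, CACHE_LINE_DEMO);	/* commit counters */
	sz += counters_size * num_subbuf;
	return sz;
}

/* Size actually consumed, with the commit counters placed *before* the backend. */
static size_t size_counters_first(size_t num_subbuf)
{
	size_t sz = frontend_size;

	sz += align_pad(sz, CACHE_LINE_DEMO);	/* commit counters */
	sz += counters_size * num_subbuf;
	sz += align_pad(sz, PAGE_SIZE_DEMO);	/* backend page alignment */
	sz += backend_size;
	return sz;
}

int main(void)
{
	size_t n;

	for (n = 1; n <= 64; n++) {
		size_t reserved = size_counters_last(n);
		size_t needed = size_counters_first(n);

		if (needed > reserved)
			printf("num_subbuf=%zu: reserved %zu bytes < needed %zu bytes -> shm allocation would fail\n",
			       n, reserved, needed);
	}
	return 0;
}

With these made-up sizes the loop reports a few sub-buffer counts for which the reserved size falls short of what the allocation pass actually consumes; with the real structure sizes and the runtime page size the problematic ranges differ, but the mechanism is the same, which is why the fix moves the commit counter terms before the backend terms in the size calculation.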