Now that the membarrier system call is allocated on tile, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
Do so by creating headers specifically for tile, which rely on the
gcc atomic and memory barrier builtins.
Now that the membarrier system call is allocated on ia64, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
Do so by creating headers specifically for ia64, which rely on the
gcc atomic and memory barrier builtins.
Now that the membarrier system call is allocated on aarch64, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
Do so by creating headers specifically for aarch64, which rely on the
gcc atomic and memory barrier builtins.
powerpc64le was originally added to urcu with the "gcc" generic
architecture support. After testing, it appears that the "ppc"
architecture works as well.
Move to the "ppc" architecture so it becomes the same as other powerpc
32/64 (big endian) architectures.
Doing so wires up the membarrier system call on powerpc64le.
Now that the membarrier system call is allocated on ARM, allocate its
number in our architecture header if the system headers don't allocate
it. This allows using the membarrier system call as soon as it is
implemented in the kernel, even if the distribution has old kernel headers.
Now that the membarrier system call is allocated on s390/s390x, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
Now that the membarrier system call is allocated on powerpc, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
The documentation of the RCU-based synchronization technique in lfstack
is too strict. It currently states that the cds_lfs_node structure
cannot be overwritten before a grace period has passed. However, lfstack
pop only uses the next pointer as the replacement value when doing the
cmpxchg on the head. After the node has been popped from the stack, a
concurrent cmpxchg trying to pop that same node will necessarily fail, as
long as a grace period separates the pop/pop_all from re-adding the node
into the stack.
It is therefore sufficient to wait for a grace period between:
1) pop/pop_all and
2) freeing the node (to ensure existence for concurrent pop trying to
read node->next) or re-adding the node into the stack.
This node re-use constraint relaxation is only possible because we don't
care about the node->next content read by a concurrent pop: it will
simply be discarded by the cmpxchg on the head. Be careful not to apply this relaxed
constraint to other data structures which care about the content of the
node's next pointer (e.g. wfstack).
This relaxed constraint allows implementing efficient free-lists (memory
allocation) with a lock-free allocation/free based on lfstack: it allows
re-using the memory backing the free-list node immediately after
allocation. The only requirement with respect to this use-case is to
wait for a grace period before putting the node back into the free-list.
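To make the relaxed constraint concrete, here is a minimal free-list
sketch (illustrative only; it assumes the calling threads are registered
with RCU and that cds_lfs_init() was called on the stack beforehand):

#include <stddef.h>
#include <urcu.h>		/* rcu_read_lock(), synchronize_rcu() */
#include <urcu/lfstack.h>	/* cds_lfs_stack, cds_lfs_push, cds_lfs_pop_blocking */
#include <urcu/compiler.h>	/* caa_container_of() */

struct my_buf {
	struct cds_lfs_node node;	/* free-list linkage */
	char data[64];
};

static struct cds_lfs_stack free_list;

static struct my_buf *buf_alloc(void)
{
	struct cds_lfs_node *snode;

	rcu_read_lock();	/* guarantees node existence for concurrent pops reading node->next */
	snode = cds_lfs_pop_blocking(&free_list);
	rcu_read_unlock();
	if (!snode)
		return NULL;
	/* The memory may be reused right away: concurrent pops that read the
	 * stale node->next will fail their cmpxchg on the head and discard it. */
	return caa_container_of(snode, struct my_buf, node);
}

static void buf_free(struct my_buf *buf)
{
	synchronize_rcu();	/* grace period before the node re-enters the stack */
	cds_lfs_node_init(&buf->node);
	cds_lfs_push(&free_list, &buf->node);
}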
Also update test_urcu_lfs to poison the next pointer immediately
after pop/pop_all to make sure we test this relaxed constraint.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: Lai Jiangshan <jiangshanlai@gmail.com> CC: lttng-dev@lists.lttng.org CC: rp@svcs.cs.pdx.edu
Cleanup: tests: Branch condition evaluates to a garbage value
scan-build reported this:
Logic error: Branch condition evaluates to a garbage value
tests/benchmark/test_urcu_hash_rw.c:170
Logic error: Branch condition evaluates to a garbage value
tests/benchmark/test_urcu_hash_rw.c:274
It should never happen based on code review, but silence this warning by
initializing to NULL.
CID 1021635 (#1 of 2): Unchecked return value (CHECKED_RETURN)
7. check_return: Calling pthread_mutex_unlock without checking return
value (as is done elsewhere 29 out of 33 times).
CID 1021634 (#2 of 2): Unchecked return value (CHECKED_RETURN)
12. check_return: Calling pthread_mutex_unlock without checking return
value (as is done elsewhere 29 out of 33 times).
CID 1021642 (#1 of 2): Side effect in assertion (ASSERT_SIDE_EFFECT)
assert_side_effect: Argument test_array of assert() has a side effect
because the variable is volatile. The containing function might work
differently in a non-debug build.
Now that the membarrier system call is allocated on x86 32/64, allocate
its number in our architecture header if the system headers don't
allocate it. This allows using the membarrier system call as soon as it
is implemented in the kernel, even if the distribution has old kernel
headers.
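For illustration, the fallback pattern on x86 can look like the sketch
below; the syscall numbers are the upstream Linux ones for x86-64 and
x86-32, and the exact guards and header placement are assumptions:

#include <sys/syscall.h>

#ifndef __NR_membarrier
# if defined(__x86_64__)
#  define __NR_membarrier	324	/* upstream x86-64 syscall number */
# elif defined(__i386__)
#  define __NR_membarrier	375	/* upstream x86-32 syscall number */
# endif
#endif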
Allows getting a reference atomically if the reference count is not
zero. Returns true if the reference is taken, false otherwise. This
needs to be used in conjunction with another synchronization technique
(e.g. RCU or mutex) to ensure existence of the reference count.
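A minimal sketch of how such a primitive can be built on a
compare-and-swap loop (illustrative; my_ref_get_unless_zero is a
hypothetical name, the shipped API lives in urcu/ref.h):

#include <stdbool.h>
#include <urcu/ref.h>		/* struct urcu_ref */
#include <urcu/uatomic.h>	/* uatomic_read(), uatomic_cmpxchg() */

static inline bool my_ref_get_unless_zero(struct urcu_ref *ref)
{
	long old, newval, res;

	old = uatomic_read(&ref->refcount);
	for (;;) {
		if (old == 0)
			return false;	/* count hit zero: object is going away */
		newval = old + 1;
		res = uatomic_cmpxchg(&ref->refcount, old, newval);
		if (res == old)
			return true;	/* reference successfully taken */
		old = res;		/* lost the race: retry with the updated value */
	}
}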
Fix: dynamic fallback to compat futex on sys_futex ENOSYS
Some MIPS processors (e.g. Cavium Octeon II) dynamically check whether
the CPU supports ll/sc within sys_futex, and return an ENOSYS errno if
it doesn't, even though the architecture implements sys_futex.
Handle this situation by always building the sys_futex compatibility
layer, and falling back on it if sys_futex returns ENOSYS. This is
a tiny compat layer which adds very little space overhead.
This adds an unlikely branch on return from sys_futex, which should
not be an issue performance-wise (we've already taken a system call).
Since this is a fall-back mode, don't try to be clever, and don't cache
the result, so that the common cases (architectures with a properly
working sys_futex) don't get two conditional branches, just one.
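A hedged sketch of the dynamic fallback described above (simplified; it
assumes the futex() syscall wrapper and compat_futex_noasync() from the
compat layer, and caa_unlikely() from urcu/compiler.h):

#include <errno.h>
#include <stdint.h>
#include <time.h>

static inline int futex_noasync(int32_t *uaddr, int op, int32_t val,
		const struct timespec *timeout, int32_t *uaddr2, int32_t val3)
{
	int ret;

	ret = futex(uaddr, op, val, timeout, uaddr2, val3);
	if (caa_unlikely(ret < 0 && errno == ENOSYS)) {
		/* Some CPUs return ENOSYS at run time: fall back on the
		 * always-built compat layer, without caching the result. */
		return compat_futex_noasync(uaddr, op, val, timeout,
				uaddr2, val3);
	}
	return ret;
}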
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: Michael Jeanson <mjeanson@efficios.com> CC: Jon Bernard <jbernard@debian.org>
Use the urcu_assert() macro (enabled on DEBUG_RCU) to check for
unmatched rcu_read_lock() that eventually leads to nesting counter
overflow in urcu.h and urcu-bp.h. This won't necessarily point to the
exact rcu_read_lock() that is unmatched, but will at least detect the
overflow condition.
Use the urcu_assert() macro (enabled on DEBUG_RCU) to check for
unmatched rcu_read_unlock() that leads to nesting counter underflow in
urcu.h and urcu-bp.h.
Add a "registered" flag to urcu.c and urcu-qsbr.c, set/cleared when a
thread is registered and unregistered. Add corresponding asserts in
those functions checking if a thread is registered or unregistered more
than once (which would be a bug in the way the application uses urcu).
Move the checks enabled on RCU_DEBUG to a single header: urcu/debug.h.
Add checks for the registered flag in RCU read-side lock functions (new
urcu_assert() checks, which are only built-in if RCU_DEBUG is defined at
compile-time).
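A minimal sketch of the idea (names purely illustrative; the shipped
macro is urcu_assert() in urcu/debug.h, gated on DEBUG_RCU/RCU_DEBUG,
and the registered flag lives in the per-thread reader state):

#include <assert.h>
#include <stdbool.h>

#ifdef DEBUG_RCU
# define urcu_assert(cond)	assert(cond)
#else
# define urcu_assert(cond)	do { } while (0)
#endif

/* Illustrative per-thread reader state: */
static __thread struct {
	unsigned long ctr;	/* read-side nesting count */
	bool registered;	/* set by rcu_register_thread() */
} reader;

static inline void my_read_lock(void)
{
	urcu_assert(reader.registered);	/* catch readers that never registered */
	reader.ctr++;
	urcu_assert(reader.ctr);	/* catch nesting counter overflow */
}

static inline void my_read_unlock(void)
{
	urcu_assert(reader.ctr);	/* catch unmatched unlock (underflow) */
	reader.ctr--;
}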
From Coverity:
CID 1021642 (#1 of 3): Side effect in assertion (ASSERT_SIDE_EFFECT)
assert_side_effect: Argument test_array of assert() has a side effect
because the variable is volatile. The containing function might work
differently in a non-debug build.
sys_membarrier underwent changes between its original implementation and
its upcoming inclusion into the Linux kernel. Update its use to follow
those changes.
Should the prior user-space code be built against a kernel header that
defines SYS_membarrier, and executed against that kernel, the following
scenarios may happen:
- -1 will be returned with EINVAL errno if the 2nd argument (flags) is
non-zero (the previous ABI expected a single argument),
- (MEMBARRIER_EXPEDITED | MEMBARRIER_QUERY) defined as
(1 << 0) | (1 << 16) will return -1 with EINVAL errno, because valid
commands are now one-hot.
Therefore, should an incompatible user-space code try to use
sys_membarrier, it will simply think that the system does not have
membarrier support due to the negative return value upon query.
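A hedged sketch of the query under the new one-hot command ABI (command
names and values follow the upstream membarrier UAPI, and __NR_membarrier
is assumed to be defined; the probe function itself is illustrative):

#include <unistd.h>
#include <sys/syscall.h>

enum {
	MEMBARRIER_CMD_QUERY  = 0,		/* returns a mask of supported commands */
	MEMBARRIER_CMD_SHARED = (1 << 0),	/* commands are now one-hot */
};

static int has_sys_membarrier;

static void membarrier_probe(void)
{
	int ret;

	ret = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);
	if (ret >= 0 && (ret & MEMBARRIER_CMD_SHARED))
		has_sys_membarrier = 1;
	/* A negative return (ENOSYS, or EINVAL from an old ABI) simply
	 * means "no membarrier support": fall back on memory barriers. */
}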
Khem Raj [Sun, 23 Aug 2015 04:38:30 +0000 (21:38 -0700)]
uatomic: Specify complete types for atomic function calls
This was unearthed by the clang compiler, which complained about a
parameter mismatch; gcc doesn't notice this:
urcu/uatomic/generic.h:190:10: error: address argument to atomic builtin
must be a pointer to integer or pointer ('void *' invalid)
return __sync_add_and_fetch_4(addr, val);
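The implied fix is to hand the builtin a pointer to a complete integer
type instead of void *, for example (illustrative helper, not the exact
upstream change):

#include <stdint.h>

static inline uint32_t add_and_fetch_u32(void *addr, uint32_t val)
{
	/* clang rejects void * here; cast to a complete integer type. */
	return __sync_add_and_fetch((uint32_t *) addr, val);
}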
Fix: handle sys_futex() FUTEX_WAIT interrupted by signal
We need to handle EINTR returned by sys_futex() FUTEX_WAIT, otherwise a
signal interrupting this system call could make sys_futex return too
early, and therefore cause a synchronization issue.
Ensure that the futex compatibility layer returns meaningful errors and
errno when using poll() or pthread cond variables.
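A hedged sketch of the retry pattern (the wrapper and variable names are
illustrative; futex_noasync() and uatomic_read() come from the liburcu
headers):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <linux/futex.h>

static void wait_on_futex(int32_t *futex_addr)
{
	if (uatomic_read(futex_addr) != -1)
		return;		/* nothing to wait for */
	while (futex_noasync(futex_addr, FUTEX_WAIT, -1, NULL, NULL, 0)) {
		switch (errno) {
		case EINTR:
			break;		/* interrupted by a signal: retry FUTEX_WAIT */
		case EWOULDBLOCK:
			return;		/* value already changed: no need to wait */
		default:
			abort();	/* unexpected error */
		}
	}
}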
Reported-by: Gerd Gerats <geg@ngncc.de> CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: Lai Jiangshan <laijs@cn.fujitsu.com> CC: Stephen Hemminger <shemminger@vyatta.com> CC: Alan Stern <stern@rowland.harvard.edu> CC: lttng-dev@lists.lttng.org CC: rp@svcs.cs.pdx.edu Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Make call_rcu_thread() affine itself more persistently
Currently, URCU simply fails if a call_rcu_thread() fails to affine
itself. This is problematic when execution is constrained by cgroup
and hotunplugged CPUs. This commit therefore makes call_rcu_thread()
retry setting its affinity every 256 grace periods, but only if it
detects that it migrated to a different CPU. Since sched_getcpu() is
cheap on many architectures, this check is less costly than going
through a system call.
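A hedged sketch of the periodic re-affinity check (the worker struct and
field names below are illustrative stand-ins for the call_rcu worker
data):

#define _GNU_SOURCE
#include <sched.h>

struct worker {			/* illustrative stand-in for the call_rcu worker data */
	int cpu_affinity;	/* requested CPU, or -1 for none */
	unsigned int gp_count;	/* grace periods handled by this worker */
};

/* Re-affine at most once every 256 grace periods, and only if we migrated. */
static int maybe_reaffine(struct worker *w)
{
	cpu_set_t mask;

	if (w->cpu_affinity < 0)
		return 0;
	if (++w->gp_count & 0xff)
		return 0;	/* only check every 256 grace periods */
	if (sched_getcpu() == w->cpu_affinity)
		return 0;	/* no migration detected: cheap early exit */
	CPU_ZERO(&mask);
	CPU_SET(w->cpu_affinity, &mask);
	return sched_setaffinity(0, sizeof(mask), &mask);
}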
Reported-by: Michael Jeanson <mjeanson@efficios.com> Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
/usr/include/features.h:148:3: warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE" [-Wcpp]
# warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE"
From http://man7.org/linux/man-pages/man7/feature_test_macros.7.html:
_BSD_SOURCE (deprecated since glibc 2.20)
[...]
Since glibc 2.20, this macro is deprecated. It now has the same effect
as defining _DEFAULT_SOURCE, but generates a compile-time warning
(unless _DEFAULT_SOURCE is also defined). Use _DEFAULT_SOURCE instead.
To allow code that requires _BSD_SOURCE in glibc 2.19 and earlier and
_DEFAULT_SOURCE in glibc 2.20 and later to compile without warnings,
define both _BSD_SOURCE and _DEFAULT_SOURCE.
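In practice this means defining both feature-test macros before any
system header is included:

/* Keep builds warning-free on both old and new glibc, as the man page advises. */
#define _BSD_SOURCE
#define _DEFAULT_SOURCE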
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Holding rcu_gp_lock across the whole grace period can block a thread on
join, and thus have the side effect of deadlocking a thread doing a
pthread_join while within an RCU read-side critical section. This join
would be waiting for completion of rcu_register_thread or
rcu_unregister_thread, which may never complete because the rcu_gp_lock
is held by synchronize_rcu executed from another thread.
One solution to fix this is to add a new lock, rcu_registry_lock. This
lock now protects the thread registry. It is released between iterations
on the registry by synchronize_rcu, thus allowing thread
registration/unregistration to complete even though synchronize_rcu is
waiting for RCU read-side critical sections to complete.
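A heavily simplified skeleton of the resulting locking (illustrative;
the real code is in urcu.c's synchronize_rcu()/wait_for_readers()):

#include <pthread.h>

static pthread_mutex_t rcu_gp_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t rcu_registry_lock = PTHREAD_MUTEX_INITIALIZER;

/* rcu_registry_lock is dropped while waiting for readers, so
 * rcu_register_thread()/rcu_unregister_thread() can make progress. */
static void synchronize_rcu_skeleton(void)
{
	pthread_mutex_lock(&rcu_gp_lock);	/* one grace period at a time */

	pthread_mutex_lock(&rcu_registry_lock);	/* snapshot the registry */
	/* ... record which readers are in a critical section ... */
	pthread_mutex_unlock(&rcu_registry_lock);

	/* ... wait for those readers here, with the registry unlocked ... */

	pthread_mutex_lock(&rcu_registry_lock);	/* re-check remaining readers */
	/* ... */
	pthread_mutex_unlock(&rcu_registry_lock);

	pthread_mutex_unlock(&rcu_gp_lock);
}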
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: Eugene Ivanov <Eugene.Ivanov@orc-group.com> CC: Lai Jiangshan <laijs@cn.fujitsu.com> CC: Stephen Hemminger <stephen@networkplumber.org>
Luca Boccassi [Wed, 25 Mar 2015 19:39:00 +0000 (19:39 +0000)]
Mark braced-groups within expressions with __extension__
Braced-groups within expressions are not valid ISO C, so
if a macro uses them and it's included in a project built
with -pedantic, the build will fail. GCC and Clang do
support them as an extension, so marking them as such allows
the build to complete even with -pedantic.
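For example (hypothetical macro, not taken from urcu), a braced-group
within an expression annotated so that -pedantic builds do not warn:

#define MAX_OF(a, b)				\
	__extension__				\
	({					\
		__typeof__(a) _a = (a);		\
		__typeof__(b) _b = (b);		\
		_a > _b ? _a : _b;		\
	})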
The Userspace RCU compatibility layer around sys_futex has a race
condition which makes pretty much all "benchmark" tests hang pretty
quickly on non-Linux systems (tested on Mac OS X).
I narrowed it down to a bug in compat_futex_noasync: this compat layer
uses a single pthread mutex and condition variable for all callers,
independently of their uaddr. The FUTEX_WAKE performs a pthread cond
broadcast to all waiters. FUTEX_WAIT must then compare *uaddr with val
to see which thread has been awakened.
Unfortunately, the check was not done again after each return from
pthread_cond_wait(), thus causing the race.
This race affects threads using the futex_noasync() compatibility layer
concurrently, and thus only affects non-Linux systems.
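A hedged sketch of the corrected FUTEX_WAIT path in the compat layer
(simplified from the description above; the real code in compat_futex.c
also handles the async variant):

#include <pthread.h>
#include <stdint.h>
#include <urcu/system.h>	/* CMM_LOAD_SHARED() */

static pthread_mutex_t compat_futex_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t compat_futex_cond = PTHREAD_COND_INITIALIZER;

static void compat_futex_wait(int32_t *uaddr, int32_t val)
{
	pthread_mutex_lock(&compat_futex_lock);
	/* Re-check *uaddr after every wakeup: the single condition variable
	 * is broadcast to all waiters, whatever uaddr they wait on. */
	while (CMM_LOAD_SHARED(*uaddr) == val)
		pthread_cond_wait(&compat_futex_cond, &compat_futex_lock);
	pthread_mutex_unlock(&compat_futex_lock);
}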
Because the call_rcu implementation is included within the RCU flavors,
calling the RCU API goes through the API meant for non-LGPL code (this
is a special case for the RCU flavor implementation .c file). Since this
is clearly LGPL code, we can use the inline versions.
It appears that just casting to "unsigned long" already has the semantic
we are looking for (checked by reading the C99 standard and by
experimentation): it sign-extends smaller signed integers, and does not
sign-extend unsigned integers.
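A small illustration of that conversion rule (assuming a 64-bit
unsigned long):

#include <stdint.h>

void cast_example(void)
{
	int32_t  s = -1;		/* smaller signed integer */
	uint32_t u = 0xffffffffU;	/* smaller unsigned integer */

	unsigned long a = (unsigned long) s;	/* sign-extended: 0xffffffffffffffff */
	unsigned long b = (unsigned long) u;	/* zero-extended: 0x00000000ffffffff */

	(void) a; (void) b;
}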
Fix: preserve example files' timestamps when copying
This fixes an issue where examples were always being rebuilt
when performing an out of tree build since the examples were
being copied to the build directory with a timestamp more
recent than the already-built example objects.
Eric Wong [Mon, 1 Sep 2014 21:25:06 +0000 (21:25 +0000)]
wfstack: implement mutex-free wfstack with transparent union
This allows users more freedom to use alternative synchronization
mechanisms.
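A hedged sketch of what this enables: the mutex-free __cds_wfs_* API
with caller-provided locking (API names taken from urcu/wfstack.h; treat
the exact signatures as assumptions, and note the stack must have been
initialized with __cds_wfs_init()):

#include <pthread.h>
#include <urcu/wfstack.h>

static struct __cds_wfs_stack my_stack;		/* variant without the internal mutex */
static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;

static struct cds_wfs_node *my_pop(void)
{
	struct cds_wfs_node *node;

	pthread_mutex_lock(&my_lock);	/* caller-provided synchronization */
	node = __cds_wfs_pop_blocking(&my_stack);	/* transparent union accepts both stack types */
	pthread_mutex_unlock(&my_lock);
	return node;
}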
Changes since v1:
- Fix typos in cds_wfs_stack_ptr_t documentation.
Thanks to Mathieu for spotting.
Signed-off-by: Eric Wong <normalperson@yhbt.net> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fix: incorrect parenthesis in cds_hlist_for_each_entry_safe_2
commit db903109f0031c831e8fdc95cb7197996e53f46d introduced a regression
in cds_hlist_for_each_entry_safe_2(): an incorrect parenthesis assigns
"e" to 1, rather than assigning "e" to the next pointer and evaluating
the expression to 1 (comma expression).
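A generic illustration of the precedence issue (not the actual macro):

void precedence_example(void)
{
	int e, next = 42, cond;

	cond = (e = next, 1);	/* intended: assign next to e, whole expression is 1 */
	e = (next, 1);		/* broken parenthesization: the comma expression is 1, so e == 1 */
	(void) e; (void) cond;
}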
Reported-by: Daniel Thibault <Daniel.Thibault@drdc-rddc.gc.ca> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Do not free the rcu_barrier() completion struct until all threads are
done with it.
It cannot reside on the waiter's stack as rcu_barrier() may return
before the call_rcu handlers have finished checking whether it needs a
futex wakeup. Instead we dynamically allocate the structure and
determine its lifetime with a reference count.
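A hedged sketch of the resulting refcounted completion (field names
follow the description above and urcu/ref.h; simplified):

#include <stdint.h>
#include <stdlib.h>
#include <urcu/ref.h>		/* struct urcu_ref */
#include <urcu/compiler.h>	/* caa_container_of() */

struct call_rcu_completion {
	int barrier_count;	/* callbacks still to be processed */
	int32_t futex;		/* the rcu_barrier() caller sleeps here */
	struct urcu_ref ref;	/* held by the waiter and by each call_rcu worker */
};

static void call_rcu_completion_release(struct urcu_ref *ref)
{
	/* Freed only once the last user (waiter or worker) drops its reference. */
	free(caa_container_of(ref, struct call_rcu_completion, ref));
}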
Signed-off-by: Keir Fraser <keir@cohodata.com>
[ Edit by Mathieu Desnoyers: use urcu/ref.h. Cleanup: use
uatomic_sub_return() rather than uatomic_add_return() with negative
value. ] Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
call_rcu threads should clear their PAUSED flag when they unpause
And call_rcu_after_fork_parent should spin-wait on this.
Otherwise a second fork in the parent will see the PAUSED flags
already set and call_rcu_before_fork will not correctly wait for the
call_rcu threads to quiesce on this second occasion.
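A hedged sketch of the handshake (the flag value and struct below are
illustrative stand-ins for the call_rcu worker data):

#include <poll.h>
#include <urcu/uatomic.h>

#define WORKER_PAUSED	(1 << 1)	/* illustrative flag value */

struct worker { unsigned long flags; };

/* Worker side: clear the flag when resuming after the fork handshake. */
static void worker_unpause(struct worker *w)
{
	uatomic_and(&w->flags, ~(unsigned long) WORKER_PAUSED);
}

/* Parent side (call_rcu_after_fork_parent): wait until the worker has
 * really resumed, so a second fork starts from a clean PAUSED state. */
static void wait_for_unpause(struct worker *w)
{
	while (uatomic_read(&w->flags) & WORKER_PAUSED)
		(void) poll(NULL, 0, 1);
}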
Add the missing architecture specific functions to provide support for
the hppa/PA-RISC architecture:
- the processor internal time stamp counter (Control Register CR16) is
used to get high-performance/low-latency cycle counts
- gcc provides the necessary built-in atomic functions on hppa (which in
turn use the light-weight atomic locking syscall interface of the
Linux kernel)
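For reference, reading the hppa interval timer can be sketched as
follows (the inline asm mnemonic follows the description above;
cycles_t is a local typedef here):

typedef unsigned long cycles_t;

static inline cycles_t hppa_get_cycles(void)
{
	cycles_t cycles;

	/* Control Register 16 is the processor interval timer. */
	asm volatile("mfctl 16, %0" : "=r" (cycles));
	return cycles;
}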
Lars Persson [Wed, 12 Mar 2014 09:36:04 +0000 (10:36 +0100)]
Use autoconf AM_MAINTAINER_MODE
Give distribution maintainers the option to skip rebuilding
autoconf and automake generated files. The default behaviour
is still to have the rebuild rules enabled.
Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Currently there are two fairly recent architectures, which at the
moment can only be compiled with the "gcc atomics" code path.
The two new architectures are (GNU Types):
* aarch64-linux-gnu (aka ARMv8, ARM64, AARCH64, etc)
* powerpc64le-linux-gnu
Fix: move wait loop increment before first conditional block
The fix "Fix: high cpu usage in synchronize_rcu with long RCU read-side
C.S." has an imperfection in urcu.c and urcu-qsbr.c: when incrementing
the wait loop counter for the last time, the first conditional branch is
not taken, but the following conditionals are, and they assume the first
conditional has been taken.
Within urcu.c (urcu-mb, urcu-membarrier and urcu-signal), and
urcu-qsbr.c, this will simply skip the first wait_gp() call, without any
noticeable ill side-effect.
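A hedged sketch of the corrected loop body (variable and helper names
simplified; the saturation at RCU_QS_ACTIVE_ATTEMPTS comes from the
"high cpu usage" fix discussed next, and the value shown is
illustrative):

#include <stdint.h>
#include <urcu/arch.h>		/* cmm_smp_mb() */
#include <urcu/uatomic.h>	/* uatomic_dec() */

#define RCU_QS_ACTIVE_ATTEMPTS	100	/* illustrative; the real value is in urcu.c */

static int32_t gp_futex;

/* One iteration of the waiting loop: the counter is incremented
 * (saturating) before any conditional that tests it. */
static void wait_iteration(unsigned int *wait_loops)
{
	if (*wait_loops < RCU_QS_ACTIVE_ATTEMPTS)
		(*wait_loops)++;
	if (*wait_loops >= RCU_QS_ACTIVE_ATTEMPTS) {
		uatomic_dec(&gp_futex);	/* announce that we may block on the futex */
		cmm_smp_mb();		/* write futex before reading reader state */
	}
	/* ... scan the reader registry, then wait_gp() or busy-loop ... */
}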
Fix: high cpu usage in synchronize_rcu with long RCU read-side C.S.
We noticed that the following kind of scenario:
- application using urcu-mb, urcu-membarrier, urcu-signal, or urcu-bp,
- long RCU read-side critical sections, caused by e.g. long network I/O
system calls,
- other short lived RCU critical sections running in other threads,
- very frequent invocation of call_rcu to enqueue callbacks,
leads to abnormally high CPU usage within synchronize_rcu() in the
call_rcu worker threads.
Inspection of the code gives us the answer: in urcu.c, if we need to
wait on a futex (wait_gp()), we expect to be able to end the grace
period within the next loop iteration, having been notified by a
rcu_read_unlock(). However, this is not always the case: we can very
well be awakened by a rcu_read_unlock() executed on a thread running
short-lived RCU read-side critical sections, while the long-running RCU
read-side C.S. is still active. We end up in a situation where we
busy-wait for a very long time, because the counter is !=
RCU_QS_ACTIVE_ATTEMPTS until a 32-bit overflow happens (or more likely,
until we complete the grace period). We need to change the wait_loops ==
RCU_QS_ACTIVE_ATTEMPTS check into an inequality to use wait_gp() for
every attempt beyond RCU_QS_ACTIVE_ATTEMPTS loops.
urcu-bp.c also has this issue. Moreover, it uses usleep() rather than
poll() when dealing with long-running RCU read-side critical sections.
Turn the usleep of 1000us (1ms) into a poll of 10ms. One of the advantages
of using poll() rather than usleep() is that it does not interact with
SIGALRM.
urcu-qsbr.c already checks for wait_loops >= RCU_QS_ACTIVE_ATTEMPTS, so
it is not affected by this issue.
Looking into these loops, however, shows that overflow of the loop
counter, although unlikely, would bring us back to a situation of high
cpu usage (a negative value well below RCU_QS_ACTIVE_ATTEMPTS).
Therefore, change the counter behavior so it stops incrementing when it
reaches RCU_QS_ACTIVE_ATTEMPTS, to eliminate overflow.
Fix: urcu-bp interaction with threads vs constructors/destructors
Add a reference counter for threads using urcu-bp, thus ensuring that
even if the urcu destructor is executed before every thread using RCU
read-side critical sections has exited, those threads will not see a
corrupted thread list.
Also, don't use URCU_TLS() within urcu_bp_thread_exit_notifier(). It
appears that this is racy (although this was probably due to the issue
fixed by reference counting). Anyway, play safe, and pass the rcu_key
received as parameter instead.
Those issues only reproduce when threads are still active when the
urcu-bp destructor is called.