See the relevant API documentation files in `doc/`. The APIs provided by
Userspace RCU are, by prefix:
- - `rcu_`: Read-Copy Update (see [`doc/rcu-api.txt`](doc/rcu-api.txt))
+ - `rcu_`: Read-Copy Update (see [`doc/rcu-api.md`](doc/rcu-api.md))
- `cmm_`: Concurrent Memory Model
- `caa_`: Concurrent Architecture Abstraction
- `cds_`: Concurrent Data Structures
- (see [`doc/cds-api.txt`](doc/cds-api.txt))
+ (see [`doc/cds-api.md`](doc/cds-api.md))
- `uatomic_`: Userspace Atomic
- (see [`doc/uatomic-api.txt`](doc/uatomic-api.txt))
+ (see [`doc/uatomic-api.md`](doc/uatomic-api.md))
Quick start guide
grace periods. A number of additional functions are provided
to manage the helper threads used by `call_rcu()`, but reasonable
defaults are used if these additional functions are not invoked.
- See [`doc/rcu-api.txt`](doc/rcu-api.txt) in userspace-rcu documentation
+ See [`doc/rcu-api.md`](doc/rcu-api.md) in userspace-rcu documentation
for more details.
--------
You can contact the maintainers on the following mailing list:
-`lttng-dev@lists.lttng.org`.
\ No newline at end of file
+`lttng-dev@lists.lttng.org`.
SUBDIRS = examples
-dist_doc_DATA = rcu-api.txt cds-api.txt uatomic-api.txt
+dist_doc_DATA = rcu-api.md cds-api.md uatomic-api.md
--- /dev/null
+Userspace RCU Concurrent Data Structures (CDS) API
+==================================================
+
+by Mathieu Desnoyers and Paul E. McKenney
+
+This document briefly describes the data structures contained within the
+userspace RCU library.
+
+
+Data structure files
+--------------------
+
+### `urcu/list.h`
+
+Doubly-linked list, which requires mutual exclusion on
+updates and reads.
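+
+For illustration, a minimal mutex-protected sketch (the `mynode` type and
+the `add_value()`/`list_lock` names are ours, not part of the API; error
+handling omitted):
+
+```c
+#include <stdlib.h>
+#include <pthread.h>
+#include <urcu/list.h>
+
+struct mynode {
+	int value;
+	struct cds_list_head node;
+};
+
+static CDS_LIST_HEAD(mylist);
+static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void add_value(int value)
+{
+	struct mynode *n = malloc(sizeof(*n));
+
+	n->value = value;
+	pthread_mutex_lock(&list_lock);
+	cds_list_add(&n->node, &mylist);	/* insert at head */
+	pthread_mutex_unlock(&list_lock);
+}
+```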
+
+
+### `urcu/rculist.h`
+
+Doubly-linked list, which requires mutual exclusion on
+updates, allows RCU read traversals.
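+
+A sketch of an RCU read-side traversal, reusing the hypothetical
+`mynode`/`mylist` definitions from the previous sketch (updates would use
+`cds_list_add_rcu()` and friends under the mutex):
+
+```c
+#include <urcu.h>		/* rcu_read_lock()/rcu_read_unlock() */
+#include <urcu/rculist.h>
+
+/* Assumes struct mynode and mylist from the previous sketch. */
+static int find_value(int value)
+{
+	struct mynode *n;
+	int found = 0;
+
+	rcu_read_lock();
+	cds_list_for_each_entry_rcu(n, &mylist, node) {
+		if (n->value == value) {
+			found = 1;
+			break;
+		}
+	}
+	rcu_read_unlock();
+	return found;
+}
+```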
+
+
+### `urcu/hlist.h`
+
+Doubly-linked list, with single pointer list head. Requires
+mutual exclusion on updates and reads. Useful for implementing hash tables.
+Downside over `list.h`: lookup of tail in O(n).
+
+
+### `urcu/rcuhlist.h`
+
+Doubly-linked list, with single pointer list head.
+Requires mutual exclusion on updates, allows RCU read traversals. Useful
+for implementing hash tables. Downside over `rculist.h`: lookup of tail in O(n).
+
+
+### `urcu/wfstack.h`
+
+Stack with wait-free push and wait-free pop_all. Both
+blocking and non-blocking pop and traversal operations are provided. This
+stack does _not_ specifically rely on RCU. Various synchronization techniques
+can be used to deal with pop ABA. Those are detailed in the API.
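+
+A minimal push/pop sketch (the `stack_item` type and function name are
+ours; see the API for the pop ABA caveats mentioned above):
+
+```c
+#include <urcu/wfstack.h>
+#include <urcu/compiler.h>	/* caa_container_of() */
+
+struct stack_item {
+	int value;
+	struct cds_wfs_node node;
+};
+
+static struct cds_wfs_stack stack;
+
+static void stack_example(void)
+{
+	struct stack_item item = { .value = 42 };
+	struct cds_wfs_node *snode;
+
+	cds_wfs_init(&stack);
+	cds_wfs_node_init(&item.node);
+	cds_wfs_push(&stack, &item.node);	/* wait-free */
+	snode = cds_wfs_pop_blocking(&stack);	/* takes an internal mutex */
+	if (snode)
+		(void) caa_container_of(snode, struct stack_item, node);
+}
+```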
+
+
+### `urcu/wfcqueue.h`
+
+Concurrent queue with wait-free enqueue. Both blocking and
+non-blocking dequeue, splice (move all elements from one queue
+to another), and traversal operations are provided.
+
+This queue does _not_ specifically rely on RCU. Mutual exclusion
+is used to protect dequeue, splice (from source queue) and
+traversal (see API for details).
+
+ - Note: deprecates `urcu/wfqueue.h`.
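+
+As an illustration, a minimal enqueue/dequeue sketch under the above
+constraints (the `queue_item` type and names are ours; error handling
+omitted):
+
+```c
+#include <urcu/wfcqueue.h>
+
+struct queue_item {
+	int value;
+	struct cds_wfcq_node node;
+};
+
+static struct cds_wfcq_head queue_head;
+static struct cds_wfcq_tail queue_tail;
+
+static void queue_example(void)
+{
+	struct queue_item item = { .value = 42 };
+	struct cds_wfcq_node *qnode;
+
+	cds_wfcq_init(&queue_head, &queue_tail);
+	cds_wfcq_node_init(&item.node);
+	/* Wait-free enqueue. */
+	(void) cds_wfcq_enqueue(&queue_head, &queue_tail, &item.node);
+	/* Blocking dequeue: takes the queue's internal mutex. */
+	qnode = cds_wfcq_dequeue_blocking(&queue_head, &queue_tail);
+	(void) qnode;
+}
+```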
+
+
+### `urcu/lfstack.h`
+
+Stack with lock-free push, lock-free pop, wait-free pop_all,
+wait-free traversal. Various synchronization techniques can be
+used to deal with pop ABA. Those are detailed in the API.
+This stack does _not_ specifically rely on RCU.
+
+ - Note: deprecates `urcu/rculfstack.h`.
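+
+Usage mirrors the wait-free stack above; a minimal sketch (names ours):
+
+```c
+#include <urcu/lfstack.h>
+
+struct lfs_item {
+	int value;
+	struct cds_lfs_node node;
+};
+
+static struct cds_lfs_stack lfstack;
+
+static void lfstack_example(void)
+{
+	struct lfs_item item = { .value = 42 };
+	struct cds_lfs_node *snode;
+
+	cds_lfs_init(&lfstack);
+	cds_lfs_node_init(&item.node);
+	(void) cds_lfs_push(&lfstack, &item.node);	/* lock-free */
+	snode = cds_lfs_pop_blocking(&lfstack);		/* takes an internal mutex */
+	(void) snode;
+}
+```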
+
+
+### `urcu/rculfqueue.h`
+
+RCU queue with lock-free enqueue, lock-free dequeue.
+This queue relies on RCU for existence guarantees.
+
+
+### `urcu/rculfhash.h`
+
+Lock-Free Resizable RCU Hash Table. RCU is used to provide
+existence guarantees. Provides scalable updates, and scalable
+RCU read-side lookups and traversals. Unique and duplicate keys
+are supported. Provides "uniquify add" and "replace add"
+operations, along with associated read-side traversal uniqueness
+guarantees. Automatic hash table resize based on number of
+elements is supported. See the API for more details.
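+
+A hedged sketch of creation, insertion and lookup (the `mymap_node` type,
+`match()` function and trivial hash are ours; a real application would use
+a proper hash function, and the calling thread must be a registered RCU
+reader):
+
+```c
+#include <stdlib.h>
+#include <urcu.h>
+#include <urcu/rculfhash.h>
+
+struct mymap_node {
+	int key;
+	struct cds_lfht_node node;
+};
+
+static int match(struct cds_lfht_node *node, const void *key)
+{
+	const struct mymap_node *n =
+		caa_container_of(node, struct mymap_node, node);
+
+	return n->key == *(const int *) key;
+}
+
+static void hash_table_example(void)
+{
+	struct cds_lfht *ht;
+	struct cds_lfht_iter iter;
+	struct mymap_node *n = malloc(sizeof(*n));
+	int key = 42;
+
+	/* 1 initial bucket, unbounded size, automatic resize. */
+	ht = cds_lfht_new(1, 1, 0, CDS_LFHT_AUTO_RESIZE, NULL);
+	n->key = key;
+	cds_lfht_node_init(&n->node);
+
+	/* Add and lookup must run within read-side critical sections. */
+	rcu_read_lock();
+	cds_lfht_add(ht, (unsigned long) key, &n->node);
+	rcu_read_unlock();
+
+	rcu_read_lock();
+	cds_lfht_lookup(ht, (unsigned long) key, match, &key, &iter);
+	if (cds_lfht_iter_get_node(&iter)) {
+		/* found */
+	}
+	rcu_read_unlock();
+}
+```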
+++ /dev/null
-Userspace RCU Concurrent Data Structures (CDS) API
-by Mathieu Desnoyers and Paul E. McKenney
-
-
-This document describes briefly the data structures contained with the
-userspace RCU library.
-
-urcu/list.h:
-
- Doubly-linked list, which requires mutual exclusion on updates
- and reads.
-
-urcu/rculist.h:
-
- Doubly-linked list, which requires mutual exclusion on updates,
- allows RCU read traversals.
-
-urcu/hlist.h:
-
- Doubly-linked list, with single pointer list head. Requires
- mutual exclusion on updates and reads. Useful for implementing
- hash tables. Downside over list.h: lookup of tail in O(n).
-
-urcu/rcuhlist.h:
-
- Doubly-linked list, with single pointer list head. Requires
- mutual exclusion on updates, allows RCU read traversals. Useful
- for implementing hash tables. Downside over rculist.h: lookup of
- tail in O(n).
-
-urcu/wfstack.h:
-
- Stack with wait-free push and wait-free pop_all. Both blocking
- and non-blocking pop and traversal operations are provided.
- This stack does _not_ specifically rely on RCU.
- Various synchronization techniques can be used to deal with
- pop ABA. Those are detailed in the API.
-
-urcu/wfcqueue.h:
-
- Concurrent queue with wait-free enqueue. Both blocking and
- non-blocking dequeue, splice (move all elements from one queue
- to another), and traversal operations are provided.
- This queue does _not_ specifically rely on RCU. Mutual exclusion
- is used to protect dequeue, splice (from source queue) and
- traversal (see API for details).
- (note: deprecates urcu/wfqueue.h)
-
-urcu/lfstack.h:
-
- Stack with lock-free push, lock-free pop, wait-free pop_all,
- wait-free traversal. Various synchronization techniques can be
- used to deal with pop ABA. Those are detailed in the API.
- This stack does _not_ specifically rely on RCU.
- (note: deprecates urcu/rculfstack.h)
-
-urcu/rculfqueue.h:
-
- RCU queue with lock-free enqueue, lock-free dequeue.
- This queue relies on RCU for existence guarantees.
-
-urcu/rculfhash.h:
-
- Lock-Free Resizable RCU Hash Table. RCU used to provide
- existance guarantees. Provides scalable updates, and scalable
- RCU read-side lookups and traversals. Unique and duplicate keys
- are supported. Provides "uniquify add" and "replace add"
- operations, along with associated read-side traversal uniqueness
- guarantees. Automatic hash table resize based on number of
- elements is supported. See the API for more details.
--- /dev/null
+Userspace RCU API
+=================
+
+by Mathieu Desnoyers and Paul E. McKenney
+
+
+API
+---
+
+```c
+void rcu_init(void);
+```
+
+This must be called before any of the following functions
+are invoked.
+
+
+```c
+void rcu_read_lock(void);
+```
+
+Begin an RCU read-side critical section. These critical
+sections may be nested.
+
+
+```c
+void rcu_read_unlock(void);
+```
+
+End an RCU read-side critical section.
+
+
+```c
+void rcu_register_thread(void);
+```
+
+Each thread must invoke this function before its first call to
+`rcu_read_lock()`. Threads that never call `rcu_read_lock()` need
+not invoke this function. In addition, `rcu-bp` ("bullet proof"
+RCU) does not require any thread to invoke `rcu_register_thread()`.
+
+
+```c
+void rcu_unregister_thread(void);
+```
+
+Each thread that invokes `rcu_register_thread()` must invoke
+`rcu_unregister_thread()` before invoking `pthread_exit()`
+or before returning from its top-level function.
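+
+For illustration, a minimal reader-thread skeleton combining the calls
+above (the `reader_thread()` name is ours):
+
+```c
+#include <urcu.h>
+
+static void *reader_thread(void *arg)
+{
+	rcu_register_thread();
+
+	rcu_read_lock();
+	/* ... access RCU-protected data, e.g. via rcu_dereference() ... */
+	rcu_read_unlock();
+
+	rcu_unregister_thread();
+	return NULL;
+}
+```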
+
+
+```c
+void synchronize_rcu(void);
+```
+
+Wait until every pre-existing RCU read-side critical section
+has completed. Note that this primitive will not necessarily
+wait for RCU read-side critical sections that have not yet
+started: this is not a reader-writer lock. The duration
+actually waited is called an RCU grace period.
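+
+A classic single-updater sketch (`struct foo`, `gp` and `update_foo()` are
+hypothetical; `rcu_assign_pointer()` is the library's pointer-publication
+primitive, and readers would access `gp` through `rcu_dereference()`
+inside a read-side critical section):
+
+```c
+#include <stdlib.h>
+#include <urcu.h>
+
+struct foo {
+	int a;
+};
+
+static struct foo *gp;	/* RCU-protected global pointer */
+
+static void update_foo(struct foo *newp)
+{
+	struct foo *oldp = gp;	/* single updater: no update-side locking */
+
+	rcu_assign_pointer(gp, newp);	/* publish the new version */
+	synchronize_rcu();		/* wait for pre-existing readers */
+	free(oldp);			/* no reader can still hold oldp */
+}
+```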
+
+
+```c
+void call_rcu(struct rcu_head *head,
+ void (*func)(struct rcu_head *head));
+```
+
+Registers the callback indicated by `head`. This means
+that `func` will be invoked after the end of a future
+RCU grace period. The `rcu_head` structure referenced
+by `head` will normally be a field in a larger RCU-protected
+structure. A typical implementation of `func` is as
+follows:
+
+```c
+void func(struct rcu_head *head)
+{
+ struct foo *p = container_of(head, struct foo, rcu);
+
+ free(p);
+}
+```
+
+This RCU callback function can be registered as follows
+given a pointer `p` to the enclosing structure:
+
+```c
+call_rcu(&p->rcu, func);
+```
+
+`call_rcu` should be called from registered RCU read-side threads.
+For the QSBR flavor, the caller should be online.
+
+
+```c
+void rcu_barrier(void);
+```
+
+Wait for all `call_rcu()` work initiated prior to `rcu_barrier()` by
+_any_ thread on the system to have completed before `rcu_barrier()`
+returns. `rcu_barrier()` should never be called from a `call_rcu()`
+thread. This function can be used, for instance, to ensure that
+all memory reclaim involving a shared object has completed
+before allowing `dlclose()` of this shared object to complete.
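+
+For instance, a hypothetical teardown sequence for a `dlopen()`ed module
+whose callbacks were passed to `call_rcu()`:
+
+```c
+#include <dlfcn.h>
+#include <urcu.h>
+
+static void unload_module(void *dl_handle)
+{
+	rcu_barrier();		/* all pending callbacks have now run */
+	dlclose(dl_handle);	/* no callback code from the module remains */
+}
+```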
+
+
+```c
+struct call_rcu_data *create_call_rcu_data(unsigned long flags,
+ int cpu_affinity);
+```
+
+Returns a handle that can be passed to the following
+primitives. The `flags` argument can be zero, or can be
+`URCU_CALL_RCU_RT` if the worker threads associated with the
+new helper thread are to get real-time response. The argument
+`cpu_affinity` specifies the CPU to which the `call_rcu` helper thread
+should be affined; it is ignored if negative.
+
+
+```c
+void call_rcu_data_free(struct call_rcu_data *crdp);
+```
+
+Terminates a `call_rcu()` helper thread and frees its associated
+data. The caller must have ensured that this thread is no longer
+in use, for example, by passing `NULL` to `set_thread_call_rcu_data()`
+and `set_cpu_call_rcu_data()` as required.
+
+
+```c
+struct call_rcu_data *get_default_call_rcu_data(void);
+```
+
+Returns the handle for the default `call_rcu()` helper thread.
+Creates it if necessary.
+
+
+```c
+struct call_rcu_data *get_cpu_call_rcu_data(int cpu);
+```
+
+Returns the handle for the current CPU's `call_rcu()` helper
+thread, or `NULL` if the current CPU has no helper thread
+currently assigned. The call to this function and use of the
+returned `call_rcu_data` should be protected by an RCU read-side
+lock.
+
+
+```c
+struct call_rcu_data *get_thread_call_rcu_data(void);
+```
+
+Returns the handle for the current thread's hard-assigned
+`call_rcu()` helper thread, or `NULL` if the current thread is
+instead using a per-CPU or the default helper thread.
+
+
+```c
+struct call_rcu_data *get_call_rcu_data(void);
+```
+
+Returns the handle for the current thread's `call_rcu()` helper
+thread, which is either, in increasing order of preference:
+per-thread hard-assigned helper thread, per-CPU helper thread,
+or default helper thread. `get_call_rcu_data` should be called
+from registered RCU read-side threads. For the QSBR flavor, the
+caller should be online.
+
+
+```c
+pthread_t get_call_rcu_thread(struct call_rcu_data *crdp);
+```
+
+Returns the pthread identifier of the helper thread associated with
+the `call_rcu` helper thread data `crdp`.
+
+
+```c
+void set_thread_call_rcu_data(struct call_rcu_data *crdp);
+```
+
+Sets the current thread's hard-assigned `call_rcu()` helper to the
+handle specified by `crdp`. Note that `crdp` can be `NULL` to
+disassociate this thread from its helper. Once a thread is
+disassociated from its helper, further `call_rcu()` invocations
+use the current CPU's helper if there is one and the default
+helper otherwise.
+
+
+```c
+int set_cpu_call_rcu_data(int cpu, struct call_rcu_data *crdp);
+```
+
+Sets the specified CPU's `call_rcu()` helper to the handle
+specified by `crdp`. Again, `crdp` can be `NULL` to disassociate
+this CPU from its helper thread. Once a CPU has been
+disassociated from its helper, further `call_rcu()` invocations
+that would otherwise have used this CPU's helper will instead
+use the default helper.
+
+The caller must wait for a grace period to pass between the return from
+`set_cpu_call_rcu_data()` and the call to `call_rcu_data_free()` passing
+the previous `call_rcu` data as argument.
+
+
+```c
+int create_all_cpu_call_rcu_data(unsigned long flags);
+```
+
+Creates a separate `call_rcu()` helper thread for each CPU.
+After this primitive is invoked, the global default `call_rcu()`
+helper thread will not be called.
+
+The `set_thread_call_rcu_data()`, `set_cpu_call_rcu_data()`, and
+`create_all_cpu_call_rcu_data()` functions may be combined to set up
+pretty much any desired association between worker and `call_rcu()`
+helper threads. If a given executable calls only `call_rcu()`,
+then that executable will have only the single global default
+`call_rcu()` helper thread. This will suffice in most cases.
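+
+A hedged sketch dedicating a helper thread to the current worker thread
+(the `use_dedicated_helper()` name is ours):
+
+```c
+#include <urcu.h>	/* also declares the call_rcu() helper-thread API */
+
+static void use_dedicated_helper(void)
+{
+	struct call_rcu_data *crdp;
+
+	/* Zero flags, no CPU affinity (negative values are ignored). */
+	crdp = create_call_rcu_data(0, -1);
+	if (crdp)
+		set_thread_call_rcu_data(crdp);	/* hard-assign to this thread */
+}
+```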
+
+
+```c
+void free_all_cpu_call_rcu_data(void);
+```
+
+Clean up all the per-CPU `call_rcu` threads. Should be paired with
+`create_all_cpu_call_rcu_data()` to perform teardown. Note that
+this function invokes `synchronize_rcu()` internally, so the
+caller should be careful not to hold mutexes (or mutexes within a
+dependency chain) that are also taken within an RCU read-side
+critical section, or in a section where QSBR threads are online.
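+
+A setup/teardown sketch pairing these two calls (assumption: a non-zero
+return from `create_all_cpu_call_rcu_data()` is treated as failure):
+
+```c
+#include <urcu.h>
+
+static void use_per_cpu_helpers(void)
+{
+	/* Zero flags: helper threads without real-time priority. */
+	if (create_all_cpu_call_rcu_data(0) != 0)
+		return;	/* helpers unavailable; default helper still works */
+
+	/* ... call_rcu() now dispatches to per-CPU helper threads ... */
+
+	free_all_cpu_call_rcu_data();	/* invokes synchronize_rcu() internally */
+}
+```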
+
+
+```c
+void call_rcu_after_fork_child(void);
+```
+
+Should be used as `pthread_atfork()` handler for programs using
+`call_rcu` and performing `fork()` or `clone()` without a following
+`exec()`.
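+
+A registration sketch (assuming the companion `call_rcu_before_fork()` and
+`call_rcu_after_fork_parent()` handlers that liburcu declares alongside
+this one):
+
+```c
+#include <pthread.h>
+#include <urcu.h>
+
+static void setup_fork_handlers(void)
+{
+	pthread_atfork(call_rcu_before_fork,
+		       call_rcu_after_fork_parent,
+		       call_rcu_after_fork_child);
+}
+```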
+++ /dev/null
-Userspace RCU API
-by Mathieu Desnoyers and Paul E. McKenney
-
-
-void rcu_init(void);
-
- This must be called before any of the following functions
- are invoked.
-
-void rcu_read_lock(void);
-
- Begin an RCU read-side critical section. These critical
- sections may be nested.
-
-void rcu_read_unlock(void);
-
- End an RCU read-side critical section.
-
-void rcu_register_thread(void);
-
- Each thread must invoke this function before its first call to
- rcu_read_lock(). Threads that never call rcu_read_lock() need
- not invoke this function. In addition, rcu-bp ("bullet proof"
- RCU) does not require any thread to invoke rcu_register_thread().
-
-void rcu_unregister_thread(void);
-
- Each thread that invokes rcu_register_thread() must invoke
- rcu_unregister_thread() before invoking pthread_exit()
- or before returning from its top-level function.
-
-void synchronize_rcu(void);
-
- Wait until every pre-existing RCU read-side critical section
- has completed. Note that this primitive will not necessarily
- wait for RCU read-side critical sections that have not yet
- started: this is not a reader-writer lock. The duration
- actually waited is called an RCU grace period.
-
-void call_rcu(struct rcu_head *head,
- void (*func)(struct rcu_head *head));
-
- Registers the callback indicated by "head". This means
- that "func" will be invoked after the end of a future
- RCU grace period. The rcu_head structure referenced
- by "head" will normally be a field in a larger RCU-protected
- structure. A typical implementation of "func" is as
- follows:
-
- void func(struct rcu_head *head)
- {
- struct foo *p = container_of(head, struct foo, rcu);
-
- free(p);
- }
-
- This RCU callback function can be registered as follows
- given a pointer "p" to the enclosing structure:
-
- call_rcu(&p->rcu, func);
-
- call_rcu should be called from registered RCU read-side threads.
- For the QSBR flavor, the caller should be online.
-
-void rcu_barrier(void);
-
- Wait for all call_rcu() work initiated prior to rcu_barrier() by
- _any_ thread on the system to have completed before rcu_barrier()
- returns. rcu_barrier() should never be called from a call_rcu()
- thread. This function can be used, for instance, to ensure that
- all memory reclaim involving a shared object has completed
- before allowing dlclose() of this shared object to complete.
-
-struct call_rcu_data *create_call_rcu_data(unsigned long flags,
- int cpu_affinity);
-
- Returns a handle that can be passed to the following
- primitives. The "flags" argument can be zero, or can be
- URCU_CALL_RCU_RT if the worker threads associated with the
- new helper thread are to get real-time response. The argument
- "cpu_affinity" specifies a cpu on which the call_rcu thread should
- be affined to. It is ignored if negative.
-
-void call_rcu_data_free(struct call_rcu_data *crdp);
-
- Terminates a call_rcu() helper thread and frees its associated
- data. The caller must have ensured that this thread is no longer
- in use, for example, by passing NULL to set_thread_call_rcu_data()
- and set_cpu_call_rcu_data() as required.
-
-struct call_rcu_data *get_default_call_rcu_data(void);
-
- Returns the handle for the default call_rcu() helper thread.
- Creates it if necessary.
-
-struct call_rcu_data *get_cpu_call_rcu_data(int cpu);
-
- Returns the handle for the current cpu's call_rcu() helper
- thread, or NULL if the current CPU has no helper thread
- currently assigned. The call to this function and use of the
- returned call_rcu_data should be protected by RCU read-side
- lock.
-
-struct call_rcu_data *get_thread_call_rcu_data(void);
-
- Returns the handle for the current thread's hard-assigned
- call_rcu() helper thread, or NULL if the current thread is
- instead using a per-CPU or the default helper thread.
-
-struct call_rcu_data *get_call_rcu_data(void);
-
- Returns the handle for the current thread's call_rcu() helper
- thread, which is either, in increasing order of preference:
- per-thread hard-assigned helper thread, per-cpu helper thread,
- or default helper thread. get_call_rcu_data should be called
- from registered RCU read-side threads. For the QSBR flavor, the
- caller should be online.
-
-pthread_t get_call_rcu_thread(struct call_rcu_data *crdp);
-
- Returns the helper thread's pthread identifier linked to a call
- rcu helper thread data.
-
-void set_thread_call_rcu_data(struct call_rcu_data *crdp);
-
- Sets the current thread's hard-assigned call_rcu() helper to the
- handle specified by "crdp". Note that "crdp" can be NULL to
- disassociate this thread from its helper. Once a thread is
- disassociated from its helper, further call_rcu() invocations
- use the current CPU's helper if there is one and the default
- helper otherwise.
-
-int set_cpu_call_rcu_data(int cpu, struct call_rcu_data *crdp);
-
- Sets the specified CPU's call_rcu() helper to the handle
- specified by "crdp". Again, "crdp" can be NULL to disassociate
- this CPU from its helper thread. Once a CPU has been
- disassociated from its helper, further call_rcu() invocations
- that would otherwise have used this CPU's helper will instead
- use the default helper. The caller must wait for a grace-period
- to pass between return from set_cpu_call_rcu_data() and call to
- call_rcu_data_free() passing the previous call rcu data as
- argument.
-
-int create_all_cpu_call_rcu_data(unsigned long flags)
-
- Creates a separate call_rcu() helper thread for each CPU.
- After this primitive is invoked, the global default call_rcu()
- helper thread will not be called.
-
- The set_thread_call_rcu_data(), set_cpu_call_rcu_data(), and
- create_all_cpu_call_rcu_data() functions may be combined to set up
- pretty much any desired association between worker and call_rcu()
- helper threads. If a given executable calls only call_rcu(),
- then that executable will have only the single global default
- call_rcu() helper thread. This will suffice in most cases.
-
-void free_all_cpu_call_rcu_data(void);
-
- Clean up all the per-CPU call_rcu threads. Should be paired with
- create_all_cpu_call_rcu_data() to perform teardown. Note that
- this function invokes synchronize_rcu() internally, so the
- caller should be careful not to hold mutexes (or mutexes within a
- dependency chain) that are also taken within a RCU read-side
- critical section, or in a section where QSBR threads are online.
-
-void call_rcu_after_fork_child(void);
-
- Should be used as pthread_atfork() handler for programs using
- call_rcu and performing fork() or clone() without a following
- exec().
--- /dev/null
+Userspace RCU Atomic Operations API
+===================================
+
+by Mathieu Desnoyers and Paul E. McKenney
+
+This document describes the `<urcu/uatomic.h>` API. These are the atomic
+operations provided by the Userspace RCU library. The general rule
+regarding memory barriers is that only `uatomic_xchg()`,
+`uatomic_cmpxchg()`, `uatomic_add_return()`, and `uatomic_sub_return()` imply
+full memory barriers before and after the atomic operation. Other
+primitives don't guarantee any memory barrier.
+
+Only atomic operations performed on integers (`int` and `long`, signed
+and unsigned) are supported on all architectures. Some architectures
+also support 1-byte and 2-byte atomic operations. Those respectively
+have `UATOMIC_HAS_ATOMIC_BYTE` and `UATOMIC_HAS_ATOMIC_SHORT` defined when
+`uatomic.h` is included. Trying to perform an atomic write to a type
+size not supported by the architecture will trigger an illegal
+instruction.
+
+In the description below, `type` is a type that can be atomically
+written to by the architecture. It needs to be at most word-sized, and
+its alignment needs to be greater than or equal to its size.
+
+
+API
+---
+
+```c
+void uatomic_set(type *addr, type v)
+```
+
+Atomically write `v` into `addr`. By "atomically", we mean that no
+concurrent operation that reads from `addr` will see partial
+effects of `uatomic_set()`.
+
+
+```c
+type uatomic_read(type *addr)
+```
+
+Atomically read the content of `addr`. By "atomically", we mean that
+`uatomic_read()` cannot see a partial effect of any concurrent
+uatomic update.
+
+
+```c
+type uatomic_cmpxchg(type *addr, type old, type new)
+```
+
+An atomic read-modify-write operation that performs this
+sequence of operations atomically: check if `addr` contains `old`.
+If true, then replace the content of `addr` by `new`. Return the
+value previously contained by `addr`. This function implies a full
+memory barrier before and after the atomic operation.
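+
+For example, a hypothetical atomic-maximum helper built from a
+`uatomic_cmpxchg()` retry loop:
+
+```c
+#include <urcu/uatomic.h>
+
+/* Atomically raise *addr to at least v, retrying on concurrent updates. */
+static void atomic_store_max(unsigned long *addr, unsigned long v)
+{
+	unsigned long old;
+
+	do {
+		old = uatomic_read(addr);
+		if (old >= v)
+			return;	/* already large enough */
+	} while (uatomic_cmpxchg(addr, old, v) != old);
+}
+```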
+
+
+```c
+type uatomic_xchg(type *addr, type new)
+```
+
+An atomic read-modify-write operation that performs this sequence
+of operations atomically: replace the content of `addr` by `new`,
+and return the value previously contained by `addr`. This
+function implies a full memory barrier before and after the atomic
+operation.
+
+
+```c
+type uatomic_add_return(type *addr, type v)
+type uatomic_sub_return(type *addr, type v)
+```
+
+An atomic read-modify-write operation that performs this
+sequence of operations atomically: increment/decrement the
+content of `addr` by `v`, and return the resulting value. These
+functions imply a full memory barrier before and after the atomic
+operation.
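+
+For example, a hypothetical reference counter built on these primitives,
+relying on their implied full barriers to order surrounding accesses:
+
+```c
+#include <urcu/uatomic.h>
+
+static long refcount;
+
+static void get_ref(void)
+{
+	(void) uatomic_add_return(&refcount, 1);
+}
+
+static int put_ref(void)
+{
+	return uatomic_sub_return(&refcount, 1) == 0;	/* last reference? */
+}
+```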
+
+
+```c
+void uatomic_and(type *addr, type mask)
+void uatomic_or(type *addr, type mask)
+```
+
+Atomically write the result of bitwise "and"/"or" between the
+content of `addr` and `mask` into `addr`.
+
+These operations do not necessarily imply memory barriers.
+If memory barriers are needed, they may be provided by explicitly using
+`cmm_smp_mb__before_uatomic_and()`, `cmm_smp_mb__after_uatomic_and()`,
+`cmm_smp_mb__before_uatomic_or()`, and `cmm_smp_mb__after_uatomic_or()`.
+These explicit barriers are no-ops on architectures in which the underlying
+atomic instructions implicitly supply the needed memory barriers.
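+
+For example, a sketch of flag publication with an explicit ordering
+barrier (the flag name is hypothetical):
+
+```c
+#include <urcu/uatomic.h>
+
+#define FLAG_READY	0x1UL
+
+static unsigned long flags;
+
+static void mark_ready(void)
+{
+	/* Order prior initialization stores before publishing the flag. */
+	cmm_smp_mb__before_uatomic_or();
+	uatomic_or(&flags, FLAG_READY);
+}
+```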
+
+
+```c
+void uatomic_add(type *addr, type v)
+void uatomic_sub(type *addr, type v)
+```
+
+Atomically increment/decrement the content of `addr` by `v`.
+These operations do not necessarily imply memory barriers.
+If memory barriers are needed, they may be provided by
+explicitly using `cmm_smp_mb__before_uatomic_add()`,
+`cmm_smp_mb__after_uatomic_add()`, `cmm_smp_mb__before_uatomic_sub()`, and
+`cmm_smp_mb__after_uatomic_sub()`. These explicit barriers are
+no-ops on architectures in which the underlying atomic
+instructions implicitly supply the needed memory barriers.
+
+
+```c
+void uatomic_inc(type *addr)
+void uatomic_dec(type *addr)
+```
+
+Atomically increment/decrement the content of `addr` by 1.
+These operations do not necessarily imply memory barriers.
+If memory barriers are needed, they may be provided by
+explicitly using `cmm_smp_mb__before_uatomic_inc()`,
+`cmm_smp_mb__after_uatomic_inc()`, `cmm_smp_mb__before_uatomic_dec()`,
+and `cmm_smp_mb__after_uatomic_dec()`. These explicit barriers are
+no-ops on architectures in which the underlying atomic
+instructions implicitly supply the needed memory barriers.
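+
+For example, a sketch pairing `uatomic_inc()` with its explicit barriers
+(the counter and ordering requirement are hypothetical):
+
+```c
+#include <urcu/uatomic.h>
+
+static unsigned long event_count;
+
+static void count_event(void)
+{
+	/* Order prior stores before the increment, portably. */
+	cmm_smp_mb__before_uatomic_inc();
+	uatomic_inc(&event_count);
+	cmm_smp_mb__after_uatomic_inc();
+}
+```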
+++ /dev/null
-Userspace RCU Atomic Operations API
-by Mathieu Desnoyers and Paul E. McKenney
-
-
-This document describes the <urcu/uatomic.h> API. Those are the atomic
-operations provided by the Userspace RCU library. The general rule
-regarding memory barriers is that only uatomic_xchg(),
-uatomic_cmpxchg(), uatomic_add_return(), and uatomic_sub_return() imply
-full memory barriers before and after the atomic operation. Other
-primitives don't guarantee any memory barrier.
-
-Only atomic operations performed on integers ("int" and "long", signed
-and unsigned) are supported on all architectures. Some architectures
-also support 1-byte and 2-byte atomic operations. Those respectively
-have UATOMIC_HAS_ATOMIC_BYTE and UATOMIC_HAS_ATOMIC_SHORT defined when
-uatomic.h is included. An architecture trying to perform an atomic write
-to a type size not supported by the architecture will trigger an illegal
-instruction.
-
-In the description below, "type" is a type that can be atomically
-written to by the architecture. It needs to be at most word-sized, and
-its alignment needs to greater or equal to its size.
-
-void uatomic_set(type *addr, type v)
-
- Atomically write @v into @addr. By "atomically", we mean that no
- concurrent operation that reads from addr will see partial
- effects of uatomic_set().
-
-type uatomic_read(type *addr)
-
- Atomically read @v from @addr. By "atomically", we mean that
- uatomic_read() cannot see a partial effect of any concurrent
- uatomic update.
-
-type uatomic_cmpxchg(type *addr, type old, type new)
-
- An atomic read-modify-write operation that performs this
- sequence of operations atomically: check if @addr contains @old.
- If true, then replace the content of @addr by @new. Return the
- value previously contained by @addr. This function imply a full
- memory barrier before and after the atomic operation.
-
-type uatomic_xchg(type *addr, type new)
-
- An atomic read-modify-write operation that performs this sequence
- of operations atomically: replace the content of @addr by @new,
- and return the value previously contained by @addr. This
- function imply a full memory barrier before and after the atomic
- operation.
-
-type uatomic_add_return(type *addr, type v)
-type uatomic_sub_return(type *addr, type v)
-
- An atomic read-modify-write operation that performs this
- sequence of operations atomically: increment/decrement the
- content of @addr by @v, and return the resulting value. This
- function imply a full memory barrier before and after the atomic
- operation.
-
-void uatomic_and(type *addr, type mask)
-void uatomic_or(type *addr, type mask)
-
- Atomically write the result of bitwise "and"/"or" between the
- content of @addr and @mask into @addr.
- These operations do not necessarily imply memory barriers.
- If memory barriers are needed, they may be provided by
- explicitly using
- cmm_smp_mb__before_uatomic_and(),
- cmm_smp_mb__after_uatomic_and(),
- cmm_smp_mb__before_uatomic_or(), and
- cmm_smp_mb__after_uatomic_or(). These explicit barriers are
- no-ops on architectures in which the underlying atomic
- instructions implicitly supply the needed memory barriers.
-
-void uatomic_add(type *addr, type v)
-void uatomic_sub(type *addr, type v)
-
- Atomically increment/decrement the content of @addr by @v.
- These operations do not necessarily imply memory barriers.
- If memory barriers are needed, they may be provided by
- explicitly using
- cmm_smp_mb__before_uatomic_add(),
- cmm_smp_mb__after_uatomic_add(),
- cmm_smp_mb__before_uatomic_sub(), and
- cmm_smp_mb__after_uatomic_sub(). These explicit barriers are
- no-ops on architectures in which the underlying atomic
- instructions implicitly supply the needed memory barriers.
-
-void uatomic_inc(type *addr)
-void uatomic_dec(type *addr)
-
- Atomically increment/decrement the content of @addr by 1.
- These operations do not necessarily imply memory barriers.
- If memory barriers are needed, they may be provided by
- explicitly using
- cmm_smp_mb__before_uatomic_inc(),
- cmm_smp_mb__after_uatomic_inc(),
- cmm_smp_mb__before_uatomic_dec(), and
- cmm_smp_mb__after_uatomic_dec(). These explicit barriers are
- no-ops on architectures in which the underlying atomic
- instructions implicitly supply the needed memory barriers.
/*
* Exported functions
*
- * Important: see rcu-api.txt in userspace-rcu documentation for
+ * Important: see rcu-api.md in userspace-rcu documentation for
* call_rcu family of functions usage detail, including the surrounding
* RCU usage required when using these primitives.
*/