<!--
SPDX-FileCopyrightText: 2023 EfficiOS Inc.

SPDX-License-Identifier: CC-BY-4.0
-->

Userspace RCU Atomic Operations API
===================================

by Mathieu Desnoyers and Paul E. McKenney

This document describes the `<urcu/uatomic.h>` API, which provides the
atomic operations of the Userspace RCU library. The general rule
regarding memory barriers is that only `uatomic_xchg()`,
`uatomic_cmpxchg()`, `uatomic_add_return()`, and `uatomic_sub_return()` imply
full memory barriers before and after the atomic operation. The other
primitives do not guarantee any memory barrier.

Only atomic operations performed on integers (`int` and `long`, signed
and unsigned) are supported on all architectures. Some architectures
also support 1-byte and 2-byte atomic operations; on those,
`UATOMIC_HAS_ATOMIC_BYTE` and `UATOMIC_HAS_ATOMIC_SHORT`, respectively,
are defined when `uatomic.h` is included. Attempting an atomic write to
a type size not supported by the architecture triggers an illegal
instruction.

In the description below, `type` is a type that the architecture can
write to atomically. It needs to be at most word-sized, and its
alignment needs to be greater than or equal to its size.


API
---

```c
void uatomic_set(type *addr, type v);
```

Atomically write `v` into `addr`. By "atomically", we mean that no
concurrent operation that reads from `addr` will see partial
effects of `uatomic_set()`.


```c
type uatomic_read(type *addr);
```

Atomically read the content of `addr`. By "atomically", we mean that
`uatomic_read()` cannot see a partial effect of any concurrent
uatomic update.


```c
type uatomic_cmpxchg(type *addr, type old, type new);
```

An atomic read-modify-write operation that performs this sequence of
operations atomically: check whether `addr` contains `old`, and if so,
replace the content of `addr` with `new`. Return the value previously
contained in `addr`. This function implies a full memory barrier before
and after the atomic operation on success. On failure, no memory
ordering is guaranteed.


```c
type uatomic_xchg(type *addr, type new);
```

An atomic read-modify-write operation that performs this sequence
of operations atomically: replace the content of `addr` with `new`,
and return the value previously contained in `addr`. This
function implies a full memory barrier before and after the atomic
operation.


```c
type uatomic_add_return(type *addr, type v);
type uatomic_sub_return(type *addr, type v);
```

An atomic read-modify-write operation that performs this
sequence of operations atomically: increment/decrement the
content of `addr` by `v`, and return the resulting value. These
functions imply a full memory barrier before and after the atomic
operation.


```c
void uatomic_and(type *addr, type mask);
void uatomic_or(type *addr, type mask);
```

Atomically write the result of bitwise "and"/"or" between the
content of `addr` and `mask` into `addr`.

These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by explicitly using
`cmm_smp_mb__before_uatomic_and()`, `cmm_smp_mb__after_uatomic_and()`,
`cmm_smp_mb__before_uatomic_or()`, and `cmm_smp_mb__after_uatomic_or()`.
These explicit barriers are no-ops on architectures in which the underlying
atomic instructions implicitly supply the needed memory barriers.


```c
void uatomic_add(type *addr, type v);
void uatomic_sub(type *addr, type v);
```

Atomically increment/decrement the content of `addr` by `v`.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_add()`,
`cmm_smp_mb__after_uatomic_add()`, `cmm_smp_mb__before_uatomic_sub()`, and
`cmm_smp_mb__after_uatomic_sub()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.


```c
void uatomic_inc(type *addr);
void uatomic_dec(type *addr);
```

Atomically increment/decrement the content of `addr` by 1.
These operations do not necessarily imply memory barriers.
If memory barriers are needed, they may be provided by
explicitly using `cmm_smp_mb__before_uatomic_inc()`,
`cmm_smp_mb__after_uatomic_inc()`, `cmm_smp_mb__before_uatomic_dec()`,
and `cmm_smp_mb__after_uatomic_dec()`. These explicit barriers are
no-ops on architectures in which the underlying atomic
instructions implicitly supply the needed memory barriers.