Userspace RCU API
=================

by Mathieu Desnoyers and Paul E. McKenney


API
---

```c
void rcu_init(void);
```

This must be called before any of the following functions
are invoked.


```c
void rcu_read_lock(void);
```

Begin an RCU read-side critical section. These critical
sections may be nested.


```c
void rcu_read_unlock(void);
```

End an RCU read-side critical section.

33 | ```c | |
34 | void rcu_register_thread(void); | |
35 | ``` | |
36 | ||
37 | Each thread must invoke this function before its first call to | |
38 | `rcu_read_lock()`. Threads that never call `rcu_read_lock()` need | |
39 | not invoke this function. In addition, `rcu-bp` ("bullet proof" | |
40 | RCU) does not require any thread to invoke `rcu_register_thread()`. | |
41 | ||
42 | ||
43 | ```c | |
44 | void rcu_unregister_thread(void); | |
45 | ``` | |
46 | ||
47 | Each thread that invokes `rcu_register_thread()` must invoke | |
48 | `rcu_unregister_thread()` before `invoking pthread_exit()` | |
49 | or before returning from its top-level function. | |
50 | ||

```c
void synchronize_rcu(void);
```

Wait until every pre-existing RCU read-side critical section
has completed. Note that this primitive will not necessarily
wait for RCU read-side critical sections that have not yet
started: this is not a reader-writer lock. The duration
actually waited is called an RCU grace period.


```c
struct urcu_gp_poll_state start_poll_synchronize_rcu(void);
```

Provides a handle for checking if a new grace period has started
and completed since the handle was obtained. It returns a
`struct urcu_gp_poll_state` handle that can be used with
`poll_state_synchronize_rcu` to check, by polling, if the
associated grace period has completed.

`start_poll_synchronize_rcu` must only be called from
registered RCU read-side threads. For the QSBR flavor, the
caller must be online.


```c
bool poll_state_synchronize_rcu(struct urcu_gp_poll_state state);
```

Checks if the grace period associated with the
`struct urcu_gp_poll_state` handle has completed. If the grace
period has completed, the function returns true. Otherwise,
it returns false.


```c
void call_rcu(struct rcu_head *head,
              void (*func)(struct rcu_head *head));
```

Registers the callback indicated by `head`. This means
that `func` will be invoked after the end of a future
RCU grace period. The `rcu_head` structure referenced
by `head` will normally be a field in a larger RCU-protected
structure. A typical implementation of `func` is as
follows:

```c
void func(struct rcu_head *head)
{
    struct foo *p = container_of(head, struct foo, rcu);

    free(p);
}
```

This RCU callback function can be registered as follows
given a pointer `p` to the enclosing structure:

```c
call_rcu(&p->rcu, func);
```

`call_rcu` should be called from registered RCU read-side threads.
For the QSBR flavor, the caller should be online.


```c
void rcu_barrier(void);
```

Wait for all `call_rcu()` work initiated prior to `rcu_barrier()` by
_any_ thread on the system to have completed before `rcu_barrier()`
returns. `rcu_barrier()` should never be called from a `call_rcu()`
thread. This function can be used, for instance, to ensure that
all memory reclaim involving a shared object has completed
before allowing `dlclose()` of this shared object to complete.


```c
struct call_rcu_data *create_call_rcu_data(unsigned long flags,
                                           int cpu_affinity);
```

Returns a handle that can be passed to the following
primitives. The `flags` argument can be zero, or can be
`URCU_CALL_RCU_RT` if the worker threads associated with the
new helper thread are to get real-time response. The
`cpu_affinity` argument specifies the CPU to which the
`call_rcu` helper thread should be bound; it is ignored if negative.


```c
void call_rcu_data_free(struct call_rcu_data *crdp);
```

Terminates a `call_rcu()` helper thread and frees its associated
data. The caller must have ensured that this thread is no longer
in use, for example, by passing `NULL` to `set_thread_call_rcu_data()`
and `set_cpu_call_rcu_data()` as required.


```c
struct call_rcu_data *get_default_call_rcu_data(void);
```

Returns the handle for the default `call_rcu()` helper thread.
Creates it if necessary.


```c
struct call_rcu_data *get_cpu_call_rcu_data(int cpu);
```

Returns the handle for the specified CPU's `call_rcu()` helper
thread, or `NULL` if that CPU has no helper thread
currently assigned. The call to this function and use of the
returned `call_rcu_data` should be protected by an RCU read-side
lock.


```c
struct call_rcu_data *get_thread_call_rcu_data(void);
```

Returns the handle for the current thread's hard-assigned
`call_rcu()` helper thread, or `NULL` if the current thread is
instead using a per-CPU or the default helper thread.


```c
struct call_rcu_data *get_call_rcu_data(void);
```

Returns the handle for the current thread's `call_rcu()` helper
thread, which is either, in increasing order of preference:
per-thread hard-assigned helper thread, per-CPU helper thread,
or default helper thread. `get_call_rcu_data` should be called
from registered RCU read-side threads. For the QSBR flavor, the
caller should be online.


```c
pthread_t get_call_rcu_thread(struct call_rcu_data *crdp);
```

Returns the pthread identifier of the helper thread associated
with the `call_rcu_data` handle `crdp`.


```c
void set_thread_call_rcu_data(struct call_rcu_data *crdp);
```

Sets the current thread's hard-assigned `call_rcu()` helper to the
handle specified by `crdp`. Note that `crdp` can be `NULL` to
disassociate this thread from its helper. Once a thread is
disassociated from its helper, further `call_rcu()` invocations
use the current CPU's helper if there is one and the default
helper otherwise.


```c
int set_cpu_call_rcu_data(int cpu, struct call_rcu_data *crdp);
```

Sets the specified CPU's `call_rcu()` helper to the handle
specified by `crdp`. Again, `crdp` can be `NULL` to disassociate
this CPU from its helper thread. Once a CPU has been
disassociated from its helper, further `call_rcu()` invocations
that would otherwise have used this CPU's helper will instead
use the default helper.

The caller must wait for a grace period to pass between the return
from `set_cpu_call_rcu_data()` and the call to `call_rcu_data_free()`
that is passed the previous `call_rcu_data` as its argument.


```c
int create_all_cpu_call_rcu_data(unsigned long flags);
```

Creates a separate `call_rcu()` helper thread for each CPU.
After this primitive is invoked, the global default `call_rcu()`
helper thread will not be called.

The `set_thread_call_rcu_data()`, `set_cpu_call_rcu_data()`, and
`create_all_cpu_call_rcu_data()` functions may be combined to set up
pretty much any desired association between worker and `call_rcu()`
helper threads. If a given executable calls only `call_rcu()`,
then that executable will have only the single global default
`call_rcu()` helper thread. This will suffice in most cases.


```c
void free_all_cpu_call_rcu_data(void);
```

Clean up all the per-CPU `call_rcu` threads. Should be paired with
`create_all_cpu_call_rcu_data()` to perform teardown. Note that
this function invokes `synchronize_rcu()` internally, so the
caller should be careful not to hold mutexes (or mutexes within a
dependency chain) that are also taken within an RCU read-side
critical section, or in a section where QSBR threads are online.
258 | ||
259 | ```c | |
ceb592f9 MD |
260 | void call_rcu_before_fork_parent(void); |
261 | void call_rcu_after_fork_parent(void); | |
dcb9c05a PP |
262 | void call_rcu_after_fork_child(void); |
263 | ``` | |
264 | ||
265 | Should be used as `pthread_atfork()` handler for programs using | |
266 | `call_rcu` and performing `fork()` or `clone()` without a following | |
267 | `exec()`. |