From: Mathieu Desnoyers
Date: Sat, 1 Mar 2014 21:22:52 +0000 (-0500)
Subject: Fix: move wait loop increment before first conditional block
X-Git-Tag: v0.7.12~1
X-Git-Url: https://git.lttng.org./?a=commitdiff_plain;h=cca4c8dc770c0a5d6cdcd7375fd9d949ab5d7b99;p=urcu.git

Fix: move wait loop increment before first conditional block

The fix "Fix: high cpu usage in synchronize_rcu with long RCU read-side
C.S." has an imperfection in urcu.c and urcu-qsbr.c: when incrementing
the wait loop counter for the last time, the first conditional branch is
not taken, but the following conditionals are, and they assume the first
conditional has been taken.

Within urcu.c (urcu-mb, urcu-membarrier and urcu-signal), and urcu-qsbr.c,
this will simply skip the first wait_gp() call, without any noticeable
ill side-effect.

Signed-off-by: Mathieu Desnoyers
---

diff --git a/urcu-qsbr.c b/urcu-qsbr.c
index 76aaabb..a2cabb4 100644
--- a/urcu-qsbr.c
+++ b/urcu-qsbr.c
@@ -150,6 +150,8 @@ static void update_counter_and_wait(void)
 	 * Wait for each thread rcu_reader_qs_gp count to become 0.
 	 */
 	for (;;) {
+		if (wait_loops < RCU_QS_ACTIVE_ATTEMPTS)
+			wait_loops++;
 		if (wait_loops >= RCU_QS_ACTIVE_ATTEMPTS) {
 			uatomic_set(&gp_futex, -1);
 			/*
@@ -162,8 +164,6 @@ static void update_counter_and_wait(void)
 			}
 			/* Write futex before read reader_gp */
 			cmm_smp_mb();
-		} else {
-			wait_loops++;
 		}
 		cds_list_for_each_entry_safe(index, tmp, &registry, node) {
 			if (!rcu_gp_ongoing(&index->ctr))
diff --git a/urcu.c b/urcu.c
index 33e35e1..8420ee4 100644
--- a/urcu.c
+++ b/urcu.c
@@ -247,12 +247,12 @@ void update_counter_and_wait(void)
 	 * Wait for each thread URCU_TLS(rcu_reader).ctr count to become 0.
 	 */
 	for (;;) {
+		if (wait_loops < RCU_QS_ACTIVE_ATTEMPTS)
+			wait_loops++;
 		if (wait_loops >= RCU_QS_ACTIVE_ATTEMPTS) {
 			uatomic_dec(&gp_futex);
 			/* Write futex before read reader_gp */
 			smp_mb_master(RCU_MB_GROUP);
-		} else {
-			wait_loops++;
 		}
 		cds_list_for_each_entry_safe(index, tmp, &registry, node) {
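
For readers unfamiliar with the loop in question, below is a minimal
standalone sketch, not taken from the urcu sources, of the two orderings.
QS_ACTIVE_ATTEMPTS, futex_armed, wait_gp() and grace_period_loop() are
simplified stand-ins for RCU_QS_ACTIVE_ATTEMPTS, gp_futex and the real
helpers. It shows that with the old ordering, the iteration on which the
counter reaches the threshold calls wait_gp() before the futex has been
armed, which is the skipped first wait described above.

/*
 * Simplified illustration only; not part of the commit or of liburcu.
 */
#include <stdio.h>
#include <stdbool.h>

#define QS_ACTIVE_ATTEMPTS 2	/* stand-in for RCU_QS_ACTIVE_ATTEMPTS */

static bool futex_armed;	/* stand-in for gp_futex being set for waiting */

static void wait_gp(void)
{
	/* The real wait_gp() only blocks if the futex was armed beforehand. */
	printf("wait_gp(): futex %s\n",
	       futex_armed ? "armed (would block)"
			   : "not armed (returns immediately)");
}

static void grace_period_loop(bool increment_first, int iterations)
{
	unsigned int wait_loops = 0;

	futex_armed = false;
	for (int i = 0; i < iterations; i++) {
		if (increment_first) {
			/* Fixed ordering: capped increment before the test. */
			if (wait_loops < QS_ACTIVE_ATTEMPTS)
				wait_loops++;
			if (wait_loops >= QS_ACTIVE_ATTEMPTS)
				futex_armed = true;
		} else {
			/*
			 * Old ordering: on the iteration that reaches the
			 * threshold, the else branch is taken and the futex
			 * stays unarmed, yet wait_gp() below is still reached.
			 */
			if (wait_loops >= QS_ACTIVE_ATTEMPTS)
				futex_armed = true;
			else
				wait_loops++;
		}
		/* Readers still active: spin for a while, then futex-wait. */
		if (wait_loops >= QS_ACTIVE_ATTEMPTS)
			wait_gp();
	}
}

int main(void)
{
	printf("-- old ordering --\n");
	grace_period_loop(false, 3);
	printf("-- fixed ordering --\n");
	grace_period_loop(true, 3);
	return 0;
}

As a side note, capping the increment (wait_loops < RCU_QS_ACTIVE_ATTEMPTS)
also keeps the counter from growing without bound once the futex path is
active during a long wait.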