Commit b9a4952
rcuref: Plug slowpath race in rcuref_put()
Kernel test robot reported an "imbalanced put" in the rcuref_put() slow
path, which turned out to be a false positive. Consider the following
race:

    ref = 0 (via rcuref_init(ref, 1))

    T1                                          T2
    rcuref_put(ref)
    -> atomic_add_negative_release(-1, ref)    # ref -> 0xffffffff
    -> rcuref_put_slowpath(ref)
                                                rcuref_get(ref)
                                                -> atomic_add_negative_relaxed(1, &ref->refcnt)
                                                -> return true;    # ref -> 0

                                                rcuref_put(ref)
                                                -> atomic_add_negative_release(-1, ref)    # ref -> 0xffffffff
                                                -> rcuref_put_slowpath()
    -> cnt = atomic_read(&ref->refcnt);        # cnt -> 0xffffffff / RCUREF_NOREF
    -> atomic_try_cmpxchg_release(&ref->refcnt, &cnt, RCUREF_DEAD)    # ref -> 0xe0000000 / RCUREF_DEAD
    -> return true
                                                -> cnt = atomic_read(&ref->refcnt);    # cnt -> 0xe0000000 / RCUREF_DEAD
                                                -> if (cnt > RCUREF_RELEASED)          # 0xe0000000 > 0xc0000000
                                                   -> WARN_ONCE(cnt >= RCUREF_RELEASED, "rcuref - imbalanced put()")

The problem is the additional read in the slow path (after it decremented
to RCUREF_NOREF), which can happen after the counter has been marked
RCUREF_DEAD.

Prevent this by reusing the return value of the decrement. Now every
"final" put uses RCUREF_NOREF in the slow path and attempts the final
cmpxchg() to RCUREF_DEAD.

[ bigeasy: Add changelog ]

Fixes: ee1ee6d ("atomics: Provide rcuref - scalable reference counting")
Reported-by: kernel test robot <[email protected]>
Debugged-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Sebastian Andrzej Siewior <[email protected]>
Cc: [email protected]
Closes: https://lore.kernel.org/oe-lkp/[email protected]
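To make the race concrete, here is a minimal user-space model of the two
slowpath variants (a sketch only: the RCUREF_* constants mirror the
kernel's zone layout, but the C11 atomics and the names
put_slowpath_racy(), put_slowpath_fixed() and put_fixed() are
illustrative stand-ins, not the kernel code):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RCUREF_NOREF    0xffffffffU  /* Result of decrementing a count of 0 */
    #define RCUREF_RELEASED 0xc0000000U  /* Start of the dead zone */
    #define RCUREF_DEAD     0xe0000000U  /* Counter is marked dead */

    static _Atomic unsigned int refcnt;  /* rcuref_init(ref, 1) leaves 0 */

    /* Old, racy slowpath: re-reads the counter after the decrement. */
    static bool put_slowpath_racy(void)
    {
            unsigned int cnt = atomic_load(&refcnt);

            /*
             * A concurrent get()/put() pair can move the counter to
             * RCUREF_DEAD between our decrement and the read above, so
             * the dead-zone check below can fire although the puts are
             * balanced.
             */
            if (cnt == RCUREF_NOREF)
                    return atomic_compare_exchange_strong(&refcnt, &cnt,
                                                          RCUREF_DEAD);
            if (cnt >= RCUREF_RELEASED)
                    fprintf(stderr, "rcuref - imbalanced put()\n");
            return false;
    }

    /* Fixed slowpath: trusts the value the decrement itself produced. */
    static bool put_slowpath_fixed(unsigned int cnt)
    {
            /* cnt cannot observe updates made after our own decrement. */
            if (cnt == RCUREF_NOREF)
                    return atomic_compare_exchange_strong(&refcnt, &cnt,
                                                          RCUREF_DEAD);
            if (cnt >= RCUREF_RELEASED)
                    fprintf(stderr, "rcuref - imbalanced put()\n");
            return false;
    }

    static bool put_fixed(void)
    {
            /* Decrement and keep the resulting value for the slowpath. */
            unsigned int cnt = atomic_fetch_sub(&refcnt, 1) - 1;

            if ((int)cnt >= 0)  /* Fastpath: still inside the valid zone */
                    return false;
            return put_slowpath_fixed(cnt);
    }

    int main(void)
    {
            (void)put_slowpath_racy;  /* kept only for contrast */
            /* One live reference (count 0): this put drops the last one. */
            printf("last reference dropped: %d\n", put_fixed());
            return 0;
    }

The key difference is that put_fixed() threads the decrement's own result
into the slowpath, so no later state change can be mistaken for an
imbalanced put.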
1 parent 5e0e02f commit b9a4952

2 files changed (+8, -6 lines)

include/linux/rcuref.h

Lines changed: 6 additions & 3 deletions
@@ -71,27 +71,30 @@ static inline __must_check bool rcuref_get(rcuref_t *ref)
 	return rcuref_get_slowpath(ref);
 }
 
-extern __must_check bool rcuref_put_slowpath(rcuref_t *ref);
+extern __must_check bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt);
 
 /*
  * Internal helper. Do not invoke directly.
  */
 static __always_inline __must_check bool __rcuref_put(rcuref_t *ref)
 {
+	int cnt;
+
 	RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(),
 			 "suspicious rcuref_put_rcusafe() usage");
 	/*
 	 * Unconditionally decrease the reference count. The saturation and
 	 * dead zones provide enough tolerance for this.
 	 */
-	if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
+	cnt = atomic_sub_return_release(1, &ref->refcnt);
+	if (likely(cnt >= 0))
 		return false;
 
 	/*
 	 * Handle the last reference drop and cases inside the saturation
 	 * and dead zones.
 	 */
-	return rcuref_put_slowpath(ref);
+	return rcuref_put_slowpath(ref, cnt);
 }
 
 /**
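For orientation, a sketch of how a typical rcuref user pairs these
operations (struct obj and the obj_tryget()/obj_put() helpers are
hypothetical illustrations; rcuref_get(), rcuref_put() and kfree_rcu()
are the real kernel interfaces):

    #include <linux/rcuref.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct obj {
            rcuref_t        ref;    /* rcuref_init(&obj->ref, 1) at creation */
            struct rcu_head rcu;
    };

    /* Called under rcu_read_lock(); fails once the count is RCUREF_DEAD. */
    static struct obj *obj_tryget(struct obj *o)
    {
            return rcuref_get(&o->ref) ? o : NULL;
    }

    /* rcuref_put() returns true only for the final reference drop. */
    static void obj_put(struct obj *o)
    {
            if (rcuref_put(&o->ref))
                    kfree_rcu(o, rcu);
    }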

lib/rcuref.c

Lines changed: 2 additions & 3 deletions
@@ -220,6 +220,7 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
 /**
  * rcuref_put_slowpath - Slowpath of __rcuref_put()
  * @ref:	Pointer to the reference count
+ * @cnt:	The resulting value of the fastpath decrement
  *
  * Invoked when the reference count is outside of the valid zone.
  *
@@ -233,10 +234,8 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
  * with a concurrent get()/put() pair. Caller is not allowed to
  * deconstruct the protected object.
  */
-bool rcuref_put_slowpath(rcuref_t *ref)
+bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt)
 {
-	unsigned int cnt = atomic_read(&ref->refcnt);
-
 	/* Did this drop the last reference? */
 	if (likely(cnt == RCUREF_NOREF)) {
 		/*