There is a common pattern in C++ of using an RAII type to manage a synchronization primitive. There are different versions of this, but they all have the same basic pattern:
- Creating the object from a synchronization object: Locks the synchronization object.
- Destructing the object: Unlocks the synchronization object.
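In code, the pattern looks like this (a minimal sketch; the mutex, counter, and function names are illustrative):

```cpp
#include <cassert>
#include <mutex>

std::mutex g_mutex;
int g_counter = 0;

void increment()
{
    std::lock_guard guard(g_mutex); // constructor locks g_mutex
    ++g_counter;                    // protected while the lock is held
}                                   // destructor unlocks g_mutex
```

Even if the protected code throws or returns early, the destructor still runs, so the mutex is always released.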
These types go by various names, like std::lock_guard, std::unique_lock, or std::scoped_lock, and specific libraries may have versions for their own types, such as C++/WinRT’s winrt::slim_lock_guard and the guard returned by WIL’s wil::srwlock (whose type you thankfully never actually write out; just use auto).
One thing that is missing from most standard libraries, however, is the anti-lock.
The idea of the anti-lock is that it counteracts an active lock.
```cpp
template<typename Mutex>
struct anti_lock
{
    anti_lock() = default;

    explicit anti_lock(Mutex& mutex) :
        m_mutex(std::addressof(mutex))
    {
        if (m_mutex) m_mutex->unlock();
    }

private:
    struct anti_lock_deleter
    {
        void operator()(Mutex* mutex) { mutex->lock(); }
    };

    std::unique_ptr<Mutex, anti_lock_deleter> m_mutex;
};
```
The anti-lock unlocks a mutex at construction and locks it at destruction. Here’s an example:
```cpp
void Widget::DoSomething()
{
    auto guard = std::lock_guard(m_mutex);

    ⟦ do stuff under the lock ⟧

    int cost;
    if (m_isStandard) {
        cost = GetStandardCost();
    } else {
        // Drop the lock temporarily while we call out.
        auto anti_guard = anti_lock(m_mutex);
        cost = m_callback->GetCost();
    }
    // We are back under the lock.

    ⟦ do more stuff under the lock ⟧
}
```
The idea here is that you know you are running some code that acquires a lock, but you need to drop the lock temporarily, and then reacquire it afterward. The reason for dropping the lock might be that you are calling out to another component and don’t want to create a deadlock.
For example, commenter Joshua Hudson could have used this around all of the co_awaits.
```cpp
winrt::fire_and_forget DoSomething()
{
    auto guard = std::lock_guard(m_mutex);
    step1();

    // All co_awaits must be under an anti-lock.
    int cost;
    {
        auto anti_guard = anti_lock(m_mutex);
        cost = co_await GetCostAsync();
    }

    step2(cost);
}
```
For extra safety, you might require that the anti-lock be given the lock guard that it is counteracting.
```cpp
template<typename Guard>
struct anti_lock
{
    using mutex_type = typename Guard::mutex_type;

    anti_lock() = default;

    explicit anti_lock(Guard& guard) :
        m_mutex(guard.mutex())
    {
        if (m_mutex) m_mutex->unlock();
    }

private:
    struct anti_lock_deleter
    {
        void operator()(mutex_type* mutex) { mutex->lock(); }
    };

    std::unique_ptr<mutex_type, anti_lock_deleter> m_mutex;
};
```
Being given the lock guard means that we can also make it so that the anti-lock of a non-owning guard is a non-owning anti-lock. The negative of zero is zero.
Being given the lock guard also makes it slightly more noticeable to the caller that the anti-lock might mess with the lock state.
Now, an anti-lock sounds weird, but you could very well be using it without realizing it: std::condition_variables are secretly anti-locks. They enter with the lock held, then drop the lock while blocked, then reacquire the lock when unblocked.
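A sketch of that behavior (the names here are illustrative, not from the article): the call to wait atomically releases the mutex while the thread blocks, and reacquires it before returning.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

int consume()
{
    std::unique_lock lock(m);            // enter with the lock held
    cv.wait(lock, [] { return ready; }); // lock dropped while blocked,
                                         // reacquired before wait returns
    return 42;                           // lock held again here
}

void produce()
{
    {
        std::lock_guard guard(m);
        ready = true;
    }
    cv.notify_one();
}
```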
Here’s another scenario where you may want to use an anti-lock:
```cpp
void DoSomething()
{
    // Hold the lock while we check m_nextQuery
    std::unique_lock lock(m_mutex);
    while (auto query = std::exchange(m_nextQuery, nullptr)) {
        // Drop the lock while we do work
        anti_lock anti(lock);
        refresh_from_query(query);
        // Reacquire the lock before rechecking m_nextQuery
    }
}
```
You need to hold the mutex while checking if there is a new query (because that mutex protects the code that sets the new query), but you can drop the mutex while you process the query.
One downside of the anti-lock is that if you have an early return, the mutex is re-locked (when the anti-lock destructs) and then unlocked (when the outer guard destructs). This is hard to fix because there’s no guarantee that the outer guard is going to destruct when the anti-lock destructs:
```cpp
void DoSomething()
{
    // Hold the lock while we check m_nextQuery
    std::unique_lock lock(m_mutex);
    while (auto query = std::exchange(m_nextQuery, nullptr)) {
        try {
            // Drop the lock while we do work
            anti_lock anti(lock);
            refresh_from_query(query);
            // Reacquire the lock before rechecking m_nextQuery
        } CATCH_LOG(); // log refresh failures but don't stop
    }
}
```
If you can live with this suboptimal behavior (which presumably is infrequent), the anti-lock is pretty handy.
Bonus chatter: The anti-lock does require you to know for sure that the lock is held exactly once. If it’s not held at all, then your anti-lock is unlocking a mutex that isn’t even locked, which is not allowed. And if it’s held twice (allowed by mutex classes such as std::shared_mutex and std::recursive_mutex), then your anti-lock counteracts only one of the locks, leaving the other lock still active.
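You can see the held-twice problem directly with std::recursive_mutex (a hypothetical sketch; a single unlock stands in for what an anti-lock’s constructor would do):

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::recursive_mutex m;

// Probe from another thread whether the mutex is currently acquirable.
bool another_thread_can_lock()
{
    bool acquired = false;
    std::thread([&] {
        if (m.try_lock()) {
            acquired = true;
            m.unlock();
        }
    }).join();
    return acquired;
}

// Returns whether another thread could lock during the "anti-locked" window.
bool demo()
{
    m.lock();
    m.lock();                                    // held twice by this thread
    m.unlock();                                  // what one anti-lock undoes
    bool acquirable = another_thread_can_lock(); // still held once, so false
    m.lock();                                    // anti-lock destructor re-locks
    m.unlock();
    m.unlock();
    return acquirable;
}
```

The anti-lock thinks it has released the mutex, but the remaining recursion level keeps every other thread locked out.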
And of course anti-locks complicate lock analysis. If you use an anti-lock to counteract a lock held by the caller, then this invalidates the assumption that holding a lock across a function call protects the state guarded by the lock.
There was one counterintuitive thing that bugged me about lock scope guards: they don’t introduce a scope. So I solved it with a custom implementation, and the result is much more readable.
> If you use an anti-lock to counteract a lock held by the caller, then this invalidates the assumption that holding a lock across a function call protects the state guarded by the lock.
If the anti-lock requires access to the lock guard, then this can’t happen unless the caller passes a reference to the guard, in which case it’s aware that the callee might unlock the guard.
Rust’s pretty popular `parking_lot` crate has such an API on its guards, called `unlocked`, which takes a closure and temporarily unlocks the mutex while the closure is running.
Also note the devilish detail that `unlocked()` requires not just a reference to the guard, but an exclusive one (`&mut`) at that; this means the caller, who hands out access to the guard to the callee, is forbidden from also holding a reference (shared or mutable) to the protected data across the function call (and not requiring `&mut` for `unlocked()` would in fact be unsound, because any reference in Rust, to data without interior mutability, may assume the data it points to doesn’t get randomly mutated from outside). The caller is forced to abandon any reference, direct or indirect, to the protected data before calling such a callee, and to reacquire any such reference after the call returns, which, if you’re astute enough, can be a signal that the data may not be the same at this point.

I generally like your brief style, but such a dangerous tool needs much more careful analysis and justification. Another commenter described it as “punch a hole inside a critical section”, which is exactly right. I am struggling to think of cases where breaking a lock from the outside is a good or useful or safe idea.
I don’t think it’s a good idea to simply present ways you can “punch a hole inside a critical section” without mentioning the implications of doing so.
If you’re using a mutex at all, you must be doing so to protect the mutation of some state (unsynchronized operations that are all reads are not a data race). A corollary is that every time you release a mutex, the next time you reacquire it, the mutable state may have changed. If you’re a reader, you’ll probably use the presence of a `lock_guard` in the scope as a visual cue that you can assume the state never changes inside that scope, but if you puncture the scope with an anti-lock, you suddenly no longer have that guarantee: inside the lower half of the scope, the value of the mutable state may have changed relative to its value in the upper half. And yet you can still see that visual cue, the `lock_guard`, which is now giving you a false sense of immutability.

If the multi-step computation you’re doing with `config` requires that `config` be consistent throughout, the correct solution is not to puncture the critical section, but to read out the entire `config` at once before you do any part of the computation, and stick to the local copy afterwards. I can certainly see anti-locks being useful in practice, but shaping their API after the regular lock is, in my opinion, going to lead programmers into the pit of failure more often than the pit of success.

EDIT: Just saw the last paragraph of the article.
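The read-out-a-copy approach the comment describes might look like this (a hypothetical sketch; `Config`, `Service`, and the member names are illustrative, not from the article):

```cpp
#include <cassert>
#include <mutex>
#include <string>

struct Config { std::string server; int retries; };

struct Service
{
    std::mutex m_mutex;
    Config m_config;   // protected by m_mutex

    int ComputeCost()
    {
        Config local;
        {
            std::lock_guard guard(m_mutex);
            local = m_config;  // snapshot the state under the lock
        }
        // Multi-step computation on the consistent local copy;
        // no lock held, and no anti-lock needed.
        return local.retries * static_cast<int>(local.server.size());
    }
};
```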
You can scope your `unique_lock` instances instead:
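For example (a sketch of what the comment appears to mean; the counter is added only to make the flow visible):

```cpp
#include <cassert>
#include <mutex>

std::mutex m;
int steps = 0;

void DoSomething()
{
    {
        std::unique_lock lock(m); // first critical section
        ++steps;
    }                             // lock released at end of inner scope

    ++steps;                      // work done without the lock

    {
        std::unique_lock lock(m); // reacquire in a fresh scope
        ++steps;
    }
}
```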
If lock+unlock sequences happen frequently in a short snippet, the design may have serious flaws and violate basic principles (SOLID, etc.). In such cases, before refactoring, fast patching with simple calls to the `lock` and `unlock` functions is at least more readable.
Not only this, but you can simply call `unlock()` on the `unique_lock` (note: not the `mutex`) when you need to temporarily release it, and call `lock()` afterwards to reacquire it. This works without additional RAII because, at all points during the execution of the function, you want the mutex to be unlocked if you leave the function, and `unique_lock` guarantees that, regardless of whether `unlock()` has been manually called on it or not.

As opposed to the lock anti-pattern, which is where you try to manage all of the locking and unlocking manually and overlook an edge case.
Why check that the pointer has a non-null address, when it comes from a reference, which cannot be null?
Also, why the unique_ptr and deleter complication, and not just a member reference and destructor? Is this to prevent copying automatically or such? Or maybe just because there’s a default ctor, but why is that needed?
The null test is a copy/pasta bug from the later version that takes a guard. I’ll fix that. The unique_ptr avoids the Rule of Five complications that come with having a destructor.
>> These types go by various names, like std::lock_guard, std::unique_lock, or std::coped_lock
I think you mean `std::scoped_lock`.
std::coped_lock is a C++-29 concept which keeps all the benefits of std::scoped_lock, but allows to safely continue even when the lock cannot be acquired. In simple terms, you don’t get a lock and you just live with it. /s