Swift Tip: Atomic Variables — Part 2
In the last Swift Tip of 2018 we showed you a simple wrapper that provides synchronized access to properties. You sent us a lot of feedback — thank you! In part two we'll offer a little more explanation, and show some alternatives.
Here's the Atomic wrapper from the previous post:
final class Atomic<A> {
    private let queue = DispatchQueue(label: "Atomic serial queue")
    private var _value: A

    init(_ value: A) {
        self._value = value
    }

    var value: A {
        get { return queue.sync { self._value } }
    }

    func mutate(_ transform: (inout A) -> ()) {
        queue.sync { transform(&self._value) }
    }
}
We're using a serial dispatch queue to synchronize access to the private _value property, providing a thread-safe public API that exposes the read-only value property and the mutate method.
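As a quick illustration (our example, not from the original post), the wrapper survives concurrent mutation without losing updates; the counter name and iteration count are ours:

```swift
import Dispatch

// The Atomic wrapper from above, repeated so this example is self-contained.
final class Atomic<A> {
    private let queue = DispatchQueue(label: "Atomic serial queue")
    private var _value: A
    init(_ value: A) { self._value = value }
    var value: A { return queue.sync { self._value } }
    func mutate(_ transform: (inout A) -> ()) {
        queue.sync { transform(&self._value) }
    }
}

// 100 concurrent increments; without synchronization, some could be lost
// to a read-modify-write race.
let counter = Atomic<Int>(0)
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    counter.mutate { $0 += 1 }
}
print(counter.value) // 100
```

Note that mutate takes a closure rather than exposing a setter: a get followed by a set would be two separate critical sections, and another thread could sneak in between them.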
We wrote this wrapper for our specific use case. In the backend for Swift Talk we have some global properties that store static data loaded from a GitHub repository. Many parts of the app read these properties to render pages. Perhaps twice a week something in the static data changes; when it does, we reload it from GitHub and store the new data in these global properties.
We had to be sure to avoid race conditions, where read and write access to one of these global properties would occur simultaneously — however unlikely this is in our scenario. There is practically zero contention on our synchronized properties, and even if we hit the unlikely case of a simultaneous access, performance is irrelevant. Given this context, we wrote the wrapper above as the simplest possible solution. We regularly optimize for simple code, not for the highest possible performance, or the most general solution.
That being said, let's look at some of the suggestions we received.
First, we could make mutation asynchronous. This has the potential advantage of not blocking the caller when the property is contended, with the disadvantage that the value might not yet have changed by the time mutate returns:
// transform must now be @escaping, since the block outlives the call:
func mutate(_ transform: @escaping (inout A) -> ()) {
    queue.async { transform(&self._value) }
}
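One subtlety worth spelling out: although mutate can return before the write happens, a later read of value still observes it, because the serial queue executes blocks in FIFO order. A sketch (class name is ours):

```swift
import Dispatch

// Variant of the wrapper with asynchronous mutation.
final class AsyncAtomic<A> {
    private let queue = DispatchQueue(label: "Atomic serial queue")
    private var _value: A
    init(_ value: A) { self._value = value }
    var value: A { return queue.sync { self._value } }
    func mutate(_ transform: @escaping (inout A) -> ()) {
        queue.async { transform(&self._value) }
    }
}

let counter = AsyncAtomic<Int>(0)
counter.mutate { $0 += 1 }
// mutate has returned, but the write may still be pending. Reading `value`
// enqueues a sync block *behind* the pending write on the same serial
// queue, so the read still sees it:
print(counter.value) // 1
```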
If there is a lot of contention from reading the property, we could use a concurrent queue to allow multiple reads simultaneously, while making sure that write accesses are exclusive (as suggested by @unger and @_ok1a):
final class Atomic<A> {
    private let queue = DispatchQueue(label: "Atomic concurrent queue", attributes: .concurrent)
    // ...
    func mutate(_ transform: (inout A) -> ()) {
        queue.sync(flags: .barrier) { transform(&self._value) }
    }
}
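Putting the pieces together, a full sketch of this reader/writer variant might look like the following (class name and workload are ours): reads run concurrently with each other, while a barrier write waits for in-flight reads to drain and blocks new ones until it completes.

```swift
import Dispatch

// Reader/writer variant: concurrent reads, exclusive (barrier) writes.
final class ConcurrentAtomic<A> {
    private let queue = DispatchQueue(label: "Atomic concurrent queue", attributes: .concurrent)
    private var _value: A
    init(_ value: A) { self._value = value }
    // Reads may run concurrently with each other...
    var value: A { return queue.sync { self._value } }
    // ...but a barrier write runs exclusively.
    func mutate(_ transform: (inout A) -> ()) {
        queue.sync(flags: .barrier) { transform(&self._value) }
    }
}

let counter = ConcurrentAtomic<Int>(0)
DispatchQueue.concurrentPerform(iterations: 100) { i in
    if i % 2 == 0 {
        counter.mutate { $0 += 1 }   // 50 exclusive writes
    } else {
        _ = counter.value            // 50 possibly-concurrent reads
    }
}
print(counter.value) // 50
```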
If performance is absolutely critical, we could choose to use os_unfair_lock instead of a dispatch queue (as suggested by @jasongregori).
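To make the idea concrete, here's a sketch of a lock-based variant with the same API. We use Foundation's NSLock so the example is portable; on Apple platforms, os_unfair_lock_lock/os_unfair_lock_unlock would replace the NSLock calls for lower overhead (taking care to keep the os_unfair_lock at a stable address). The class name is ours:

```swift
import Foundation
import Dispatch

// Lock-based variant: same API as Atomic, but no dispatch queue.
// NSLock stands in for os_unfair_lock; the locking pattern is identical.
final class LockedAtomic<A> {
    private let lock = NSLock()
    private var _value: A
    init(_ value: A) { self._value = value }
    var value: A {
        lock.lock()
        defer { lock.unlock() }
        return _value
    }
    func mutate(_ transform: (inout A) -> ()) {
        lock.lock()
        defer { lock.unlock() }
        transform(&_value)
    }
}

let counter = LockedAtomic<Int>(0)
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    counter.mutate { $0 += 1 }
}
print(counter.value) // 100
```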
As always, there are many ways to solve a particular problem, each with their own set of tradeoffs. For our application we chose to keep our code unchanged: it's the simplest solution for our problem, and it's more than fast enough.
In a similar manner, we suggest that you make sure you really need these kinds of optimization before you apply them, and that you fully understand the behavior of the locking API you're using (this remains true for our original implementation).
We always try to choose the simplest solution for the task at hand, and simple here means simple for us: we need to feel we understand our code.
In Swift Talk 42, we implement thread safety for a type in a reactive framework; a specific case that helps us learn generic solutions that can be used in many other places.
If you're interested in server-side Swift programming, we cover a range of related topics in our Server-Side Swift Collection. We'll be talking more about our backend rewrite soon!
If you'd like to support us, you can subscribe, or give a gift. 🙏