Concurrency issues aside, I've been working on a greenfield iOS project recently and I've really been enjoying much of Swift's syntax.
I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling — a lot of relatively common code (fetching/decoding) seems to look so visually messy.
E.g., I find this Swift example from the article to be very clean:
func fetchUser(id: Int) async throws -> User {
    let url = URL(string: "https://api.example.com/users/\(id)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(User.self, from: data)
}
And in Go (with roughly similar semantics):
func fetchUser(ctx context.Context, client *http.Client, id int) (User, error) {
    req, err := http.NewRequestWithContext(
        ctx,
        http.MethodGet,
        fmt.Sprintf("https://api.example.com/users/%d", id),
        nil,
    )
    if err != nil {
        return User{}, err
    }
    resp, err := client.Do(req)
    if err != nil {
        return User{}, err
    }
    defer resp.Body.Close()
    var u User
    if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
        return User{}, err
    }
    return u, nil
}
I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift, while Go's verbosity effectively buries the key steps in the surrounding boilerplate.
This isn't to pick on Go or to say Swift is a better language in practice — and certainly not in the same domains — but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, during the honeymoon phase).
As someone who has been writing production Swift since 1.0, the Go example is a lot closer to what Swift looks like in practice. I suppose there are advantages to being able to show only the important parts.
The first line won't crash, but in practice it's fairly rare that you'd force-unwrap something like that. URLs might be the only case where it's somewhat safe. A fairer example would be something like:
func fetchUser(id: Int) async throws -> User {
    guard let url = URL(string: "https://api.example.com/users/\(id)") else {
        throw MyError.invalidURL
    }
    // you'll pretty much never see data(from: url) in real life
    let request = URLRequest(url: url)
    // configure request
    let (data, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse,
          200..<300 ~= httpResponse.statusCode else {
        throw MyError.invalidResponseCode
    }
    // possibly other things you'd want to check
    return try JSONDecoder().decode(User.self, from: data)
}
I don't code in Go, so I don't know how production-ready that code is. What I posted has a lot of issues as well, but it is much closer to what you'd need to do as a start. The Swift example is hiding a lot of the error checking that Go forces you to do, to some extent.
Oh, don't get me started on handling the same type of exception differently depending on which call actually threw it. I find the happy path in Swift to be lovely, but as soon as exceptions have to be handled, or there are details that have to be gotten right in terms of order of execution, everything turns into a massive mess.
Still better than the three lines of `if err != nil` that Go makes you write, though.
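To make the "which call actually threw it" complaint concrete, here's a rough sketch of the usual workaround: wrap each stage in its own do/catch and rethrow a stage-specific error. The FetchError cases and the User model are made up for the example, not from the article:
import Foundation

struct User: Decodable {
    let id: Int
    let name: String
}

enum FetchError: Error {
    case invalidURL
    case requestFailed(underlying: Error)
    case decodingFailed(underlying: Error)
}

func fetchUser(id: Int) async throws -> User {
    guard let url = URL(string: "https://api.example.com/users/\(id)") else {
        throw FetchError.invalidURL
    }
    let data: Data
    do {
        // Transport errors get wrapped separately from decoding errors...
        let (body, _) = try await URLSession.shared.data(from: url)
        data = body
    } catch {
        throw FetchError.requestFailed(underlying: error)
    }
    do {
        // ...so a caller catching FetchError can tell the stages apart.
        return try JSONDecoder().decode(User.self, from: data)
    } catch {
        throw FetchError.decodingFailed(underlying: error)
    }
}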
Yes, swift-format will say "never force-unwrap" because it is a potential crash.
I'm not familiar with Swift's libraries, but what's the point of making this two lines instead of one? That aside, your Swift version is still about half the size of the Go version with similar levels of error handling.
The first one you can configure, and it is the default way you'd see this done in real life. You can add headers, change the request type, etc. If you were making an actual app, the request configuration would likely be much longer than the one line I used. I was mostly trying to show that the Swift example was hiding a lot of things.
The second one is for downloading directly from a URL and I've never seen it used outside of examples in blog posts on the internet.
Thanks. What could possibly cause an invalid URL in this example, though?
Anything taking arbitrary values or user input could produce an invalid URL, especially on earlier OS versions. Newer OS versions use a newer URL standard which is more flexible. You could also wrap your URL-encoding logic in a throwing or non-throwing init; the example just used the simple version provided by Foundation.
Not OP. There is none as far as I can tell, but force-unwrapping that way is still something people try to avoid in Swift, and often have the linter warn about. The reason is that people copy the pattern elsewhere, assume something is safe, end up being wrong, and crash your app. It is admittedly an opinionated choice, though.
There has been some talk of using macros to validate at runtime [0]; personally I'd love it if that got in officially.
[0] https://forums.swift.org/t/url-macro/63772
In my experience, migrating to a new API endpoint while your stubborn users just refuse to update your app for some reason.
Or this.
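As a sketch of the throwing-init idea mentioned a couple of comments up (the initializer name is made up here, not a Foundation API):
import Foundation

enum URLBuildingError: Error {
    case invalid(String)
}

extension URL {
    // Hypothetical helper: callers handle bad input with `try`
    // instead of force-unwrapping URL(string:).
    init(validating string: String) throws {
        guard let url = URL(string: string) else {
            throw URLBuildingError.invalid(string)
        }
        self = url
    }
}

// Usage: let url = try URL(validating: "https://api.example.com/users/\(id)")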
I'm conflicted about the implicit named returns used in this pattern in Go. It's definitely tidier, but I feel like the control flow is harder to follow: "I never defined `user`, how can I return it?"
Also those variables are returned even if you don't explicitly return them, which feels a little unintuitive.
I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
Yeah, I'm not really an expert, but my understanding is that naming the return value automatically declares the variable (initialized to its zero value) and places it in scope.
I think it works for the user example because the decoder is writing into the same memory as that named return variable.
I like the idea of having named returns, since it's common to return many items as a tuple in Go functions, and I think it's clearer to have those named rather than leaving it to the reader, especially if a function returns several values of the same primitive type like ints/floats:
type IItem interface {
    Inventory(id int) (price float64, quantity int, err error)
}
compared to
type IItem interface {
    Inventory(id int) (float64, int, error)
}
but I feel like the memory allocation and control flow implications make it hard to reason about at a glance for non-trivial functions.
> My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out
It doesn't set `user`; it returns the User that was passed to the function. Computing the second return value modifies that value. It looks weird indeed, but conceptually both values get computed before they are returned.
C# :)
async Task<User> FetchUser(int id, HttpClient http, CancellationToken token)
{
    var addr = $"https://api.example.com/users/{id}";
    var user = await http.GetFromJsonAsync<User>(addr, token);
    return user ?? throw new Exception("User not found");
}
Not defending Go's braindead error handling, but you'll note that Swift is doubly coloring the function here (async throws).
What is the problem with that, though? I honestly wish they had moved the async keyword to the front (`async func ...`), but given the relative newness of all of this, I've yet to see anyone get confused by it. The compiler also ensures everything is used correctly anyway.
The problem is that the Swift function signature is telling you that someone else needs to deal with async suspension and exception handling, which is clearly not the same semantics.
Isn't the Go version also forcing callers to deal with the errors by returning them? The Swift version just lets the compiler do it rather than returning them manually.
In a sense it is telling someone else that, yes, but more importantly, it is telling the compiler. I am not sure what the alternative is here; is this not common in other languages? I know Java does this at least. In Python it is hidden and you have to know to catch the exception. I'm not sure how that is better, as it can easily be forgotten or ignored. There may be another alternative I'm not aware of?
And sometimes you even have to use the return value the way the function annotates it, instead of just pretending it's a string or whatever when it's not! Having the language tell me how to use things is so frustrating.
It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most which is not a crime but a shame really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or misconception of structured concurrency, not "oh I forgot @MainActor".
Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it was better known outside of the Apple ecosystem because it fully deserves to be a loved, general purpose mainstream language alongside Python and others.
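To make the "use every core, but keep data integrity" point concrete, here is a minimal task-group sketch (renderThumbnail and the result shape are made up): each child task touches only its own element, and results are merged back on the parent task:
func thumbnails(for ids: [Int]) async -> [Int: String] {
    await withTaskGroup(of: (Int, String).self) { group in
        for id in ids {
            group.addTask {
                // Child tasks may run in parallel on different cores.
                let thumbnail = await renderThumbnail(id)
                return (id, thumbnail)
            }
        }
        // Collected on the parent task, so the dictionary is never
        // mutated from two places at once.
        var results: [Int: String] = [:]
        for await (id, thumbnail) in group {
            results[id] = thumbnail
        }
        return results
    }
}

// Stand-in for some CPU-heavy work.
func renderThumbnail(_ id: Int) async -> String {
    "thumbnail-\(id)"
}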
> The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer.
That's parallelism. Concurrency is mostly about hiding latency from I/O operations like network tasks.
Network operations are "asynchrony". Together with parallelism, they are both kinds of concurrency, and Swift concurrency handles both. Swift's "async let" is parallelism. As are Task groups.
async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.
Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.
Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.
It doesn’t mean there isn’t good support for true parallelism in swift concurrency, it’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks… allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI, but they simply have to “await” the calls to the UI actor in case it’s currently executing.
The model works well for both asynchronous tasks (you await the long IO operation, your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusivity, etc.)
There’s a lot of gripes I have with swift concurrency but my memory is about 2 years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call ObjC code which is using GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc) that you don’t even know the code you’re calling is using. But I digress…
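A minimal sketch of that "talk back to the UI actor" pattern (ProgressModel and crunchNumbers are made-up names):
@MainActor
final class ProgressModel {
    // Owned by the main actor, so only ever touched on the main thread.
    var progress: Double = 0

    func update(to value: Double) {
        progress = value
    }
}

// CPU-heavy work running off the main actor that still reports progress.
func crunchNumbers(reportingTo model: ProgressModel) async -> Int {
    var total = 0
    for step in 1...100 {
        total += step * step // stand-in for the real work
        if step % 10 == 0 {
            // Hopping to the main actor requires an await; the call may
            // have to wait until the UI actor is free.
            await model.update(to: Double(step) / 100)
        }
    }
    return total
}

// e.g. from a button handler: Task { let total = await crunchNumbers(reportingTo: model) }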
Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
> Instead of callbacks, you write code that looks sequential [but isn’t]
(bracketed statement added by me to make the implied explicit)
This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.
I find that programming can be hard. Computers are very pedantic about how they get things done. And it pays for me to be explicit and intentional about how computation happens. The illusory nature of async/await coroutines that makes it seem as if code continues procedurally demos well for simple cases, but often grows difficult to reason about (for me).
That is the price you pay. If you refuse to pay it, you are left expressing a potentially complex state machine as a flat state-transition table, so you end up with a huge Python case statement saying "on event x do this, on event y do that". That obscures the state chart's evident sequentiality, alternatives, and loops (the stuff visible in the good old flowcharts) that could otherwise be mapped onto their natural language constructs. But yes, it is not honest flow. It is a tradeoff.
> looks sequential [but isn't]
This is just wrong. It looks sequential, and it is! What the original author means is that it looks synchronous but isn't. But of course that's not really true either, given the use of the await keyword, but that can be explained by the brief learning curve.
Swift concurrency may use coroutines as an implementation detail, but it doesn't expose most of the complexity of that model, or exposes it in different ways.
Sequential doesn't mean reentrancy safe, something which has bitten me a few times in Swift concurrency.
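For anyone who hasn't hit it yet, a small sketch of what actor reentrancy means in practice (the cache and download function are made up): the actor is never entered from two threads at once, but its state can still change across an await:
actor ImageCache {
    private var cache: [String: String] = [:] // stand-in for real image data

    func image(for key: String) async -> String {
        if let cached = cache[key] {
            return cached
        }
        // Suspension point: while we're awaiting, other calls into this
        // actor are free to run and mutate `cache`.
        let downloaded = await download(key)
        // So re-check before assuming nothing has changed.
        if let raced = cache[key] {
            return raced
        }
        cache[key] = downloaded
        return downloaded
    }

    private func download(_ key: String) async -> String {
        try? await Task.sleep(for: .seconds(1))
        return "image-\(key)"
    }
}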
How do you actually learn concurrency without fooling yourself? Every time I think I "get" concurrency, a real bug proves otherwise. What finally helped wasn't more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now:
First understand the shape of execution (what overlaps)
Then define ownership (who's allowed to touch what)
Only then worry about syntax or tools
Still feels fragile. How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?
I've written, tested and debugged low-level Java concurrency code involving atomics, the memory model and other nasty things, all the way down to considering whether data races are an actual problem or just redundant work, and similar things. I also implemented coroutines in some programming-languages coursework at university.
This level is rocket science. If you can't tell why it is right, you fail. Such a failure, which came down to a single missing synchronized block, is the _worst_ 3-6 month debugging horror I've ever faced: isolated data corruptions once a week on a system pushing millions and trillions of player interactions in that time frame.
We first designed it with many smart people just being adversarial and trying to break it. Then one guy implemented it, and 5-6 really talented Java devs reviewed it entirely destructively, and then all of us started working with hardware to write testing setups to break the thing. If there was doubt, it was wrong.
We then put that queue, which serialized within a single partition (i.e. a user account) but parallelized across as many partitions as possible, into production, and it just worked. It just worked.
We did similar work on a caching trie later on with the same group of people. But during these two projects I very much realized: This kind of work just isn't feasible with the majority of developers. Out of hundreds of devs, I know 4-5 who can think this way.
Thus, most code should be structured by lower-level frameworks in a way such that it is not concurrent on data. Once you're concurrent on singular pieces of data, the complexity explodes so much. Just don't be concurrent, unless it's trivial concurrency.
I'm interested in knowing more details about this if you happen to have a post written up somewhere!
Heisenbugs aren't just technical debt but project-killing time bombs, so you'd better have a perfect thread design in your head that works on the first attempt, or it's hell on earth. I can be safe in a bubble world with process-scoped individual threads, or threads from a pool (so there are strong guarantees of joining every created thread), and with share-nothing threads communicating only through producer-consumer sync queues that give a clear picture of the information flow. One can have a message pump in one thread, as GUI apps do; that is just a particular case of the producer-consumer channel idea above. Avoid busy waits; wait on complex event conditions with blocking calls to select() on handler sets or WaitForMultipleObjects(). Exceptions are per thread, but it is good to have a polite mechanism to make selected ones potentially process-fatal, and to fail as early as possible. This won't cover all needs, but it is a field-tested start.
Share xor mutate, that's really all there is.
Talk about trivializing complexity...
The idea that making things immutable somehow fixes concurrency issues always made me chuckle.
I remember reading and watching Rich Hickey talking about Clojure's persistent objects and thinking: Okay, that's great: another thread can't change the data that my thread has, because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... that's STILL a logic bug in many cases.
That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effect of your DB's transaction level (e.g., repeatable_read vs read_committed, etc).
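A small Swift sketch of that point (all names made up): value semantics keep each task's copy race-free, but agreeing on one "current" version still needs a single owner, here an actor:
struct Settings {
    var theme = "light"
}

actor SettingsStore {
    private(set) var current = Settings()

    func setTheme(_ theme: String) {
        current.theme = theme
    }
}

func example(store: SettingsStore) async {
    var snapshot = await store.current                // a private copy: safe, but it can go stale
    snapshot.theme = "dark"                           // mutating the copy changes nothing shared
    print(snapshot.theme, await store.current.theme)  // two "versions of reality" can disagree
    await store.setTheme("dark")                      // the shared version only changes here
}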
It's not that shared-xor-mutate magically solves everything, it's that shared-and-mutate magically breaks everything.
Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without both.
Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't because it already handed out shared mutation, and now it's too late to put the genie in the bottle.
> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.
[1] https://joeduffyblog.com/2010/01/03/a-brief-retrospective-on...
+5 insightful. Programming language design is all about having the right nexus of features. Having all the features or the wrong mix of features is actually an anti-feature.
In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have trouble with async/parallel/distributed programs. It's also why Rust has an easier time of it: they didn't just hand out shared mutation. And it's why Erlang has the best time of it: they built the language around no shared mutation.
I don't know a ton about Swift, but it does feel like for a lot of apps (especially outside of the gaming and video encoding world), you can almost treat CPU power as infinite and exclusively focus on reducing latency.
Obviously I'm not saying you throw out big O notation or stop benchmarking, but it does seem like eliminating an extra network call from your pipeline is likely to have a much higher ROI than nearly any amount of CPU optimization has; people forget how unbelievably slow the network actually is compared to CPU cache and even system memory. I think the advent of these async-first frameworks and languages like Node.js and Vert.x and Tokio is sort of the industry acknowledgement of this.
We all learn these fun CPU optimization tricks in school, and it's all for naught, because anything we do in CPU land is probably going to be undone by a lazy engineer making superfluous calls to Postgres.
The answer to that would very much be: "it depends".
Yes, of course, network I/O > local I/O > most things you'll do on your CPU. But regardless, the answer is always to measure performance (through benchmarking or telemetry), find your bottlenecks, then act upon them.
I recall a case in Firefox in which we were bitten by an O(n^2) algorithm running at startup, where n was the number of tabs to restore; another in which several threads were fighting each other to load components of Firefox and ended up hammering the I/O subsystem; but also cases of an executable that was too large, data not fitting in the CPU cache, Windows requiring a disk access to normalize paths, etc.
Sure, I will admit I was a bit hyperbolic here. Obviously sometimes you need to do a CPU optimization, and I certainly do not think you should ignore big O for anything. It just feels like 90+% of the time my "optimizing" boils down to figuring out how to batch a SQL query or cut a call to Redis or something.
I worked on a resource-intensive Android app for some years, and it got a good performance boost after implementing parallelization, but mostly on old, shitty devices. On the latest phones it's barely noticeable.
Some of this is because you’re leaning on the system to be fast. A simple async call does a lot of stuff for you. If it was implemented by people who treated CPU power as if it was infinite, it would slow you down a lot. Since it was carefully built to be fast, you can write your stuff in a straightforward manner. (This isn’t a criticism. I work in lower levels of the stack, and I consider a big part of the job to be making it so people working higher up have to think about this stuff as little as possible. I solve these problems so they can solve the user’s problem.)
It’s also very context dependent. If your code is on the critical path for animations, it’s not too hard to be too slow. Especially since standards are higher. You’re now expected to draw a frame in 8ms on many devices. You could write some straightforward code that decodes JSON to extract some base64 to decompress a zip to retrieve a JPEG and completely blow out your 8ms if you manage to forget about caching that and end up doing it every frame.
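A minimal sketch of the caching point (the types are stand-ins, not a real API): pay the decode cost once, keyed on the raw payload, so the per-frame path is just a lookup:
import Foundation

struct DecodedImage {
    let pixels: Data
}

// In real code this would live on one actor/queue; kept minimal here.
final class ThumbnailCache {
    private var cache: [Data: DecodedImage] = [:]

    func thumbnail(for payload: Data) -> DecodedImage? {
        if let hit = cache[payload] { return hit } // per-frame path: a dictionary lookup
        guard let image = decode(payload) else { return nil }
        cache[payload] = image                     // the expensive work happens once
        return image
    }

    private func decode(_ payload: Data) -> DecodedImage? {
        // stand-in for the JSON -> base64 -> unzip -> JPEG pipeline
        DecodedImage(pixels: payload)
    }
}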
Yeah, fair. I never found poll/select/epoll or the Java NIO Selector to be terribly hard to use, but even those are fairly high-level compared to how these things are implemented in the kernel.
Right, and consider how many transformations happen to the data between the network call and the screen. In a modern app it's likely coming in as raw bytes, going through a JSON decoder (possibly with a detour through a native string type), likely getting marshaled into hash tables and arrays before being shoved into more specific model types, then pass that data along to a fully Unicode-aware text renderer that does high quality vector graphics.... There's a lot in there that could be incredibly slow. But since it's not, we can write a few lines of code to make all of this happen and not worry about optimization.
One of the things that took me a really long time to map correctly in my head is that, in theory, async/await should NOT be the same as spinning up a new thread (in most languages). It's just suspending that closure on the current thread and coming back around to it on the next loop of that existing thread. It makes certain data reads and writes safe in a way that multithreading doesn't. However, as noted in the article, it is possible to eject a task onto a different thread and then deal with data access across those boundaries. But that is an enhancement to the model, not the default.
EDIT: Seems like newer versions of Xcode change the Swift language defaults here, but that is just the IDE, not the language (and Swift Package Manager does not appear to do the same!)
I'd argue the default is that work _does_ move across system threads, and single-threaded async/await is the uncommon case.
Whether async "tasks" move across system threads is a property of the executor - by default C#, Swift and Go (though without the explicit syntax) all have work-stealing executors that _do_ move work between threads.
In Rust, you typically are more explicit about that choice, since you construct the executor in your "own" [1] code and can make certain optimizations such as not making futures Send if you build a single threaded one, again depending on the constraints of the executor.
You can see this in action in Swift with this kind of program:
import Foundation
for i in 1...100 {
    Task {
        let originalThread = Thread.current
        try? await Task.sleep(for: Duration.seconds(1))
        if Thread.current != originalThread {
            print("Task \(i) moved from \(originalThread) to \(Thread.current)")
        }
    }
}
RunLoop.main.run()
Note that to run it as-is you have to use a version of Swift < 6.0, since Swift 6 prevents Thread.current from being used in an asynchronous context.
[1]: I'm counting the output of a macro here as your "own" code.
If you want to go into a long discussion/deep dive into Swift concurrency vs GCD and threads etc., I'd recommend this thread on swift.org; it was very illuminating for me personally (and really fun to read):
https://forums.swift.org/t/is-concurrent-now-the-standard-to...
Good stuff. Would appreciate a section on bridging pre-async (system) libraries.
This looks like it's well-written and approachable. I'll need to spend more time reviewing it, but, at first scan, it looks like it's nicely done.
I loved the idea of Swift adopting actors, however the implementation seems shoehorned. I wanted something more like Akka or QP/C++...
I feel the reverse. I can see how one could claim Swift has everything but the kitchen sink, but its actors, to me, don't look shoehorned in.
Reading https://docs.swift.org/swift-book/documentation/the-swift-pr..., their first example is:
actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}
Here, the ‘actor’ keyword provides a strong hint that this defines an actor. The code to call an actor in Swift also is clean, and clearly signals “this is an async call” by using await:
await logger.max
I know Akka is a library, and one cannot expect all library code to look as nice as code that has actual support from the language, but the simplest Akka example seems to be something like this (from https://doc.akka.io/libraries/akka-core/current/typed/actors...):
object HelloWorld {
  final case class Greet(whom: String, replyTo: ActorRef[Greeted])
  final case class Greeted(whom: String, from: ActorRef[Greet])

  def apply(): Behavior[Greet] = Behaviors.receive { (context, message) =>
    context.log.info("Hello {}!", message.whom)
    message.replyTo ! Greeted(message.whom, context.self)
    Behaviors.same
  }
}
I have no idea how naive readers of that would easily infer that it's an actor. I also would not have much idea about how to use this (and I _do_ have experience writing Scala; that is not the blocker). And that gets worse when you look at Akka http (https://doc.akka.io/libraries/akka-http/current/index.html). I have debugged code using it, but still find it hard to figure out where it has suspension points.
You may claim that’s because Akka http isn’t good code, but I think the point still stands that Akka allows writing code that doesn’t make it obvious what is an actor.
Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.
- Robert Virding
> I wanted something more like Akka
https://github.com/apple/swift-distributed-actors is more like Akka, but with better guarantees from the underlying platform because of the first-class nature of actors.
> the implementation seems shoehorned.
Because it's extremely hard to retrofit actors (or, really, any type of concurrency and/or parallelism) onto a language not explicitly designed to support it from scratch.
This is my feeling as well. It feels to me, based on the current product, that Swift had two different designers: one who felt Swift needed to be a replacement for Objective-C and therefore needed to feel like a spiritual successor to that language, which meant it had to be fundamentally OOP, imperative, and familiar to iOS devs; and another who wanted it to be a modern functional, concurrent language for writing dynamic user interfaces with an advanced type checker, static analysis, and reactive updates for dynamic variables.
The end result is a language that brings the worst of both worlds while not really bringing the benefits. An example I will give is SwiftUI, which I absolutely hate. You'd think this thing would be polished, because it's built by Apple for use on Apple devices, so they've designed the full stack from editor to language to OS to hardware. Yet when writing SwiftUI code, it's very common for the compiler to keel over and complain it can't infer the types of the system, and components which are ostensibly "reactive" are plagued by stale data issues.
Seeing that Chris Lattner has moved on from Swift to work on his own language, I'm left to wonder how much of this situation will actually improve. My feeling on Swift at this point is that it's not clear what it's supposed to be. It's the language for the Apple ecosystem, but they also want it to be a general-purpose thing as well. My feeling is that it's always going to be explicitly tied to, and limited by, Apple, so it's never really going to take off as a general-purpose programming language, even if they eventually solve the design challenges.
The thing I often ask or mention in discussions about SwiftUI is: if SwiftUI is so good and easy to use and made for cross-platform, why did it take Apple themselves, for example, so long to port their Journal app to macOS? This is a trivial application, something you'd have found in a beginner programming book as an example project 10 or 20 years ago.
I get all the points about Swift and SwiftUI in theory, I just don't see the results in practice. Also or especially with Apple's first party applications.
It's an unpopular opinion, but my belief is that trying to go all-in on one paradigm is the actual mistake. There's several types of awkwardness that arise when a UI library is strictly declarative, for example.
On Apple platforms, I've had a lot of success in a hybrid model where the "bones" of the app are imperative AppKit/UIKit and declarative SwiftUI is used where it's a good fit, which gives you the benefits of both wherever they're needed and as well as an escape hatch for otherwise unavoidable contortions. Swift's nature as something of a hodgepodge enables this.
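For concreteness, the usual bridge for that hybrid setup is UIHostingController; a minimal sketch with made-up view names:
import SwiftUI
import UIKit

// SwiftUI for the screens where declarative UI is a good fit.
struct SettingsScreen: View {
    var body: some View {
        Form {
            Toggle("Notifications", isOn: .constant(true))
        }
    }
}

// The imperative UIKit "bones" of the app host it where needed.
final class RootNavigationController: UINavigationController {
    func showSettings() {
        let hosted = UIHostingController(rootView: SettingsScreen())
        pushViewController(hosted, animated: true)
    }
}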
I really don't know why Apple decided to substitute terms like "actor" and "task" with their own custom semantics. Was the goal to make it so complicated that devs would run out of spoons if they try to learn other languages?
And after all this "fucking approachable swift concurrency", at the end of the day, one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust available threads and deadlock.
Also, the overload of keywords and language syntax around this feature is mind blowing... and keywords change meaning depending on compiler flags so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.
I still feel like Swift 5 (5.2?) was the sweet spot. Right now there are just way too many keywords, making C++ look easy again.
Similarly, I find Combine / GCD code far easier to write and read and understand, and the semantics are better than structured concurrency. I have plenty of production Combine code in use by (hundreds of) millions of people and it hasn't needed to be touched in years.
Do people actually believe that there are too many keywords? I’ve never met a dev irl that says this but I see it regurgitated on every post about Swift. Most of the new keywords are for library writers and not iOS devs.
Preventing deadlock wasn't a goal of Swift concurrency. Like all options, there are trade-offs. You can still use GCD.
> Do people actually believe that there are too many keywords?
Yes, they do. Just imagine seeing the following in a single file/function: Sendable, @unchecked Sendable, @Sendable, sending, nonsending, @concurrent, async, @escaping, weak, Task, MainActor.
For comparison, Rust has 59 keywords in total. Swift has 203 (?!), Elixir has 15, Go has 25, Python has 38.
> You can still use GCD.
Not if you want to use any of the concurrency features, because the two aren't made to work together.
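GCD- or completion-handler-based code can at least be wrapped into async functions with a checked continuation; a minimal sketch, where legacyFetchScore stands in for some callback-based API:
import Foundation

// Hypothetical legacy API that calls back on a GCD queue.
func legacyFetchScore(for id: Int, completion: @escaping (Result<Int, Error>) -> Void) {
    DispatchQueue.global().async {
        completion(.success(id * 10))
    }
}

// Bridged into Swift concurrency.
func fetchScore(for id: Int) async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        legacyFetchScore(for: id) { result in
            continuation.resume(with: result) // must be resumed exactly once
        }
    }
}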
Replying to someone talking about keywords with a list of something that's not keywords, then retreating to "you are an Apple bootlicker" when someone points that out, is not a good look.
Are there any reference counting optimizations like biased counting? One big problem with Python multithreading is that atomic RCs are expensive, so you often don't get as much performance from multiple threads as you expect.
But in Swift it's possible to avoid atomics in most cases, I think?
(We don't have a problem with profanity in general but in this case I think it's distracting so I've de-fuckinged the title above. It's still in the sitename for those who care.)
Not saying whether the title is good or bad, but just to provide context: there's a tradition of this style of Apple-language explainer/cheat sheet titled in this pattern. The first I'm aware of is https://fuckingblocksyntax.com/ . There seem to be at least 15 of them listed on https://fuckingsyntaxsite.com/ , and probably more with different swears. It's a genre of titles in the same vein as "considered harmful" or "falsehoods programmers believe about".
Concurrency issues aside, I've been working on a greenfield iOS project recently and I've really been enjoying much of Swift's syntax.
I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling — a lot of relatively common code (fetching/decoding) seems to look so visually messy.
E.g., I find this Swift example from the article to be very clean:
And in Go (roughly similar semantics) I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift. While the verbosity of Go effectively buries the key steps in the surrounding boilerplate.This isn't to pick on Go or say Swift is a better language in practice — and certainly not in the same domains — but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, or the honeymoon phase)
As someone who has been coding production Swift since 1.0 the Go example is a lot more what Swift in practice will look like. I suppose there are advantages to being able to only show the important parts.
The first line won't crash but in practice it is fairly rare where you'd implicitly unwrap something like that. URLs might be the only case where it is somewhat safe. But a more fair example would be something like
I don't code in Go so I don't know how production ready that code is. What I posted has a lot of issues with it as well but it is much closer to what would need to be done as a start. The Swift example is hiding a lot of the error checking that Go forces you to do to some extent.Oh, don't get me started on handling the same type of exception differently depending on what call actually threw the exception. I find the happy path in Swift to be lovely, but as soon as exceptions have to be handled or there's some details that have to be gotten right in terms of order of execution, everything turns into a massive mess.
Still better than the 3 lines of if err is not nil that go gets you to do though.
Yes swift-format will say "never force-unwrap" because it is a potential crash.
I'm not familiar with Swift's libraries, but what's the point of making this two lines instead of one:
That aside, your Swift version is still about half the size of the Go version with similar levels of error handling.The first one you can configure and it is the default way you'd see this done in real life. You can add headers, change the request type, etc. Likely, if you were making an actual app the request configuration would be much longer than 1 line I used. I was mostly trying to show that the Swift example was hiding a lot of things.
The second one is for downloading directly from a URL and I've never seen it used outside of examples in blog posts on the internet.
Thanks. What could possibly cause an invalid URL in this example though?
Anything taking arbitrary values or user input could cause an invalid URL, especially on earlier OS versions. Newer OS versions will use a newer URL standard which is more flexible. You could wrap your URL encoding logic into a throwing or non-throwing init as well, the example just used the simple version provided by Foundation.
Not OP, There is none as far as I can tell but still force unwrapping that way is something people try to avoid in Swift and often times have the linter warn about. Reason being people will copy the pattern elsewhere and make the assumption something is safe, ultimately be wrong and crash your app. It is admittedly an opinionated choice though.
there has been some talk ofmusing macros ro validate at runtime; personally i'd love it if that got in officially.
[0] https://forums.swift.org/t/url-macro/63772
In my experience, migrating to a new API endpoint while your stubborn users just refused to update your app for some reason.
Or this.
I'm conflicted about the implicit named returns using this pattern in go. It's definitely tidier but I feel like the control flow is harder to follow: "I never defined `user` how can I return it?".
Also those variables are returned even if you don't explicitly return them, which feels a little unintuitive.
I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
yeah, not really an expert but my understanding is that naming the return struct automatically allocates the object and places it into the scope.
I think that for the user example it works because the NewDecoder is operating on the same memory allocation in the struct.
I like the idea of having named returns, since it's common to return many items as a tuple in go functions, and think it's clearer to have those named than leaving it to the user, especially if it's returning many of the same primitive type like ints/floats:
``` type IItem interface { Inventory(id int) (price float64, quantity int, err error) } ```
compared to
``` type IItem interface { Inventory(id int) (float64, int, error) } ```
but feel like the memory allocation and control flow implications make it hard to reason about at a glance for non-trivial functions.
> My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out
It doesn’t set `user`, it returns the User passed to the function.
Computing the second return value modifies that value.
Looks weird indeed, but conceptually, both values get computed before they are returned.
C# :)
Not defending Go's braindead error handling, but you'll note that Swift is doubly coloring the function here (async throws).
What is the problem with that though? I honestly wish they moved the async key word to the front `async func ...` but given the relative newness of all of this I've yet to see anyone get confused by this. The compiler also ensures everything is used correctly anyway.
The problem is that that the Swift function signature is telling you that someone else needs dealing with async suspension and exception handling, clearly not the same semantics.
Isn’t the Go version also forcing the callers to deal with the errors by returning them? The swift version just lets the compiler do it rather than manually returning them.
In a sense it is telling someone else that yes, but more importantly, it is telling the compiler. I am not sure what the alternative is here, is this not common in other languages? I know Java does this at least. In Python it is hidden and you have to know to catch the exception. I'm not sure how that is better as it can be easily forgotten or ignored. There may be another alternative I'm not aware of?
And sometimes you even have to use the return value the way the function annoates as well instead of just pretending it's a string or whatever when it's not! Having the language tell me how to use things is so frustrating.
It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most which is not a crime but a shame really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or misconception of structured concurrency, not "oh I forgot @MainActor".
Swift 6.2 is quite decent at its job already, I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today, it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, and a pretty elegant concurrency paradigm as a big bonus.
I wish it was better known outside of the Apple ecosystem because it fully deserves to be a loved, general purpose mainstream language alongside Python and others.
> The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer.
That's parallelism. Concurrency is mostly about hiding latency from I/O operations like network tasks.
Network operations are "asynchrony". Together with parallelism, they are both kinds of concurrency and Swift concurrency handles both.
Swift's "async let" is parallelism. As are Task groups.
async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.
Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.
Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.
It doesn’t mean there isn’t good support for true parallelism in swift concurrency, it’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks… allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI, but they simply have to “await” the calls to the UI actor in case it’s currently executing.
The model works well for both asynchronous tasks (you await the long IO operation, your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusivity, etc.)
There’s a lot of gripes I have with swift concurrency but my memory is about 2 years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call ObjC code which is using GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc) that you don’t even know the code you’re calling is using. But I digress…
Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
> Instead of callbacks, you write code that looks sequential [but isn’t]
(bracketed statement added by me to make the implied explicit)
This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.
I find that programming can be hard. Computers are very pedantic about how they get things done. And it pays for me to be explicit and intentional about how computation happens. The illusory nature of async/await coroutines that makes it seem as if code continues procedurally demos well for simple cases, but often grows difficult to reason about (for me).
That is the price you pay. If you refuse to pay you are left to express a potentially complex state machine in terms of a flat state-transition table, so you have a huge python cases statement saying on event x do this and on event y do that. That obscures evident state-chart sequentiality, alternatives or loops (the stuff visible in the good old flow-charts) that otherwise could be mapped in their natural language constructs. But yes, is not honest flow. Is a tradeoff.
> looks sequential [but isn’t]
This is just wrong. It looks sequential and it is! What the original author means is that it looks synchronous but isn't. But of course that's not really true either, given the use of the await keyword, but that can be explained by the brief learning curve.
Swift concurrency may use coroutines as an implementation detail, but it doesn't expose most of the complexity of that model, or exposes it in different ways.
Sequential doesn’t mean reentrancy safe, something which has bitten me a few times in Swift concurrency.
How do you actually learn concurrency without fooling yourself?
Every time I think I “get” concurrency, a real bug proves otherwise.
What finally helped wasn’t more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now:
First understand the shape of execution (what overlaps)
Then define ownership (who’s allowed to touch what)
Only then worry about syntax or tools
Still feels fragile.
How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?
I've written, tested and debugged low-level java concurrency code involving atomics, the memory safety model and other nasty things. All the way down to considerations if data races are a problem or just redundant work and similar things. Also implementing coroutines in some complang-stuff in uniersity.
This level is rocket science. If you can't tell why it is right, you fail. Such a failure, which was just a singular missing synchronized block, is the _worst_ 3-6 month debugging horror I've ever faced. Singular data corruptions once a week on a system pushing millions and trillions of player interactions in that time frame.
We first designed with many smart people just being adverse and trying to break it. Then one guy implemented, and 5-6 really talented java devs reviewed entirely destructively, and then all of us started to work with hardware to write testing setups to break the thing. If there was doubt, it was wrong.
We then put that queue, which sequentialized for a singular partition (aka user account) but parallelized across as many partitions as possible live and it just worked. It just worked.
We did similar work on a caching trie later on with the same group of people. But during these two projects I very much realized: This kind of work just isn't feasible with the majority of developers. Out of hundreds of devs, I know 4-5 who can think this way.
Thus, most code should be structured by lower-level frameworks in a way such that it is not concurrent on data. Once you're concurrent on singular pieces of data, the complexity explodes so much. Just don't be concurrent, unless it's trivial concurrency.
I'm interested in knowing more details about this if you happen to have a post written up somewhere!
Heisembugs aren't just technical debt but project killer time bombs so one must better have a perfect thread design in head that works first attempt, else is hell on earth. I can be safe in a bubble world with whole process scope individual threads or from a thread pool (so strong guarantees of joining every created thread) and having share-nothing threads communicating only by prosumer sync-queues that bring a clear information-flow picture. One can have a message pump in one thread, as GUI apps do. That is just a particular case of the prosumer channel idea before. Avoid busy waits, wait on complex event conditions by blocking calls to select() on handler-sets or WaitForMultipleObjects(). Exceptions are per thread, but is good to have a polite mechanism to make desired ones to be potentially process-fatal, and fail earliest. This won't cover all needs but is a field-tested start.
Share xor mutate, that's really all there is
Talk about trivializing complexity...
The idea that making things immutable somehow fixes concurrency issues always made me chuckle.
I remember reading and watching Rich Hickey talking about Clojure's persistent objects and thinking: Okay, that's great- another thread can't change the data that my thread has because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... that's STILL a logic bug in many cases.
That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effect of your DB's transaction level (e.g., repeatable_read vs read_committed, etc).
It's not that shared-xor-mutate magically solves everything, it's that shared-and-mutate magically breaks everything.
Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without both.
Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't because it already handed out shared mutation, and now it's too late to put the genie in the bottle.
> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.
[1] https://joeduffyblog.com/2010/01/03/a-brief-retrospective-on...
+5 insightful. Programming language design is all about having the right nexus of features. Having all the features or the wrong mix of features is actually an anti-feature.
In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have issues with writing asynch/parallel/distributed programs. It's also why Rust has an easier time of it, they didn't just hand out shared mutation. And also why Erlang has the best time of it, they built the language around no shared mutation.
I don't know a ton about Swift, but it does feel like for a lot of apps (especially outside of the gaming and video encoding world), you can almost treat CPU power as infinite and exclusively focus on reducing latency.
Obviously I'm not saying you throw out big O notation or stop benchmarking, but it does seem like eliminating an extra network call from your pipeline is likely to have a much higher ROI than nearly any amount of CPU optimization has; people forget how unbelievably slow the network actually is compared to CPU cache and even system memory. I think the advent of these async-first frameworks and languages like Node.js and Vert.x and Tokio is sort of the industry acknowledgement of this.
We all learn all these fun CPU optimization tricks in school, and it's all for not because anything we do in CPU land is probably going to be undone by a lazy engineer making superfluous calls to postgres.
The answer to that would very much be: "it depends".
Yes, of course, network I/O > local I/O > most things you'll do on your CPU. But regardless, the answer is always to measure performance (through benchmarking or telemetry), find your bottlenecks, then act upon them.
I recall a case in Firefox in which we were bitten by a O(n^2) algorithm running at startup, where n was the number of tabs to restore, another in which several threads were fighting each other to load components of Firefox and ended up hammering the I/O subsystem, but also cases of executable too large, data not fitting in the CPU cache, Windows requiring a disk access to normalize paths, etc.
Sure, I will admit I was a bit hyperbolic here.
Obviously sometimes you need to do a CPU optimization, and I certainly do not think you should ignore big O for anything.
It just feels like 90+% of the time my “optimizing” boils down to figuring out how to batch a SQL or reduce a call to Redis or something.
I worked on a resource-intensive android app for some years and it had a good perfomannce boost after implementing parallelization. But mostly for old shitty devices
On latest phones it's barely noticiable
Some of this is because you’re leaning on the system to be fast. A simple async call does a lot of stuff for you. If it was implemented by people who treated CPU power as if it was infinite, it would slow you down a lot. Since it was carefully built to be fast, you can write your stuff in a straightforward manner. (This isn’t a criticism. I work in lower levels of the stack, and I consider a big part of the job to be making it so people working higher up have to think about this stuff as little as possible. I solve these problems so they can solve the user’s problem.)
It’s also very context dependent. If your code is on the critical path for animations, it’s not too hard to be too slow. Especially since standards are higher. You’re now expected to draw a frame in 8ms on many devices. You could write some straightforward code that decodes JSON to extract some base64 to decompress a zip to retrieve a JPEG and completely blow out your 8ms if you manage to forget about caching that and end up doing it every frame.
Yeah, fair. I never found poll/select/epoll or the Java NIO Selector to be terribly hard to use, but even those are fairly high-level compared to how these things are implemented in the kernel.
Right, and consider how many transformations happen to the data between the network call and the screen. In a modern app it's likely coming in as raw bytes, going through a JSON decoder (possibly with a detour through a native string type), likely getting marshaled into hash tables and arrays before being shoved into more specific model types, then pass that data along to a fully Unicode-aware text renderer that does high quality vector graphics.... There's a lot in there that could be incredibly slow. But since it's not, we can write a few lines of code to make all of this happen and not worry about optimization.
One of the things that really took me a long time to map in my head correctly is that in theory async/await should NOT be the same as spinning up a new thread (across most languages). It's just suspending that closure on the current thread and coming back around to it on the next loop of that existing thread. It makes certain data reads and writes safe in a way that multithreading doesn't. However, as noted in article, it is possible to eject a task onto a different thread and then deal with data access across those boundaries. But that is an enhancement to the model, not the default.
EDIT: Seems like newer versions of Xcode change the Swift language defaults here, but that is just the IDE, not the language (and Swift Package Manager does not appear to do the same!)
I'd argue the default is that work _does_ move across system threads, and single-threaded async/await is the uncommon case.
Whether async "tasks" move across system threads is a property of the executor - by default C#, Swift and Go (though without the explicit syntax) all have work-stealing executors that _do_ move work between threads.
In Rust, you are typically more explicit about that choice, since you construct the executor in your "own" [1] code and can make certain optimizations, such as not making futures Send if you build a single-threaded executor, again depending on the constraints of the executor.
You can see this in action in Swift with this kind of program:
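A minimal sketch of what I mean (assuming Swift 5.x and the default global executor; the names are just for illustration):

import Foundation

@main
struct ThreadHopping {
    static func main() async {
        await withTaskGroup(of: Void.self) { group in
            for i in 0..<4 {
                group.addTask {
                    // With the default work-stealing executor these prints usually
                    // show different system threads per task, and a task can even
                    // resume on a different thread after the suspension point.
                    print("task \(i) started on:", Thread.current)
                    try? await Task.sleep(nanoseconds: 100_000_000)
                    print("task \(i) resumed on:", Thread.current)
                }
            }
        }
    }
}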
Note: to run it as-is you need a version of Swift < 6.0, since Swift 6 prevents Thread.current from being accessed in asynchronous contexts.

[1]: I'm counting the output of a macro here as your "own" code.
If you want to go into a long discussion/deep dive into Swift concurrency vs GCD and threads etc., I'd recommend this thread on swift.org; it was very illuminating for me personally (and really fun to read):
https://forums.swift.org/t/is-concurrent-now-the-standard-to...
Good stuff. Would appreciate a section on bridging pre-async (system) libraries.
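(I assume the core of such a section would be the checked-continuation pattern; a rough sketch with made-up names, so treat the details loosely:)

import Foundation

// A callback-based "pre-async" API standing in for whatever system library you're wrapping.
final class LegacyStore {
    func load(key: String, completion: @escaping (Result<Data, Error>) -> Void) {
        DispatchQueue.global().async { completion(.success(Data())) }
    }
}

// The async/await face, bridged with a checked continuation.
extension LegacyStore {
    func load(key: String) async throws -> Data {
        try await withCheckedThrowingContinuation { continuation in
            load(key: key) { result in
                continuation.resume(with: result)   // must resume exactly once
            }
        }
    }
}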
This looks like it's well-written and approachable. I'll need to spend more time reviewing it, but, at first scan, it looks like it's nicely done.
I loved the idea of Swift adopting actors; however, the implementation seems shoehorned. I wanted something more like Akka or QP/C++...
I feel the reverse. I can see one can claim Swift has everything but the kitchen sink, but its actors, to me, don’t look shoehorned in.
Reading https://docs.swift.org/swift-book/documentation/the-swift-pr..., their first example is:
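(Reproduced roughly from memory, so minor details may differ from the book:)

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}

let logger = TemperatureLogger(label: "Outdoors", measurement: 25)
print(await logger.max)
// Prints "25"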
Here, the ‘actor’ keyword provides a strong hint that this defines an actor, and the code that calls into the actor is also clean, clearly signalling “this is an async call” by using await.

I know Akka is a library, and one cannot expect all library code to look as nice as code that has actual support from the language, but compare that with the simplest Akka example in the docs (https://doc.akka.io/libraries/akka-core/current/typed/actors...). I have no idea how naive readers of that would easily infer it's an actor, and I also would not have much idea how to use it (and I _do_ have experience writing Scala; that is not the blocker).

And it gets worse when you look at Akka HTTP (https://doc.akka.io/libraries/akka-http/current/index.html). I have debugged code using it, but still find it hard to figure out where it has suspension points.
You may claim that’s because Akka http isn’t good code, but I think the point still stands that Akka allows writing code that doesn’t make it obvious what is an actor.
Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.
- Robert Virding
> I wanted something more like Akka
https://github.com/apple/swift-distributed-actors is more like Akka, but with better guarantees from the underlying platform because of the first-class nature of actors.
> the implementation seems shoehorned.
Because it's extremely hard to retrofit actors (or, really, any type of concurrency and/or parallelism) onto a language not explicitly designed to support it from scratch.
This is my feeling as well. It feels to me that, based on the current product, Swift had two different designers: one who felt Swift needed to be a replacement for Objective-C and therefore needed to feel like a spiritual successor to that language, which meant it had to be fundamentally OOP, imperative, and familiar to iOS devs; and another who wanted it to be a modern functional, concurrent language for writing dynamic user interfaces, with an advanced type checker, static analysis, and reactive updates for dynamic variables.
The end result is a language that brings the worst of both worlds while not really bringing the benefits. An example I will give is SwiftUI, which I absolutely hate. You'd think this thing would be polished, because it's built by Apple for use on Apple devices, so they've designed the full stack from editor to language to OS to hardware. Yet when writing SwiftUI code, it's very common for the compiler to keel over and complain that it can't infer the types involved, and components that are ostensibly "reactive" are plagued by stale-data issues.
Seeing that Chris Lattner has moved on from Swift to work on his own language, I'm left to wonder how much of this situation will actually improve. My feeling on Swift at this point is that it's not clear what it's supposed to be. It's the language for the Apple ecosystem, but they want it to be a general-purpose language as well. My feeling is that it will always be explicitly tied to, and limited by, Apple, so it's never really going to take off as a general-purpose language even if they eventually solve the design challenges.
The thing I often ask or mention in discussions about SwiftUI is: if SwiftUI is so good, so easy to use, and made for cross-platform, why did it take Apple themselves so long to port their own Journal app to macOS? It's a trivial application, something you'd have found as an example project in a beginner programming book 10 or 20 years ago.
I get all the points about Swift and SwiftUI in theory; I just don't see the results in practice, especially in Apple's own first-party applications.
Journal has a lot of extra features where it autogenerates suggestions based on what you've done lately.
It's an unpopular opinion, but my belief is that trying to go all-in on one paradigm is the actual mistake. There are several kinds of awkwardness that arise when a UI library is strictly declarative, for example.
On Apple platforms, I've had a lot of success with a hybrid model where the "bones" of the app are imperative AppKit/UIKit and declarative SwiftUI is used where it's a good fit, which gives you the benefits of both wherever they're needed, as well as an escape hatch for otherwise unavoidable contortions. Swift's nature as something of a hodgepodge enables this.
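To make that concrete, a simplified sketch (hypothetical view and controller names): the shell of the screen stays UIKit, and a SwiftUI view is dropped in through UIHostingController.

import SwiftUI
import UIKit

// A SwiftUI screen that's a good fit for the declarative style.
struct SettingsView: View {
    var body: some View {
        Form { Toggle("Notifications", isOn: .constant(true)) }
    }
}

// The imperative UIKit "bones" hosting it via child view controller containment.
final class SettingsContainerViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let hosting = UIHostingController(rootView: SettingsView())
        addChild(hosting)
        hosting.view.frame = view.bounds
        hosting.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(hosting.view)
        hosting.didMove(toParent: self)
    }
}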
But how would you do in Swift what `spawn_blocking` does in Rust's Tokio?
I really don't know why Apple decided to substitute terms like "actor" and "task" with their own custom semantics. Was the goal to make it so complicated that devs would run out of spoons if they try to learn other languages?
And after all this "fucking approachable swift concurrency", at the end of the day, one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust available threads and deadlock.
Also, the overload of keywords and language syntax around this feature is mind blowing... and keywords change meaning depending on compiler flags so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.
I still feel like Swift 5 (5.2?) was the sweet spot. Right now there are just way too many keywords, making C++ look easy again.
Similarly, I find Combine / GCD code far easier to write and read and understand, and the semantics are better than structured concurrency. I have plenty of production Combine code in use by (hundreds of) millions of people and it hasn't needed to be touched in years.
Do people actually believe that there are too many keywords? I’ve never met a dev irl that says this but I see it regurgitated on every post about Swift. Most of the new keywords are for library writers and not iOS devs.
Preventing deadlock wasn't a goal of Swift concurrency. Like all options, there are trade-offs. You can still use GCD.
> Do people actually believe that there are too many keywords?
Yes, they do. Just imagine seeing the following in a single file/function: Sendable, @unchecked Sendable, @Sendable, sending, nonisolated(nonsending), @concurrent, async, @escaping, weak, Task, and MainActor.
For comparison, Rust has 59 keywords in total. Swift has 203 (?!), Elixir has 15, Go has 25, Python has 38.
> You can still use GCD.
Not if you want to use any of the new concurrency features, because the two aren't designed to work together.
Most of your listed examples aren't keywords, though. They're built-in types or attributes.
Task and MainActor are types.
So?
If you're including types, you'd hit many hundreds, if not thousands, in most languages.
It dilutes any point you were trying to make if you don't actually delineate between what's a keyword and what's a type.
So... they aren't keywords.
Swift does indeed have a lot of keywords [1], but neither Task or MainActor are among them.
[1]: https://github.com/swiftlang/swift-syntax/blob/main/CodeGene...
I never said they're keywords. Y'all are way too focused on defending Apple at all costs.
Replying to someone talking about keywords with a list of something that's not keywords, then retreating to "you are an Apple bootlicker" when someone points that out, is not a good look.
Does it do any refcounting optimizations?
Are there any reference counting optimizations like biased counting? One big problem with Python multithreading is that atomic RCs are expensive, so you often don't get as much performance from multiple threads as you expect.
But in Swift it's possible to avoid atomics in most cases, I think?
@dang I think it's important that "fucking" remains in the title
(It certainly makes it easier to find the topic some time after when going back to search for it on HN.)
(We don't have a problem with profanity in general but in this case I think it's distracting so I've de-fuckinged the title above. It's still in the sitename for those who care.)
Not saying whether the title is good or bad, just providing context: there's a tradition of this style of Apple-language explainer / cheat sheet titled in this pattern. The first I'm aware of is https://fuckingblocksyntax.com/ . There seem to be at least 15 of them listed on https://fuckingsyntaxsite.com/ , and probably more with different swears. It's a genre of titles in the same vein as "considered harmful" or "falsehoods programmers believe about".