🤯 How I got confused — and learned to like ownership

When I started learning Rust, I was already familiar with both ends of the memory management spectrum.

On one side, I had the manual allocation and deallocation model from C, where every malloc demands a matching free.

On the other, I had languages with automatic garbage collection, like Java or TypeScript, where the runtime takes care of memory without asking for permission.

Rust, however, didn’t fit neatly into either category. Its ownership and borrowing system seemed like a third model entirely — something that felt automatic but was still completely deterministic.

That contrast sparked a few questions:

  • Was Rust’s drop() mechanism just a form of ARC?
  • Was ARC just a simplified GC?
  • And if Rust’s model was so efficient, why don’t all modern languages adopt it?

This curiosity led me down a rabbit hole of memory management research, including the paper A Unified Theory of Garbage Collection by David F. Bacon, Perry Cheng, and V.T. Rajan — which argues that ARC and GC are both valid forms of garbage collection, unified under the same theory.

This post is a reflection of that journey: an attempt to make sense of the three main approaches to automatic memory management.


🧮 Reference Counting (ARC)

ARC (Automatic Reference Counting) is the model used by Objective-C and Swift; the same underlying idea, reference counting, also appears in Rust (Rc, Arc) and C++ (shared_ptr). Each object has an associated reference counter. This counter is incremented when a new reference is created and decremented when a reference goes out of scope.

Behind the scenes, this is done automatically through compiler-inserted calls to retain and release. These operations increase or decrease the reference counter without requiring the developer to manage them manually.

When the counter reaches zero at runtime, the object is deallocated immediately.
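
Rust's Rc makes that same counting visible: Rc::strong_count exposes the counter that ARC keeps implicitly. A minimal sketch of the retain/release lifecycle, written in Rust since that is the language this post returns to later:

use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("hello"));            // count = 1
    println!("after a: {}", Rc::strong_count(&a));      // 1
    {
        let b = Rc::clone(&a);                          // the "retain": count = 2
        println!("after b: {}", Rc::strong_count(&b));  // 2
    }                                                   // b dropped, the "release": count = 1
    println!("after scope: {}", Rc::strong_count(&a));  // 1
}                                                       // a dropped, count = 0, String deallocated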

Characteristics

  • Deterministic: memory is freed exactly when the last reference disappears.
  • No global pauses: there’s no heap scanning.
  • Predictable latency: great for UI or real-time apps.

Limitation — cyclic references

ARC doesn’t have a global view of the heap. If two objects reference each other, their counters never drop to zero — they “keep each other alive”:

class Node {
    var next: Node?
}

let a = Node()
let b = Node()
a.next = b
b.next = a // reference cycle

To avoid such circular retention, ARC provides weak and unowned references, which do not increment the counter. These break cycles wherever they are used, but ARC never detects cycles for you: developers must still design object relationships carefully to avoid leaks from strong cycles, or crashes from unowned references that outlive their target.
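
The same trap, and the same fix, can be spelled out explicitly in Rust, where Rc plays the role of a strong reference and Weak the role of Swift's weak. A sketch (the Node shape here is just for illustration):

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    next: RefCell<Option<Rc<Node>>>,   // strong link, like a normal Swift property
    prev: RefCell<Option<Weak<Node>>>, // weak back-link, like Swift's `weak var`
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None), prev: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None), prev: RefCell::new(None) });

    *a.next.borrow_mut() = Some(Rc::clone(&b));     // a -> b, bumps b's strong count
    *b.prev.borrow_mut() = Some(Rc::downgrade(&a)); // b -> a, does NOT bump a's strong count

    // Had both links been strong, neither count could ever reach zero
    // and both nodes would leak, exactly like the Swift cycle above.
    println!("a: strong = {}, weak = {}", Rc::strong_count(&a), Rc::weak_count(&a)); // 1, 1
}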


♻️ Tracing Garbage Collector (Java, C#, Go)

The tracing GC used by Java, C#, and Go works differently. It doesn’t count references. Instead, it periodically traces all reachable objects starting from a set of roots (stack variables, globals, registers). Anything unreachable is considered garbage and freed in batches.
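
The "trace all reachable objects" step is essentially a graph traversal. Here is a toy sketch of the mark phase, again in Rust, with plain integer ids standing in for heap objects; real collectors work on object headers and are far more sophisticated:

use std::collections::{HashMap, HashSet, VecDeque};

// Toy "heap": every object is an id that holds references (edges) to other ids.
fn mark_reachable(heap: &HashMap<u32, Vec<u32>>, roots: &[u32]) -> HashSet<u32> {
    let mut marked = HashSet::new();
    let mut work: VecDeque<u32> = roots.iter().copied().collect();

    while let Some(id) = work.pop_front() {
        if marked.insert(id) {
            // follow every outgoing reference of a newly marked object
            for &child in heap.get(&id).into_iter().flatten() {
                work.push_back(child);
            }
        }
    }
    marked
}

fn main() {
    // 1 -> 2 -> 3 is reachable from the root; 4 <-> 5 is an unreachable cycle
    let heap = HashMap::from([
        (1, vec![2]),
        (2, vec![3]),
        (3, vec![]),
        (4, vec![5]),
        (5, vec![4]),
    ]);
    let live = mark_reachable(&heap, &[1]);
    println!("live: {:?}", live); // 4 and 5 are missing: they would be swept, cycle or not
}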

Characteristics

  • Fully automatic: no implicit retain or release calls — the collector manages object lifetimes globally.
  • Cycle-safe: it detects unreachable objects even in circular graphs.
  • Good throughput: ideal for environments with many short-lived allocations.

Downsides

  • Non-deterministic: you never know when cleanup happens.
  • Pause times: the collector occasionally suspends threads to scan memory.
  • Memory overhead: the heap tends to grow between collection cycles.

Modern collectors like G1GC, ZGC, and Go’s concurrent GC mitigate many of these issues by running incrementally and in parallel.


⚗️ Hybrid Approaches (Python, JavaScript, etc.)

Some languages combine both techniques:

  • Python uses reference counting for most objects plus a cycle detector that occasionally runs to clean up unreachable reference loops.
  • JavaScript (V8, SpiderMonkey) and Ruby rely on generational, incremental tracing collectors tuned for responsiveness.
  • Even Swift supplements ARC with compiler optimizations that remove redundant retain/release pairs, although it never adds a cycle collector.

These hybrid models balance determinism (ARC-style) and simplicity (GC-style), accepting small trade-offs in both directions.


🦀 Ownership and Borrow Checker (Rust)

Rust takes a completely different path. It enforces memory safety statically at compile time, with no counters or background threads.

Each value has exactly one owner, and the compiler inserts drop() calls when that owner goes out of scope:

fn main() {
    let s = String::from("hello");
    println!("{}", s);
} // drop(s) automatically inserted here

This is achieved through ownership and borrowing rules checked by the compiler — not by runtime mechanisms.
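
You can watch those inserted drops happen by implementing Drop yourself. A small sketch:

struct Resource(&'static str);

impl Drop for Resource {
    fn drop(&mut self) {
        // runs at a point the compiler determined statically
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _a = Resource("a");
    {
        let _b = Resource("b");
        println!("end of inner scope");
    } // prints "dropping b" here, deterministically
    println!("end of main");
} // prints "dropping a" here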

Characteristics

  • Zero runtime overhead: no counters, no GC thread.
  • Deterministic: memory is released when the scope ends.
  • Memory-safe: prevents dangling pointers and use-after-free; leaks are still possible (for example through Rc cycles) but rare in idiomatic code.
  • Predictable performance: ideal for games, embedded systems, and servers.

When shared ownership is required

Rust’s default model allows only one owner per value. When multiple parts of the program need to hold the same data alive at once, Rust provides explicit, opt-in reference-counted types:

use std::rc::Rc;
let a = Rc::new(String::from("hello"));
let b = Rc::clone(&a); // increments counter

This kind of shared ownership is always explicit — you pay the cost of reference counting only when you choose to.
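
When the shared data must also cross threads, the atomically counted variant Arc is the opt-in instead. A short sketch:

use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..3)
        .map(|i| {
            let data = Arc::clone(&data); // atomic "retain": the counter goes up
            thread::spawn(move || println!("thread {} sees {:?}", i, data))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
} // last Arc dropped here; the Vec is freed once the count reaches zero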


⚖️ Comparing the three models

| Aspect | ARC (Reference Counting) | Tracing GC | Rust (Ownership) |
| --- | --- | --- | --- |
| Memory release | Immediate (count = 0) | Periodic, in batches | End of scope |
| Deterministic | ✅ Yes | ❌ No | ✅ Yes |
| Runtime cost | Constant, predictable | Variable, amortized | Zero |
| Global pauses | None | Possible | None |
| Cyclic references | Leak if not weak | Automatically collected | Impossible by design |
| Ideal for | Mobile, real-time apps | Servers, dynamic runtimes | Systems, low-level software |
| Examples | Swift, Obj-C, Rust (Rc/Arc) | Java, C#, Go | Rust (default model) |

Why not everyone uses Rust’s model

At first glance, Rust’s model seems to have only advantages — no GC, no pauses, no leaks. But it comes with real trade-offs, mostly in developer experience and ecosystem design:

  • Steeper learning curve: the ownership and borrowing rules are strict; beginners hit the infamous “borrow checker errors” early.
  • Reduced flexibility: some high-level patterns (like graph structures or shared caches) require explicit workarounds using Rc, Arc, or interior mutability (see the sketch after this list).
  • Harder dynamic behavior: languages with dynamic typing, reflection, or runtime code loading (like Python or JavaScript) can’t apply Rust’s static guarantees.
  • Compilation cost: the compiler does heavy analysis to prove memory safety, which increases compile times and complexity.
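
As an example of that reduced flexibility, a cache shared by several parts of a program needs both Rc (shared ownership) and RefCell (interior mutability, with the borrow rules checked at runtime instead of compile time). The cache type below is only an illustration:

use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// A hypothetical shared cache: several owners, mutated through RefCell.
type Cache = Rc<RefCell<HashMap<String, String>>>;

fn main() {
    let cache: Cache = Rc::new(RefCell::new(HashMap::new()));
    let reader = Rc::clone(&cache); // second owner, counter goes to 2
    let writer = Rc::clone(&cache); // third owner

    writer.borrow_mut().insert("answer".into(), "42".into()); // borrow checked at runtime
    if let Some(v) = reader.borrow().get("answer") {
        println!("cached: {}", v);
    }
}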

In other words: Rust’s model trades convenience for control. It’s perfect for system-level code, but less suited for highly dynamic environments.


💡 Conclusion: a unified view

As Bacon et al. note, ARC and tracing GC are two forms of garbage collection within the same theoretical framework:

“Reference counting is a form of garbage collection which maintains liveness information incrementally.”

In short, their paper describes ARC as local and incremental, and tracing GC as global and periodic.

To that framework, we can add Rust’s ownership model as a third, static approach — one that moves memory safety entirely to compile time.

All three models aim for the same goal: automatically reclaiming unused memory safely, while balancing control, simplicity, and runtime cost in different ways. If we also include the manual model, the comparison looks like this:

Manual (C) → Ownership (Rust) → Reference Counting (Swift) → Tracing GC (Java)
  • Manual (C) gives full control at the cost of safety.
  • ARC favors predictability.
  • Tracing GC favors simplicity and developer productivity.
  • Rust favors safety and zero runtime cost — but at the price of stricter rules and heavier compilation.

Each language chooses the model that best matches its philosophy. The key is understanding the trade-offs behind each approach — and recognizing that Rust, Swift, and Java each represent a distinct, elegant answer to the same fundamental question:

How can we let go of memory safely?