Rust for Backend Engineers

Async and Tokio

Ravinder · 5 min read

Rust · Backend · Async · Tokio · Concurrency

Rust's async model is powerful and unfamiliar in equal measure. Engineers coming from Node.js expect an event loop they can reason about globally. Engineers from Go expect goroutines to be cheap threads. Rust's futures are neither. Understanding the actual model — not the analogy — saves hours of debugging later.

What async fn Actually Produces

An async fn in Rust does not run when you call it. It returns a Future — a state machine that represents a computation that will run when polled. Nothing happens until something polls it.

async fn fetch_user(id: u64) -> String {
    // this returns a Future<Output = String>
    // calling fetch_user(42) does NOT execute this body
    format!("user-{}", id)
}
 
#[tokio::main]
async fn main() {
    // .await polls the future to completion
    let user = fetch_user(42).await;
    println!("{}", user);
}

The #[tokio::main] macro wraps your async main in a call to tokio::runtime::Runtime::block_on, which starts the runtime and drives the top-level future to completion.

Tokio's Architecture

Tokio is the de facto async runtime for Rust backend services. It provides:

  • A multi-threaded executor that schedules and polls futures
  • An I/O driver built on epoll/kqueue/IOCP
  • A timer subsystem for tokio::time
  • Async-aware channels, mutexes, and semaphores
graph TD
    A[Your async fn] -->|returns| B[Future state machine]
    B -->|spawned onto| C[Tokio Executor]
    C -->|polls| B
    B -->|registers interest| D[I/O Driver - epoll/kqueue]
    D -->|wakes task| C
    C -->|polls again| B
    B -->|returns Poll::Ready| E[Value available]

The key insight: the executor calls poll() on your future. If the future is waiting for I/O, it registers a waker with the I/O driver and returns Poll::Pending. When the I/O completes, the driver wakes the task, and the executor polls the future again. Your code never blocks a thread — it just suspends and resumes.

Runtime Configuration

Tokio offers two runtime modes:

// Multi-threaded: uses all available cores (default for servers)
#[tokio::main]
async fn main() { /* ... */ }
 
// Equivalent explicit form:
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .unwrap()
        .block_on(async_main());
}
 
// Single-threaded: useful for tests, CLIs, or specific isolation needs
#[tokio::main(flavor = "current_thread")]
async fn main() { /* ... */ }

For backend services, the multi-threaded runtime is almost always correct. Use current_thread in tests or when you need strict ordering guarantees.

Spawning Tasks

tokio::spawn puts a future on the executor. It returns a JoinHandle you can await to get the result.

use tokio::task;
 
async fn process_batch(ids: Vec<u64>) -> Vec<String> {
    let handles: Vec<_> = ids
        .into_iter()
        .map(|id| task::spawn(async move { fetch_user(id).await }))
        .collect();
 
    let mut results = Vec::new();
    for handle in handles {
        results.push(handle.await.unwrap());
    }
    results
}
 
async fn fetch_user(id: u64) -> String {
    // simulate async work
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    format!("user-{}", id)
}

Important: spawned tasks must be 'static — they cannot borrow local variables. Pass data with move closures or wrap in Arc.

The Traps

Blocking in Async Context

The biggest mistake newcomers make is calling blocking code — file I/O, CPU-heavy computation, or synchronous library calls — inside an async task. This stalls the Tokio worker thread and can starve other tasks.

// BAD: blocks the executor thread
async fn bad_handler() -> String {
    std::thread::sleep(std::time::Duration::from_secs(1)); // blocks!
    "done".to_string()
}
 
// GOOD: use tokio's async sleep
async fn good_handler() -> String {
    tokio::time::sleep(std::time::Duration::from_secs(1)).await;
    "done".to_string()
}
 
// GOOD: run CPU-heavy work on a blocking thread pool
async fn cpu_handler() -> u64 {
    tokio::task::spawn_blocking(|| {
        // this runs on a dedicated blocking thread pool
        expensive_computation()
    })
    .await
    .unwrap()
}
 
fn expensive_computation() -> u64 {
    (0..10_000_000u64).sum()
}

Forgetting to Await

Futures do nothing without .await. Forgetting it would be a silent bug, but futures are annotated #[must_use], so Rust flags the mistake with an unused_must_use warning.

async fn save(data: &str) {
    tokio::fs::write("/tmp/out.txt", data).await.unwrap();
}
 
async fn caller() {
    save("hello"); // WARNING: future not awaited
    save("hello").await; // correct
}

Mutex in Async Code

std::sync::Mutex works in async code as long as the guard never lives across an .await point. If a guard must be held across an .await, use tokio::sync::Mutex instead.

use std::sync::{Arc, Mutex};
use tokio::sync::Mutex as AsyncMutex;
 
// Fine: lock released before .await
async fn std_mutex_ok(counter: Arc<Mutex<u64>>) {
    {
        let mut c = counter.lock().unwrap();
        *c += 1;
    } // guard dropped here — no issue
    tokio::time::sleep(std::time::Duration::from_millis(1)).await;
}
 
// Required: lock held across .await
async fn async_mutex_required(counter: Arc<AsyncMutex<u64>>) {
    let mut c = counter.lock().await;
    *c += 1;
    tokio::time::sleep(std::time::Duration::from_millis(1)).await;
    // guard still held — fine with tokio::sync::Mutex
}

Channels for Task Communication

Tokio's channels are the idiomatic way to coordinate between tasks. mpsc (multi-producer, single-consumer) handles the majority of cases.

use tokio::sync::mpsc;
 
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(32);
 
    // Producer task
    let tx_clone = tx.clone();
    tokio::spawn(async move {
        tx_clone.send("event-1".to_string()).await.unwrap();
        tx_clone.send("event-2".to_string()).await.unwrap();
    });
 
    drop(tx); // close sender side
 
    // Consumer
    while let Some(msg) = rx.recv().await {
        println!("received: {}", msg);
    }
}

For broadcast patterns, use tokio::sync::broadcast; for a single response, tokio::sync::oneshot.

Timeouts and Cancellation

use tokio::time::{timeout, Duration};
 
async fn fetch_with_timeout(url: &str) -> Result<String, String> {
    match timeout(Duration::from_secs(5), do_fetch(url)).await {
        Ok(result) => Ok(result),
        Err(_) => Err(format!("request to {} timed out", url)),
    }
}
 
async fn do_fetch(_url: &str) -> String {
    tokio::time::sleep(Duration::from_secs(10)).await;
    "response".to_string()
}

When timeout fires, the inner future is dropped. Rust's drop guarantees mean cleanup runs deterministically — connections are closed, file handles released, RAII guards fire.

Key Takeaways

  • Calling an async fn returns a Future and does nothing — the body executes only when something polls it via .await or spawn.
  • Tokio is a work-stealing executor backed by epoll/kqueue; your tasks cooperatively yield at .await points, freeing threads to run other work.
  • Never call blocking code inside an async task without spawn_blocking — it stalls the entire executor thread.
  • Use tokio::sync::Mutex when a guard crosses an .await boundary; std::sync::Mutex is fine otherwise.
  • mpsc, broadcast, and oneshot channels are the primary coordination primitives — prefer them over shared mutable state.
  • Cancellation is structural in Rust: dropping a future runs its cleanup code, making timeout handling safe and predictable.