Learn the Axum web framework in 5 minutes

Build production-ready Rust APIs with Axum in 5 minutes. Learn routing, JWT auth, state management, and observability with real code examples.


Axum is a web framework that makes it easy to build robust, async web services in Rust. But here's the secret sauce: unlike other frameworks that impose their own abstractions, Axum feels refreshingly honest. Your handlers are just async functions. Your middleware is just Tower services. When something goes wrong, the compiler tells you exactly what to fix, not some cryptic macro error five layers deep.

In the next five minutes, you'll learn how to build a production-ready API with proper routing, authentication, state management, and observability. We'll skip the toy examples and dive straight into patterns you'll actually use - the same ones powering services handling millions of requests in production right now.

Let's dive into the key concepts using real production code.

Setting Up Routes and Nested Routes

Axum's routing is where the framework starts to shine. Like organizing ingredients in a kitchen, you want related functionality grouped logically. The routing API is intuitive and composable, making it easy to build robust API structures from simple pieces.

Basic Routes and Nesting

Routes are declared using HTTP method functions, and .nest() creates hierarchical structures:

use axum::{Router, routing::get, Json};
use serde_json::{json, Value};

fn api_router(state: AppState) -> Router {
    Router::new()
        .nest(
            "/v1",
            Router::new()
                .route("/health", get(health_check))
                .nest("/users", users::user_routes())
                .nest("/webhooks", webhooks::webhook_routes()),
        )
        .fallback(handler_404)
        .with_state(state)
}

async fn health_check() -> Json<Value> {
    Json(json!({ "status": "OK" }))
}

The nest() method creates route prefixes - /v1/users/me maps to the /me route inside user_routes(). This modular approach keeps related functionality together while staying maintainable.
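The handler_404 fallback referenced above is just another async function. A minimal sketch - the response shape here is one reasonable choice, not a fixed convention:

```rust
use axum::{http::StatusCode, Json};
use serde_json::{json, Value};

// Fallback handler for any request that matches no route.
// Returning a tuple of status code and JSON body works because
// both implement IntoResponse.
async fn handler_404() -> (StatusCode, Json<Value>) {
    (
        StatusCode::NOT_FOUND,
        Json(json!({ "error": "Not Found" })),
    )
}
```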

Route Modules and Parameters

Structure routes in modules to keep handlers close to their definitions:

// users/mod.rs
use axum::{
    extract::{Path, State},
    http::StatusCode,
    routing::{get, post},
    Json, Router,
};
use serde_json::Value;

pub fn user_routes() -> Router<AppState> {
    Router::new()
        .route("/", get(list_users))
        .route("/me", get(get_current_user))     
        .route("/:id", get(get_user))            
        .route("/:id", post(update_user))        
}

async fn get_user(
    State(state): State<AppState>,
    Path(user_id): Path<String>,
) -> Result<Json<PublicUser>, (StatusCode, Json<Value>)> {
    let user = fetch_user(&state.database, &user_id).await?;
    Ok(Json(user.to_public()))
}

Multiple routes can share the same path with different HTTP methods. The :id syntax creates path parameters that are extracted in handlers.

Middleware on Route Groups

Apply middleware to entire route groups for authentication, CORS, and other cross-cutting concerns:

use axum::{http::Method, middleware, routing::{get, post}, Router};
use tower_http::cors::{Any, CorsLayer};

pub fn protected_routes() -> Router<AppState> {
    Router::new()
        .route("/profile", get(get_profile))
        .route("/settings", get(get_settings))
        // Authentication middleware applied to all routes above
        .layer(middleware::from_fn(auth_middleware))
}

pub fn public_routes() -> Router<AppState> {
    Router::new()
        .route("/login", post(login))
        .route("/register", post(register))
        .layer(
            CorsLayer::new()
                .allow_methods(vec![Method::GET, Method::POST])
                .allow_origin(Any),
        )
}

This pattern lets you apply different middleware stacks to different parts of your API - authentication for protected routes, CORS for public endpoints, rate limiting for webhooks, etc.
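The auth_middleware referenced above is an ordinary async function, usable with middleware::from_fn (from_fn_with_state additionally hands it your state). A minimal sketch - the header check here is illustrative, not a full token validation:

```rust
use axum::{
    extract::Request,
    http::{header::AUTHORIZATION, StatusCode},
    middleware::Next,
    response::Response,
};

// Rejects requests without an Authorization header before they
// reach any handler in the group this middleware is layered onto.
async fn auth_middleware(req: Request, next: Next) -> Result<Response, StatusCode> {
    if req.headers().get(AUTHORIZATION).is_none() {
        return Err(StatusCode::UNAUTHORIZED);
    }
    // Real code would validate the token here, not just check presence.
    Ok(next.run(req).await)
}
```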

CORS and Headers

Cross-Origin Resource Sharing (CORS) is essential for web APIs. If you're building an API that'll be consumed by a browser-based frontend, CORS configuration is non-negotiable - browsers will flat-out reject your responses without proper headers. Think of CORS as the bouncer at your API's door, deciding which origins get in and what they're allowed to do once inside.

Axum integrates seamlessly with tower-http to provide CORS as a middleware layer. The beauty of this approach is that you can apply different CORS policies to different parts of your application. Your public endpoints might allow any origin, while your admin routes stay locked down. You configure what methods are allowed, which headers can be sent, and whether credentials like cookies can tag along for the ride.

use axum::{http::{header::{AUTHORIZATION, CONTENT_TYPE}, Method}, routing::get, Router};
use tower_http::cors::{Any, CorsLayer};

pub fn user_routes() -> Router<AppState> {
    Router::new()
        .route("/me", get(me_handler))
        .layer(
            CorsLayer::new()
                .allow_methods(vec![Method::GET, Method::OPTIONS])
                .allow_origin(Any)
                .allow_headers(vec![AUTHORIZATION, CONTENT_TYPE]),
        )
}

Layers in Axum wrap your routes like a protective coating, applying transformations or adding functionality. They're composable too - stack them up and each request passes through in order. Beyond CORS, you'll often want to limit request body sizes to prevent abuse. Nobody needs to upload a 500MB JSON payload to your webhook endpoint:

use tower_http::limit::RequestBodyLimitLayer;

pub fn webhook_routes() -> Router<AppState> {
    Router::new()
        .route("/kinde", post(handle_kinde_webhook))
        .layer(RequestBodyLimitLayer::new(64 * 1024)) // 64 KiB limit
}

The middleware approach means you're not sprinkling CORS logic throughout your handlers. Set it once at the router level, and every endpoint underneath inherits the policy. When requirements change (and they always do), you're updating one place, not hunting through dozens of handler functions.
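One caveat when cookies are involved: browsers reject credentialed requests against a wildcard origin, so Any won't do. A hedged sketch pinning a single explicit origin - the URL is a placeholder:

```rust
use axum::http::{HeaderValue, Method};
use tower_http::cors::CorsLayer;

// A locked-down policy for credentialed requests: one explicit
// origin, credentials allowed. "https://app.example.com" is a
// placeholder for your real frontend origin.
fn strict_cors() -> CorsLayer {
    CorsLayer::new()
        .allow_origin("https://app.example.com".parse::<HeaderValue>().unwrap())
        .allow_methods(vec![Method::GET, Method::POST])
        .allow_credentials(true)
}
```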

App State and Custom State

Axum's state management lets you share resources across handlers without resorting to globals or complicated dependency injection. Think of state as the mise en place of your application - all your essential ingredients prepped and ready to use. The most common use case? Your database connection pool.

In web applications, you don't want each request spinning up its own database connection. That's a recipe for exhausting your connection limits and tanking performance. Instead, you create a connection pool at startup and share it across all handlers through Axum's state mechanism. Every handler gets access to the same pool, connections are reused efficiently, and your database doesn't get overwhelmed.

#[derive(Clone)]
pub struct AppState {
    pub database: Pool<Postgres>,
}

async fn create_app_state() -> Result<AppState, Box<dyn Error>> {
    let database_url = std::env::var("DATABASE_URL")?;
    let database = PgPoolOptions::new()
        .max_connections(20)
        .connect(&database_url)
        .await?;

    Ok(AppState { database })
}

fn main() -> Result<(), Box<dyn Error>> {
  // other code...

  let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()?;

  rt.block_on(async move {
    let app_state = create_app_state().await?;

    let app = api_router(app_state.clone());

    // Bind to address based on environment
    let addr = match std::env::var("ENVIRONMENT").as_deref() {
        Ok("local") => "0.0.0.0:8787",
        _ => "0.0.0.0:8080",
    };

    let listener = tokio::net::TcpListener::bind(addr).await?;

    // Start serving with graceful shutdown
    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal())
        .await?;

    Ok::<(), Box<dyn Error>>(())
  })?;

  Ok(())
}

The magic happens with the State extractor. Any handler can request the app state, and Axum injects it automatically. No need to thread it through function calls or manage complex lifetimes - the framework handles the plumbing:

async fn get_user(
    State(state): State<AppState>,
    Path(user_id): Path<String>,
) -> Result<Json<User>, StatusCode> {
    let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
        .bind(user_id)
        .fetch_one(&state.database)
        .await
        .map_err(|_| StatusCode::NOT_FOUND)?;
    
    Ok(Json(user))
}

Performance Note: The Clone bound on AppState is crucial - Axum clones your state for each request. This sounds expensive, but it's not. Types like Pool<Postgres> are internally reference-counted (Arc), so cloning is essentially free - just an atomic increment. You get thread-safe sharing with zero overhead.

As your application grows, you can add more shared resources to your state: cache connections, configuration, API clients, whatever your handlers need to do their job.
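For resources that aren't already reference-counted (a plain config struct, say), wrap them in an Arc yourself so the per-request clone stays cheap. A std-only sketch with hypothetical fields - the database pool is omitted here to keep it self-contained:

```rust
use std::sync::Arc;

// Hypothetical app configuration - not internally reference-counted,
// so we wrap it in Arc before putting it in state.
#[derive(Debug)]
pub struct Config {
    pub api_base_url: String,
}

#[derive(Clone)]
pub struct AppState {
    pub config: Arc<Config>,
}

// Cloning AppState clones the Arc, not the Config:
// both clones point at the same allocation.
pub fn same_config(a: &AppState, b: &AppState) -> bool {
    Arc::ptr_eq(&a.config, &b.config)
}
```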

JWT Authentication Middleware

Authentication in Axum doesn't require a special auth library or framework plugin. Instead, it leverages Rust's type system through custom extractors - if a handler needs an authenticated user, it just asks for one in its parameters. The compiler enforces authentication at compile time, not runtime. Forget to check auth? Your code won't even build.

The Extractor Pattern

The extractor pattern is Axum's killer feature for authentication. You define a type representing an authenticated user, implement the FromRequestParts trait to tell Axum how to extract and validate it, and then use it like any other parameter. When a request comes in, Axum runs your extraction logic before the handler even sees it. Invalid token? The request gets rejected before touching your business logic. It's like having a security checkpoint that knows exactly what credentials to check based on which door you're trying to enter.

use axum::{
    extract::FromRequestParts,
    http::{header::AUTHORIZATION, request::Parts},
};

#[derive(Debug, Clone)]
pub struct AuthenticatedUser {
    pub user_id: String,
    pub email: String,
    pub roles: Vec<String>,
}

impl FromRequestParts<AppState> for AuthenticatedUser {
    type Rejection = (StatusCode, Json<Value>);

    async fn from_request_parts(
        parts: &mut Parts,
        state: &AppState,
    ) -> Result<Self, Self::Rejection> {
        // Extract the Authorization header
        let auth_header = parts
            .headers
            .get(AUTHORIZATION)
            .ok_or((StatusCode::UNAUTHORIZED, Json(json!({"error": "Missing token"}))))?;

        // Extract token from "Bearer <token>"
        let token = auth_header
            .to_str()
            .ok()
            .and_then(|s| s.strip_prefix("Bearer "))
            .ok_or((StatusCode::UNAUTHORIZED, Json(json!({"error": "Invalid token format"}))))?;

        // Validate JWT (simplified - in production you'd verify signatures, expiry, etc.)
        let claims = validate_jwt(token, &state.jwt_secret)
            .await
            .map_err(|_| (StatusCode::UNAUTHORIZED, Json(json!({"error": "Invalid token"}))))?;

        Ok(AuthenticatedUser {
            user_id: claims.sub,
            email: claims.email,
            roles: claims.roles,
        })
    }
}

Performance Note: Validation happens once per request during extraction, not in every handler that needs auth. The framework's design ensures you're not redundantly checking tokens.
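The validate_jwt function above is left undefined; a hedged sketch using the jsonwebtoken crate might look like the following. The Claims fields mirror what the extractor reads, the algorithm choice is an assumption, and the function is async only to match the call site above:

```rust
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

// Claim fields mirror what the extractor needs; adjust to your token shape.
#[derive(Debug, Deserialize)]
pub struct Claims {
    pub sub: String,
    pub email: String,
    #[serde(default)]
    pub roles: Vec<String>,
    pub exp: usize,
}

// Signature verification and expiry checks are handled by jsonwebtoken.
pub async fn validate_jwt(
    token: &str,
    secret: &str,
) -> Result<Claims, jsonwebtoken::errors::Error> {
    let data = decode::<Claims>(
        token,
        &DecodingKey::from_secret(secret.as_bytes()),
        &Validation::new(Algorithm::HS256),
    )?;
    Ok(data.claims)
}
```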

Protected vs Public Handlers

Now any handler that needs authentication just includes the AuthenticatedUser parameter. Axum handles the validation automatically - if the extractor succeeds, your handler runs with a verified user. If it fails, the client gets a 401 before your handler is even called:

// This handler REQUIRES authentication
async fn get_profile(
    auth: AuthenticatedUser,
    State(state): State<AppState>,
) -> Result<Json<Profile>, StatusCode> {
    // We KNOW we have a valid user here - the type system guarantees it
    let profile = fetch_profile(&state.database, &auth.user_id).await?;
    Ok(Json(profile))
}

// This handler is public - no auth parameter needed
async fn health_check() -> &'static str {
    "OK"
}

Optional Authentication

For routes where authentication is optional (like endpoints that show more data to logged-in users), you can create a wrapper type. The extraction never fails, but you get an Option to check:

#[derive(Debug, Clone)]
pub struct OptionalAuth(pub Option<AuthenticatedUser>);

impl FromRequestParts<AppState> for OptionalAuth {
    type Rejection = std::convert::Infallible; // Never fails

    async fn from_request_parts(
        parts: &mut Parts,
        state: &AppState,
    ) -> Result<Self, Self::Rejection> {
        match AuthenticatedUser::from_request_parts(parts, state).await {
            Ok(user) => Ok(OptionalAuth(Some(user))),
            Err(_) => Ok(OptionalAuth(None)),
        }
    }
}

// Handler with optional auth
async fn list_posts(
    OptionalAuth(user): OptionalAuth,
    State(state): State<AppState>,
) -> Json<Vec<Post>> {
    let posts = match user {
        Some(u) => fetch_all_posts(&state.database).await,  // Show all posts
        None => fetch_public_posts(&state.database).await,   // Show only public
    };
    Json(posts)
}

The beauty of this approach is that authentication becomes a type-level guarantee. You can't accidentally expose authenticated endpoints - if the handler takes an AuthenticatedUser, it's protected. No middleware configuration to forget, no decorator attributes to miss. Just Rust's type system keeping your API secure at compile time.

Tracing, Logging, and Instrumentation

Observability is crucial for production services. When your Rust service is handling thousands of requests per second at 3 AM and something goes wrong, logs are your only window into what happened. But traditional line-by-line logging quickly becomes noise. Enter structured tracing - instead of scattered print statements, you get a complete story of each request's journey through your system.

Setting Up Tracing

Axum integrates beautifully with Rust's tracing ecosystem, which is honestly one of the best observability stories in any language. The tracing crate provides structured, contextual logging that actually makes sense when you're debugging production issues. You're not grep'ing through walls of text trying to piece together what happened - you're following a request's complete lifecycle with all its context preserved.

The setup is refreshingly straightforward. Configure your tracing subscriber once at startup, and every log throughout your application automatically respects your configuration. Use environment variables to control verbosity without recompiling - RUST_LOG=debug for development, RUST_LOG=warn for production. It's like having different heat settings on your stove, adjusting the detail level to match what you're trying to diagnose:

use sentry::integrations::tracing::EventFilter;
use tracing::{debug, error, info, instrument};
use tracing_subscriber::{filter::EnvFilter, layer::SubscriberExt, util::SubscriberInitExt};

// Initialize tracing subscriber with environment-based filtering
let subscriber = tracing_subscriber::Registry::default()
    .with(EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info")))
    .with(tracing_subscriber::fmt::layer());

// Add error tracking for production - Sentry, Datadog, New Relic, whatever you prefer
let environment = std::env::var("ENVIRONMENT").unwrap_or_default();
if environment != "local" {
    // This example uses Sentry, but any tracing-compatible platform works
    let sentry_layer = sentry::integrations::tracing::layer()
        .event_filter(|md| match *md.level() {
            tracing::Level::ERROR => EventFilter::Event,
            tracing::Level::TRACE | tracing::Level::DEBUG => EventFilter::Ignore,
            _ => EventFilter::Log,
        });
    subscriber.with(sentry_layer).init();
} else {
    subscriber.init();
}

Production Integrations

The beauty of the layer system is its flexibility - Sentry, Datadog, Honeycomb, New Relic, or your own custom observability platform can all plug in as layers. You're not locked into any vendor. Switch your error tracking service by swapping out one layer for another, and the rest of your application doesn't even know it happened.

Structured Logging with #[instrument]

The real magic happens with the #[instrument] macro. Slap it on any function and you automatically get entry/exit logs, timing information, and all the context you need. The macro is smart enough to capture function arguments (excluding sensitive ones you mark with skip), track how long operations take, and maintain the relationship between parent and child spans. When a request fails three services deep, you can trace it back to the original HTTP call that started it all:

// Skip sensitive data from logs
#[instrument(skip(state, auth))]
async fn get_user_profile(
    State(state): State<AppState>,
    auth: AuthenticatedUser,
    Path(user_id): Path<String>,
) -> Result<Json<Profile>, StatusCode> {
    debug!(?user_id, "Fetching user profile");
    
    let profile = sqlx::query_as::<_, Profile>("SELECT * FROM profiles WHERE user_id = $1")
        .bind(&user_id)
        .fetch_one(&state.database)
        .await
        .map_err(|e| {
            error!(?e, "Failed to fetch profile");
            StatusCode::INTERNAL_SERVER_ERROR
        })?;
    
    info!(profile_id = %profile.id, "Successfully fetched profile");
    Ok(Json(profile))
}

Notice the different log macros? They're not just different severity levels - they support structured fields. That ?user_id syntax? It's Debug formatting. The %profile.id? Display formatting. These fields become searchable, filterable metadata in your log aggregation system. Instead of parsing strings with regex, you're querying structured data. Finding all errors for a specific user becomes a simple filter, not a grep nightmare.

Performance Note: Zero runtime overhead when disabled. Trace levels you're not recording are compiled away. That debug! statement in production when you're only recording info and above? It doesn't even exist in the binary. You get comprehensive observability in development and debugging, with production performance when it matters.

The integration with error tracking services is seamless. Errors bubble up with their full context - not just a stack trace, but the entire request flow that led to the failure. You can see that the error in your database query was triggered by a webhook, which was triggered by a specific user action, all with timing information showing where your service spent its time. It's the difference between knowing something failed and understanding why it failed.

Common Gotchas

⚠️ Watch Out For These Common Axum Pitfalls

  1. Forgetting Clone on AppState: Your AppState must derive Clone. If you see "the trait bound AppState: Clone is not satisfied", add #[derive(Clone)] to your state struct.
  2. Blocking in async handlers: Never use blocking operations like std::thread::sleep() or synchronous file I/O in async handlers. Use tokio::time::sleep() and async alternatives instead. Blocking the runtime thread will tank your performance.
  3. Database pool sizing: Don't set your pool max_connections higher than your database's connection limit divided by your service instance count. Running 10 instances with 20 connections each? Make sure your database allows 200+ connections.
  4. Extractors consume the request: Some extractors like String or Bytes consume the request body. Always put these last in your handler parameters, after State, Path, etc.
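The pool-sizing rule above is simple arithmetic worth encoding once so every deployment uses the same math. A std-only sketch - the function name and the reserved-connections parameter are illustrative:

```rust
// Largest safe max_connections per service instance, given the
// database's global connection limit, the number of instances, and
// a margin reserved for migrations, admin sessions, and the like.
fn max_pool_size(db_connection_limit: u32, instance_count: u32, reserved: u32) -> u32 {
    let available = db_connection_limit.saturating_sub(reserved);
    available / instance_count.max(1)
}
```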

Putting It All Together

Here's where all the ingredients come together into a complete service. Every Axum application follows the same basic recipe: create your shared state, build your router hierarchy, apply your middleware stack, and start serving. The beauty is in how these pieces compose - each component we've covered slots perfectly into place, creating a production-ready service that's more than the sum of its parts.

use axum::{body::Body, http::Request, Router};
use sentry::integrations::tracing::EventFilter;
use sentry_tower::{NewSentryLayer, SentryHttpLayer};
use sqlx::{postgres::PgPoolOptions, Pool, Postgres};
use std::error::Error;
use tower::ServiceBuilder;
use tracing_subscriber::{filter::EnvFilter, layer::SubscriberExt, util::SubscriberInitExt};

fn main() -> Result<(), Box<dyn Error>> {
    let _guard = sentry::init((
        // Sentry config here
    ));

    let sentry_layer =
        sentry::integrations::tracing::layer().event_filter(move |md| match *md.level() {
            // Sentry tracing layer here
        });

    // Create the Sentry layer here, outside the `rt` block
    let sentry_service_layer = ServiceBuilder::new()
        .layer(NewSentryLayer::<Request<Body>>::new_from_top())
        .layer(SentryHttpLayer::new())
        .into_inner();

    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()?;

    rt.block_on(async move {
        let subscriber = tracing_subscriber::Registry::default()
            .with(EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")))
            .with(tracing_subscriber::fmt::layer());

        // Only add the Sentry layer in non-local environments
        let environment = std::env::var("ENVIRONMENT").unwrap_or_default();
        if environment != "local" {
            subscriber.with(sentry_layer).init();
        } else {
            subscriber.init();
        }
    
        // Create app state with database, etc.
        let app_state = create_app_state().await?;
    
        // Build the application with routes and middleware
        let app = api_router(app_state.clone()).layer(sentry_service_layer);
    
        // Bind to address based on environment
        let addr = match std::env::var("ENVIRONMENT").as_deref() {
            Ok("local") => "0.0.0.0:8787",
            _ => "0.0.0.0:8080",
        };
    
        let listener = tokio::net::TcpListener::bind(addr).await?;
    
        // Start serving with graceful shutdown
        axum::serve(listener, app)
            .with_graceful_shutdown(shutdown_signal())
            .await?;
    
        Ok::<(), Box<dyn Error>>(())
    })?;

    Ok(())
}

async fn create_app_state() -> Result<AppState, Box<dyn Error>> {
    let database_url = std::env::var("DATABASE_URL")?;
    let database = PgPoolOptions::new()
        .max_connections(20)
        .connect(&database_url)
        .await?;

    Ok(AppState { database })
}

Notice how clean this is? No framework boilerplate, no magic incantations. You can trace every line of code and understand exactly what it does. The state gets created with your database pool and any other shared resources. The router brings together all your route modules with their specific middleware. The global middleware layers (like error tracking) wrap everything. Finally, you bind to a port and start serving, with graceful shutdown to ensure in-flight requests complete even when you're deploying new versions.

The ServiceBuilder pattern deserves special attention. It's Tower's way of composing middleware, and layers run in the order you add them - the first layer added is the outermost, so it's the first to see incoming requests. (Router::layer works the other way around: each call wraps everything added before it, so the last .layer is outermost.) Think of it like a stack of filters: requests flow down through each layer to your handler, then responses bubble back up. This predictable ordering makes debugging middleware issues straightforward. When something's wrong, you know exactly which layer touched the request and in what order.
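You can model this ordering with plain function composition - each "layer" is just a function wrapping an inner handler. A std-only sketch (no Tower involved) showing that the outermost wrapper touches the request first:

```rust
// A "handler" and "middleware" modeled as plain functions on Strings.
// wrap(label, inner) returns a new handler that tags the request with
// its label before delegating - mirroring how a layer wraps a service.
fn wrap(
    label: &'static str,
    inner: impl Fn(String) -> String,
) -> impl Fn(String) -> String {
    move |req| inner(format!("{req} -> {label}"))
}

fn trace_order() -> String {
    let handler = |req: String| format!("{req} -> handler");
    // "a" wraps "b" wraps the handler, so "a" sees the request first,
    // like ServiceBuilder::new().layer(a).layer(b).service(handler)
    let stack = wrap("a", wrap("b", handler));
    stack("req".to_string())
}
```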

Key Takeaways

Building production Rust services with Axum isn't about learning complex abstractions - it's about understanding a few core patterns that compose beautifully:

  1. Routes are composable: Use .nest() to build hierarchical APIs that mirror your application's structure. Each module owns its routes, making large applications manageable.
  2. Middleware as layers: Apply CORS, rate limiting, and logging via .layer(). Stack them up for fine-grained control over request processing.
  3. State is just Clone: Pass database connections and configs through AppState. The framework handles the distribution, you just use it. Arc makes cloning essentially free.
  4. Extractors are your security guards: Custom extractors like AuthenticatedUser handle complex validation logic once, then become type-safe guarantees throughout your application.
  5. Observability is built-in: The tracing ecosystem gives you production-grade logging and monitoring with minimal effort. Use #[instrument] liberally - it costs nothing when disabled.
  6. The compiler is your sous chef: Lean on Rust's type system. When authentication is a type, you can't forget it. When state is cloned, you can't have data races. The compiler catches mistakes before they hit production.

Wrapping Up

Axum strikes a perfect balance between ergonomics and performance, making it an excellent choice for building production-ready web services in Rust. You're not fighting the framework or dealing with magical abstractions - you're composing simple, understandable pieces that work exactly as expected.

The patterns we've covered aren't just toy examples. This is how real Axum services are built, from startups handling their first thousand users to established companies processing millions of requests. The same structure scales from a simple CRUD API to complex distributed systems.

What makes Axum special isn't any single feature - it's how everything fits together. Authentication isn't bolted on; it's just types and extractors. Middleware isn't configuration; it's just function composition. State management isn't magic; it's just Clone and Send. When you understand these fundamentals, you can build anything.

Ready to start cooking with Axum? The framework is stable, the ecosystem is mature, and the patterns you've learned here will serve you well as your service grows. Your next production Rust service is just a cargo new away.