How to deploy an Axum server to Fly.io
Learn how to deploy an Axum web server to Fly.io. Benefit from scale-to-zero deployments, internal WireGuard networking, anycast routing to your nearest server, and more.
Fly.io is an excellent platform for deploying Rust applications, offering global distribution, automatic scaling, and cheap, flexible VMs. In this guide, we'll walk through deploying an Axum web application to Fly.io using Docker, covering all the configuration files and code you'll need.
Prerequisites
Before we begin, make sure you have:
- Rust and Cargo installed
- Docker installed
- A Fly.io account
- The Fly CLI installed
- An Axum application ready to deploy
Project Structure
Your project should have the following key files:
your-project/
├── src/
│ └── main.rs
├── Cargo.toml
├── Dockerfile
└── fly.toml
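For reference, a minimal Cargo.toml for this setup might look like the following (the version numbers are illustrative; pin whatever versions your project already uses):

```toml
[package]
name = "axum_server"  # this name determines the binary path copied in the Dockerfile
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
tracing = "0.1"
tracing-subscriber = "0.3"
```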
Configuring Your Axum Application
First, let's set up your main.rs to work seamlessly across local and production environments:
use axum::{Router, routing::get};
use std::error::Error;
use tracing::error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Initialize tracing for logging
    tracing_subscriber::fmt::init();

    // Determine the environment (defaults to "production" if not set)
    let environment = std::env::var("ENVIRONMENT")
        .unwrap_or_else(|_| "production".to_string());

    // Configure the binding address based on environment
    let connection = match environment.as_str() {
        "local" => "0.0.0.0:8787",
        _ => "0.0.0.0:8080",
    };

    // Create your Axum router
    let app = Router::new()
        .route("/", get(|| async { "Hello from Fly.io!" }))
        .route("/health", get(|| async { "OK" }));

    // Bind the TCP listener
    let listener = tokio::net::TcpListener::bind(connection)
        .await
        .map_err(|e| -> Box<dyn Error> {
            error!(address = %connection, error = ?e, "Failed to bind");
            Box::new(e)
        })?;

    println!("Server listening on {}", connection);

    // Start the server
    axum::serve(listener, app).await?;

    Ok(())
}
Key Points:
- Environment Detection: We check the ENVIRONMENT variable to determine whether we're running locally or in production. This allows you to use different ports for development (8787) and production (8080).
- Flexible Binding: The server binds to 0.0.0.0 (all network interfaces) rather than localhost, which is essential for Docker containers to accept external connections.
- Error Handling: We use proper error propagation with the ? operator and provide detailed error messages for debugging.
- Health Check Route: The /health endpoint is useful for Fly.io's health checks and monitoring.
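Alternatively, instead of hardcoding the production port, you can read it from the PORT variable that fly.toml sets (shown later). A minimal sketch; bind_address is a hypothetical helper, not part of the code above:

```rust
use std::env;

// Hypothetical helper: build the bind address from the PORT environment
// variable (which fly.toml can set), falling back to 8080.
fn bind_address() -> String {
    let port = env::var("PORT").unwrap_or_else(|_| "8080".to_string());
    format!("0.0.0.0:{}", port)
}

fn main() {
    // With PORT unset, this yields the production default 0.0.0.0:8080.
    println!("{}", bind_address());
}
```

This keeps the binary agnostic about its port, so the Dockerfile and fly.toml remain the single source of truth.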
Creating the Dockerfile
The Dockerfile uses a multi-stage build to keep the final image size small and secure:
# Stage 1: Builder - compiles your Rust application
FROM rust:bookworm AS builder
WORKDIR /app/
# Copy the entire project
COPY . .
# Build the release binary
# This creates an optimized, production-ready binary
RUN cargo build --release
# Stage 2: Runtime - creates a minimal image with just the binary
FROM debian:bookworm-slim
# Install runtime dependencies
# OpenSSL is needed for HTTPS connections
# ca-certificates provides SSL certificate authorities
RUN apt-get update && apt-get install -y openssl ca-certificates && rm -rf /var/lib/apt/lists/*
# Copy only the compiled binary from the builder stage
COPY --from=builder /app/target/release/axum_server /app/axum_server
# Expose the port your application listens on
EXPOSE 8080
# Run the application
ENTRYPOINT ["/app/axum_server"]
Explanation:
- Multi-Stage Build: The first stage (builder) contains the full Rust toolchain and dependencies needed to compile your application. The second stage contains only the compiled binary and minimal runtime dependencies. This dramatically reduces the final image size from ~2GB to ~100MB.
- Debian Bookworm: We use Debian Bookworm as the base image for both stages to ensure compatibility between the build and runtime environments.
- Runtime Dependencies: Most Rust applications need OpenSSL for TLS/HTTPS and ca-certificates for SSL certificate verification.
- Binary Name: Make sure axum_server matches your binary name in Cargo.toml. You can find this under [package] → name.
Optional Optimization: For even smaller images, consider using FROM gcr.io/distroless/cc-debian12 for the runtime stage, though you'll need to handle dependencies differently.
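Another common refinement is caching the dependency build, so changing only src/ doesn't recompile every crate. A sketch of the builder stage using cargo-chef (an assumption on my part; you'd install cargo-chef as shown, and the stage names are arbitrary):

```dockerfile
# Sketch: dependency-layer caching with cargo-chef
FROM rust:bookworm AS chef
RUN cargo install cargo-chef
WORKDIR /app

# Compute a dependency "recipe" from the manifests
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Build dependencies first; this layer stays cached until Cargo.toml changes
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release
```

The runtime stage stays the same; only the builder stage changes.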
Configuring Fly.io with fly.toml
The fly.toml file tells Fly.io how to run and scale your application:
app = '<app_name>'
primary_region = 'lhr'
[build]
[env]
PORT = '8080'
[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 0
processes = ['app']
[http_service.concurrency]
type = 'requests'
soft_limit = 1000
[[vm]]
size = 'shared-cpu-2x'
Configuration Breakdown:
- app: Replace <app_name> with your desired application name. This must be unique across all of Fly.io.
- primary_region: The region where your app primarily runs. Use lhr for London, iad for Virginia, nrt for Tokyo, etc.
- internal_port: Must match the port in your Dockerfile's EXPOSE and your Rust application's bind address (8080).
- force_https: Automatically redirects HTTP requests to HTTPS, which is a security best practice.
- auto_stop_machines & auto_start_machines: These enable Fly.io's scale-to-zero feature. Your app stops when idle and automatically starts when a request arrives. This is a great way to save costs, especially since most Axum web servers spin up quickly.
- min_machines_running: Set to 0 for scale-to-zero, or 1+ for always-on availability (recommended for production).
- soft_limit: When concurrent connections reach 1000, Fly will spin up additional machines to handle the load.
- vm size: shared-cpu-2x is inexpensive and suitable for most applications. Upgrade to shared-cpu-4x or dedicated CPUs for higher traffic.
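Since the app already exposes a /health route, you can also have Fly probe it by adding a check block to fly.toml (a sketch; tune the interval and timeout for your application):

```toml
[[http_service.checks]]
grace_period = '10s'
interval = '30s'
method = 'GET'
timeout = '5s'
path = '/health'
```

Machines that fail the check are restarted, and fly status will show the check results.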
Deploying Your Application
Now that everything is configured, let's deploy:
1. Initialize Your Fly Application
fly launch
This command will:
- Detect your fly.toml configuration
- Create your app on Fly.io
- Prompt you to set up a PostgreSQL database (if needed)
- Deploy your application
If you already have a fly.toml, Fly will use those settings. Otherwise, it will interactively guide you through setup.
2. Deploy Subsequent Updates
After the initial launch, deploy updates with:
fly deploy
This builds your Docker image, pushes it to Fly's registry, and deploys it to your machines.
3. Monitor Your Application
Check your application's status:
fly status
View logs in real-time:
fly logs
Open your application in a browser:
fly open
4. Set Environment Variables
If your application needs additional environment variables (database URLs, API keys, etc.):
fly secrets set DATABASE_URL=postgres://... API_KEY=your_key
Secrets are encrypted and securely injected into your application at runtime.
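Inside your Rust code, secrets arrive as ordinary environment variables. A small sketch that fails with a readable message instead of a panic (required_secret is a hypothetical helper, not a Fly API):

```rust
use std::env;

// Hypothetical helper: read a secret injected by `fly secrets set`,
// returning a descriptive error instead of panicking on a missing value.
fn required_secret(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("missing required secret: {}", name))
}

fn main() {
    match required_secret("DATABASE_URL") {
        Ok(url) => println!("database configured ({} bytes)", url.len()),
        Err(e) => eprintln!("{}", e),
    }
}
```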
Testing Locally
Before deploying, test your Docker build locally:
# Build the image
docker build -t axum-app .
# Run the container
docker run -p 8080:8080 -e ENVIRONMENT=production axum-app
# Test the endpoint
curl http://localhost:8080
Troubleshooting Tips
Build Failures: If cargo build fails, check that your Cargo.toml dependencies are compatible and that you're not using platform-specific features without proper conditional compilation.
Connection Issues: Ensure your application binds to 0.0.0.0, not 127.0.0.1 or localhost. Docker containers need to accept connections from all interfaces.
Scale-to-Zero Cold Starts: The first request after an idle period may take 2-5 seconds while the machine starts up if you have a very large application, though most cold starts land between 50 and 500ms. Set min_machines_running = 1 for always-on availability.
Memory Issues: If your application crashes, you may need a larger VM size. Check logs with fly logs and consider upgrading to shared-cpu-4x or higher, or specify more memory. However, this is only likely to happen if you have a lot of traffic or a huge application.
Conclusion
You now have a production-ready Axum application deployed to Fly.io with automatic scaling, HTTPS, and global distribution. The configuration we've covered provides a solid foundation that you can customize based on your specific needs, whether that's adding databases, multiple regions, or custom domains.
Happy deploying!