JWT (JSON Web Token) authentication is a crucial component of modern APIs, especially when managing user sessions, role-based permissions, and third-party access. Depending on your API’s use case and security requirements, various types of JWT authentication can be implemented.
In this article, we’ll focus on implementing JWT authentication using the Bearer Token approach in Rust, and we will use the HS256 algorithm to ensure the integrity and authenticity of the tokens.
While we’ve previously covered refresh tokens and bearer tokens in our Rust series, here we’ll take a closer look at bearer token authentication. In future articles, we’ll dive into other authentication strategies to further strengthen the security and flexibility of our API.
By the end of this tutorial, you’ll have a solid understanding of how to securely implement JWT authentication in your Rust API using Bearer Tokens and HS256.
Related Articles
- Build a CRUD API with Axum and MongoDB in Rust
- Building a Rust API with Unit Testing in Mind
- How to Write Unit Tests for Your Rust API
- How to Add Swagger UI, Redoc and RapiDoc to a Rust API
- JWT Authentication and Authorization in a Rust API using Actix-Web
- Dockerizing a Rust API Project: SQL Database and pgAdmin
- Deploy Rust App on VPS with GitHub Actions and Docker
Test the JWT Authentication on Your Machine
To run the JWT project on your local machine and interact with the different authentication endpoints, follow the steps outlined below:
- Download or clone the API project from its GitHub repository at https://github.com/wpcodevo/jwt-auth-axum-rust and open the source code in your preferred code editor or IDE.
- Start the Postgres and pgAdmin Docker containers by executing `docker-compose up -d`. If you don't have Docker installed on your machine, you can download it from the official website.
  - Postgres will act as the database for the Rust project.
  - pgAdmin offers a graphical interface to access and modify the data within the Postgres database. The login credentials can be found in the `.env` file.
- After the Postgres database is up and running, apply the database migrations by executing `sqlx migrate run`. If you haven't installed the SQLx CLI yet, you can do so with `cargo install sqlx-cli --no-default-features --features postgres`.
- At this point, the database schema is in sync with our migration files. Install the necessary packages and start the Axum development server by running `cargo run`.
- Import the `Feedback App.postman_collection.json` file into the Postman desktop app or the VS Code extension to access the Postman collection I used for testing the JWT authentication flow.
- Test the user registration, login, and logout endpoints by sending requests to the API. Additionally, attempt to access the `getMe` protected route, which is restricted to logged-in users, to verify that the middleware guard is functioning properly.
Set Up the Rust Project
Now that you’ve explored the API we’ll be building in this tutorial, let’s move on to setting up the Rust project and installing the necessary dependencies.
- Create Project Folder: Start by creating a folder to store your source code. You can name it `jwt-auth-axum-rust` and place it on your desktop or any preferred location.
- Open Terminal: Navigate to the newly created folder and open it in your terminal.
- Initialize Rust Project: Run the command `cargo init` to initialize the folder as a Rust binary project.
- Install Dependencies: After initializing the project, run the following commands to install the dependencies needed for implementing JWT authentication in Rust.
cargo add axum
cargo add axum-extra -F cookie
cargo add time
cargo add tokio -F full
cargo add tower-http -F "cors"
cargo add serde_json
cargo add serde -F derive
cargo add chrono -F serde
cargo add dotenv
cargo add uuid -F "serde v4"
cargo add sqlx -F "runtime-async-std-native-tls postgres chrono uuid"
cargo add jsonwebtoken
cargo add argon2
cargo add rand_core --features "std"
If you encounter any issues with future updates of the crates, you can revert to the versions I used. Below are the crates and their respective versions.
Cargo.toml
[package]
name = "jwt-auth-axum-rust"
version = "0.1.0"
edition = "2021"
[dependencies]
argon2 = "0.5.3"
axum = "0.7.7"
axum-extra = { version = "0.9.4", features = ["cookie"] }
chrono = { version = "0.4.38", features = ["serde"] }
dotenv = "0.15.0"
jsonwebtoken = "9.3.0"
rand_core = { version = "0.6.4", features = ["std"] }
serde = { version = "1.0.210", features = ["derive"] }
serde_json = "1.0.128"
sqlx = { version = "0.8.2", features = ["runtime-async-std-native-tls", "postgres", "chrono", "uuid"] }
time = "0.3.36"
tokio = { version = "1.40.0", features = ["full"] }
tower-http = { version = "0.6.1", features = ["cors"] }
uuid = { version = "1.10.0", features = ["serde", "v4"] }
Launch a PostgreSQL Server with Docker
Let’s launch our PostgreSQL instance using Docker, and we’ll include pgAdmin for easy access to the users in our database via its graphical interface.
To get started, create a `docker-compose.yml` file in the root directory and add the following Docker Compose configurations:
docker-compose.yml
services:
postgres:
image: postgres:latest
container_name: postgres
ports:
- '6500:5432'
volumes:
- progresDB:/var/lib/postgresql/data
env_file:
- ./.env
pgAdmin:
image: dpage/pgadmin4
container_name: pgAdmin
env_file:
- ./.env
ports:
- '5050:80'
volumes:
progresDB:
In the Docker Compose configuration, we used the `env_file` key to load the necessary credentials for building the Postgres and pgAdmin containers. To supply these credentials to Docker Compose, create a `.env` file in the root directory and add the following content:
.env
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=6500
POSTGRES_USER=admin
POSTGRES_PASSWORD=password123
POSTGRES_DB=rust_hs256
DATABASE_URL="postgresql://admin:password123@localhost:6500/rust_hs256?schema=public"
PGADMIN_DEFAULT_EMAIL=admin@admin.com
PGADMIN_DEFAULT_PASSWORD=password123
JWT_SECRET=my_ultra_secure_secret
With the environment variables set, execute `docker-compose up -d` to launch the Postgres and pgAdmin containers.
Perform Database Migrations
The database is now up and running. Next, let’s create our SQLx migration files to help us track changes to the database schema.
First, ensure that you have the SQLx command-line tool installed. If you haven't done so yet, you can install it by running `cargo install sqlx-cli --no-default-features --features postgres`.
Once the tool is installed, execute the command `sqlx migrate add -r init` to generate reversible migration scripts. These will be located in a `migrations` folder within the root directory.
To define the SQL code required for creating the `users` table in the database, open the `up` migration script and add the following SQL statements:
migrations/20241009182511_init.up.sql
-- Add up migration script here
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE
"users" (
id UUID NOT NULL PRIMARY KEY DEFAULT (uuid_generate_v4()),
name VARCHAR(100) NOT NULL,
email VARCHAR(255) NOT NULL UNIQUE,
photo VARCHAR NOT NULL DEFAULT 'default.png',
verified BOOLEAN NOT NULL DEFAULT FALSE,
password VARCHAR(100) NOT NULL,
role VARCHAR(50) NOT NULL DEFAULT 'user',
created_at TIMESTAMP
WITH
TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP
WITH
TIME ZONE DEFAULT NOW()
);
CREATE INDEX users_email_idx ON users (email);
The `UNIQUE` constraint on the `email` column prevents duplicate email entries, and the additional index on `email` speeds up the lookups our handlers will perform during registration and login.
The SQL statements in the `up` migration script will create the `users` table. Now, we need to implement the reverse operation in the `down` script, which involves dropping the `users` table. To do this, open the corresponding `down` migration script and add the following SQL statement:
migrations/20241009182511_init.down.sql
-- Add down migration script here
DROP TABLE IF EXISTS "users";
After setting up the migration scripts, execute the command `sqlx migrate run` to apply the `up` migration script to the database. If you need to revert the changes made by the `up` script, you can use the command `sqlx migrate revert`.
Load the Environment Variables
Next, let's load the environment variables from the `.env` file and store them in a struct for easy access throughout our code. For this simple API, we'll be loading just two environment variables, but larger projects will typically require more. Create a `config.rs` file in the `src` directory and add the following code:
src/config.rs
#[derive(Debug, Clone)]
pub struct Config {
pub database_url: String,
pub jwt_secret: String,
}
impl Config {
pub fn init() -> Config {
let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
let jwt_secret = std::env::var("JWT_SECRET").expect("JWT_SECRET must be set");
Config {
database_url,
jwt_secret,
}
}
}
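`expect` aborts the process when a variable is missing, which is usually what you want at startup. If you'd rather surface a proper error instead of panicking, a fallible variant could look like the sketch below (`try_init` is our own addition, not part of the tutorial code):

```rust
use std::env;

#[derive(Debug, Clone)]
pub struct Config {
    pub database_url: String,
    pub jwt_secret: String,
}

impl Config {
    // Fallible alternative to init(): reports which variable is
    // missing instead of panicking.
    pub fn try_init() -> Result<Config, String> {
        let var = |name: &str| env::var(name).map_err(|_| format!("{} must be set", name));
        Ok(Config {
            database_url: var("DATABASE_URL")?,
            jwt_secret: var("JWT_SECRET")?,
        })
    }
}
```

In `main`, `Config::try_init()` can then be matched on to decide how to log and exit on a missing variable.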
Create the SQLx Database Model
Let's proceed to create the SQLx model for our `users` table. Inside the `src` directory, create a `model.rs` file and include the following code:
src/model.rs
use chrono::prelude::*;
use serde::{Deserialize, Serialize};
#[allow(non_snake_case)]
#[derive(Debug, Deserialize, sqlx::FromRow, Serialize, Clone)]
pub struct User {
pub id: uuid::Uuid,
pub name: String,
pub email: String,
pub password: String,
pub role: String,
pub photo: String,
pub verified: bool,
#[serde(rename = "createdAt")]
pub created_at: Option<DateTime<Utc>>,
#[serde(rename = "updatedAt")]
pub updated_at: Option<DateTime<Utc>>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct TokenClaims {
pub sub: String,
pub iat: usize,
pub exp: usize,
}
#[derive(Debug, Deserialize)]
pub struct RegisterUserSchema {
pub name: String,
pub email: String,
pub password: String,
}
#[derive(Debug, Deserialize)]
pub struct LoginUserSchema {
pub email: String,
pub password: String,
}
Below the `User` model, you will find additional structs:
- `TokenClaims` – This struct contains the fields we will store in the JWT payload.
- `RegisterUserSchema` – This struct includes the fields necessary for the user registration process.
- `LoginUserSchema` – This struct encompasses the fields required for the user login process.
Create the HTTP Response Schemas
Next, we’ll create response schemas to specify the structure of the data we send to users. This approach helps us omit sensitive fields returned by the database, ensuring that only the necessary information is shared.
To do this, create a `response.rs` file in the `src` directory and include the following code:
src/response.rs
use chrono::prelude::*;
use serde::Serialize;
#[allow(non_snake_case)]
#[derive(Debug, Serialize)]
pub struct FilteredUser {
pub id: String,
pub name: String,
pub email: String,
pub role: String,
pub photo: String,
pub verified: bool,
pub createdAt: DateTime<Utc>,
pub updatedAt: DateTime<Utc>,
}
#[derive(Serialize, Debug)]
pub struct UserData {
pub user: FilteredUser,
}
#[derive(Serialize, Debug)]
pub struct UserResponse {
pub status: String,
pub data: UserData,
}
Create the Axum HTTP Route Handlers
Now, it's time to implement the Axum HTTP handlers for processing authentication requests to our API. We'll start by creating a utility function called `filter_user_record`, which will help us exclude sensitive fields, such as the password, from the data returned by the database. To get started, create a new file named `handlers.rs` in the `src` directory and add the following code:
src/handlers.rs
// Imports shared by all of the handlers in this file.
use std::sync::Arc;

use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};
use axum::{
    extract::State,
    http::{header, Response, StatusCode},
    response::IntoResponse,
    Extension, Json,
};
use axum_extra::extract::cookie::{Cookie, SameSite};
use jsonwebtoken::{encode, EncodingKey, Header};
use serde_json::json;

use crate::{
    model::{LoginUserSchema, RegisterUserSchema, TokenClaims, User},
    response::FilteredUser,
    AppState,
};

fn filter_user_record(user: &User) -> FilteredUser {
FilteredUser {
id: user.id.to_string(),
email: user.email.to_owned(),
name: user.name.to_owned(),
photo: user.photo.to_owned(),
role: user.role.to_owned(),
verified: user.verified,
createdAt: user.created_at.unwrap(),
updatedAt: user.updated_at.unwrap(),
}
}
Register User Route Handler
Let’s begin with the route handler responsible for user registration. In this function, we first check if a user with the provided email already exists in the database. If a user is found, we return a 409 Conflict status code. If no match is found, we proceed to hash the password and store the user’s information in the database. Once the operation is successful, we return a copy of the registered user’s details in the response body.
To implement this, add the following code to the `src/handlers.rs` file:
src/handlers.rs
pub async fn register_user_handler(
State(data): State<Arc<AppState>>,
Json(body): Json<RegisterUserSchema>,
) -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
let user_exists: Option<bool> =
sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM users WHERE email = $1)")
.bind(body.email.to_owned().to_ascii_lowercase())
.fetch_one(&data.db)
.await
.map_err(|e| {
let error_response = serde_json::json!({
"status": "fail",
"message": format!("Database error: {}", e),
});
(StatusCode::INTERNAL_SERVER_ERROR, Json(error_response))
})?;
if let Some(exists) = user_exists {
if exists {
let error_response = serde_json::json!({
"status": "fail",
"message": "User with that email already exists",
});
return Err((StatusCode::CONFLICT, Json(error_response)));
}
}
let salt = SaltString::generate(&mut OsRng);
let hashed_password = Argon2::default()
.hash_password(body.password.as_bytes(), &salt)
.map_err(|e| {
let error_response = serde_json::json!({
"status": "fail",
"message": format!("Error while hashing password: {}", e),
});
(StatusCode::INTERNAL_SERVER_ERROR, Json(error_response))
})
.map(|hash| hash.to_string())?;
let user = sqlx::query_as!(
User,
"INSERT INTO users (name,email,password) VALUES ($1, $2, $3) RETURNING *",
body.name.to_string(),
body.email.to_string().to_ascii_lowercase(),
hashed_password
)
.fetch_one(&data.db)
.await
.map_err(|e| {
let error_response = serde_json::json!({
"status": "fail",
"message": format!("Database error: {}", e),
});
(StatusCode::INTERNAL_SERVER_ERROR, Json(error_response))
})?;
let user_response = serde_json::json!({"status": "success","data": serde_json::json!({
"user": filter_user_record(&user)
})});
Ok(Json(user_response))
}
Log In User Route Handler
Next, let’s create the route handler for user login. In this handler, we will query the database to check if a user with the provided email address exists. If the user is found, we will hash the plain-text password from the request body and compare it with the hashed password stored in the database.
If the passwords match, we will generate a JWT using the HS256 algorithm and return it both as a cookie and as part of the JSON response.
src/handlers.rs
pub async fn login_user_handler(
State(data): State<Arc<AppState>>,
Json(body): Json<LoginUserSchema>,
) -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
let user = sqlx::query_as!(
User,
"SELECT * FROM users WHERE email = $1",
body.email.to_ascii_lowercase()
)
.fetch_optional(&data.db)
.await
.map_err(|e| {
let error_response = serde_json::json!({
"status": "error",
"message": format!("Database error: {}", e),
});
(StatusCode::INTERNAL_SERVER_ERROR, Json(error_response))
})?
.ok_or_else(|| {
let error_response = serde_json::json!({
"status": "fail",
"message": "Invalid email or password",
});
(StatusCode::BAD_REQUEST, Json(error_response))
})?;
let is_valid = match PasswordHash::new(&user.password) {
Ok(parsed_hash) => Argon2::default()
.verify_password(body.password.as_bytes(), &parsed_hash)
.is_ok(),
Err(_) => false,
};
if !is_valid {
let error_response = serde_json::json!({
"status": "fail",
"message": "Invalid email or password"
});
return Err((StatusCode::BAD_REQUEST, Json(error_response)));
}
let now = chrono::Utc::now();
let iat = now.timestamp() as usize;
let exp = (now + chrono::Duration::minutes(60)).timestamp() as usize;
let claims: TokenClaims = TokenClaims {
sub: user.id.to_string(),
exp,
iat,
};
let token = encode(
&Header::default(),
&claims,
&EncodingKey::from_secret(data.env.jwt_secret.as_ref()),
)
.unwrap();
let cookie = Cookie::build(("token", token.to_owned()))
.path("/")
.max_age(time::Duration::hours(1))
.same_site(SameSite::Lax)
.http_only(true);
let mut response = Response::new(json!({"status": "success", "token": token}).to_string());
response
.headers_mut()
.insert(header::SET_COOKIE, cookie.to_string().parse().unwrap());
Ok(response)
}
Log Out User Route Handler
With the registration and login logic in place, let's proceed to implement the logout functionality. This process is straightforward: we will send an expired token with the same name to remove the existing token stored in the user's browser or API client. Add the following code to the `src/handlers.rs` file:
src/handlers.rs
pub async fn logout_handler() -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
let cookie = Cookie::build(("token", ""))
.path("/")
.max_age(time::Duration::hours(-1))
.same_site(SameSite::Lax)
.http_only(true);
let mut response = Response::new(json!({"status": "success"}).to_string());
response
.headers_mut()
.insert(header::SET_COOKIE, cookie.to_string().parse().unwrap());
Ok(response)
}
Retrieve Authenticated User Handler
Next, let’s create a route handler that will return the user’s account details. This handler will be protected, allowing access only to users with valid JSON Web Tokens. In this implementation, we will retrieve the user data from the Axum extension and include it in the response.
src/handlers.rs
pub async fn get_me_handler(
Extension(user): Extension<User>,
) -> Result<impl IntoResponse, (StatusCode, Json<serde_json::Value>)> {
let json_response = serde_json::json!({
"status": "success",
"data": serde_json::json!({
"user": filter_user_record(&user)
})
});
Ok(Json(json_response))
}
Create the JWT Authentication Middleware
Next, let’s create an Axum middleware to restrict access to certain route handlers for unauthenticated users. In this middleware, we will decode the JWT to verify its authenticity and extract the payload.
We then retrieve the user ID from the JWT payload and query the database to check for the existence of a user with that ID. If no user is found, we return a 401 Unauthorized error. Conversely, if the user exists, we store their information in the request extension, allowing subsequent handlers to access this data.
src/jwt.rs
use std::sync::Arc;
use axum::{
body::Body,
extract::State,
http::{header, Request, StatusCode},
middleware::Next,
response::IntoResponse,
Json,
};
use axum_extra::extract::cookie::CookieJar;
use jsonwebtoken::{decode, DecodingKey, Validation};
use serde::Serialize;
use crate::{
model::{TokenClaims, User},
AppState,
};
#[derive(Debug, Serialize)]
pub struct ErrorResponse {
pub status: &'static str,
pub message: String,
}
pub async fn auth(
cookie_jar: CookieJar,
State(data): State<Arc<AppState>>,
mut req: Request<Body>,
next: Next,
) -> Result<impl IntoResponse, (StatusCode, Json<ErrorResponse>)> {
let token = cookie_jar
.get("token")
.map(|cookie| cookie.value().to_string())
.or_else(|| {
req.headers()
.get(header::AUTHORIZATION)
.and_then(|auth_header| auth_header.to_str().ok())
.and_then(|auth_value| {
if auth_value.starts_with("Bearer ") {
Some(auth_value[7..].to_owned())
} else {
None
}
})
});
let token = token.ok_or_else(|| {
let json_error = ErrorResponse {
status: "fail",
message: "You are not logged in, please provide token".to_string(),
};
(StatusCode::UNAUTHORIZED, Json(json_error))
})?;
let claims = decode::<TokenClaims>(
&token,
&DecodingKey::from_secret(data.env.jwt_secret.as_ref()),
&Validation::default(),
)
.map_err(|_| {
let json_error = ErrorResponse {
status: "fail",
message: "Invalid token".to_string(),
};
(StatusCode::UNAUTHORIZED, Json(json_error))
})?
.claims;
let user_id = uuid::Uuid::parse_str(&claims.sub).map_err(|_| {
let json_error = ErrorResponse {
status: "fail",
message: "Invalid token".to_string(),
};
(StatusCode::UNAUTHORIZED, Json(json_error))
})?;
let user = sqlx::query_as!(User, "SELECT * FROM users WHERE id = $1", user_id)
.fetch_optional(&data.db)
.await
.map_err(|e| {
let json_error = ErrorResponse {
status: "fail",
message: format!("Error fetching user from database: {}", e),
};
(StatusCode::INTERNAL_SERVER_ERROR, Json(json_error))
})?;
let user = user.ok_or_else(|| {
let json_error = ErrorResponse {
status: "fail",
message: "The user belonging to this token no longer exists".to_string(),
};
(StatusCode::UNAUTHORIZED, Json(json_error))
})?;
req.extensions_mut().insert(user);
Ok(next.run(req).await)
}
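The `Authorization` header parsing above checks the prefix and slices the string manually; `strip_prefix` expresses the same logic more directly and is a drop-in alternative. A std-only sketch:

```rust
// Std-only sketch of the Bearer-extraction logic used in the middleware.
fn bearer_token(auth_value: &str) -> Option<String> {
    auth_value
        .strip_prefix("Bearer ")
        .map(|token| token.to_owned())
}
```

Inside the middleware, the `and_then` closure would simply become `.and_then(bearer_token)` applied to the header's string value.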
Create the Authentication Routes
At this point, we have created all the route handlers necessary for authentication. Now, it's time to create the routes that will invoke these handlers. To achieve this, create a `route.rs` file in the `src` directory and add the following code:
src/route.rs
use std::sync::Arc;
use axum::{
middleware,
routing::{get, post},
Router,
};
use crate::{
handlers::{
get_me_handler, health_checker_handler, login_user_handler, logout_handler,
register_user_handler,
},
jwt::auth,
AppState,
};
pub fn create_router(app_state: Arc<AppState>) -> Router {
Router::new()
.route("/api/healthchecker", get(health_checker_handler))
.route("/api/auth/register", post(register_user_handler))
.route("/api/auth/login", post(login_user_handler))
.route(
"/api/auth/logout",
get(logout_handler)
.route_layer(middleware::from_fn_with_state(app_state.clone(), auth)),
)
.route(
"/api/users/me",
get(get_me_handler)
.route_layer(middleware::from_fn_with_state(app_state.clone(), auth)),
)
.with_state(app_state)
}
Register the Axum Router and Set Up CORS
Let's conclude by registering the API router we created earlier and setting up CORS on the server. Configuring CORS will allow our application to accept requests from cross-origin domains. Open the `main.rs` file and replace its contents with the following code:
src/main.rs
mod config;
mod handlers;
mod jwt;
mod model;
mod response;
mod route;
use config::Config;
use std::sync::Arc;
use axum::http::{
header::{ACCEPT, AUTHORIZATION, CONTENT_TYPE},
HeaderValue, Method,
};
use dotenv::dotenv;
use route::create_router;
use tower_http::cors::CorsLayer;
use sqlx::{postgres::PgPoolOptions, Pool, Postgres};
pub struct AppState {
db: Pool<Postgres>,
env: Config,
}
#[tokio::main]
async fn main() {
dotenv().ok();
let config = Config::init();
let pool = match PgPoolOptions::new()
.max_connections(10)
.connect(&config.database_url)
.await
{
Ok(pool) => {
println!("✅Connection to the database is successful!");
pool
}
Err(err) => {
println!("🔥 Failed to connect to the database: {:?}", err);
std::process::exit(1);
}
};
let cors = CorsLayer::new()
.allow_origin("http://localhost:3000".parse::<HeaderValue>().unwrap())
.allow_methods([Method::GET, Method::POST, Method::PATCH, Method::DELETE])
.allow_credentials(true)
.allow_headers([AUTHORIZATION, ACCEPT, CONTENT_TYPE]);
let app = create_router(Arc::new(AppState {
db: pool.clone(),
env: config.clone(),
}))
.layer(cors);
println!("🚀 Server started successfully");
let listener = tokio::net::TcpListener::bind("0.0.0.0:8000").await.unwrap();
axum::serve(listener, app).await.unwrap();
}
And that’s it! You can now start the Axum HTTP server if you haven’t done so already. Feel free to send HTTP requests from your API client to test the authentication endpoints. If you’d like to access the Postman collection I used for testing the API, you can download or clone the project from https://github.com/wpcodevo/jwt-auth-axum-rust, where you’ll find it located in the root directory.
Conclusion
Congratulations on reaching this point! In this article, you learned how to implement JSON Web Token (JWT) authentication in Rust using the Axum framework and PostgreSQL.
I hope you found this guide both helpful and enjoyable. If you have any questions or feedback, please feel free to leave your thoughts in the comment section below.