November 2023

Creating a web service in Rust and running it in WebAssembly

In this blog post, I will show you how to create a simple web service in Rust and compile it to Wasm. Then, I will show you how to run the Wasm service on the server using a Wasm runtime.

Creating a web service

To create a web service in Rust, we will use the hyper crate, which is a fast and low-level HTTP library. Hyper provides both server and client APIs for working with HTTP requests and responses. To use hyper, we need to add it, together with the tokio async runtime it runs on, as dependencies in our Cargo.toml file:

[dependencies]
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }

Then, we can write our web service code in the src/main.rs file. The code below creates a simple web service that responds with “Hello, World!” to any request:

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use std::convert::Infallible;
use std::net::SocketAddr;

// A function that handles an incoming request and returns a response
async fn hello_world(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("Hello, World!")))
}

#[tokio::main]
async fn main() {
    // Bind the server to an address
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    // Create a service function that maps each connection to the hello_world handler
    let make_service = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(hello_world))
    });
    // Create a server with the service function
    let server = Server::bind(&addr).serve(make_service);
    // Run the server and handle any error
    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}

To run the web service locally, we can use the cargo run command in the terminal. This will compile and execute our Rust code. We can then test our web service by sending a GET request using curl or a web browser:

$ curl http://localhost:3000
Hello, World!

Creating a web service client

To demonstrate how to use hyper as a web service client, we can write another Rust program that sends a GET request to our web service and prints the response body. The code below shows how to do this using the hyper::Client struct:

use hyper::body::HttpBody as _;
use hyper::Client;

#[tokio::main]
async fn main() {
    // Create a client
    let client = Client::new();
    // Send a GET request to the web service
    let uri = "http://localhost:3000".parse().unwrap();
    let mut resp = client.get(uri).await.unwrap();
    // Print the status code and headers
    println!("Response: {}", resp.status());
    println!("Headers: {:#?}\n", resp.headers());
    // Print the response body chunk by chunk
    while let Some(chunk) = resp.body_mut().data().await {
        let chunk = chunk.unwrap();
        println!("{}", std::str::from_utf8(&chunk).unwrap());
    }
}
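Note that this client is a separate program, so it needs its own crate (or a second [[bin]] target in the same crate) with matching dependencies. A minimal Cargo.toml sketch for it, assuming hyper's full feature set as before:

[dependencies]
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }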

To run the web service client locally, we can use the cargo run command in another terminal. This will compile and execute our Rust code. We should see something like this:

$ cargo run
Response: 200 OK
Headers: {
"content-length": "13",
}
Hello, World!

Creating a database client


To make our web service more useful, we can add some database functionality to it. For example, we can store and retrieve some data from a MySQL database using the mysql_async crate, which is an asynchronous MySQL driver based on tokio. To use mysql_async, we need to add it as a dependency in our Cargo.toml file:

[dependencies]
mysql_async = "0.28"

Then, we can modify our web service code in the src/main.rs file to connect to a MySQL database and execute some queries. The code below assumes that we have a MySQL database running on localhost with the default port (3306), username (root), password (password), and database name (test). The code also assumes that we have a table called users with two columns: id (int) and name (varchar).
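If the users table does not exist yet, a one-off command like the one below would create it. This is only a sketch: it assumes the mysql command-line client is installed locally and that the credentials above are valid. The id column is auto-incremented, so each insert gets a new id:

mysql -h 127.0.0.1 -P 3306 -u root -ppassword test \
  -e "CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255))"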

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use mysql_async::prelude::*;
use mysql_async::{Pool, Row};
use std::convert::Infallible;
use std::net::SocketAddr;

// A function that handles an incoming request and returns a response
async fn hello_world(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Create a pool of connections to the MySQL database
    let pool = Pool::new("mysql://root:password@localhost:3306/test");
    // Get a connection from the pool
    let mut conn = pool.get_conn().await.unwrap();
    // Execute a query to insert a new user
    conn.exec_drop("INSERT INTO users (name) VALUES (?)", ("Alice",))
        .await
        .unwrap();
    // Execute a query to select all users
    let users: Vec<Row> = conn.query("SELECT id, name FROM users").await.unwrap();
    // Drop the connection and return it to the pool
    drop(conn);
    // Format the users as a string
    let mut output = String::new();
    for user in users {
        let (id, name): (i32, String) = mysql_async::from_row(user);
        output.push_str(&format!("User {}: {}\n", id, name));
    }
    // Return the output as the response body
    Ok(Response::new(Body::from(output)))
}

#[tokio::main]
async fn main() {
    // Bind the server to an address
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    // Create a service function that maps each connection to the hello_world handler
    let make_service = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(hello_world))
    });
    // Create a server with the service function
    let server = Server::bind(&addr).serve(make_service);
    // Run the server and handle any error
    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}

As before, we can run the updated web service with the cargo run command and test it by sending a few GET requests using curl or a web browser. Each request inserts a new row, so the list of users grows:

$ curl http://localhost:3000
User 1: Alice
User 2: Alice
User 3: Alice

Building and running the web service

To build our web service as a Wasm binary, we need to use the cargo-wasi crate, which is a Cargo subcommand for building Rust code for Wasm using the WebAssembly System Interface (WASI). WASI is a standard interface for Wasm programs to access system resources such as files, network, and environment variables. To install cargo-wasi, we can use the cargo install command:

$ cargo install cargo-wasi

Then, we can use the cargo wasi build command to build our web service as a Wasm binary. This will create a target/wasm32-wasi/debug directory with our Wasm binary file:

$ cargo wasi build
Compiling hyper v0.14.15
Compiling mysql_async v0.28.0
Compiling wasm-service v0.1.0 (/home/user/wasm-service)
Finished dev [unoptimized + debuginfo] target(s) in 1m 23s

To run our web service as a Wasm binary on the server, we must use a Wasm runtime that supports WASI and network features. There are several Wasm runtimes available, such as Wasmtime, Wasmer, and WasmEdge. In this blog post, I will use WasmEdge as an example.

To install WasmEdge, follow the instructions on its website.
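At the time of writing, the quick-install route was a shell script hosted in the WasmEdge GitHub repository; the command below sketches that approach, but the website remains the authoritative source:

curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash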

Then, we use the wasmedge command to run our web service as a Wasm binary. We need to pass some arguments to enable WASI and network features and to bind our web service to an address:

$ wasmedge --dir .:. --dir /tmp:/tmp --net 127.0.0.1:3000 target/wasm32-wasi/debug/wasm_service.wasm --addr 127.0.0.1:3000

We can then test our web service by sending a GET request using curl or a web browser:

$ curl http://localhost:3000
User 1: Alice
User 2: Alice
User 3: Alice

Conclusion

In this post, we created a simple web service in Rust, compiled it to Wasm, and ran the Wasm service on the server using a Wasm runtime. I hope you have enjoyed this tutorial and learned something new.


How to Install Pixie on a Ubuntu VM

Pixie is an open source observability platform that uses eBPF to collect and analyze data from Kubernetes applications. Pixie can help you monitor and debug your applications without any code changes or instrumentation. In this blog post, I will show you how to install Pixie on a stand-alone virtual machine using Minikube, a tool that lets you run Kubernetes locally.

Prerequisites

To follow this tutorial, you will need:

• A stand-alone virtual machine running Ubuntu 22.04 or later. This tutorial assumes that the VM

  • has at least 6 vCPUs and at least 16 GB RAM
  • has a desktop environment and a web browser installed, which will later be used for authenticating with Pixie Community Cloud. An alternative auth method is described here.

• Basic dev tools such as build-essential, git, curl, make, gcc, etc.

• Docker, a software that allows you to run containers.

• KVM2 driver, a hypervisor that allows you to run virtual machines.

• Kubectl, a command-line tool that allows you to interact with Kubernetes.

• Minikube, a tool that allows you to run Kubernetes locally.

• Optionally, Go and/or Python, programming languages that allow you to write Pixie scripts.

Step 1: Update and Upgrade Your System

The first step is to update and upgrade your system to ensure that you have the latest packages and dependencies. You can do this by running the following command:

sudo apt update -y && sudo apt upgrade -y

Step 2: Install Basic Dev Tools

The next step is to install some basic dev tools that you will need to build and run Pixie. You can do this by running the following command:

sudo apt install -y build-essential git curl make gcc libssl-dev bc libelf-dev libcap-dev \
clang gcc-multilib llvm libncurses5-dev git pkg-config libmnl-dev bison flex \
graphviz software-properties-common wget htop

Step 3: Install Docker

Docker is a software that allows you to run containers, which are isolated environments that can run applications. You will need Docker to run Pixie and its components. To install Docker, you can follow the instructions from the official Docker website:

# Add Docker's official GPG key:
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update -y
# docker install
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Step 4: Add Your User to the ‘docker’ Group

By default, Docker requires root privileges to run containers. To avoid this, you can add your user to the ‘docker’ group, which will allow you to run Docker commands without sudo. To do this, you can follow the instructions from the DigitalOcean website:

sudo usermod -aG docker ${USER}
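The group change only takes effect in a new login session (the reboot in Step 8 also covers this). To verify it afterwards, you can start a shell with the new group active and run a test container:

newgrp docker
docker run hello-world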

Step 5: Install KVM2 Driver

KVM2 driver is a hypervisor that allows you to run virtual machines. You will need KVM2 driver to run Minikube, which will create a virtual machine to run Kubernetes. To install KVM2 driver, you can follow the instructions from the Ubuntu website:

sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
sudo adduser $(id -un) libvirt
sudo adduser $(id -un) kvm
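As a quick sanity check, you can confirm that the libvirt daemon is running (the group memberships added above take effect after the reboot in Step 8):

sudo systemctl status libvirtd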

Step 6: Install Kubectl

Kubectl is a command-line tool that allows you to interact with Kubernetes. You will need kubectl to deploy and manage Pixie and its components on Kubernetes. To install kubectl, you can follow the instructions from the Kubernetes website:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check

This should print:

kubectl: OK

Then install kubectl:

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Test kubectl version:

kubectl version --client

Step 7: Install Minikube

Minikube is a tool that allows you to run Kubernetes locally. You will need Minikube to create a local Kubernetes cluster that will run Pixie and its components. To install Minikube, you can follow the instructions from the Minikube website:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
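You can confirm the installation with:

minikube version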

Step 8: Reboot Your System

After installing all the required tools, you should reboot your system to ensure that the changes take effect. You can do this by running the following command:

sudo reboot

Step 9: Run Kubernetes with Minikube

After rebooting your system, you can run Kubernetes with Minikube. Minikube will create a virtual machine and install Kubernetes on it. You can specify various options and configurations for Minikube, such as the driver, the CNI, the CPU, and the memory. For example, you can run the following command to start Minikube with the KVM2 driver, the flannel CNI, 4 CPUs, and 8000 MB of memory:

minikube start --driver=kvm2 --cni=flannel --cpus=4 --memory=8000

If you want, you can also specify a profile name for your Minikube cluster, such as px-test, by adding the -p px-test flag.

You can list all the clusters and their profiles by running the following command:

minikube profile list

This should print something like:

| Profile  | VM Driver | Runtime | IP             | Port | Version | Status  | Nodes | Active |
|----------|-----------|---------|----------------|------|---------|---------|-------|--------|
| minikube | kvm2      | docker  | 192.168.39.160 | 8443 | v1.27.4 | Running | 1     | *      |

Step 10: Install Pixie

To install Pixie, you can run the following command:

bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

This will download and run the Pixie install script, which will guide you through the installation process. After installing Pixie, you should reboot your system to ensure that the changes take effect. You can do this by running the following command:

sudo reboot

Step 11: Start Kubernetes Cluster and Deploy Pixie

After rebooting your system, you can start your Kubernetes cluster again with Minikube. You can use the same command and options that you used before, or you can omit them if you have only one cluster and profile. For example:

minikube start
px deploy

Step 12: Register with Pixie Community Cloud and Check All Works

After starting your Kubernetes cluster, you can check if everything works as expected. You can use the following command to list all the pods in all namespaces and see if they are running:

kubectl get pods -A

Register with Pixie Community Cloud to see your K8s cluster’s stats.

You will have to authenticate with Pixie and log in to the Pixie platform from your VM using a web browser, which Pixie will open for you once you run:

px auth login

Step 13: Deploy Pixie’s Demo

Pixie provides a few demo apps. We deploy a demo application called px-sock-shop, which is a sample online shop that sells socks, based on an open source microservices demo. Some more information on this demo app is available here. The demo shows how Pixie can be used to monitor and debug the microservices running on Kubernetes. To deploy Pixie’s demo, run:

px demo deploy px-sock-shop
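The demo can take a few minutes to start. Assuming it is deployed into a namespace named after the demo app (px-sock-shop), you can watch its pods come up with:

kubectl get pods -n px-sock-shop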

Your view in Pixie Community Cloud should be similar to this screenshot.
