Websockets with Rust and Actix Web

The Actix framework for Rust is an actor-based framework that follows the actor pattern closely. REST APIs can be built simply and intuitively. One of the main reasons I chose it over Rocket was that, at the time, it ran on stable Rust!

The syntax is very easy to work with and will be familiar to Java / Spring developers, if that is your background. Let’s take a look at a very simple API:

//main.rs
use actix_cors::Cors;
use actix_web::{Responder, HttpResponse, HttpServer, App, get};

#[get("/")]
async fn get() -> impl Responder {
    println!("GET /");
    HttpResponse::Ok().body("test")
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    println!("Started");
    HttpServer::new(move || {
        App::new()
            .wrap(Cors::new().send_wildcard().finish())
            .service(get)
    })
        .bind("0.0.0.0:8120")?
        .run()
        .await
}

The few dependencies you will need are

actix-cors="0.2.0"
actix-rt = "1.1.1"
actix-web = "2.0.0"

Check out the docs for more detailed examples! When I looked at using websockets, I personally hit a few stumbling blocks on how best to put the pieces together. There are some code examples in the repo, but they are long-winded and offer little guidance on what they are actually doing. Let’s have a look at a simpler example than the ones in the repo, but with a bit more meat on the bones than you get in the docs.

Using the example from the docs, we start with:

use actix_cors::Cors;
use actix_web::{web, Responder, HttpResponse, HttpServer, App, get, Error, HttpRequest};
use actix::{Actor, StreamHandler};
use actix_web_actors::ws;


#[get("/")]
async fn get() -> impl Responder {
    println!("GET /");
    HttpResponse::Ok().body("test")
}

/// Define http actor
struct MyWs;

impl Actor for MyWs {
    type Context = ws::WebsocketContext<Self>;
}

/// Handler for ws::Message message
impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for MyWs {
    fn handle(
        &mut self,
        msg: Result<ws::Message, ws::ProtocolError>,
        ctx: &mut Self::Context,
    ) {
        match msg {
            Ok(ws::Message::Ping(msg)) => ctx.pong(&msg),
            Ok(ws::Message::Text(text)) => ctx.text(text),
            Ok(ws::Message::Binary(bin)) => ctx.binary(bin),
            _ => (),
        }
    }
}

async fn index(req: HttpRequest, stream: web::Payload) -> HttpResponse {
    let resp = ws::start(MyWs {}, &req, stream).unwrap();
    println!("{:?}", resp);
    resp
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    println!("Started");
    HttpServer::new(move || {
        App::new()
            .wrap(Cors::new().send_wildcard().finish())
            .service(get)
            .route("/ws/", web::get().to(index))
    })
        .bind("0.0.0.0:8120")?
        .run()
        .await
}

So we have added a new index function and a MyWs struct. The index function is the entry point to the websocket connection: API consumers will call this endpoint to switch protocols and upgrade to a websocket connection. It calls the actix-web-actors function ws::start, passing a new instance of our MyWs struct along with the original request and stream.

MyWs

Looking at the call made to ws::start, the actor struct passed to the method is constrained: it must implement Actor, with its Context defined as WebsocketContext, and StreamHandler<..>. Both of these have been provided in the example code.

The Context type definition is quite interesting: it is used in the StreamHandler implementation and ensures that the instance passed to ws::start has its methods called when websocket events occur. This can be seen with the ctx methods pong, text and binary, as called in the handle function of the StreamHandler implementation.
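To see the echo behaviour in action, a command-line websocket client such as websocat (assuming you have one installed) is a quick way to poke the endpoint:

# anything you type should be echoed straight back by the MyWs actor
websocat ws://localhost:8120/ws/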

This is all great: clients can push data to the server, and we can receive it and act on it. There is, however, a different use case that is almost always the reason I end up using websockets: updating a UI in real time as things happen.

Publishing data to clients

With some small additions to the example, we can publish data on the websocket to connected clients. This is best achieved by leveraging the actor pattern inherent in the actix framework. We will extend our actor to handle the types of messages we want to publish, and we will alter the way we start the websocket so that we get a handle on the actor for use by our publishing mechanism.

The additions to the actor

use std::fmt::Debug;

use actix::{Handler, Message};
use serde::Serialize;

// messages of this type can be sent to the actor and are pushed to the websocket as JSON
#[derive(Message)]
#[rtype(result = "()")]
pub struct Payload<T> {
    pub payload: T,
}

impl<T> Handler<Payload<T>> for MyWs where T: Serialize + Debug {
    type Result = ();

    fn handle(&mut self, msg: Payload<T>, ctx: &mut Self::Context) {
        println!("handle {:?}", msg.payload);
        ctx.text(serde_json::to_string(&msg.payload).expect("Cannot serialize"));
    }
}

The people from actix have been very kind here and provide derive and attribute macros that implement the required plumbing for our payload struct. Simply add the two attributes as above to whatever struct you want to send to your actor.

The new Handler implementation is what processes these messages. Notice the Result associated type matching the rtype attribute on Payload? This ties together what the handle function is expected to return.

Now we need a way of sending messages to this handler. To do this, we alter our index method from above to start the websocket and return a handle we can use to send messages to the actor:

use std::thread::sleep;
use std::time::Duration;

use tokio::task;

async fn index(req: HttpRequest, stream: web::Payload) -> HttpResponse {
    let (addr, resp) = ws::start_with_addr(MyWs {}, &req, stream).unwrap();
    let recipient = addr.recipient();
    // spawn a task that pushes a "ping" payload to the actor every second
    task::spawn(async move {
        loop {
            println!("Send ping");
            let result = recipient.send(Payload { payload: "ping".to_string() });
            let result = result.await;
            result.unwrap();
            sleep(Duration::from_secs(1));
        }
    });
    println!("{:?}", resp);
    resp
}

It should be noted that tokio::task is being used here to spawn the publishing task. This allows the async send method to be called on the recipient. Notice how the send and await calls are on separate lines. This is because the recipient does not implement Send and cannot be used across an await in a single expression; if you combine them into a single line, you will get compile errors saying the future is not Send. That took me ages to debug! Hopefully this will save your bacon.
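For reference, this extended example pulls in a few crates beyond the first snippet. A dependency set roughly along these lines should work (the versions are indicative of the actix-web 2.0 era, so treat them as a starting point):

actix = "0.9"
actix-web-actors = "2.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = "0.2"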

The full source code for this walk-through is available here.

Building Debian packages

Debian packages (.deb files) are the packages installed by apt and apt-get on Ubuntu. They can be installed manually using the dpkg command or hosted in a PPA as described here.

Installing software

You will need to have the following software installed on a Linux based system:

  1. build-essential
  2. software-properties-common
  3. devscripts
  4. debhelper
apt-get install -y \
        build-essential \
        software-properties-common \
        devscripts \
        debhelper

Nice to haves:

  1. lintian
apt-get install -y \
        lintian

  2. git (latest version from the git-core PPA)

add-apt-repository ppa:git-core/ppa
apt update
apt install -y git

Cross building for RPI4?

  1. crossbuild-essential-arm64
apt-get install -y \
        crossbuild-essential-arm64

Anatomy of a Debian package

The basic structure of a Debian package consists of a number of files; you can see the full documentation here. There are some requirements on folder structure too, specifically around version numbers and where the debian packaging files reside.

  1. Make a folder with the name formatted as <your package name>-<version number>, for example: my-awesome-package-0.0.1. You can read more about the version numbering here, but the above is sufficient
  2. Make a subfolder called debian
  3. Inside the debian folder, add a source/format file containing 3.0 (native), indicating a Debian-native package
  4. Inside the debian folder, add a compat file containing 10 – this is the debhelper compatibility version (a sketch of the commands to create this skeleton follows below)
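As a minimal sketch, assuming the example package name from above:

mkdir -p my-awesome-package-0.0.1/debian/source
echo "3.0 (native)" > my-awesome-package-0.0.1/debian/source/format
echo "10" > my-awesome-package-0.0.1/debian/compat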

control file

This is the heart of your package definition; you can specify values for all the fields shown by the apt command when installing and interrogating your package. You can see a detailed description of this file here. For a simple build:

Source: my-awesome-package
Section: misc
Priority: optional
Maintainer: Me <info@my-awesome-company.com>
Standards-Version: 3.9.7

Package: my-awesome-package
Depends:
Architecture: amd64
Essential: no
Description: Does awesome things to your computer

rules file

This file defines how the debhelper application will build your package. The simplest definition is:

#!/usr/bin/make -f

PKGDIR=debian/tmp

%:
	dh $@

If you are packaging your application as a systemd unit:

#!/usr/bin/make -f

PKGDIR=debian/tmp

%:
	dh $@ --with systemd

override_dh_installinit:
	dh_systemd_enable -pmy-awesome-package --name=my-awesome-package my-awesome-package.service
	dh_installinit -pmy-awesome-package --no-start --noscripts
	dh_systemd_start -pmy-awesome-package --no-restart-on-upgrade

override_dh_systemd_start:
	echo "Not running dh_systemd_start"

Make sure you replace my-awesome-package with the actual name of your package and include the .service file in your debian folder.
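If you don’t already have a unit file, a minimal sketch along these lines works as a starting point – the ExecStart path is just a placeholder for wherever your package installs its binary:

# my-awesome-package.service
[Unit]
Description=My awesome package

[Service]
Type=simple
ExecStart=/usr/bin/my-awesome-package
Restart=on-failure

[Install]
WantedBy=multi-user.target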

Adding files to the package

This is done using a .install file; you will need to be very careful with the name: it is <package name>.install. Continuing the above theme, that would be my-awesome-package.install. This file copies files into the Debian package and specifies where on the target machine they will be placed. The format is one item per line, with the source first and the destination last, separated by a space. The source is relative to the debian directory. You can use wildcards; for a React frontend application installed into a running Apache instance as the only site, I have used this:

../*.json var/www/html
../*.js var/www/html
../*.ico var/www/html
../*.html var/www/html
../*.png var/www/html
../*.txt var/www/html
../static var/www/html

changelog

The final file is the changelog; you can see the details of its composition here. I personally like to generate this from the tag history of my repositories, as that is how I initiate deb package builds in my CI pipeline:

VERSION=$(git tag --sort=-taggerdate --format "%(refname:lstrip=-1)" | head -1)
git tag --sort=-taggerdate \
        --format "my-awesome-package (%(refname:lstrip=-1)) focal; urgency=medium%0a%0a  * %(subject)%0a%0a -- %(taggername) %(taggeremail)  %(taggerdate:rfc2822)" \
    > my-awesome-package-"$VERSION"/debian/changelog

There is a bit going on here: refname:lstrip=-1 gets the last portion of the tag name, so for 0.0.1 it is 0.0.1, and for development/0.0.1 it is also 0.0.1. The %0a sequences are line breaks, and the tag date has to be formatted as RFC 2822 to be compatible with the deb changelog format.
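For reference, each generated entry ends up in the standard Debian changelog shape, something like this (the name, email and date are illustrative):

my-awesome-package (0.0.1) focal; urgency=medium

  * Initial release

 -- Me <info@my-awesome-company.com>  Tue, 01 Sep 2020 10:00:00 +0100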

summary

You should now have a folder structure similar to this:

my-awesome-package-0.0.1
└── debian
    ├── source
    │   └── format
    ├── changelog
    ├── compat
    ├── control
    ├── rules
    ├── my-awesome-package.install
    └── my-awesome-package.service   # if installing as a systemd unit
Building the package

You should now be able to cd into the my-awesome-package-0.0.1 directory and run dpkg-buildpackage. This will produce:

my-awesome-package_0.0.1.dsc
my-awesome-package_0.0.1.tar.xz
my-awesome-package_0.0.1_amd64.buildinfo
my-awesome-package_0.0.1_amd64.changes
my-awesome-package_0.0.1_amd64.deb
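As a quick sanity check, you can install the resulting .deb locally with dpkg, as mentioned at the start of this section:

sudo dpkg -i my-awesome-package_0.0.1_amd64.deb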

If you are building for an RPI4, the command is:

CONFIG_SITE=/etc/dpkg-cross/cross-config.amd64 DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -aarm64 -Pcross,nocheck

CI/CD

This whole process has been formalised in a GitHub action here for your convenience.

Cross Compiling Rust for RPI4

I really enjoy writing Rust, and I own a growing number of Raspberry Pis. The RPI4 is a big step forward in terms of hardware resources on the platform. It is, however, 64-bit rather than 32-bit like its predecessors, and it requires a different tool chain for cross compilation. In this guide I will take you through the setup. If you just want something that works now, you can use the albeego/rust-musl-builder-aarch64:0.0.1 docker image.

docker run --rm -it -v "$(pwd)":/home/rust/src albeego/rust-musl-builder:0.0.1 cargo build --release --target=aarch64-unknown-linux-gnu

Tool Chain

We’re going to need some packages for this; the following apt command will pull them all in:

apt install -y \
    build-essential \
    libssl-dev \
    linux-libc-dev \
    gcc-aarch64-linux-gnu \
    software-properties-common \
    crossbuild-essential-arm64

A note here on linking: you will need to tell Rust to use /usr/bin/aarch64-linux-gnu-gcc as the linker for the applications you compile. You can do this by including a .cargo/config file in your $HOME directory or the project directory with the following contents:

[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"

Now you need to let Rust know about your cross compilation target. You do this by adding the target to rustup:

rustup target add aarch64-unknown-linux-gnu

That’s it for the tool chain.

OpenSSL

One of the challenges with cross compiling Rust applications is getting the dependencies right for the build. OpenSSL is pretty prevalent, and if you are going to do anything web related, you will need it. This cannot be the pre-compiled OpenSSL binary that comes with your OS; you need to produce a cross compiled binary for linking with your Rust application. You don’t want to overwrite the pre-compiled binary that came with your OS either – that would cause other issues! So, let’s make a place to store the cross compiled OpenSSL binary and its source, and open it in our terminal:

mkdir /build
cd /build

Grab the OpenSSL source from the GitHub release and extract it (1.0.2r at the time of writing):

curl -LO "https://github.com/openssl/openssl/archive/OpenSSL_1_0_2r.tar.gz"
tar xvzf "OpenSSL_1_0_2r.tar.gz
cd "openssl-OpenSSL_1_0_2r"

Now we are ready to build OpenSSL. We will configure it without ZLib (we need to provide our own cross compiled ZLib to consumers anyway), non-shared so we are not linking any of our x86_64 objects, position independent in memory, and installed to a custom location:

./Configure no-shared \
            no-zlib \
            -fPIC \
            --prefix=/build/openssl-OpenSSL_1_0_2r/target \
            --cross-compile-prefix=aarch64-linux-gnu- \
            linux-aarch64

That’s the build setup; now run it to produce the binary in /build/openssl-OpenSSL_1_0_2r/target/bin:

make depend
make
sudo make install

ZLib

We will need to do something similar with the compression library ZLib; again, if you are doing anything web related, you can’t avoid it. Let’s keep everything together and drop the source code into our /build directory:

cd /build
ZLIB_VERSION=1.2.11
curl -LO "http://zlib.net/zlib-$ZLIB_VERSION.tar.gz"
tar xzf "zlib-$ZLIB_VERSION.tar.gz"
cd "zlib-$ZLIB_VERSION"

and for the build (making sure we use our aarch64 compiler):

CC=aarch64-linux-gnu-gcc ./configure --static
make
sudo make install

Running the builds

You will need to pass a few environment variables over to cargo to let it know where your pre-compiled binaries are and how to use them. After that it is simply a case of doing a cargo build with the target architecture specified:

OPENSSL_VERSION=1_0_2r
OPENSSL_DIR=/build/openssl-OpenSSL_$OPENSSL_VERSION/target \
    PKG_CONFIG_ALLOW_CROSS=true \
    LIBZ_SYS_STATIC=1 \
    CC="aarch64-linux-gnu-gcc -static -Os" \
    cargo build --release --target=aarch64-unknown-linux-gnu
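To confirm that the output really is an aarch64 binary, the file utility is a quick check (my-app here is a placeholder for whatever your crate builds):

file target/aarch64-unknown-linux-gnu/release/my-app
# expect something like: ELF 64-bit LSB ... ARM aarch64 ...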

Please let me know in the comments if you get any issues. I may have seen them and may be able to help!

I’d like to give a special mention to Eric Kidd from Vermont, USA, on this one. His repository, https://github.com/emk/rust-musl-builder, formed the basis on which I did a lot of this work.

Hosting a signed APT repository

Distributing code for Debian based distributions and derivatives through a PPA can be a little difficult. The following guide will break down the steps and try to explain what is going on. At a high level, you will need a GPG Keypair, somewhere to store the PPA, a machine to do the building and some deb packages to host!

GPG Keyset

For the sake of repeatability I have scripted this out.

#!/bin/bash
set -e

REAL_NAME=$1
EMAIL=$2
PASS_PHRASE=$3

cat > ppa-key <<EOF
     %echo Generating a basic OpenPGP key
     Key-Type: 1
     Key-Length: 4096
     Subkey-Type: 1
     Subkey-Length: 4096
     Name-Real: $REAL_NAME
     Name-Email: $EMAIL
     Expire-Date: 0
     Passphrase: $PASS_PHRASE
     # Do a commit here, so that we can later print "done" 🙂
     %commit
     %echo done
EOF

gpg --batch --generate-key ppa-key
rm -rf ppa-key
echo "$PASS_PHRASE" | gpg --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback --export-secret-keys --armor info@albeego.com > ppa-private-key.asc
gpg --export --armor info@albeego.com > KEY.gpg

This script will set up a batch file for creating the key, run the key generation, remove the batch file, then export the private key as ppa-private-key.asc and the public key as KEY.gpg. You will need to store these items (along with the passphrase) securely in a backup somewhere. If you lose them, you will no longer be able to update your PPA without regenerating the keys, and your consumers will see some strongly worded warnings about the validity of your PPA in that case.
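Saved as, say, generate-ppa-key.sh (the filename is just for illustration), the script is invoked with the real name, email and passphrase as its three arguments:

./generate-ppa-key.sh "My Name" "info@my-awesome-company.com" "a-long-passphrase"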

The PPA

To build the PPA we will be using apt-ftparchive. This tool will generate the cache and the index files describing the repository structure for apt to consume.

apt-ftparchive.conf

This configuration file specifies the structure of your PPA: where things are stored, which compressions to use, and which distributions and architectures are supported.

Dir {
	ArchiveDir "./debian";
	CacheDir "./cache";
};
Default {
	Packages::Compress ". gzip bzip2";
	Sources::Compress ". gzip";
	Contents::Compress ". gzip";
};
TreeDefault {
	BinCacheDB "packages-$(SECTION)-$(ARCH).db";
	Directory "pool/$(SECTION)";
	Packages "$(DIST)/$(SECTION)/binary-$(ARCH)/Packages";
	SrcDirectory "pool/$(SECTION)";
	Contents "$(DIST)/Contents-$(ARCH)";
};
Tree "dists/bionic" {
	Sections "main";
	Architectures "amd64 armhf arm64";
};
Tree "dists/focal" {
	Sections "main";
	Architectures "amd64 armhf arm64";
};

This configuration will support Ubuntu 18.04 (bionic) and 20.04 (focal) for 64-bit x86 systems and for Raspberry Pis, including the new 4 series.

You will need to create the following folders to support the configuration

  1. debian/dists/bionic/main/binary-amd64
  2. debian/dists/bionic/main/binary-arm64
  3. debian/dists/bionic/main/binary-armhf
  4. debian/pool/main
  5. cache
mkdir -p debian/dists/bionic/main/binary-amd64
mkdir -p debian/dists/bionic/main/binary-arm64
mkdir -p debian/dists/bionic/main/binary-armhf
mkdir -p debian/pool/main
mkdir cache

If you are also publishing for focal, create the matching debian/dists/focal/main/binary-* folders too. Your .debs need to be copied into the debian/pool/main directory.

NB: If you are updating the PPA, make sure you include all the previously uploaded .debs too, or they will not be indexed.

You can now generate the indexes using the following command:

apt-ftparchive generate apt-ftparchive.conf

There will be some files missing from the resulting structure that you will need to add: the Release files. These correspond to the supported distributions, and each one requires a configuration file – in our case bionic.conf and focal.conf.

bionic.conf

APT::FTPArchive::Release::Codename "bionic";
APT::FTPArchive::Release::Origin "My repository";
APT::FTPArchive::Release::Components "main";
APT::FTPArchive::Release::Label "Packages hosted by me!!!";
APT::FTPArchive::Release::Architectures "amd64 arm64 armhf";
APT::FTPArchive::Release::Suite "bionic";

focal.conf

APT::FTPArchive::Release::Codename "focal";
APT::FTPArchive::Release::Origin "My repository";
APT::FTPArchive::Release::Components "main";
APT::FTPArchive::Release::Label "Packages hosted by me";
APT::FTPArchive::Release::Architectures "amd64 arm64 armhf";
APT::FTPArchive::Release::Suite "focal";

These configuration files are important: without them, consumers will not find packages in your archive, as there will be no indexes for their architecture or distribution.

You can now generate the Release files

apt-ftparchive -c bionic.conf release debian/dists/bionic >>debian/dists/bionic/Release
apt-ftparchive -c focal.conf release debian/dists/focal >>debian/dists/focal/Release

The Release files now need signatures attached to attest to their validity and your ownership of these .debs:

echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback -abs -o - debian/dists/bionic/Release >debian/dists/bionic/Release.gpg
echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback --clearsign -o - debian/dists/bionic/Release >debian/dists/bionic/InRelease
echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback -abs -o - debian/dists/focal/Release >debian/dists/focal/Release.gpg
echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback --clearsign -o - debian/dists/focal/Release >debian/dists/focal/InRelease

You will notice above that the commands expect PASS_PHRASE and PRIVATE_KEY_EMAIL variables to be available in your shell. I use this as part of a script, which is included for your convenience at the end of the article.

You now have the Release.gpg files, which are detached signatures, and the InRelease files, which are the Release contents with the signature wrapping the message (attached), all at the correct points in the file structure.

Simply upload your debian directory to your target hosting system. I personally used https://www.ovh.co.uk Object Storage; it’s cheap and will support some gigantic .debs if you need them. You could also use GitHub Pages, as long as none of your .debs are 500 MB or larger and your entire PPA is within their repository size limit.

<my_repository>.list

This is the final item to load into your hosting platform: the .list file. Call it something sensible for your PPA – led-sys.list would do for me! Its contents should be as follows:

deb http://your-hosting-url bionic main
deb http://your-hosting-url focal main

Consuming your PPA

curl -s --compressed http://your-hosting-url/KEY.gpg | sudo apt-key add -
sudo curl -s --compressed -o /etc/apt/sources.list.d/<my_repository>.list "http://your-hosting-url/<my_repository>.list"
sudo apt update

Make sure you change the URL of the PPA and the name of the .list file to match; you will then be able to apt install your packages from your signed APT repository.
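With the repository configured, installing a package from it is then just the usual apt command – using the example package name from earlier:

sudo apt install my-awesome-package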

A Full Script for Managing a PPA in an OVH Object Storage container

#!/bin/bash
set -e

STORAGE_CONTAINER_URL=$1
PRIVATE_KEY=$2
PRIVATE_KEY_EMAIL=$3
PASS_PHRASE=$4
PUBLIC_KEY=$5
PROJECT_ID=$6
SWIFT_USERNAME=$7
SWIFT_PASSWORD=$8
REGION=$9
CONTAINER_NAME=${10}
LIST_FILE_NAME=${11}

download_files() {
  swift --os-auth-url https://auth.cloud.ovh.net/v3 --auth-version 3 \
    --os-project-id "$PROJECT_ID" \
    --os-username "$SWIFT_USERNAME" \
    --os-password "$SWIFT_PASSWORD" \
    --os-region-name "$REGION" \
    download "$CONTAINER_NAME" \
    --prefix debian/pool/main/
}

upload() {
  swift --os-auth-url https://auth.cloud.ovh.net/v3 --auth-version 3 \
    --os-project-id "$PROJECT_ID" \
    --os-username "$SWIFT_USERNAME" \
    --os-password "$SWIFT_PASSWORD" \
    --os-region-name "$REGION" \
    upload "$CONTAINER_NAME" "$1"
}

write_key_to_file() {

  KEY="${3//-----BEGIN PGP $1 KEY BLOCK-----/}"
  KEY="${KEY//-----END PGP $1 KEY BLOCK-----/}"

  echo "-----BEGIN PGP $1 KEY BLOCK-----" >"$2"
  printf "%s\n" "$KEY" >>"$2"
  echo "-----END PGP $1 KEY BLOCK-----" >>"$2"
}

write_private_key_to_file() {
  write_key_to_file "PRIVATE" private.key "$PRIVATE_KEY"
}

write_public_key_to_file() {
  write_key_to_file "PUBLIC" KEY.gpg "$PUBLIC_KEY"
}

rm $LIST_FILE_NAME || true

write_private_key_to_file
gpg --import private.key
rm private.key

mkdir -p debian/dists/bionic/main/binary-amd64
mkdir -p debian/pool/main
cp -r *.deb debian/pool/main
download_files
mkdir cache
apt-ftparchive generate apt-ftparchive.conf
apt-ftparchive -c bionic.conf release debian/dists/bionic >>debian/dists/bionic/Release
echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback -abs -o - debian/dists/bionic/Release >debian/dists/bionic/Release.gpg
echo "$PASS_PHRASE" | gpg -u "${PRIVATE_KEY_EMAIL}" --batch --quiet --yes --passphrase-fd 0 --pinentry-mode loopback --clearsign -o - debian/dists/bionic/Release >debian/dists/bionic/InRelease
upload debian
upload cache

wget "$STORAGE_CONTAINER_URL"/$LIST_FILE_NAME || echo "deb $STORAGE_CONTAINER_URL bionic main" >$LIST_FILE_NAME
upload $LIST_FILE_NAME

wget "$STORAGE_CONTAINER_URL"/KEY.gpg || write_public_key_to_file
upload KEY.gpg

rm KEY.gpg
rm -rf debian
rm -rf cache

The above script is ready to go as part of a build pipeline: it will synchronise the pool from object storage to the local machine, copy any new .debs into the pool directory, rebuild the indexes, upload everything, and tidy up after itself. It uses the OpenStack swift client, so any OpenStack compatible object storage container will work – just change the URI for the authorisations in the swift commands.

The whole process is available as a GitHub action here: https://github.com/albeego/apt-repository-action

Building RPI 4 images from Ubuntu 18.04

In this post I will run through some of the steps needed to build custom images for Raspberry Pi 4 boards. There were a number of steps along the way that took extra research and ended with some head scratching and wondering about how and why things were failing. Hopefully this guide can help someone struggling down the same path. I used my Ubuntu workstation throughout this process and would recommend using a Linux based OS.

First off you will need an image to start from. I used http://cdimage.ubuntu.com/releases/18.04.4/release/ and selected the ARMv8 / AArch64 server install image, although using http://cdimage.ubuntu.com/releases/20.04/release/ is fine too. Don’t forget to decompress the .xz file!

xz --decompress <image_file_name>.img.xz

One of the first problems to overcome is that the image contains two partitions: the root file system and the (FAT) boot partition. A number of tools will show you where the offsets of the two partitions are in the image’s partition table; however, losetup has a flag that will scan the partition table inside the image and register loop device partitions for you:

sudo losetup -P /dev/loop99 <image_file_name>.img
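You can confirm that both partitions were registered with lsblk (the loop device number is whatever you chose above):

lsblk /dev/loop99
# expect to see loop99p1 (the FAT boot partition) and loop99p2 (the root filesystem)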

Now we will need somewhere to set up our mount points into the partitions of the image. We will be mounting the root filesystem from partition 2 and the boot partition from partition 1:

mkdir rpi
mkdir rpi/boot

Normally we could just create the rpi directory; however, as this is an Ubuntu image, we will be mounting the boot partition to rpi/boot/firmware. This is because the boot partition is FAT, which does not support symlinks. If you are going to do any kernel installs, flash-kernel will encounter a number of issues trying to write the kernel to the rpi/boot directory. To begin with, flash-kernel will report “Unsupported platform”; reading through the documentation, you can override the target platform in /etc/flash-kernel/machine:

echo "Raspberry Pi 4 Model B Rev 1.2" > /etc/flash-kernel/machine

Next you will get a failure on missing device tree blobs:

Couldn't find DTB bcm2711-rpi-4-b.dtb on the following paths: /etc/flash-kernel/dtbs /usr/lib/linux-image- /lib/firmware//device-tree/
Installing  into /boot/dtbs//./bcm2711-rpi-4-b.dtb
/bin/cp: cannot stat '': No such file or directory

You can copy that device tree blob from /usr/lib/linux-image-/broadcom/bcm2711-rpi-4-b.dtb to /etc/flash-kernel/dtbs the result of which will be:

/bin/ln: failed to create symbolic link '/boot/dtb-': Operation not permitted

The correct approach is to mount the boot partition to rpi/boot/firmware:

sudo mount -o rw /dev/loop99p2 rpi
sudo mount -o rw /dev/loop99p1 rpi/boot/firmware

You will need to bind mount some of your running system into the mounted partitions next. This will enable you to install packages inside the image in later steps:

sudo mount --bind /dev rpi/dev/
sudo mount --bind /sys rpi/sys/
sudo mount --bind /proc rpi/proc/
sudo mount --bind /dev/pts rpi/dev/pts
sudo mount --bind /run rpi/run

Finally, you will need to install qemu-user-static.

sudo apt install -y qemu-user-static

The mount points have now been fully prepared, and the next step is to create a chroot to access your image and install software packages as though it were your local system.

First, make a script to execute inside the chroot and copy it into the directory. I used the following contents:

install-packages.sh

#!/bin/bash
apt update
dpkg --configure -a
# print the URIs of the requested packages and all their dependencies instead of installing directly
apt-get --print-uris --yes install <list of packages> | grep ^\' | cut -d\' -f2 > downloads.list
wget --input-file downloads.list
dpkg -i *.deb

Every time I tried to get apt to install the packages inside the chroot, it froze after unpacking. It would quite happily fetch the debs, so the above script prints the URIs of the packages and all their dependencies to a file called downloads.list, after which wget and dpkg take over to do the download and install respectively.

Copy the script into the rpi directory, make it executable, and run your package installation with the following commands:

sudo cp install-packages.sh rpi/
sudo chmod +x rpi/install-packages.sh
sudo chroot rpi ./install-packages.sh
sudo rm rpi/*.deb
sudo rm rpi/install-packages.sh
sudo rm rpi/downloads.list

That is everything: you should now have an image with packages pre-installed. You will need to unmount everything on your local file system, and then you are ready to flash your image and try it out:

sudo umount rpi/dev/pts
sudo umount rpi/dev/
sudo umount rpi/sys/
sudo umount rpi/proc/
sudo umount rpi/run
sudo umount /dev/loop99p1
sudo umount /dev/loop99p2 -l
sudo losetup -d /dev/loop99
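Finally, a sketch of flashing the finished image to an SD card with dd – /dev/sdX is a placeholder, so triple-check the device name before running it:

sudo dd if=<image_file_name>.img of=/dev/sdX bs=4M status=progress conv=fsync
sync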