This is a copy of the Github readme.
Find the original on


CRUSTDE - Containerized Rust development environment - Hack Without Fear and Trust! (2024-03)
version: 3.0 date: 2022-09-06 author: repository: GitHub


Hashtags: #rustlang #buildtool #developmenttool #tutorial #docker #ssh #container #podman #Linux #OCI
My projects on GitHub are more like a tutorial than a finished product: bestia-dev tutorials.


Try it

The installation is just a bunch of bash scripts.
The scripts are written for the Debian OS, which can run on bare metal or inside Win10+WSL2.
First download and run the download_script. It will download the rest of the scripts.
After downloading, you can inspect them to see exactly what they do. There are a lot of comments and descriptions inside. A more detailed explanation is in this
Every script will show step-by-step instructions on what to do next.

mkdir -p ~/rustprojects/crustde_install;
cd ~/rustprojects/crustde_install;
curl -Sf -L --output;
# You can read the bash script: it only creates directories, downloads the other scripts and suggests which script to run next

That's it !

This project also has a YouTube video tutorial. Watch it:

Now we can test the connection from various locations.

1. Try the SSH connection from the Debian host:

ssh -i ~/.ssh/crustde_rustdevuser_ssh_1 -p 2201 rustdevuser@localhost
# or using the ssh config file
ssh -F ~/.ssh/config crustde_vscode_cnt
# Choose `yes` to save fingerprint if asked, just the first time.
# type passphrase
# should work !
# try for example: ls
# result: rustprojects
# to exit the container: exit

2. If you are in Win10+WSL2, then try the SSH connection from the Windows command prompt or Windows Powershell terminal:

# test the ssh connection from Windows cmd prompt
"C:\WINDOWS\System32\OpenSSH\ssh.exe" -i ~\.ssh\crustde_rustdevuser_ssh_1 -p 2201 rustdevuser@localhost
# Choose `y` to save fingerprint if asked, just the first time.
# type passphrase
# should work !
# try for example: ls
# result: rustprojects
# to exit the container: exit

3. Open VSCode and install extension Remote - SSH.

4. Then in VSCode F1, type ssh and choose Remote-SSH: Connect to Host... and choose crustde_vscode_cnt.
Choose Linux if asked, just the first time.
Type your passphrase.
If we are lucky, everything works and you are now inside the container over SSH.

5. In the VSCode terminal create a simple Rust project and run it:

cd ~/rustprojects
cargo new crustde_hello
cd crustde_hello
cargo run

That should work and greet you with "Hello, world!"

6. After a reboot, WSL2 can create some network problems for Podman.
This does not happen with Debian on bare metal, but the restart script works in both cases: it restarts the pod and starts the sshd server. So use it after every reboot.
We can simulate the WSL2 reboot in Powershell in Windows:

wsl --shutdown 

Before entering any Podman command, we first need to clean some temporary files, restart the pod and restart the SSH server.
In the host terminal restart the pod after reboot:

sh ~/rustprojects/crustde_install/
podman ps

If the restart is successful, every container will be started in a few seconds. It is not enough for containers to be in the status "Created"; in that case just repeat the restart procedure.
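A quick way to check the status is sketched below, with a guard so it degrades gracefully when podman is not installed ("Up ..." means the container is ready, "Created" means the restart must be repeated):

```shell
# sketch: show name and status of all containers
if command -v podman >/dev/null 2>&1; then
  STATUS=$(podman ps -a --format '{{.Names}} {{.Status}}')
else
  STATUS="podman not installed"
fi
echo "$STATUS"
```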

7. Eventually you will want to remove the entire pod. Linux OCI containers and pods are ephemeral, which means just temporary. But your code and data must persist. Before destroying the pod/containers, push your changes to GitHub, because removing the pod will also destroy all the data inside it.
Be careful !
In the WSL2 terminal:

podman pod rm -f crustde_pod 


Rust is a fantastic young language that empowers everyone to build reliable and efficient software. It enables low-level control without giving up high-level conveniences. But with great power comes great responsibility !

Rust programs can do any "potentially harmful" things to your system. That is true for any compiled program, Rust is no exception.

But Rust can run code also at compile time, using build scripts and procedural macros. Even worse, if you open a code editor (like VSCode) with auto-completion (like Rust-analyzer), it will compile the code in the background without you knowing it. And the build scripts and procedural macros will run and they can do "anything" !

Even if you are very careful and avoid build scripts and procedural macros, your Rust project will have a lot of crates in the dependency tree. Any of those can surprise you with some "malware". This is called a "supply chain attack".

It is very hard to avoid "supply chain attacks" in Rust as things are today. We are just lucky that the ecosystem is young and small, and the malevolent players are waiting for Rust to become more popular. Then they will strike and strike hard. We need to be skeptical about anything that comes from the internet. We need to isolate/sandbox it so it cannot harm our system.

For a big open-source project, you will not read and understand every line of code. It is impossible because of the sheer size of the projects, and it is impossible to gain a deep understanding of all the underlying principles, rules and exceptions. Everything is moving and changing fast and continuously; it is impossible to follow all the changes.
We need layered protection between our computer system and unknown code. In this project, I propose a containerized development environment that allows some degree of isolation, while at the same time being easy to install, transfer and repeat.

Let's learn to develop "everything" inside a Linux OCI container and to isolate/sandbox it as much as possible from the underlying system.

I have to acknowledge that Linux OCI containers are not the perfect sandboxing solution. But I believe it is "good enough" for my "Rust development environment". I expect that container isolation will get better with time (Google, Amazon, Intel, OpenStack and IBM are working on it).
It is possible to use the same Linux OCI container also inside a virtual machine for better isolation. For example, my main system is Win10. Inside that, I have WSL2, which is a Linux virtual machine. And inside that, I have Linux OCI containers. But because of compiling performance, I decided to go with a Debian dual boot with Linux OCI containers. My opinionated preferences:

Yes, there exists the possibility of abusing a kernel vulnerability, but I believe it is hard and they will get patched.
I didn't choose a true virtualization approach, but it is easy to run the container inside a virtual machine. More layers, more protection.

Trust in Rust open-source 2022

First I have to talk about TRUST. This is the whole point of this entire project.
We live in dangerous times for "supply chain attacks" in open-source and it is getting worse. This is a problem we need to address!
In this project, you don't need to TRUST ME! You can run all the bash commands inside bash scripts line-by-line. My emphasis is to thoroughly comment and describe what is my intention for every single command. You can follow and observe exactly what is going on. This is the beauty of open source. But this is realistic only for very simple projects.
To be meticulously precise, you still have to trust the Windows code, Linux, GNU tools, drivers, Podman, Buildah, VSCode, extensions, the microprocessor, memory, disk and many other projects. There is no system without an inherent minimal level of TRUST.

Compile (build) speed in various environments

After using these containers for some time, I was curious about compile performance in the various environments, suspecting there would be differences. And I was right!
I tried cargo auto build on my project database_web_ui_on_server in different environments. I built a few times and took an average.

18s in container on WSL2 without shared volume
8s in container on WSL2 with shared volume
6s in WSL2

11s in the container on Debian (dual boot) without shared volume
7s in the container on Debian (dual boot) with shared volume
6s in Debian (dual boot)

Then I changed the linker to "mold" on Debian. It is 3x faster!
The "mold" linker is experimental and Linux-only. That's ok for me.

6.84s in container on WSL2 without shared volume
5.05s in container on WSL2 with shared volume
3.66s in WSL2

4.43s in the container on Debian (dual boot) without shared volume
5.35s in the container on Debian (dual boot) with shared volume
3.61s in Debian (dual boot)

That is a big difference! I decided to develop Rust projects in Debian (dual boot) without a shared volume, with the mold linker. The container steals a little performance for itself, but in that combination it is not a big deal. Security is not cheap!

Docker and OCI

We all call it Docker, but Docker is just a well-known company name with a funny whale logo. They developed and promoted Linux containers. Then they helped to create an open industry standard around container formats and runtimes. This standard is now called OCI (Open Container Initiative). So the correct name is "Linux OCI containers" and "Linux OCI images". Somebody also calls them just "Linux containers".

There are alternatives to using Docker software that I will explore here.

Install Podman in Debian 11(Bullseye) (on bare metal or in Win10 + WSL2)

Podman is a daemonless, open-source, Linux native tool designed to work with Open Containers Initiative (OCI) Containers and Container Images. Containers under the control of Podman can be run by a non-privileged user. The CLI commands of the Podman "Container Engine" are practically identical to the Docker CLI. Podman relies on an OCI-compliant "Container Runtime" (runc, crun, runv, etc) to interface with the operating system and create the running containers.

I already wrote some information on how to install and use the combination of Win10 + WSL2 + Debian11(Bullseye):

Podman is available from the Debian11 package manager.

Let's install it. Open the WSL2 terminal and type:

sudo apt update
sudo apt install -y podman
podman version
   Version: 3.0.1

In WSL2 we see some errors that don't exist on bare metal.
WSL2 has a special kernel, and Podman needs a small trick to work.

mkdir -p $HOME/.config
mkdir -p $HOME/.config/containers
nano $HOME/.config/containers/containers.conf

In this empty new file containers.conf write just these 3 lines and save:

[engine]
cgroup_manager = "cgroupfs"
events_logger = "file"

Now you can run the command again and the result has no errors:

podman version
# 3.0.1

Using Podman

Using the Podman CLI is just the same as using Docker CLI.
Inside the WSL2 terminal type:

podman images

We have no images for now. The words "image" and "container" are somewhat confusing. Super simplifying: when it runs, it is a "container"; when it does not run, it is an "image". The image is just an "installation file". Containers can be started and stopped or attached and detached, but they are created from the image only once.
For a test, run a sample container. It is a web server.

-d means it is run in detached mode
-t adds a pseudo-tty to run arbitrary commands in an interactive shell
-p stands for publish port

The run command will download/pull the image if needed.

podman run -dt --name sample_cnt -p 8001:80/tcp

List all containers:

podman ps -a
# it will also list the stopped containers. To clean up all the stopped containers:
podman container cleanup -a --rm

Testing the httpd container:

curl http://localhost:8001

That should print the HTML page.

Finally, you can remove the sample_cnt container we used:

podman rm sample_cnt

You can also remove the image, because this was just practice for learning:

podman rmi

Buildah for our Rust development images

Buildah is a replacement for the docker build command. It is easier to incorporate into scripts. It is pronounced exactly as Builder but with a Bostonian accent ;-)

The Rust official images are on the Docker hub:

I was surprised by the size of the image. It is big: 500 MB compressed, 1.4 GB uncompressed. But this is the size of the Rust development tools.

I don't like that these images have only the root user. I will start from the Debian-11 image and install all I need as a non-privileged user rustdevuser.

In the bash terminal pull the image from the Docker hub:

podman pull

I wrote the bash script

Run it with

cd ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images 

This will create the image crustde_cargo_img.

The scripts are just bash scripts and are super easy to read, follow, learn and modify. Much easier than Dockerfile. You can even run the commands one by one in the bash terminal and inspect the container to debug the building process.

Rust development in a Linux OCI container

There are a lot of benefits to making a development environment in a container.
We want everything to be isolated/sandboxed so it cannot affect our host system (Debian on bare metal or in WSL2 in Win10).
We also don't want to make any changes to our system because of Rust tools or our project needs.
We can simultaneously have several containers, each with a different version of Rust or a different toolchain, with all the necessary configuration and tools. We can easily transfer a container to another system or another developer and use it exactly as it is configured. Effortlessly.
We can save/export the container into an image with the source code and the exact state of all developer tools for a particular app version. Then years later we can still work on it for some security patches without the fear that new tools will break the old source code.
You will see that everybody uses podman run, but this is essentially 4 commands in one: pull the image from a repository, create the container, start (or attach to) the container, and exec bash in interactive mode. I like to use these commands separately because it makes more sense for learning.
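Written out, the four steps might look like this (a sketch; the image and container names are illustrative, not from this project, and a podman install is required):

```shell
# sketch: podman run decomposed into its four underlying commands
IMG=docker.io/library/debian:bullseye-slim   # illustrative image
CNT=demo_cnt                                 # illustrative container name
if command -v podman >/dev/null 2>&1; then
  podman pull "$IMG"                         # 1. pull the image from a registry
  podman create -ti --name "$CNT" "$IMG"     # 2. create the container
  podman start "$CNT"                        # 3. start the container
  podman exec -it "$CNT" bash                # 4. exec bash in interactive mode
else
  echo "podman not installed - steps: pull, create, start, exec"
fi
```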
Create the container with a fixed name crustde_cargo_cnt:

--name - the container name will be crustde_cargo_cnt
-ti - we will use the container interactively in the terminal

podman create -ti --name crustde_cargo_cnt

We can list the existing containers with:

podman ps -a

Now we can start the container:

podman start crustde_cargo_cnt

Open the bash to interact with the container terminal:
-it - interactive terminal

podman exec -it crustde_cargo_cnt bash

We are now inside the container terminal and we can use cargo, rustup and other rust tools. The files we create will be inside the container. We are rustdevuser inside this container, so we will put our rustprojects in the /home/rustdevuser/rustprojects directory.
This container is started from Podman without root access to the host system !
This is a small, but important difference between Docker and Podman.
First let's find the rustc version:

rustc --version
  rustc 1.69.0 

Let's create and run a small Rust program:

cd ~/rustprojects
cargo new crustde_hello
cd crustde_hello
cargo run

That should work fine and greet you with "Hello, world!"
We can exit the container now with the command exit.


When we exited the container we returned to the host terminal of the Debian host.
The container still exists and is still running. Check with podman ps -a.
To interact with it again, repeat the previous command podman exec -it crustde_cargo_cnt bash.
This container does not work with VSCode and we will not need it anymore. If you use another editor, you can use this image/container as a base for your image/container for your editor.

Remove the container with:

podman rm crustde_cargo_cnt -f

How to install the "mold linker"

I discovered later that my compile times were bad and that they could be better using the "mold linker". It is experimental, but that is ok for me.

Download mold from: and extract only the mold binary executable into ~.
Copy it as root into the container's /usr/bin and adjust ownership and permissions:

podman cp $HOME/mold  crustde_vscode_cnt:/usr/bin/
podman exec --user=root crustde_vscode_cnt chown root:root /usr/bin/mold
podman exec --user=root crustde_vscode_cnt chmod 755 /usr/bin/mold
podman exec --user=root crustde_vscode_cnt mkdir -p /home/rustdevuser/.cargo/bin/mold
podman exec --user=root crustde_vscode_cnt ln -s /usr/bin/mold /home/rustdevuser/.cargo/bin/mold/ld

Create or modify the global config.toml file that will be used for all Rust builds:

nano ~/.cargo/config.toml

GCC does not support -fuse-ld=mold directly, so the advised workaround is to pass -B with a directory whose ld symlink points to mold:

rustflags = ["-C", "link-arg=-B/home/rustdevuser/.cargo/bin/mold"]
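The line above is only the rustflags value; a complete config.toml needs a target section around it. A minimal sketch (the section name is my assumption for the x86_64 Linux GNU toolchain):

```toml
# ~/.cargo/config.toml - minimal sketch; section name assumed for x86_64 Linux
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-B/home/rustdevuser/.cargo/bin/mold"]
```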

Linux OCI image for cross-compile to Linux, Windows, WASI and WASM

I wrote the bash script

Run it with

cd ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images 

This will create the image crustde_cross_img.

Cross-compile for Windows

I added to the image crustde_cross_img the target x86_64-pc-windows-gnu and the utilities needed for cross-compiling to Windows.
It is nice for some programs to compile the executables both for Linux and Windows.
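The crustde_cross_img image should already include the Windows GNU target; if you build your own image without it, the target can be added once with rustup (a guarded sketch):

```shell
# sketch: ensure the Windows cross-compile target is installed (requires rustup)
TARGET=x86_64-pc-windows-gnu
if command -v rustup >/dev/null 2>&1; then
  rustup target add "$TARGET"
else
  echo "rustup not found - would run: rustup target add $TARGET"
fi
```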
This is now simple to cross-compile with this command:

cargo build --target x86_64-pc-windows-gnu

The result will be in the folder target/x86_64-pc-windows-gnu/debug.
You can then copy this file from the container to the host system.
Run inside the host system (example for the simple crustde_hello project):

mkdir -p ~/rustprojects/crustde_hello/win
podman cp crustde_cross_cnt:/home/rustdevuser/rustprojects/crustde_hello/target/x86_64-pc-windows-gnu/debug/crustde_hello.exe ~/rustprojects/crustde_hello/win/crustde_hello.exe

Now in the host system (Linux) you can copy this file (somehow) to your Windows system and run it there. It works.

Cross-compile for Musl (standalone executable 100% statically linked)

I added to the image crustde_cross_img the target x86_64-unknown-linux-musl and the utilities needed for cross-compiling to musl.
These executables are 100% statically linked and don't need any other dynamic library.
Using a container to publish your executable to a server makes distribution and isolation much easier. These executables can run on the empty container scratch. Or on the smallest Linux container images like Alpine (7 MB) or distroless static-debian11 (3 MB).
Most of the programs will run just fine with musl. Cross-compile with this:

cargo build --target x86_64-unknown-linux-musl

The result will be in the folder target/x86_64-unknown-linux-musl/debug.
You can then copy this file from the container to the host system.
Run inside the host system (example for the simple crustde_hello project):

mkdir -p ~/rustprojects/crustde_hello/musl
podman cp crustde_cross_cnt:/home/rustdevuser/rustprojects/crustde_hello/target/x86_64-unknown-linux-musl/debug/crustde_hello ~/rustprojects/crustde_hello/musl/crustde_hello
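Once copied, you can check on the host that the binary really is 100% statically linked; for a static binary, `ldd` refuses with an error instead of listing libraries (a sketch; the path assumes the crustde_hello example above):

```shell
# sketch: confirm a musl binary is statically linked
BIN="$HOME/rustprojects/crustde_hello/musl/crustde_hello"
if [ -f "$BIN" ]; then
  # for a fully static binary, ldd exits with "not a dynamic executable"
  ldd "$BIN" || echo "fully static, as expected"
else
  echo "binary not found at $BIN - build and copy it first"
fi
```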

First let's make an empty scratch container with only this executable:

# build the container image from scratch

buildah from \
--name scratch_hello_world_img \
scratch

buildah config \
--label name=scratch_hello_world_img \
scratch_hello_world_img

buildah copy scratch_hello_world_img  ~/rustprojects/crustde_hello/musl/crustde_hello /crustde_hello

buildah commit scratch_hello_world_img scratch_hello_world_img

# now run the container and executable

podman run scratch_hello_world_img /crustde_hello

We can also create a small Alpine container and copy this executable into it.

# build the container image

buildah from \
--name alpine_hello_world_img \
docker.io/library/alpine

buildah config \
--label name=alpine_hello_world_img \
alpine_hello_world_img

buildah copy alpine_hello_world_img  ~/rustprojects/crustde_hello/musl/crustde_hello /usr/bin/crustde_hello
buildah run --user root  alpine_hello_world_img    chown root:root /usr/bin/crustde_hello
buildah run --user root  alpine_hello_world_img    chmod 755 /usr/bin/crustde_hello

buildah commit alpine_hello_world_img alpine_hello_world_img

# now run the container and executable

podman run alpine_hello_world_img /usr/bin/crustde_hello

The commands are similar for distroless static.

# build the container image

buildah from \
--name distroless_hello_world_img \

buildah config \
--label name=distroless_hello_world_img \
distroless_hello_world_img

buildah copy distroless_hello_world_img  ~/rustprojects/crustde_hello/musl/crustde_hello /usr/bin/crustde_hello

buildah commit distroless_hello_world_img distroless_hello_world_img

# now run the container and executable

podman run distroless_hello_world_img /usr/bin/crustde_hello

There is an example of this code in the folder test_cross_compile.

You can use this image for distribution of the program to your server. It is only 11 MB in size.

Cross-compile to Wasi

I added to the image crustde_cross_img the target wasm32-wasi for cross-compiling to Wasi and the CLI wasmtime to run wasi programs.

cargo build --target wasm32-wasi
wasmtime ./target/wasm32-wasi/debug/crustde_hello.wasm upper world

We can also run this wasm program in the WASI playground at

Cross-compile to Wasm/Webassembly

I added to the image crustde_cross_img the utility wasm-pack for cross-compiling to Wasm/Webassembly.
It is an in-place substitute for the default cargo command:

wasm-pack build --release --target web

Linux OCI image with VSCode server and extensions

I use VSCode as my primary code editor in Windows and in Debian GUI.
I will install the Remote SSH extension for remote development. That is very broadly usable. We need to create an image that contains the VSCode server and extensions.

In the host terminal run the bash script. It will create the image crustde_vscode_img.

cd ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images 

This is based on the image crustde_cargo_img and adds the VSCode server and extensions. VSCode is great because of its extensions. Most of these extensions are installed inside the image crustde_vscode_img:

Other extensions you can add manually through VSCode, but then it is not repeatable. It is better to modify the script and recreate the image.

Push the image to the Docker hub

I signed in to
In Account Settings - Security I created an access token. This is the password for podman login. It is needed only once.
Then I created a new image repository with the name crustde_vscode_img and tagged it as latest. Docker Hub helps with the push command syntax. I use Podman, so I just renamed docker to podman in the commands. The same for crustde_squid_img.
In host terminal:

podman login --username bestiadev
# type docker access token

podman push
podman push

podman push
podman push

podman push
podman push
podman push

podman push
podman push

It takes some time to upload more than 3 GB with my slow internet connection.

Enter the container as root

Sometimes you need to do something as root.
You don't need to use sudo. It is not installed. Just open the container bash as root user.

podman exec -it --user root crustde_vscode_cnt bash

Image sizes

Rust is not so small. The official Rust image is 500 MB compressed, 1.4 GB uncompressed.
I saved some 600 MB of space just by deleting the docs folder, which no one needs because the documentation is on the internet.
I added in the image a lot of useful tools:

Docker Hub stores compressed images, so they are a third of the size to download.

| Image | Label | Size | Size compressed |
|---|---|---|---|
| crustde_cargo_img | cargo-1.69.0 | 1.30 GB | 0.59 GB |
| crustde_vscode_img | cargo-1.69.0 | 1.70 GB | 0.57 GB |
| crustde_squid_img | cargo-1.69.0 | 0.27 GB | 0.10 GB |

Users keys for SSH

We need to create 2 SSH keys, one for the SSH server identity host key of the container and the other for the identity of rustdevuser. This is done only once. To avoid old cryptographic algorithms I will force the new ed25519.
In host terminal:

# generate user key
ssh-keygen -f ~/.ssh/crustde_rustdevuser_ssh_1 -t ed25519 -C "info@my.domain"
# give it a passphrase and remember it, you will need it
# generate host key
mkdir -p ~/.ssh/crustde_pod_keys/etc/ssh
ssh-keygen -A -f ~/.ssh/crustde_pod_keys

# check the new files
# list user keys
ls -la ~/.ssh | grep "rustdevuser"
# -rw------- 1 rustdevuser rustdevuser  2655 Apr  3 12:03 crustde_rustdevuser_ssh_1
# -rw-r--r-- 1 rustdevuser rustdevuser   569 Apr  3 12:03

# list host keys
ls -la ~/.ssh/crustde_pod_keys/etc/ssh
# -rw------- 1 rustdevuser rustdevuser 1381 Apr  4 10:44 ssh_host_dsa_key
# -rw-r--r-- 1 rustdevuser rustdevuser  603 Apr  4 10:44
# -rw------- 1 rustdevuser rustdevuser  505 Apr  4 10:44 ssh_host_ecdsa_key
# -rw-r--r-- 1 rustdevuser rustdevuser  175 Apr  4 10:44
# -rw------- 1 rustdevuser rustdevuser  399 Apr  4 10:44 ssh_host_ed25519_key
# -rw-r--r-- 1 rustdevuser rustdevuser   95 Apr  4 10:44
# -rw------- 1 rustdevuser rustdevuser 2602 Apr  4 10:44 ssh_host_rsa_key
# -rw-r--r-- 1 rustdevuser rustdevuser  567 Apr  4 10:44

If we use WSL2, we will need the same keys in Windows, because the VSCode client runs in Windows. We will copy them.
In the WSL2 terminal:

printf $USERPROFILE/.ssh/crustde_rustdevuser_ssh_1
cp -v ~/.ssh/crustde_rustdevuser_ssh_1 $USERPROFILE/.ssh/crustde_rustdevuser_ssh_1
cp -v ~/.ssh/ $USERPROFILE/.ssh/
cp -v -r ~/.ssh/crustde_pod_keys $USERPROFILE/.ssh/crustde_pod_keys
# check
ls -la $USERPROFILE/.ssh | grep "rustdevuser"
ls -la $USERPROFILE/.ssh/crustde_pod_keys/etc/ssh

Volumes or mount restrictions

I don't want the container to be able to access any file on my local system.
This is a "standalone" development container and everything must run inside.
The files must be cloned/pulled from GitHub or copied manually with podman cp.
Before removing the containers, the source files must be pushed to GitHub or exported some other way.

Network Inbound restrictions

I would like to restrict the use of the network from/to the container.
When using Podman as a rootless user, the network is set up automatically. Only the localhost can be used. The container itself does not have an IP Address because, without root privileges, network association is not allowed. Port publishing as rootless containers can be done for "high ports" only. All ports below 1024 are privileged and cannot be used for publishing.
I think that all inbound ports are closed by default and I need to explicitly expose them manually.
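The "high ports" rule is easy to check for any candidate port (plain shell; 2201 is the SSH port used in this guide):

```shell
# sketch: rootless podman can only publish unprivileged ports (>= 1024)
PORT=2201
if [ "$PORT" -ge 1024 ]; then PORT_OK=yes; else PORT_OK=no; fi
echo "port $PORT publishable rootless: $PORT_OK"
```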

Network Outbound restrictions with Squid proxy in container

I would like to limit access to the internet to whitelisted domains only.
Some malware could want to "call home" and I will try to disable this.
What I need is a "proxy" or "transparent proxy". I will use the leading open-source proxy Squid, but in a container.
It can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains while being fully transparent for instances in the private subnet.
I want to use this proxy for the container crustde_vscode_cnt. Container-to-container networking can be complex.
Podman works with pods, which make networking easy. This is usually the simplest approach for two rootless containers to communicate: putting them in a pod allows them to communicate directly over localhost.
First, create a modified image for Squid:

cd ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images 

If you need, you can modify the file etc_squid_squid.conf to add more whitelisted domains. Then run the script to build the modified image. You can also add whitelisted domains later, when the squid container is already in use. First modify the file ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images/etc_squid_squid.conf. Then copy this file into the squid container:

podman cp ~/rustprojects/crustde_cnt_img_pod/create_and_push_container_images/etc_squid_squid.conf crustde_squid_cnt:/etc/squid/squid.conf
# Finally restart the squid container
podman restart crustde_squid_cnt

Watch the squid log to see if access has been denied to some domains:

podman exec crustde_squid_cnt cat /var/log/squid/access.log
podman exec crustde_squid_cnt tail -f /var/log/squid/access.log

Check later if these env variables are set inside the crustde_vscode_cnt bash terminal. They should be set when creating the pod.

# you can set the env variable manually 
export http_proxy='http://localhost:3128'
export https_proxy='http://localhost:3128'
export all_proxy='http://localhost:3128'

One pod with 2 containers

Podman and Kubernetes have the concept of pods, where more containers are tightly coupled. Here we will have the crustde_vscode_cnt that will use crustde_squid_cnt as a proxy. From the outside, the pod is like one entity with one address. All the network communication goes through the pod. Inside the pod, everything is in the localhost address. That makes it easy to configure.
Inside the container crustde_vscode_cnt I want everything to go through the proxy. These env variables should do that: http_proxy, https_proxy, all_proxy.
Run the bash script to create a new pod crustde_pod with proxy settings:

cd ~/rustprojects/crustde_cnt_img_pod/crustde_install/pod_with_rust_vscode 

The pod is now running:

podman pod list
podman ps -a 

Try SSH from Debian

Try the SSH connection from the Debian host to the container:

ssh -i ~/.ssh/crustde_rustdevuser_ssh_1 -p 2201 rustdevuser@localhost
# Choose `yes` to save fingerprint if asked, just the first time.
# type passphrase
# should work !
# try for example: ls
# result: rustprojects
# to exit the container: exit

Try SSH from Windows

Run in Windows cmd prompt to access the container over SSH from Windows:

# test the ssh connection from Windows cmd prompt
"C:\WINDOWS\System32\OpenSSH\ssh.exe" -i ~\.ssh\crustde_rustdevuser_ssh_1 -p 2201 rustdevuser@localhost
# Choose `y` to save fingerprint if asked, just the first time.
# type passphrase
# should work !
# try for example: ls
# result: rustprojects
# to exit the container: exit

Debug SSH connection

Sometimes it is necessary to debug the connection to the SSH server, because the normal error messages are completely useless.
From the host terminal, I enter the container terminal as root:

podman exec -it --user=root  crustde_vscode_cnt bash

In container terminal:

service ssh stop
/usr/sbin/sshd -ddd -p 2201
# now we can see the verbose log when we attach an SSH client to this server and see where the problem is
# after debug, start the service, before exit
service ssh start

To see the verbose log of the SSH client add -v like this:

ssh -i ~/.ssh/github_com_git_ssh_1 -p 2201 rustdevuser@localhost -v

To see the listening ports:

netstat -tan 


Open VSCode and install the extension Remote - SSH. In VSCode F1, type ssh and choose Remote-SSH: Open SSH configuration File....
In Debian on bare metal:
choose ~/.ssh/config and type (if it is missing)

Host crustde_vscode_cnt
  HostName localhost
  Port 2201
  User rustdevuser
  IdentityFile ~/.ssh/crustde_rustdevuser_ssh_1
  IdentitiesOnly yes

In Windows +WSL2:
choose c:\users\user_name\.ssh\config and type (if it is missing)

Host crustde_vscode_cnt
  HostName localhost
  Port 2201
  User rustdevuser
  IdentityFile ~\.ssh\crustde_rustdevuser_ssh_1
  IdentitiesOnly yes

The big difference is only / or \ for the file path. Bad Windows!
Save and close.
Then in VSCode F1, type ssh and choose Remote-SSH: Connect to Host... and choose crustde_vscode_cnt.
Choose Linux and yes for the fingerprint if asked, just the first time.
Type your passphrase.
If we are lucky, everything works and VSCode is now inside the container over SSH.

VSCode terminal

VSCode has an integrated terminal. It has some advantages for developers that the standard bash terminal does not have. It is great to use for everything around code in containers. You can open more than one VSCode terminal if you need to, for example if you run a web server.
If the VSCode terminal is not opened simply press Ctrl+j to open it and the same to close it.
Inside the VSCode terminal, we will create a sample project:

cd ~/rustprojects
cargo new crustde_hello

This easy command opens a new VSCode window exactly for this project/folder inside the container:

code crustde_hello

A new VSCode window will open for the crustde_hello project. Because of the SSH communication, it asks for the passphrase again. You can now close all the other VSCode windows.

Build and run the project in the VSCode terminal:

cargo run

That should work and greet you with "Hello, world!".
Leave VSCode open because the next chapter will continue from here.

Open the VSCode project from the command line

You can directly open an existing VSCode project inside the container from the Linux host over SSH like this:

code --remote ssh-remote+crustde_vscode_cnt /home/rustdevuser/rustprojects

GitHub in the container

Download the template for the bash script from here:
into the Debian folder ~/.ssh. It contains all the steps explained below. First, rename it, then personalize it with your personal data.
Run in host terminal:

sh ~/.ssh/crustde_pod_keys/

These are the manual step-by-step instructions from inside the script.
Git inside the container does not yet have the user information it needs.
In host terminal:

podman exec --user=rustdevuser crustde_vscode_cnt git config --global user.email "info@your.mail"
podman exec --user=rustdevuser crustde_vscode_cnt git config --global user.name "your_name"
podman exec --user=rustdevuser crustde_vscode_cnt git config --global -l
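The user.email and user.name values end up in every commit. The same settings can be rehearsed with plain git in a throwaway repository on the host (the values below are the same placeholders as above):

```shell
# rehearse the git identity settings in a scratch repository
tmp=$(mktemp -d)
git init -q "$tmp"
git -C "$tmp" config user.email "info@your.mail"
git -C "$tmp" config user.name "your_name"
# show what git will record in commits
git -C "$tmp" config user.email
git -C "$tmp" config user.name
```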

I like to work with GitHub over SSH and not over HTTPS. I think it is the natural and safe thing for Linux.
To make the SSH client work in the container, I need the file with the private key for the SSH connection to GitHub. I already have this in the file ~/.ssh/github_com_git_ssh_1. I will copy it into the container with podman cp.
Be careful! This is a secret!
This means I can no longer share this container with anybody. It is now my private container. I must never make an image from it and share it. Never!

In host terminal:

podman exec --user=rustdevuser crustde_vscode_cnt ls -la /home/rustdevuser/.ssh
podman cp ~/.ssh/github_com_git_ssh_1 crustde_vscode_cnt:/home/rustdevuser/.ssh/github_com_git_ssh_1
podman exec --user=rustdevuser crustde_vscode_cnt chmod 600 /home/rustdevuser/.ssh/github_com_git_ssh_1
podman cp ~/.ssh/ crustde_vscode_cnt:/home/rustdevuser/.ssh/
podman exec --user=rustdevuser crustde_vscode_cnt ls -la /home/rustdevuser/.ssh
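The chmod 600 matters: the OpenSSH client refuses to use a private key file that is readable by group or others. A quick local demonstration of the permission bits on a throwaway file:

```shell
# show the owner-read/write-only permission ssh expects on private keys
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"   # prints 600
```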

The VSCode terminal is still open on the project crustde_hello from the previous chapter.

SSH Agent

It is comfortable to use the ssh-agent to store the passphrase in memory, so we type it only once. The ssh-agent is already started on login in the ~/.bashrc script.
Again attention, that this container has secrets and must not be shared ! Never !
In the VSCode terminal (Ctrl+j) run:

ssh-add /home/rustdevuser/.ssh/github_com_git_ssh_1
# enter your passphrase

You can download the template from GitHub and save it into the Debian folder ~/.ssh. Rename it and personalize it with your SSH key file names.
Run in host Terminal:

podman cp ~/.ssh/ crustde_vscode_cnt:/home/rustdevuser/.ssh/  

You will then run this inside the VSCode terminal, for each window/project separately:

sh ~/.ssh/
# or simply
# if you add the alias into ~/.bashrc

After you enter the passphrase, the agent remembers it for as long as the terminal stays open.
When you open a new terminal, you will have to run the script and enter the passphrase again.

GitHub push

Open GitHub in the browser and sign in, click New and create a new repository named crustde_hello.
GitHub is user-friendly and shows the standard commands we need to run. Choose the SSH commands and not HTTPS or CLI. You will find commands similar to the commands below.
In VSCode click on Source control, click Initialize, then type the commit message "init" and click Commit.
Then in VSCode terminal run:

git remote add origin
git push -u origin main

Done! Check your GitHub repository.
Always push the changes to GitHub. Then you can destroy this pod/container, create a new empty one, pull the code from GitHub and continue developing. Containers are the worst place to store persistent data. They can be deleted at any second for any reason. Leave VSCode open because the next chapter will continue from here.

Existing Rust projects on GitHub

You probably already have a Rust project on GitHub. You want to continue its development inside the container.
For example, we will use my PWA+WebAssembly/WASM project rust_wasm_pwa_minimal_clock, which needs port 8001 forwarded because the project runs a web server. That is fairly common. I am not a fan of the autoForward automagic in VSCode, so I disable it: in File-Preferences-Settings search for remote.autoForwardPorts and uncheck it.
We will continue to use the existing VSCode terminal, which is already opened in the folder /home/rustdevuser/rustprojects/crustde_hello. Just to practice.
Run the commands to clone the repository from GitHub and open a new VSCode window. We already have the SSH private key and ssh-agent running:

cd /home/rustdevuser/rustprojects/
git clone
code rust_wasm_pwa_minimal_clock

The code command will open a new VSCode window in the folder rust_wasm_pwa_minimal_clock. Enter the SSH passphrase when asked. In the new VSCode window, we can now edit, compile and run the project. Everything is sandboxed/isolated inside the container. We can now close the other VSCode windows, we don't need them anymore.
This example is somewhat more complex because it is WebAssembly, but it is good for learning. I used the automation tool cargo-auto to script a more complex building process. You can read the automation task code in automation_task_rs/src/. On the first run, it will download the wasm components and wasm-bindgen. That can take some time. Don't whine!

Now we can build and run the project in the VSCode terminal (Ctrl+j):

cargo auto build_and_run

In VSCode go to Ports and add port 4000.
Open the browser in Windows:
This is an example of Webassembly and PWA, directly from a Linux OCI container.
A good learning example.

After reboot

After reboot, WSL2 can create some network problems for Podman.
No problem for Debian on bare metal. But the script is still needed to restart the pod and start the sshd server, so use it in both cases.
We can simulate the WSL2 reboot in Powershell in Windows:

wsl --shutdown 

Before entering any Podman command we need first to clean some temporary files, restart the pod and restart the SSH server.
In host terminal:

sh ~/rustprojects/crustde_install/
podman ps

If the restart is successful, every container will be started after a few seconds. It is not enough for containers to be in the status "created"; in that case, just repeat the restart procedure.

VSCode and file copying from Win10

It is easy to copy files from Win10 to the VSCode project inside the container just with drag&drop.
In the other direction, we right-click a file in VSCode Explorer and choose Download and then a download folder. It works for entire folders too.

Protect the SSH private key in Windows

In Linux, the private keys inside ~/.ssh are protected with chmod 600. We need to do similarly for the private keys inside the Win10 folder ~\.ssh. Run in the Powershell terminal as the standard user:

cd ~/.ssh
# Set Key File Variable:
  New-Variable -Name Key -Value "$env:UserProfile\.ssh\id_rsa"
# Remove Inheritance:
  Icacls $Key /c /t /Inheritance:d
# Set Ownership to Owner:
  Icacls $Key /c /t /grant ${env:UserName}:F
# Remove All Users, except for Owner:
  Icacls $Key  /c /t /Remove Administrator BUILTIN\Administrators BUILTIN Everyone System Users
# Verify:
  Icacls $Key
# Remove Variable:
  Remove-Variable -Name Key

Debian shutdown

I got this error on shutdown: "A stop job is running..." and it waits for 3 minutes. I think it is Podman. I will always shut down Debian with a script that stops Podman first. Create a bash script with this text:

printf 'podman pod stop --all\n'
podman pod stop --all
printf 'podman stop --all\n'
podman stop --all
printf 'sudo shutdown -h now\n'
sudo shutdown -h now

Create the file with nano, make it executable and run it:

nano ~/
sudo chmod a+x ~/
sh ~/

In ~/.bashrc add these lines, then use just the short command shut:

printf "For correct shutdown that stops podman use the command 'shut'\n"
alias shut="sh ~/"
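Pasting the lines into ~/.bashrc by hand creates duplicates if you do it twice. A guarded sketch (the script name shut.sh is hypothetical, use whatever name you chose above):

```shell
# add the shutdown alias to ~/.bashrc only once (shut.sh is a placeholder name)
BASHRC="$HOME/.bashrc"
touch "$BASHRC"
grep -q 'alias shut=' "$BASHRC" || printf '%s\n' 'alias shut="sh ~/shut.sh"' >> "$BASHRC"
```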


This is a complex setup, so there can be some quirks sometimes.

ssh could not resolve hostname

Warning: "ssh could not resolve hostname" is a common error, but it is not that big of an issue. I closed everything, restarted my computer, and everything worked fine afterward.


The SSH client remembers the server keys in the file ~/.ssh/known_hosts.
If we created a new key for the ephemeral container, we can get the error REMOTE HOST IDENTIFICATION HAS CHANGED. It is enough to open the file ~/.ssh/known_hosts and delete the offending line. In the WSL2 terminal we can use:

ssh-keygen -f ~/.ssh/known_hosts -R "[localhost]:2201";
ssh-keygen -f $USERPROFILE/.ssh/known_hosts -R "[localhost]:2201";

Double-commander SFTP

On Debian, I use Double-commander as an alternative to Total-commander on Windows. It has an FTP functionality that supports SSH and SFTP. But the private key must be in the PEM/RSA format. It does not work with the existing crustde_rustdevuser_ssh_1, which is in the OpenSSH format. I tried to convert the key format, but neither ssh-keygen, OpenSSL nor PuTTY was up to the task. So I decided to make a new private key just for Double-commander.
In the DoubleCmd FTP settings, I must enable "use SSH+SCP protocol (no SFTP)" to make it work.
On the host Debian system run:

ssh-keygen -t rsa -b 4096 -m PEM -C rustdevuser@crustde_pod -f /home/rustdevuser/.ssh/rustdevuser_rsa_key
podman cp ~/.ssh/ crustde_vscode_cnt:/home/rustdevuser/.ssh/
podman exec --user=rustdevuser crustde_vscode_cnt /bin/sh -c 'cat /home/rustdevuser/.ssh/ >> /home/rustdevuser/.ssh/authorized_keys'
ssh-keyscan -p 2201 -H >> ~/.ssh/known_hosts

Now I can use this key for Double-commander SFTP.
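The format difference is visible in the first line of the key file. A throwaway key generated the same way shows the PEM header (smaller key size, empty passphrase, hypothetical file name):

```shell
# generate a throwaway PEM-format RSA key and inspect its header
rm -f /tmp/pem_demo_key /tmp/pem_demo_key.pub
ssh-keygen -q -t rsa -b 2048 -m PEM -N "" -C demo -f /tmp/pem_demo_key
head -1 /tmp/pem_demo_key   # prints -----BEGIN RSA PRIVATE KEY-----
```

An OpenSSH-format key starts with -----BEGIN OPENSSH PRIVATE KEY----- instead, which is what Double-commander rejects.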

Typescript compiler

Some projects need the typescript compiler tsc. First, we need to install nodejs with npm to install typescript. That is a lot of installation, and I don't want it in my default container. For typescript, I created a new container image: rust_ts_dev_image.
The bash script sh will create the new image with typescript.
Then we can use sh crustde_install/pod_with_rust_ts_vscode/ to create the Podman pod with typescript.
The same sh ~/rustprojects/crustde_install/ is used after reboot.

PostgreSQL and pgAdmin

Some projects need the database PostgreSQL 13. I created a new pod with the command sh crustde_install/pod_with_rust_pg_vscode/
The same sh ~/rustprojects/crustde_install/ is used after reboot.
To use the administrative tool pgAdmin open the browser on localhost:9876.
If you want, you can change the user and passwords in the bash script to something stronger.

Read more

Read more about how I use my Development environment.

WSL problems

I still have problems after the WSL reboot. Some say the /tmp files should be on a temporary filesystem.
Here is how I set fstab to mount tmpfs; it works.

printf "none  /tmp  tmpfs  defaults  0 0\n" | sudo tee -a /etc/fstab
# create the /tmp folder if it does not exist
sudo mkdir -p /tmp
sudo chmod 1777 /tmp
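Note that tee -a appends the line again on every run. A duplicate-free variant can be rehearsed on a scratch file before touching the real /etc/fstab:

```shell
# rehearse the duplicate-free append on a scratch fstab
FSTAB=$(mktemp)
LINE='none  /tmp  tmpfs  defaults  0 0'
grep -qxF "$LINE" "$FSTAB" || printf '%s\n' "$LINE" >> "$FSTAB"
# a second run changes nothing
grep -qxF "$LINE" "$FSTAB" || printf '%s\n' "$LINE" >> "$FSTAB"
grep -cxF "$LINE" "$FSTAB"   # prints 1
```

For the real file, replace the scratch path with /etc/fstab and use sudo tee -a for the append.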


cargo crev reviews and advisory

We live in times of danger with supply chain attacks.
It is recommended to always use cargo-crev
to verify the trustworthiness of each of your dependencies.
Please, spread this info.
You can also read reviews quickly on the web:

Open-source and free as a beer

My open-source projects are free as a beer (MIT license).
I just love programming.
But I need also to drink. If you find my projects and tutorials helpful, please buy me a beer by donating to my PayPal.
You know the price of a beer in your local bar ;-)
So I can drink a free beer for your health :-)
Na zdravje! Alla salute! Prost! Nazdravlje! 🍻