[GH-ISSUE #29] support multi-platform docker image #24

Closed
opened 2026-02-27 10:15:20 +03:00 by kerem · 20 comments

Originally created by @Triple-Z on GitHub (Jun 1, 2023).
Original GitHub issue: https://github.com/matze/wastebin/issues/29

I tried to run `quxfoo/wastebin` on a Raspberry Pi, but I got `standard_init_linux.go:219: exec user process caused: exec format error`.

Docker multi-platform image: https://docs.docker.com/build/building/multi-platform/
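For context, those docs build multi-platform images with `docker buildx`. A minimal sketch, reusing the `quxfoo/wastebin` name from above; the command is only echoed here, since actually running it needs a configured buildx builder and push access:

```shell
# Dry run: echo the buildx invocation instead of executing it, since it
# requires a buildx builder and registry credentials.
IMAGE=quxfoo/wastebin:latest
PLATFORMS=linux/amd64,linux/arm64
echo docker buildx build --platform "$PLATFORMS" --tag "$IMAGE" --push .
```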

kerem closed this issue 2026-02-27 10:15:20 +03:00

@matze commented on GitHub (Jun 1, 2023):

I'll take care of it after the 9th of June, unless you're willing to open a PR?



@Myzel394 commented on GitHub (Jan 28, 2024):

Any news?


@matze commented on GitHub (Feb 6, 2024):

Ah sorry, will look into it. Unless some of you have incentive to come up with a PR. Contributions are welcome.


@MichaelSasser commented on GitHub (May 22, 2024):

I haven't yet found a simple solution for creating multi-arch container images with `docker buildx` when cross-compiling my projects. I would have expected the `rust` base image to work fine in this situation, but it does not. I currently only run amd64 and aarch64 machines, so I use two different base images selected via `TARGETPLATFORM`. My solution works consistently, but it looks a bit odd. Maybe it's a starting point for you. My Dockerfile looks like:

```dockerfile
FROM --platform=$BUILDPLATFORM rust:latest AS base-amd64
FROM --platform=$BUILDPLATFORM messense/rust-musl-cross:aarch64-musl AS base-arm64

# This was the only way I could find to choose entirely different base images
FROM --platform=$BUILDPLATFORM base-$TARGETARCH AS builder

ARG TARGETPLATFORM
# Select the toolchain based on TARGETPLATFORM. /rust_target.txt is just a
# temporary file; I have it in .gitignore and in my Makefile's cleanup step.
# Note: buildx reports the arm/v8 variant as "linux/arm64/v8".
RUN case "$TARGETPLATFORM" in \
  "linux/amd64") echo x86_64-unknown-linux-musl > /rust_target.txt ;; \
  "linux/arm64/v8") echo aarch64-unknown-linux-musl > /rust_target.txt ;; \
  "linux/arm64") echo aarch64-unknown-linux-musl > /rust_target.txt ;; \
  *) exit 1 ;; \
  esac

RUN rustup target add "$(cat /rust_target.txt)"
RUN apt-get update && apt-get install --no-install-recommends -y musl-tools musl-dev
RUN update-ca-certificates

# Create the my-project user
ENV USER=my-project
ENV UID=10001

RUN adduser \
  --disabled-password \
  --gecos "" \
  --home "/nonexistent" \
  --shell "/sbin/nologin" \
  --no-create-home \
  --uid "${UID}" \
  "${USER}"

WORKDIR /my-project
COPY .cargo ./.cargo
COPY Cargo.toml Cargo.lock ./
COPY src ./src

RUN cargo build --release --target "$(cat /rust_target.txt)"
RUN cp "target/$(cat /rust_target.txt)/release/my-project" .

###############################################################################
FROM scratch
# --platform=$TARGETPLATFORM

ARG GIT_COMMIT=unspecified
ARG BUILD_DATE=unspecified
ARG AUTHORS=unspecified
ARG LICENSES=unspecified
LABEL org.opencontainers.image.revision="$GIT_COMMIT"
LABEL org.opencontainers.image.created="$BUILD_DATE"
LABEL org.opencontainers.image.authors="$AUTHORS"
LABEL org.opencontainers.image.licenses="$LICENSES"

# Import the user and group from the builder.
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group

WORKDIR /my-project

# Copy our build
COPY --from=builder /my-project/my-project ./

# Use an unprivileged user.
USER my-project:my-project
```
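The `case` mapping in the builder stage can be exercised in isolation. A standalone sketch; the helper name `target_for` is made up here, and `TARGETPLATFORM` values are the ones buildx would inject:

```shell
# Standalone version of the TARGETPLATFORM -> Rust target-triple mapping used
# in the builder stage above. target_for is a hypothetical helper name.
target_for() {
  case "$1" in
    "linux/amd64") echo x86_64-unknown-linux-musl ;;
    "linux/arm64"|"linux/arm64/v8") echo aarch64-unknown-linux-musl ;;
    *) return 1 ;;
  esac
}

target_for linux/amd64   # prints x86_64-unknown-linux-musl
target_for linux/arm64   # prints aarch64-unknown-linux-musl
```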

I build images and deploy things with a self-hosted Woodpecker CI instance because I don't want to store secrets on GitHub's servers, so I haven't looked into GitHub Actions that build container images. There is probably a buildx action somewhere.

If I'm building locally, just to try something out, I use a Makefile. The image-related parts look roughly like this snippet:

```makefile
ROOT_DIR := $(realpath $(dir $(realpath $(lastword $(MAKEFILE_LIST)))))

LICENSES := MIT
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_DATE := $(shell date --rfc-3339=seconds)
AUTHORS ?= $(shell git config user.name) <$(shell git config user.email)>
# Expected from the environment; a self-referencing default would recurse.
DOCKER_REGISTRY ?=
DOCKER_USERNAME ?= $(shell whoami)
DOCKER_TAG ?= ${GIT_COMMIT}-dev
DOCKER_BUILDER ?= mybuilder
APPLICATION_NAME ?= $(notdir ${ROOT_DIR})

c-builder-create: # Create a new buildx builder "mybuilder"
	docker buildx create \
		--name ${DOCKER_BUILDER} \
		--bootstrap \
		--use

c-build: # Build the images
	docker buildx build \
		--provenance false \
		--platform linux/arm64,linux/amd64 \
		--build-arg GIT_COMMIT="${GIT_COMMIT}" \
		--build-arg BUILD_DATE="${BUILD_DATE}" \
		--build-arg AUTHORS="${AUTHORS}" \
		--build-arg LICENSES="${LICENSES}" \
		--tag ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG} \
		.

c-release: # Build the images and push them to a registry (in this case a private one)
	docker buildx build \
		--provenance false \
		--platform linux/arm64,linux/amd64 \
		--build-arg GIT_COMMIT="${GIT_COMMIT}" \
		--build-arg BUILD_DATE="${BUILD_DATE}" \
		--build-arg AUTHORS="${AUTHORS}" \
		--build-arg LICENSES="${LICENSES}" \
		--tag ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG} \
		--push \
		.

c-registry-inspect: # Inspect the image metadata at the registry
	docker buildx imagetools inspect ${DOCKER_REGISTRY}/${DOCKER_USERNAME}/${APPLICATION_NAME}:${DOCKER_TAG}
```

One note: if you don't use labels, you can omit all of the `--build-arg`s. I only have them in there because I use them in CI.


@ajvn commented on GitHub (Jul 9, 2024):

Hello, I've recently set this up in my local ARM-based Kubernetes cluster and had to cross-compile it to get it to work.
It's possible to do it with the same Dockerfile, although it needs some adjustments, the biggest one being using Ubuntu (or something similar) instead of `scratch`.
I couldn't get it to work with `scratch`; I'm not sure if some shared library is needed or if it simply doesn't behave properly on the ARM architecture.

If this is something you'd like to avoid, and you want to keep using `scratch` for x86_64, the other option is a separate Dockerfile for the ARM build.

If this sounds useful, or like something you'd like to add to the project, let me know and I can open a PR with my changes.

Also, I used Podman for all of my testing; if you want to proceed with this, I'll test it with Docker as well.


@matze commented on GitHub (Jul 9, 2024):

I'd like to avoid anything larger than `scratch` for x86_64, so please go ahead with a separate Dockerfile. Although I don't see why a `scratch` image shouldn't work for ARM.


@ajvn commented on GitHub (Jul 9, 2024):

Neither do I, but this error is returned (it doesn't happen with x86_64):

```
exec /app/wastebin: no such file or directory
```

It's probably some missing library; I'll investigate some more.
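For what it's worth, `exec …: no such file or directory` for a binary that does exist usually means its ELF interpreter (e.g. `/lib/ld-linux-aarch64.so.1` for a dynamically linked aarch64 build) is missing, which is always the case in `scratch`; `file` or `ldd` on the binary shows this. Where those tools aren't available, the ELF header can be read directly; a small sketch, with `/bin/sh` standing in for `/app/wastebin`:

```shell
# Read the ELF header of a binary with od (coreutils only).
# /bin/sh is just a stand-in path for the binary actually shipped.
BIN=/bin/sh
od -An -tu1 -j0 -N4 "$BIN"    # ELF magic bytes: 127 69 76 70
od -An -tu1 -j18 -N1 "$BIN"   # e_machine: 62 = x86-64, 183 = aarch64
```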


@ajvn commented on GitHub (Jul 9, 2024):

I have it running using `scratch`, and there is a way to build it for multiple architectures with a single Dockerfile, but it seems much cleaner to have it in a separate file, so I'll proceed that way.

I'll open a PR once I'm done testing it using Docker.


@ajvn commented on GitHub (Jul 13, 2024):

@matze From what I can see, there's no GitHub Action or other pipeline in this repository that builds and pushes images to Docker Hub, so I assume you do this manually.

If that's the case, here's a step-by-step guide on how you can use Podman to create a multi-arch manifest and push the image to Docker Hub.

I'm using an internal Docker registry running in my Kubernetes cluster, but the principle should be the same. The tag in this example is `v2.4.4`, the domain of my local registry is `registry.at.home`, and the branch I'm building from is the current `master`.

* Create the manifest:

```bash
$ podman manifest create wastebin:v2.4.4
```

* Build the x86_64 image from the root directory of this repository:

```bash
$ podman build --platform linux/amd64 --manifest localhost/wastebin:v2.4.4 -f Dockerfile
```

* Build the arm64 image from the root directory of this repository:

```bash
$ podman build --platform linux/arm64 --manifest localhost/wastebin:v2.4.4 -f Dockerfile.arm
```

* After the builds are done, check that the manifest contains both platforms; it should look something like this:

```bash
$ podman manifest inspect --verbose localhost/wastebin:v2.4.4
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 800,
            "digest": "sha256:f02c775c193ab9de5b7d68cb4e71ef2ab8bb9c852f603dd0e71d015f622725ce",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 1408,
            "digest": "sha256:3f8289518e56f9e666d51b34b66e67311671eb1c4fef4463020d6a3b21a06f08",
            "platform": {
                "architecture": "arm64",
                "os": "linux"
            }
        }
    ]
}
```

* Push the manifest to Docker Hub (in my case, my internal Docker registry):

```bash
$ podman manifest push localhost/wastebin:v2.4.4 registry.at.home/wastebin:v2.4.4
```

* Inspect the remote manifest:

```bash
$ podman manifest inspect --verbose registry.at.home/wastebin:v2.4.4
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 812,
            "digest": "sha256:243843ed744fadcf208f75caf692075fd1f2314f1ece6a7c515522d006cc6a64",
            "platform": {
                "architecture": "amd64",
                "os": "linux"
            }
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "size": 1438,
            "digest": "sha256:07d06f364cdc0da2373ef17d758a87d2c1a480f36481fe16c0cac426ba0c4b61",
            "platform": {
                "architecture": "arm64",
                "os": "linux"
            }
        }
    ]
}
```

* Testing in my local ARM-based cluster:

```bash
$ kubectl --context <relevant-context> -n wastebin logs wastebin-d678cdbb6-mm49m
2024-07-12T23:45:00.564484Z  INFO rusqlite_migration: Database migrated to version 6
```

The relevant deployment part looks like this:

```yml
...
    spec:
      containers:
      - image: registry.at.home/wastebin:v2.4.4
        imagePullPolicy: IfNotPresent
...
```

* Testing on an x86_64 machine:

```bash
$ podman run -p 8088:8088 -it registry.at.home/wastebin:v2.4.4
2024-07-13T00:00:57.533325Z  INFO rusqlite_migration: Database migrated to version 6
```

If this looks good, I'll do the same with Docker, in case that's how you prefer to build the images.

Once/if you're willing to tackle this, and after a multi-arch image is available in the Docker Hub repository, let me know and I can test it on ARM hardware.
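On the Docker side, the rough equivalent of the manifest steps above is `docker manifest create` plus `docker manifest push`. An untested sketch reusing the registry and tag from this comment; the `-amd64`/`-arm64` suffixed tags are hypothetical per-arch tags that would already have been pushed, and the commands are only echoed here since they need registry access:

```shell
# Dry run via echo: these commands need already-pushed per-arch images and
# registry credentials. The -amd64/-arm64 tag suffixes are hypothetical.
TAG=registry.at.home/wastebin:v2.4.4
echo docker manifest create "$TAG" "$TAG-amd64" "$TAG-arm64"
echo docker manifest push "$TAG"
```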


@matze commented on GitHub (Dec 8, 2025):

How did I miss that last comment from @ajvn ?! If you still have time and capacity, can you try building an image from the Dockerfile in #191 for aarch64 and tell me if it works?


@ajvn commented on GitHub (Dec 9, 2025):

> How did I miss that last comment from @ajvn ?! If you still have time and capacity, can you try building an image from the Dockerfile in #191 for aarch64 and tell me if it works?

Hey, will do. The next couple of days are going to be tricky, as I have a fairly packed work agenda, but I'll try to test it during the weekend.

Without testing your branch: this is something I had to do a couple of months back, but I see `1.90` is now used in `master`, so it should be fine:

```diff
diff --git a/Dockerfile.arm b/Dockerfile.arm
index e989e1f..a6b0eef 100644
--- a/Dockerfile.arm
+++ b/Dockerfile.arm
@@ -1,6 +1,6 @@
 # --- build image
 
-FROM rust:1.87 AS builder
+FROM rust:1.88 AS builder
 
 RUN rustup target add aarch64-unknown-linux-musl && \
     apt-get update && \
```

@matze commented on GitHub (Dec 9, 2025):

I ran it with qemu and that at least seems to work fine. But yeah, real hardware would be nifty.


@ajvn commented on GitHub (Dec 11, 2025):

Managed to test it out (commit 49bd69bb29da16c297f84cc3d48675b16933df7a); it seems to be working fine.

Is it possible to add something like this so we can cache stuff? Building takes a while, and you have to wait the full time on each rebuild. I don't know how that works when using Zig as a build system, but I'd assume there's a way to cache dependencies.

It adds up when you want to rebuild minor changes (a cross build in this case):

```
$ podman build --arch=arm64 -t wastebin:49bd69bb -f Dockerfile  7744.03s user 233.47s system 496% cpu 26:45.54 total
```
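One way dependency caching is commonly done is with BuildKit cache mounts. A sketch only, written against the `my-project` paths from the Dockerfile earlier in this thread rather than wastebin's actual one; note a cache mount exists only during its `RUN` step, so the binary must be copied out of `target/` in that same step or later stages won't find it:

```dockerfile
# Sketch: cache the cargo registry and the target directory between builds.
# Paths follow the my-project example above; adjust them to the real layout.
# The cp out of target/ is required because the cache mount is not part of
# the image layer.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/my-project/target \
    cargo build --release --target "$(cat /rust_target.txt)" && \
    cp "target/$(cat /rust_target.txt)/release/my-project" /my-project/my-project
```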

@matze commented on GitHub (Dec 11, 2025):

Sure, I can add that. Zig is not actually used as a build system, just as a linker, which makes cross compiling a breeze.


@matze commented on GitHub (Dec 11, 2025):

P.S.: aren't you actually host-compiling? I noticed that when doing `docker build --platform linux/arm64`, everything ran under QEMU, which is of course dog slow. The Dockerfile I changed uses Rust to do actual cross compilation, i.e. building aarch64 under x86_64, so whether you build for x86_64 or aarch64, ideally both run on the same host.


@ajvn commented on GitHub (Dec 11, 2025):

I'm building arm64 on an amd64 machine, then uploading the image to my self-hosted registry and using that in a Kubernetes deployment manifest to run wastebin on a Raspberry Pi 4 k3s (Kubernetes distribution) cluster, which is natively arm64. So it will be slower than building the amd64 image.

This is how I usually do it, but instead of targeting a single architecture, I'd create a manifest that holds information about all the architectures I build for, push that to the container registry, and then use the same image tag/digest no matter where I deploy.
The container runtime can then automatically pull the image for the proper architecture, without needing separate images per architecture, e.g. `wastebin-arm64`, `wastebin-amd64`, etc.

If you'd like, I can also build it on one of the Raspberry Pi units to test it out.
Currently it is running natively on arm64 without any issues.


@ajvn commented on GitHub (Dec 11, 2025):

One thing I forgot to mention regarding caches: we need to be careful when multiple architectures are built on the same machine. With the previous Dockerfile approach, I sometimes hit an issue where the build would reuse the cache of the previous architecture; even though arm64 was specified, it would use packages meant for amd64, and then you'd get:

```
exec user process caused: exec format error
```

I worked around this by using containers or different VMs to target different architectures, which also helps keep the host clean, but it can be a tricky issue to debug.


@matze commented on GitHub (Dec 11, 2025):

I don't think we need to be careful. As I said, with the new Dockerfile we would use native Rust cross compilation and not switch the host. The final binaries will be located in `target/<arch>` and can't be mistaken.


@ajvn commented on GitHub (Dec 11, 2025):

All right. Do you need me to test anything else?


@matze commented on GitHub (Dec 11, 2025):

I think we are good, I'll test that with the next release and then we will see and adjust (and maybe re-open this issue). I noticed that with those cache mounts builds were failing because the binaries could not be found, so I'll leave it out for now. In general, rebuilding an image should not be a frequent operation.
