mirror of
https://github.com/matze/wastebin.git
synced 2026-04-25 08:36:00 +03:00
[GH-ISSUE #29] support multi-platform docker image #24
Originally created by @Triple-Z on GitHub (Jun 1, 2023).
Original GitHub issue: https://github.com/matze/wastebin/issues/29
I tried to run `quxfoo/wastebin` on a Raspberry Pi, but I got `standard_init_linux.go:219: exec user process caused: exec format error`.

Docker multi-platform images: https://docs.docker.com/build/building/multi-platform/
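The `exec format error` happens when an amd64-only image is started on an ARM host. For reference, a multi-platform build with `docker buildx` along the lines of the linked docs looks roughly like this (the image name comes from this thread; the tag and setup step are illustrative):

```shell
# One-time setup: register QEMU binfmt handlers so foreign
# architectures can be emulated during the build.
docker run --privileged --rm tonistiigi/binfmt --install all

# Build both variants under one tag and push the manifest list.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag quxfoo/wastebin:latest \
    --push .
```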
@matze commented on GitHub (Jun 1, 2023):
I'll take care of it after the 9th of June, unless you're willing to open a PR?
@Myzel394 commented on GitHub (Jan 28, 2024):
Any news?
@matze commented on GitHub (Feb 6, 2024):
Ah sorry, will look into it. Unless some of you have incentive to come up with a PR. Contributions are welcome.
@MichaelSasser commented on GitHub (May 22, 2024):
I haven't found a simple solution yet to create multi-arch container images using `docker buildx` for my projects when cross compiling. I would have expected that the `rust` base image would work fine in this situation, but it does not. I currently only run amd64 and aarch64 machines, so I tried to use two different images based on the `TARGETPLATFORM`. My solution works consistently, but it looks a bit weird. Maybe it's a starting point for you. My Dockerfile looks like:

I build images and deploy stuff using a self-hosted Woodpecker CI instance because I don't want to store secrets on GH servers. Therefore, I haven't looked into GH Actions that build container images. There is probably a buildx action somewhere.
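The Dockerfile snippet itself did not survive the mirror. The per-platform builder-image trick described above is usually done by selecting a build stage via BuildKit's automatic build arguments; a minimal sketch (base images and stage names are placeholders, not the commenter's actual file):

```dockerfile
# syntax=docker/dockerfile:1

# BuildKit automatically provides TARGETARCH (amd64/arm64); declaring
# it before the first FROM makes it usable in FROM lines below.
ARG TARGETARCH

# One builder stage per architecture; the base images are placeholders.
FROM rust:1.90 AS build-amd64
FROM rust:1.90 AS build-arm64

# Select the builder stage matching the platform being built.
FROM build-${TARGETARCH} AS builder
WORKDIR /app
COPY . .
RUN cargo build --release
```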
If I'm building locally, just to try something out or so, I use a Makefile. The image related stuff looks somewhat like this snippet:
One note: if you don't use labels, you can omit all the `--build-arg`s. I only have them in there because I use them in CI.
@ajvn commented on GitHub (Jul 9, 2024):
Hello, I've recently set this up in my local Kubernetes cluster based on the ARM platform, and had to cross-compile it in order to get it to work.

It's possible to do it with the same Dockerfile, although it needs some adjustments, the biggest one being using Ubuntu (or something similar) instead of `scratch`. I couldn't get it to work with `scratch`; I'm not sure whether some shared library is needed, or if it simply doesn't behave properly on the ARM architecture.

If this is something you'd like to avoid and keep using `scratch` for x86_64, the other option is using a separate Dockerfile for the ARM build. If that sounds useful, or something you'd like to add to this project, let me know and I can open a PR with my changes.

Also, I used Podman for all of my testing; in case you want to proceed with this, I'll test it with Docker as well.
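For illustration, the Ubuntu-instead-of-`scratch` variant described above could look roughly like this (a sketch under the assumptions of this comment; stage names and paths are mine, not the eventual PR):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.90 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# ubuntu instead of scratch: ships the dynamic loader and shared
# libraries (glibc etc.) that a non-static binary needs at runtime.
FROM ubuntu:24.04
COPY --from=builder /app/target/release/wastebin /usr/local/bin/wastebin
ENTRYPOINT ["/usr/local/bin/wastebin"]
```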
@matze commented on GitHub (Jul 9, 2024):
I'd like to avoid anything larger than `scratch` for x86_64, so please go ahead with a separate Dockerfile. Although I don't see why a `scratch` image shouldn't work in the case of ARM.

@ajvn commented on GitHub (Jul 9, 2024):
Neither do I, but this error is returned; it doesn't happen with x86_64:
It's probably some library missing, I'll investigate some more.
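One quick way to check the missing-library hypothesis is to inspect the binary's linkage before putting it into `scratch` (the path below is illustrative):

```shell
# A scratch image contains no dynamic loader, so anything other than
# a statically linked binary will fail to start in it.
file target/release/wastebin
ldd target/release/wastebin   # "not a dynamic executable" is the goal
```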
@ajvn commented on GitHub (Jul 9, 2024):
I have it running using `scratch`, and there's a way to build it for multiple architectures using a single Dockerfile, but it seems much cleaner to have it in a separate file, so I'll proceed that way.

I'll open a PR once I'm done testing it with Docker.
@ajvn commented on GitHub (Jul 13, 2024):
@matze From what I can see, there's no GitHub Action or any other pipeline in this repository which builds and pushes images to Docker Hub, so I assume you do this manually.

If that's the case, here's a step-by-step guide on how you can use Podman to create a multi-arch manifest and push the image to Docker Hub.
I'm using an internal Docker registry running in my Kubernetes cluster, but the principle should be the same. The tag in this example is `v2.4.4`, the domain of my local registry is `registry.at.home`, and the branch I'm using to build images is the current `master`.

Build the `x86_64` image from the root directory of this repository:

Build the `arm64` image from the root directory of this repository:

The relevant deployment part looks like this:

On the `x86_64` machine:

If this looks good, I'll do the same with Docker, in case that's how you prefer to build the images.

Once/if you're willing to tackle this, and after a multi-arch image is available in the Docker Hub repository, let me know and I can test it on ARM hardware.
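The individual Podman commands were lost in the mirror. Reconstructed from the description above (the tag `v2.4.4` and registry `registry.at.home` come from the comment; the exact flags are my assumption), the flow is roughly:

```shell
# Build one single-arch image per platform from the repository root.
podman build --platform linux/amd64 -t registry.at.home/wastebin:v2.4.4-amd64 .
podman build --platform linux/arm64 -t registry.at.home/wastebin:v2.4.4-arm64 .

# Create a manifest list and attach both images to it.
podman manifest create registry.at.home/wastebin:v2.4.4
podman manifest add registry.at.home/wastebin:v2.4.4 \
    containers-storage:registry.at.home/wastebin:v2.4.4-amd64
podman manifest add registry.at.home/wastebin:v2.4.4 \
    containers-storage:registry.at.home/wastebin:v2.4.4-arm64

# Push the manifest and all referenced images under one tag; the
# runtime then pulls the right architecture automatically.
podman manifest push --all registry.at.home/wastebin:v2.4.4 \
    docker://registry.at.home/wastebin:v2.4.4
```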
@matze commented on GitHub (Dec 8, 2025):
How did I miss that last comment from @ajvn?! If you still have time and capacity, can you try building an image from the `Dockerfile` in #191 for aarch64 and tell me if it works?

@ajvn commented on GitHub (Dec 9, 2025):
Hey, will do. The next couple of days are going to be tricky as I have quite a full work agenda, but I'll try to test it out during the weekend.
Without testing your branch, this is something I had to do a couple of months back, but I see `1.90` is now used in `master`, so it should be fine:

@matze commented on GitHub (Dec 9, 2025):
I ran it with qemu and that at least seems to work fine. But yeah, real hardware would be nifty.
@ajvn commented on GitHub (Dec 11, 2025):
Managed to test it out (commit 49bd69bb29da16c297f84cc3d48675b16933df7a); it seems to be working fine.

Is it possible to add something like this so we can cache stuff? Building it takes a while, and you have to wait the full time with each rebuild. I don't know how that works when using Zig as a build system, but I'd assume there's a way to cache dependencies.

It adds up when you want to rebuild minor changes (cross build in this case):
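The cache snippet referenced above didn't survive the mirror. A typical way to cache cargo dependencies in a Dockerfile is BuildKit cache mounts; a sketch, not the project's actual file:

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.90 AS builder
WORKDIR /app
COPY . .
# Cache the cargo registry and the target dir across builds. Cache
# mounts are not part of the image layer, so the binary must be
# copied out of /app/target before the RUN step ends.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo build --release && \
    cp target/release/wastebin /wastebin

FROM scratch
COPY --from=builder /wastebin /wastebin
ENTRYPOINT ["/wastebin"]
```

Forgetting the `cp` out of the cached `target` directory is the classic pitfall here: the binary then exists only inside the cache mount and cannot be found by a later `COPY --from`.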
@matze commented on GitHub (Dec 11, 2025):
Sure I can add that. Zig is not actually used as a build system, just as a linker which makes cross compiling a breeze.
@matze commented on GitHub (Dec 11, 2025):
P.S.: aren't you actually host-compiling? I noticed that when doing `docker build --platform linux/arm64` it was running everything under QEMU, which is of course dog slow. The Dockerfile I changed uses Rust to do actual cross compilation, i.e. building aarch64 under x86_64. So no matter if you build for x86_64 or aarch64, ideally both are run from the same host.

@ajvn commented on GitHub (Dec 11, 2025):
I'm building `arm64` on an `amd64` machine, then uploading the image to my self-hosted registry and using that in a Kubernetes deployment manifest to run Wastebin on a Raspberry Pi 4 k3s (Kubernetes distribution) cluster, which is natively `arm64`, so it will be slower than building the `amd64` image.

This is how I usually do it, but instead of targeting a single architecture, I'd create a manifest that holds information about all of the architectures I build for, push that to the container registry, and then use the same image tag/digest no matter where I deploy it. The container runtime can then automatically pull the image for the proper architecture without the need to provide separate images per architecture, e.g. `wastebin-arm64`, `wastebin-amd64`, etc.

If you'd like, I can also build it on one of the Raspberry Pi units to test it out. Currently it is natively running on `arm64` without any issues.

@ajvn commented on GitHub (Dec 11, 2025):
One thing I forgot to mention regarding caches: we need to be careful in the scenario where multiple architectures are built on the same machine. With the previous Dockerfile approach, I sometimes hit an issue where it would try to use the cache of the previous architecture and build everything, and even though `arm64` was specified, it'd use packages meant for `amd64`, and then you'd get:

I worked around this by using containers or different VMs to target different architectures, which helps keep the host clean as well, but it can be a tricky issue to debug.
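If builds do share a host, one way to avoid this kind of cross-architecture cache contamination with BuildKit is to key cache mounts on the target architecture; a sketch (assuming the cache-mount approach, not the project's actual Dockerfile):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.90 AS builder
ARG TARGETARCH
WORKDIR /app
COPY . .
# Keying the target-dir cache on TARGETARCH gives each architecture
# its own cache, so an arm64 build can never pick up amd64 artifacts.
RUN --mount=type=cache,id=target-${TARGETARCH},target=/app/target \
    cargo build --release && \
    cp target/release/wastebin /wastebin
```

The cargo registry cache can stay shared, since downloaded crate sources are architecture-independent; only compiled artifacts need separating.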
@matze commented on GitHub (Dec 11, 2025):
I don't think we need to be careful. As I said, with the new Dockerfile we would use native Rust cross compilation and not switch the host. The final binaries will be located in `/target/<arch>` and can't be mistaken.

@ajvn commented on GitHub (Dec 11, 2025):
All right. Do you need me to test anything else?
@matze commented on GitHub (Dec 11, 2025):
I think we are good, I'll test that with the next release and then we will see and adjust (and maybe re-open this issue). I noticed that with those cache mounts builds were failing because the binaries could not be found, so I'll leave it out for now. In general, rebuilding an image should not be a frequent operation.