Mirror of https://github.com/hoppscotch/hoppscotch.git, synced 2026-04-26 01:06:00 +03:00
[GH-ISSUE #4257] [bug]: Version 2024.7.1 won't start on Kubernetes "address already in use" #1542
Labels
No labels
Originally created by @Jonathan-Diaz-Rosa on GitHub (Aug 13, 2024).
Original GitHub issue: https://github.com/hoppscotch/hoppscotch/issues/4257
Is there an existing issue for this?
Current behavior
When I try to deploy version 2024.7.1 on Kubernetes without any changes to the configuration, only one container starts; the others crash with the log:
Version 2024.7.0 works perfectly. What changed in 2024.7.1?
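For context (editor's note, not part of the original report): the "address already in use" message in the title is the OS-level EADDRINUSE error, raised when a second process or socket tries to bind an address/port that is already bound. A minimal Python sketch of the failure mode:

```python
import errno
import socket

# Bind a first listener to an OS-assigned free port on localhost.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen()
port = first.getsockname()[1]

# Binding a second socket to the same address fails with EADDRINUSE,
# the errno behind "address already in use".
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
got_eaddrinuse = False
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    got_eaddrinuse = exc.errno == errno.EADDRINUSE

second.close()
first.close()
print(got_eaddrinuse)  # True
```

Inside a pod, containers share a network namespace, so two containers in the same pod listening on the same port collide exactly like the two sockets above.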
Steps to reproduce
Resources and the Kustomize file can be found here:
gist files
Environment
Release
Version
Self-hosted
@elixxx commented on GitHub (Aug 13, 2024):
It looks like https://github.com/hoppscotch/hoppscotch/pull/4233/files introduced some issues. I also had a 404 error in the web container.
@AndrewBastin commented on GitHub (Aug 13, 2024):
@Jonathan-Diaz-Rosa @elixxx, thanks for the logs and the config info. We are looking into this and will aim to get a patch out as soon as possible; in the meantime, please stay on 2024.7.0 if possible.
Out of curiosity, have you folks considered moving to the all-in-one containers we have now? Are there any blockers to migrating to them?
See: https://docs.hoppscotch.io/documentation/self-host/community-edition/install-and-build#using-the-aio-container
@Jonathan-Diaz-Rosa commented on GitHub (Aug 13, 2024):
Yes, I can stay on 2024.7.0, of course.
No, there is nothing blocking a move to AIO. We just want to keep control over each component for the moment, but we will definitely move to AIO in a few weeks or months.
@elixxx commented on GitHub (Aug 13, 2024):
Thanks for your answer. I will move to the AIO container in our next update cycle!
I think the reason we used and also built the containers ourselves was that we had issues setting the right hostname at runtime.
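(Editor's note.) One common workaround for the runtime-hostname problem in prebuilt frontend images is to bake a placeholder into the built assets and substitute the real hostname in the container entrypoint. The sketch below is entirely hypothetical (the placeholder name, file layout, and helper are illustrative, not hoppscotch's actual mechanism):

```python
import tempfile
from pathlib import Path

# Hypothetical placeholder baked into built frontend assets at image build
# time; an entrypoint swaps in the real hostname once at container start.
PLACEHOLDER = "__APP_HOSTNAME__"

def substitute_hostname(asset_dir: str, hostname: str) -> int:
    """Replace PLACEHOLDER with the runtime hostname in .js/.html assets.

    Returns the number of files rewritten."""
    rewritten = 0
    for path in Path(asset_dir).rglob("*"):
        if path.suffix not in {".js", ".html"}:
            continue
        text = path.read_text(encoding="utf-8")
        if PLACEHOLDER in text:
            path.write_text(text.replace(PLACEHOLDER, hostname), encoding="utf-8")
            rewritten += 1
    return rewritten

# Tiny demo against a throwaway asset directory.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "app.js").write_text(
        f"fetch('https://{PLACEHOLDER}/api')", encoding="utf-8"
    )
    count = substitute_hostname(tmp, "hopp.example.com")
    patched = Path(tmp, "app.js").read_text(encoding="utf-8")

print(count, patched)
```

This keeps a single image deployable under any hostname, at the cost of mutating assets at startup.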
@Brainpitcher commented on GitHub (Aug 14, 2024):
We use the all-in-one image in our k8s deployment and are having difficulties updating to 2024.7.1 (2024.7.0 works perfectly).
The container crashes with:
npm notice
npm notice New patch version of npm available! 10.8.1 -> 10.8.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.2
npm notice To update run: npm install -g npm@10.8.2
npm notice
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.507825,"msg":"using provided configuration","config_file":"/etc/caddy/aio-subpath-access.Caddyfile","config_adapter":"caddyfile"}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5110373,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//127.0.0.1:2019","//localhost:2019"]}
App/Admin Dashboard Caddy | {"level":"warn","ts":1723464428.5112789,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv3","http_port":80}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5113385,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0006a1700"}
App/Admin Dashboard Caddy | {"level":"warn","ts":1723464428.5118368,"logger":"tls","msg":"unable to get instance ID; storage clean stamps will be incomplete","error":"open /home/hoppuser/.local/share/caddy/instance.uuid: no such file or directory"}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5118783,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5119252,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5119631,"logger":"http.log","msg":"server running","name":"srv2","protocols":["h1","h2","h3"]}
App/Admin Dashboard Caddy | {"level":"info","ts":1723464428.5120103,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc0006a1700"}
App/Admin Dashboard Caddy | Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: permission denied
Exiting process because Caddy Server exited with code 1

@AndrewBastin commented on GitHub (Aug 15, 2024):
@Brainpitcher There is another similar ticket open for the situation with port 80, #4264, which we are also looking at.
We are having a tough time reproducing this on our machines, though. Can you provide a bit more detail about your environment, e.g., whether you are using Docker or another OCI runtime (like Podman and friends), and perhaps about the host environment, CPU architecture, or anything else?

UPDATE: I can repro in my local minikube setup with the configs given above (thanks btw!). I have a patch running locally that should work (I just disable the Caddy admin endpoints, which are not used anyway), but it really doesn't explain why the port conflict occurs, since these are containers and should have independent port spaces, so I am not able to pinpoint the root cause (would love pointers as to possibilities).
Will have the patch live as part of 2024.7.2, hopefully early next week, to close this issue.
Also, I have deemed this a separate issue from #4264, as that is an issue I was able to reproduce in Podman only and doesn't seem to be affected by the same problem as this one.
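(Editor's note.) Brainpitcher's log actually shows "bind: permission denied" on :80 rather than "address already in use", which is the classic privileged-port failure: on Linux, with the default sysctl net.ipv4.ip_unprivileged_port_start=1024, a non-root process without CAP_NET_BIND_SERVICE cannot bind ports below 1024. The Caddy log runs as /home/hoppuser, i.e., non-root, which is consistent with this. A minimal predicate for the rule (assuming the default kernel threshold):

```python
def needs_net_bind_capability(port: int) -> bool:
    """True if binding `port` requires root or CAP_NET_BIND_SERVICE on a
    Linux kernel with the default net.ipv4.ip_unprivileged_port_start=1024."""
    return 0 < port < 1024

# Caddy in the AIO image listens on :80; a container user that is not
# root and lacks CAP_NET_BIND_SERVICE gets "bind: permission denied".
print(needs_net_bind_capability(80))    # True
print(needs_net_bind_capability(3000))  # False
```

This also matches Brainpitcher's later observation that a single security context change (i.e., granting more privilege to the container) made 2024.7.1 start.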
@AndrewBastin commented on GitHub (Aug 19, 2024):
@Brainpitcher @elixxx @Jonathan-Diaz-Rosa #4279 should ideally fix this issue. If you are able to help verify whether the fix solves your problem, I would appreciate it. The fix will go live as soon as the PR is merged (with the release of 2024.7.2).
@Brainpitcher commented on GitHub (Aug 21, 2024):
Yep, it should work. BTW, I managed to deploy 2024.7.1 in my cluster, and it took just one security context change:
But that is not great for security reasons.
@AndrewBastin commented on GitHub (Aug 24, 2024):
@Brainpitcher @elixxx @Jonathan-Diaz-Rosa 2024.7.2 was released today; please do let me know if the issue still persists.