mirror of
https://github.com/hoppscotch/hoppscotch.git
synced 2026-04-25 16:55:59 +03:00
[GH-ISSUE #4891] [bug]: Desktop app [Connection failed: Verification error: Invalid file hash] #1840
Originally created by @TinT1 on GitHub (Mar 14, 2025).
Original GitHub issue: https://github.com/hoppscotch/hoppscotch/issues/4891
Originally assigned to: @CuriousCorrelation on GitHub.
Is there an existing issue for this?
Current behavior
Hi,
We are running a self-hosted Hoppscotch instance (2025.2.2) together with Proxyscotch (v0.1.4) in Kubernetes. However, after each Hoppscotch pod restart, we need to delete the instance in the Desktop app and recreate it. Otherwise, we are unable to reconnect to our self-hosted Hoppscotch from the Desktop app, receiving the following error:
"Connection failed: Verification error: Invalid file hash: expected
<< hash >>, got<< another hash >>."Environment variables:
Unfortunately I don't see any errors in console
Would appreciate any help, thanks!
Steps to reproduce
Deploy Hoppscotch to Kubernetes, connect to it from the Desktop app, restart the Hoppscotch pod, then try to reconnect.
Logs and Screenshots
Environment
Deploy preview
Hoppscotch Version
Self-hosted
Interceptor
Proxy - Desktop App
Browsers Affected
No response
Operating System
MacOS
Additional Information
No response
@CuriousCorrelation commented on GitHub (Mar 14, 2025):
Hi, this happens when the manifest sent to the desktop app by the self-hosted instance doesn’t match the internal list of hashes the desktop app maintains. This could happen due to several reasons, one being any changes to the files hosted by the self-hosted instance after the first connection that downloaded the instance bundle.
This is to prevent app data mismatch, same domain hosting a different app overriding or invalidating user data, changes to the local data when offline, and many more. In this particular case, where you know the cause for the changes to the self-hosted instance bundle, simply clearing cache from the “Add a new instance” section and choosing the instance from the dropdown will be suitable, no need to remove and add the instance.
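The verification described above can be sketched roughly as follows. This is a minimal illustration, not Hoppscotch's actual implementation; the function names and the per-file SHA-256 manifest shape are assumptions for the sake of the example.

```python
# Hypothetical sketch of a per-file hash check: the app keeps a manifest of
# expected hashes and re-verifies the raw file content against it.
import hashlib


def file_hash(content: bytes) -> str:
    """Hash the raw file content (before any packaging), as the text above describes."""
    return hashlib.sha256(content).hexdigest()


def verify_bundle(manifest: dict[str, str], files: dict[str, bytes]) -> list[str]:
    """Return the paths whose current hash no longer matches the manifest."""
    return [
        path
        for path, expected in manifest.items()
        if file_hash(files.get(path, b"")) != expected
    ]


# An unchanged file passes; any byte-level change to index.html fails verification.
manifest = {"index.html": file_hash(b"<html>v1</html>")}
print(verify_bundle(manifest, {"index.html": b"<html>v1</html>"}))  # []
print(verify_bundle(manifest, {"index.html": b"<html>v2</html>"}))  # ['index.html']
```

Any mismatch, however small, surfaces as the "Invalid file hash" error the reporter is seeing.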
@TinT1 commented on GitHub (Mar 14, 2025):
Hi,
Thank you for your quick response—I really appreciate it.
Is there anything we can do to bypass this behavior?
Since we are running it in Kubernetes, where pods may restart for various reasons, it is inconvenient for clients using the desktop app to have to clear their cache after each pod restart.
Thanks
@CuriousCorrelation commented on GitHub (Mar 14, 2025):
Hi,
You're welcome! Happy to help with this issue.
Restarting pods alone shouldn't invalidate the manifest. Verification only fails when there's an actual change to the file, source code, or other mutations affecting the file hash.
While we don't currently offer a way to bypass this verification—as it's important for preventing several cases mentioned above—we could potentially implement it as a security override feature.
I believe such an option would allow users who understand the implications to skip the check. Please feel free to create a separate issue for this request. We can definitely evaluate its feasibility and consider it for future releases!
@TinT1 commented on GitHub (Mar 14, 2025):
Hi,
Unfortunately, this seems to be our case, and we can replicate the issue with the following steps:
1. Add the `hoppscotch.example.com` instance to the Desktop app.
2. Run `kubectl rollout restart deployment hoppscotch`.
3. Try to reconnect to `hoppscotch.example.com` → Connection failed: Verification error: Invalid file hash.

BR
@CuriousCorrelation commented on GitHub (Mar 14, 2025):
Hi,
That is very strange indeed.
I'll try to see if I am able to reproduce this and get to the bottom of what might be causing the hash to change. I wonder if this is due to changes in environment variables upon pod restart that get picked up by the file, changing their content and as a result the hash.
This would be a very valid use case for skipping validation. I'll try to investigate further and create a new issue based on my findings. In the meantime, let's keep this issue open for tracking.
Just for context, you can see which exact file is causing this verification failure by looking for the hash that was retrieved by the desktop app from the manifest hosted at `[instance-url]/desktop-app-server/api/v1/manifest`. Since these are web app build files, the minification might make it tricky, but it could still help identify the issue.

Thank you for reporting the issue and the detailed replication steps, they'll be very helpful for our investigation.
@TinT1 commented on GitHub (Mar 14, 2025):
Hi @CuriousCorrelation, thank you for your time.
It seems that the `index.html` hash changes after a rollout, see below:

```
curl https://hoppscotch.example.com/desktop-app-server/api/v1/manifest
kubectl rollout restart deployment hoppscotch
curl https://hoppscotch.example.com/desktop-app-server/api/v1/manifest
```

There are also other hashes that changed.

EDIT:
Disregard this; it seems that this only applies to `index.html`.
BR
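Comparing two manifest snapshots like this can be automated. The sketch below assumes the manifest maps file paths to hash strings; the helper name is illustrative, not part of any Hoppscotch tooling.

```python
# Illustrative helper to diff two manifest snapshots taken before and after a
# pod restart, assuming each manifest is a simple path -> hash mapping.
def changed_files(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """List paths present in both snapshots whose hash changed."""
    return sorted(
        path
        for path in before.keys() & after.keys()
        if before[path] != after[path]
    )


before = {"index.html": "aaa111", "assets/app.js": "bbb222"}
after = {"index.html": "ccc333", "assets/app.js": "bbb222"}
print(changed_files(before, after))  # ['index.html']
```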
@CuriousCorrelation commented on GitHub (Mar 14, 2025):
Of course, happy to help! Thanks for taking time to diagnose this with me.
You've done some good detective work! Looking at those manifest hash changes, it strengthens our suspicion.
If it's only the
index.htmlthat changes between deployments, there's a good chance it's due to environment variables causing the hash changes.The desktop app (through some indirections) uses the
distfolder that is generated when you dopnpm generatein thepackages/hoppscotch-selfhost-webpackage, which is then served via thedesktop-app-serverpath. In thatdistdirectory, if you look intoindex.html, you'll find this line:This is using the import-meta-env package, which replaces that placeholder with actual environment variables at runtime (so we don't have to rebuild the entire image if there are changes to
.envfile). When your pods restart, even though the file content appears identical (same size of 5747 bytes), the environment variables are likely being re-injected with slight changes.To be clear, this behavior is intentional by design in the import-meta-env package - it's supposed to inject environment variables at runtime. However, I'm unsure if it is intentional that these environment variables contain values that might be changing each deployment, changing on each pod restart, causing the hash verification to fail.
If it's is indeed the environment variables that are changing and if it is intentional, then this does strengthens the case for having an option to skip verification for these kinds of deployment scenarios.
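A toy demonstration of the failure mode being described: even when the injected values are identical, serializing them in a different order changes the bytes of `index.html`, and therefore its hash. This is not import-meta-env's actual code; the placeholder name and serialization are assumptions for illustration.

```python
# Toy model of runtime env injection: a placeholder in the HTML template is
# replaced with a serialized object, in whatever order the env mapping yields.
import hashlib


def inject(template: str, env: dict[str, str]) -> str:
    """Replace the __ENV__ placeholder with env vars, in dict iteration order."""
    body = ",".join(f'"{k}":"{v}"' for k, v in env.items())
    return template.replace("__ENV__", "{" + body + "}")


template = "<script>globalThis.import_meta_env = __ENV__</script>"
a = inject(template, {"API_URL": "x", "HOSTNAME": "pod-1"})
b = inject(template, {"HOSTNAME": "pod-1", "API_URL": "x"})

# Same variables, same values, different order -> different bytes, different hash.
print(hashlib.sha256(a.encode()).hexdigest() == hashlib.sha256(b.encode()).hexdigest())  # False
```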
@TinT1 commented on GitHub (Mar 14, 2025):
I've printed out the environment variables before and after rolling out the deployment.
The only change I noticed was `HOSTNAME`, which always corresponds to the pod name.

Before rollout → `HOSTNAME=hoppscotch-57cc66cb7b-6gf6d`
After rollout → `HOSTNAME=hoppscotch-f76bbc4b-bthth`

I also tried running `console.log(globalThis.import_meta_env)` in my Desktop app, but the only variable present was:

BR
@TinT1 commented on GitHub (Mar 14, 2025):
Tried setting `HOSTNAME` to `hoppscotch.example.com` so that it doesn't change when the deployment is restarted; it didn't help.

@CuriousCorrelation commented on GitHub (Mar 15, 2025):
Hi,
So I went ahead and set up a minimal reproducible example with Docker, but couldn't find any hash changes between container restarts when keeping environment variables consistent (foreshadowing!). Only the bundle signature changes, which is expected and handled correctly by the desktop app.

What's particularly interesting is that in our bundle generation process, hashes are calculated on the raw file content before being added to the ZIP archive. This confirms that the hash changes you're seeing come from actual content modifications in the `index.html` file itself, not just metadata or packaging differences. I haven't seen this issue reported by other users running Hoppscotch on k8s with Helm, though their setups might be more minimal.

I tested your observation about the `HOSTNAME` changing after each restart, and had a hunch: while the values replaced by `globalThis.import_meta_env` aren't changing, perhaps the order in which they're injected into the file is non-deterministic when environment variables change.

Bingo! That was it! The order of environment variable injection seems non-deterministic only when there are any modifications to the `.env` file; this doesn't happen when, say, we move the entire file to `.env.backup` and move it back as `.env`.

You can test this theory yourself by inspecting the actual `index.html` file before and after restarts:

Option 1: Look inside the container:

Option 2: Fetch and compare the bundles:

This case, where environment changes after each restart affect file hashes, might warrant a selective verification skip option for `index.html`. While changing environment variables could technically be considered a fresh deployment (where clearing cache makes sense), I understand how skipping `index.html` hash verification would provide a much better user experience.

@CuriousCorrelation commented on GitHub (Mar 15, 2025):
I have a bit more research and testing to do on this before I can make a PR, but in the meantime you can use the changes described here to keep the env vars that are being injected consistent across pod restarts.
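The eventual fix amounts to making the injection order deterministic. A minimal sketch of that idea, continuing the toy model above (the placeholder name and serialization are illustrative assumptions, not the actual patch):

```python
# Sketch of a deterministic-order fix: serialize env vars sorted by key, so the
# injected snippet (and hence the file hash) is stable across pod restarts.
def inject_sorted(template: str, env: dict[str, str]) -> str:
    """Replace the __ENV__ placeholder with env vars in sorted key order."""
    body = ",".join(f'"{k}":"{v}"' for k, v in sorted(env.items()))
    return template.replace("__ENV__", "{" + body + "}")


template = "<script>globalThis.import_meta_env = __ENV__</script>"
a = inject_sorted(template, {"B": "2", "A": "1"})
b = inject_sorted(template, {"A": "1", "B": "2"})
print(a == b)  # True: same output regardless of input order
```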
@TinT1 commented on GitHub (Mar 19, 2025):
Hi @CuriousCorrelation
Thank you for the research. I've compared `/site/selfhost-web/index.html` before and after the pod restart and can confirm that the only difference is the order of environment variables in:
@CuriousCorrelation commented on GitHub (Mar 19, 2025):
Excellent! Thanks for the confirmation, this should be fixed with the upcoming patch release scheduled soon 🚀
@CuriousCorrelation commented on GitHub (Mar 19, 2025):
Hi @TinT1, the latest release `v25.2.3` is out, could you see if it has resolved your issue? 🤞

@TinT1 commented on GitHub (Mar 19, 2025):
Hi @CuriousCorrelation,
We just upgraded our Hoppscotch instance and tested restarting the pods multiple times. The order of environment variables remained the same across restarts, and we didn’t encounter any issues connecting the Desktop app after each restart.
Thank you very much for the fix! 🎉
Closing the issue.