[GH-ISSUE #4891] [bug]: Desktop app [Connection failed: Verification error: Invalid file hash] #1840

Closed
opened 2026-03-16 22:02:31 +03:00 by kerem · 15 comments
Owner

Originally created by @TinT1 on GitHub (Mar 14, 2025).
Original GitHub issue: https://github.com/hoppscotch/hoppscotch/issues/4891

Originally assigned to: @CuriousCorrelation on GitHub.

### Is there an existing issue for this?

- [x] I have searched existing issues and this bug hasn't been reported yet

### Current behavior

Hi,

We are running a self-hosted Hoppscotch instance (2025.2.2) together with Proxyscotch (v0.1.4) in Kubernetes. However, after each Hoppscotch pod restart, we need to delete the instance in the Desktop app and recreate it. Otherwise, we are unable to reconnect to our self-hosted Hoppscotch from the Desktop app, receiving the following error:

"Connection failed: Verification error: Invalid file hash: expected << hash >>, got << another hash >>."

Environment variables:

```
ACCESS_TOKEN_VALIDITY=1d
ENABLE_SUBPATH_BASED_ACCESS='true'
GOOGLE_SCOPE=email,profile
MAGIC_LINK_TOKEN_VALIDITY='3'
MAILER_SMTP_ENABLE='false'
MAILER_USE_CUSTOM_CONFIGS='false'
RATE_LIMIT_MAX='100'
RATE_LIMIT_TTL='60'
REDIRECT_URL=<< hoppscotch url >>
REFRESH_TOKEN_VALIDITY=7d
TOKEN_SALT_COMPLEXITY='10'
VITE_ADMIN_URL=<< hoppscotch url >>/admin
VITE_ALLOWED_AUTH_PROVIDERS=GOOGLE
VITE_APP_PRIVACY_POLICY_LINK=https://docs.example.com/privacy
VITE_APP_TOS_LINK=https://docs.example.com/terms
VITE_BACKEND_API_URL=<< hoppscotch url >>/backend/v1
VITE_BACKEND_GQL_URL=<< hoppscotch url >>/backend/graphql
VITE_BACKEND_WS_URL=wss:hoppscotch.example.com/backend/graphql
VITE_BASE_URL=<< hoppscotch url >>
VITE_SHORTCODE_BASE_URL=<< hoppscotch url >>
WHITELISTED_ORIGINS=<< hoppscotch url >>,app://<< hoppscotch app >>
HOPP_AIO_ALTERNATE_PORT='3300'
```

Unfortunately, I don't see any errors in the console.

Would appreciate any help, thanks!

### Steps to reproduce

Deploy Hoppscotch to Kubernetes, connect to it from the Desktop app, restart the Hoppscotch pod, and try to reconnect.

![Image](https://github.com/user-attachments/assets/88ebe52b-3e0a-4a8c-8f5a-ed7b9be9521c)

### Logs and Screenshots

### Environment

Deploy preview

### Hoppscotch Version

Self-hosted

### Interceptor

Proxy - Desktop App

### Browsers Affected

_No response_

### Operating System

MacOS

### Additional Information

_No response_

kerem 2026-03-16 22:02:31 +03:00
Author
Owner

@CuriousCorrelation commented on GitHub (Mar 14, 2025):

Hi, this happens when the manifest sent to the desktop app by the self-hosted instance doesn’t match the internal list of hashes the desktop app maintains. This could happen due to several reasons, one being any changes to the files hosted by the self-hosted instance after the first connection that downloaded the instance bundle.

This check prevents app data mismatches: the same domain hosting a different app that overrides or invalidates user data, changes to local data while offline, and more. In this particular case, where you know the cause of the changes to the self-hosted instance bundle, simply clearing the cache from the “Add a new instance” section and choosing the instance from the dropdown is enough; there's no need to remove and re-add the instance.

![Image](https://github.com/user-attachments/assets/da78b02d-4e44-4406-a8bd-b8d7ee8a8090)
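As a side note, the hashes in the error look like base64-encoded SHA-256 digests, so the comparison can be reproduced locally. A minimal sketch under that assumption (the file and its content here are made up):

```shell
# Sketch: reproduce the manifest-style check locally, assuming the "hash"
# field is a base64-encoded SHA-256 of the raw file content (an assumption
# based on the 44-character base64 strings in the error message).
printf '<html>sample</html>' > /tmp/sample.html

# Hash the file the way the manifest presumably does.
hash=$(openssl dgst -sha256 -binary /tmp/sample.html | base64)

# Verification then amounts to a string comparison against the manifest value.
expected="$hash"   # in practice: the "hash" field from the manifest entry
[ "$hash" = "$expected" ] && echo "hash OK" || echo "hash mismatch"
```

If the computed hash of a served file matches its manifest entry, the server is still serving the bundle the desktop app cached.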


@TinT1 commented on GitHub (Mar 14, 2025):

Hi,

Thank you for your quick response—I really appreciate it.

Is there anything we can do to bypass this behavior?

Since we are running it in Kubernetes, where pods may restart for various reasons, it is inconvenient for clients using the desktop app to have to clear their cache after each pod restart.

Thanks


@CuriousCorrelation commented on GitHub (Mar 14, 2025):

Hi,

You're welcome! Happy to help with this issue.

Restarting pods alone shouldn't invalidate the manifest. Verification only fails when there's an actual change to the file, source code, or other mutations affecting the file hash.

While we don't currently offer a way to bypass this verification—as it's important for preventing several cases mentioned above—we could potentially implement it as a security override feature.

I believe such an option would allow users who understand the implications to skip the check. Please feel free to create a separate issue for this request. We can definitely evaluate its feasibility and consider it for future releases!


@TinT1 commented on GitHub (Mar 14, 2025):

Hi,

> Restarting pods alone shouldn't invalidate the manifest. Verification only fails when there's an actual change to the file, source code, or other mutations affecting the file hash.

Unfortunately, this seems to be our case, and we can replicate the issue with the following steps:

  1. Add a `hoppscotch.example.com` instance to the Desktop app
  2. `kubectl rollout restart deployment hoppscotch`
  3. In the Desktop app, switch to another instance (e.g., Hoppscotch Cloud).
  4. Switch back to `hoppscotch.example.com` → _Connection failed: Verification error: Invalid file hash._

BR


@CuriousCorrelation commented on GitHub (Mar 14, 2025):

Hi,

That is very strange indeed.

I'll try to see if I am able to reproduce this and get to the bottom of what might be causing the hash to change. I wonder if this is due to changes in environment variables upon pod restart that get picked up by the file, changing their content and as a result the hash.

This would be a very valid use case for skipping validation. I'll try to investigate further and create a new issue based on my findings. In the meantime, let's keep this issue open for tracking.

Just for context, you can see which exact file is causing this verification failure by looking for the hash that was retrieved by the desktop app from the manifest hosted at `[instance-url]/desktop-app-server/api/v1/manifest`. Since these are web app build files, the minification might make it tricky, but it could still help identify the issue.

Thank you for reporting the issue and detailed replication steps, they'll be very helpful for our investigation.
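To make that lookup concrete, here's a hedged sketch: assuming the manifest is the pretty-printed JSON shown elsewhere in this thread, a plain `grep` on a saved copy can map a failing hash back to its file (the sample entry below stands in for a real manifest):

```shell
# Sketch: map a failing hash back to a file path. In practice, save the
# manifest first:
#   curl -s https://hoppscotch.example.com/desktop-app-server/api/v1/manifest > manifest.json
# Here a sample entry (shape taken from this thread) stands in for the real file.
cat > manifest.json <<'EOF'
{
    "path": "index.html",
    "size": 5747,
    "hash": "2iNpp4tPEFl46J8xlBZZO3GNVeClQW6Uzg3XUnRcmyY=",
    "mime_type": "text/html"
}
EOF

failing_hash='2iNpp4tPEFl46J8xlBZZO3GNVeClQW6Uzg3XUnRcmyY='
# -B 2 prints the two preceding lines, which include the "path" field.
grep -B 2 "\"hash\": \"$failing_hash\"" manifest.json
```

The `-B 2` context count assumes the pretty-printed layout above; adjust it if the real manifest is formatted differently.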


@TinT1 commented on GitHub (Mar 14, 2025):

Hi @CuriousCorrelation, thank you for your time.

It seems that the `index.html` hash changes after a rollout; see below:

1. `curl https://hoppscotch.example.com/desktop-app-server/api/v1/manifest`

```
{
    "path": "index.html",
    "size": 5747,
    "hash": "l9fuhdYOXSdbypauuJoR7zK0LoI5bmdYytybTZbgTDg=",
    "mime_type": "text/html"
},
```

2. `kubectl rollout restart deployment hoppscotch`
3. `curl https://hoppscotch.example.com/desktop-app-server/api/v1/manifest`

```
{
    "path": "index.html",
    "size": 5747,
    "hash": "2iNpp4tPEFl46J8xlBZZO3GNVeClQW6Uzg3XUnRcmyY=",
    "mime_type": "text/html"
},
```
<img width="506" alt="Image" src="https://github.com/user-attachments/assets/40d4f2c8-c335-4bd1-9577-0d9e6c48aee3" />

There are also other hashes that were changed as well.

EDIT:

> There are also other hashes that were changed as well

Disregard this—it seems that this only applies to index.html.

BR


@CuriousCorrelation commented on GitHub (Mar 14, 2025):

Of course, happy to help! Thanks for taking time to diagnose this with me.

You've done some good detective work! Looking at those manifest hash changes, it strengthens our suspicion.

If it's only the index.html that changes between deployments, there's a good chance it's due to environment variables causing the hash changes.

The desktop app (through some indirections) uses the `dist` folder that is generated when you run `pnpm generate` in the `packages/hoppscotch-selfhost-web` package, which is then served via the `desktop-app-server` path. In that `dist` directory, if you look into `index.html`, you'll find this line:

```
<script>
  globalThis.import_meta_env = JSON.parse('"import_meta_env_placeholder"')
</script>
```

This is using the [import-meta-env](https://import-meta-env.org/guide/getting-started/introduction.html) package, which replaces that placeholder with actual environment variables at runtime (so we don't have to rebuild the entire image when the `.env` file changes). When your pods restart, even though the file content appears identical (same size of 5747 bytes), the environment variables are likely being re-injected with slight changes.

To be clear, this behavior is intentional by design in the import-meta-env package: it's supposed to inject environment variables at runtime. However, I'm unsure whether it is intentional that these environment variables contain values that change on each deployment or pod restart, causing the hash verification to fail.

If it is indeed the environment variables that are changing, and if that is intentional, then this strengthens the case for having an option to skip verification for these kinds of deployment scenarios.
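The mechanism suspected above is easy to demonstrate in isolation: the same key/value pairs serialized in a different order produce different bytes, and therefore a different hash. A small illustration (not Hoppscotch's actual code; the variable names are made up):

```shell
# Same env pairs, different injection order -> different bytes -> different hash.
a='globalThis.import_meta_env = JSON.parse({"VITE_A":"1","VITE_B":"2"})'
b='globalThis.import_meta_env = JSON.parse({"VITE_B":"2","VITE_A":"1"})'

ha=$(printf '%s' "$a" | openssl dgst -sha256 -binary | base64)
hb=$(printf '%s' "$b" | openssl dgst -sha256 -binary | base64)

if [ "$ha" != "$hb" ]; then
  echo "same data, different order: hashes differ"
fi
```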


@TinT1 commented on GitHub (Mar 14, 2025):

I've printed out the environment variables before and after rolling out the deployment.

The only change I noticed was the `HOSTNAME`, which always corresponds to the pod name.

Before rollout -> `HOSTNAME=hoppscotch-57cc66cb7b-6gf6d`
After rollout -> `HOSTNAME=hoppscotch-f76bbc4b-bthth`

I also tried running `console.log(globalThis.import_meta_env)` in my Desktop app, but the only variables present were:

```
VITE_ALLOWED_AUTH_PROVIDERS: "GOOGLE"
VITE_APP_PRIVACY_POLICY_LINK: "https://docs.example.com/privacy"
VITE_APP_TOS_LINK: "https://docs.example.com/terms"
VITE_BACKEND_API_URL: "https://hoppscotch.example.com/backend/v1"
VITE_BACKEND_GQL_URL: "https://hoppscotch.example.com/backend/graphql"
VITE_BACKEND_WS_URL: "wss:hoppscotch.example.com/backend/graphql"
VITE_BASE_URL: "https://hoppscotch.example.com"
VITE_SHORTCODE_BASE_URL: "https://hoppscotch.example.com"
```

BR


@TinT1 commented on GitHub (Mar 14, 2025):

Tried setting `HOSTNAME` to `hoppscotch.example.com` so that it doesn't change when the deployment is restarted, but it didn't help.


@CuriousCorrelation commented on GitHub (Mar 15, 2025):

Hi,

So I went ahead and set up a minimal reproducible example with Docker, but couldn't find any hash changes between container restarts when keeping environment variables consistent (foreshadowing!). Only the bundle signature changes, which is expected and handled correctly by the desktop app.

What's particularly interesting is that in our bundle generation process, hashes are calculated on the raw file content before being added to the ZIP archive. This confirms that the hash changes you're seeing come from actual content modifications in the `index.html` file itself, not just metadata or packaging differences. I haven't seen this issue reported by other users running Hoppscotch on k8s with Helm, though their setups might be more minimal.

I tested your observation about the `HOSTNAME` changing after each restart, and had a hunch: while the values replaced by `globalThis.import_meta_env` aren't changing, perhaps the order in which they're injected into the file is non-deterministic when environment variables change.

Bingo! That was it! The order of environment variable injection seems non-deterministic only when there is any modification to the `.env` file; it doesn't happen when, say, we move the entire file to `.env.backup` and move it back as `.env`.

You can test this theory yourself by inspecting the actual `index.html` file before and after restarts:

Option 1: Look inside the container:

```
kubectl exec -it <pod-name> -- cat /site/selfhost-web/index.html > before.html
# After restart
kubectl exec -it <new-pod-name> -- cat /site/selfhost-web/index.html > after.html
diff before.html after.html
```

Option 2: Fetch and compare the bundles:

```
# Before restart
mkdir before
cd before
curl https://hoppscotch.example.com/desktop-app-server/api/v1/bundle -o before.zip
7z x before.zip

# After restart (in a different directory)
mkdir after
cd after
curl https://hoppscotch.example.com/desktop-app-server/api/v1/bundle -o after.zip
7z x after.zip

# Then compare the index.html files
```

This case - where environment changes after each restart affect file hashes - might warrant a selective verification skip option for index.html. While changing environment variables could technically be considered a fresh deployment (where clearing cache makes sense), I understand how skipping index.html hash verification would provide a much better user experience.
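On top of either option, a hypothetical normalization step can confirm that ordering is the only difference: split the injected pairs onto separate lines and sort them before diffing. A sketch with stand-in files (the `normalize` helper is made up):

```shell
# Hypothetical check that two index.html snapshots differ only in the order
# of the injected env pairs. Stand-in files replace the real before/after copies.
printf '%s\n' 'globalThis.import_meta_env = JSON.parse({"A":"1","B":"2"})' > before.html
printf '%s\n' 'globalThis.import_meta_env = JSON.parse({"B":"2","A":"1"})' > after.html

# Split on commas and braces, then sort, so ordering no longer matters.
normalize() { tr ',{}' '\n' < "$1" | sort; }

normalize before.html > before.norm
normalize after.html  > after.norm
diff before.norm after.norm && echo "only the ordering differs"
```

An empty diff after normalization means the two snapshots carry the same pairs and only their injection order changed.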


@CuriousCorrelation commented on GitHub (Mar 15, 2025):

I have a bit more research and testing to do on this before I can make a PR, but in the meantime you can use the changes [described here](https://github.com/hoppscotch/hoppscotch/commit/921c1a6ff97008d00ab1b2e4ae6af621e364b31e) to keep the env vars that are being injected consistent across pod restarts.


@TinT1 commented on GitHub (Mar 19, 2025):

Hi @CuriousCorrelation

Thank you for the research. I've compared `/site/selfhost-web/index.html` before and after the pod restart and can confirm that the only difference is the order of environment variables in:

```
<script>
      globalThis.import_meta_env = JSON.parse('<< envs >>')
</script>
```


@CuriousCorrelation commented on GitHub (Mar 19, 2025):

Excellent! Thanks for the confirmation, this should be fixed with the upcoming patch release scheduled soon 🚀


@CuriousCorrelation commented on GitHub (Mar 19, 2025):

Hi @TinT1, the latest release v25.2.3 is out, could you see if it has resolved your issue? 🤞


@TinT1 commented on GitHub (Mar 19, 2025):

Hi @CuriousCorrelation,

We just upgraded our Hoppscotch instance and tested restarting the pods multiple times. The order of environment variables remained the same across restarts, and we didn’t encounter any issues connecting the Desktop app after each restart.

Thank you very much for the fix! 🎉
Closing the issue.
