[GH-ISSUE #355] SqliteError: no such table: tasks #231

Closed
opened 2026-03-02 11:47:49 +03:00 by kerem · 5 comments
Owner

Originally created by @dannytsang on GitHub (Aug 18, 2024).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/355

I'm getting the following error when I go to the Admin section, and it spams the docker logs:

SqliteError: no such table: tasks
    at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
    ... 5 lines matching cause stack trace ...
    at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: SqliteError: no such table: tasks
      at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
      at v.prepareQuery (/app/apps/web/.next/server/chunks/917.js:51:40669)
      at v.prepareOneTimeQuery (/app/apps/web/.next/server/chunks/917.js:51:39590)
      at m._prepare (/app/apps/web/.next/server/chunks/917.js:51:66625)
      at m.all (/app/apps/web/.next/server/chunks/917.js:51:66807)
      at m.execute (/app/apps/web/.next/server/chunks/917.js:51:66919)
      at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
    code: 'SQLITE_ERROR'
  }
}

It's probably my problem because I'm running all the services as separate docker containers on Unraid rather than using compose. I can see there are db.db and queue.db created in the data volume, which I've mapped to both the web and workers containers.
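One way to confirm whether the migrations ever ran against the mapped volume is to list the tables in db.db directly. This is a hypothetical diagnostic, not something from the thread: the /data/db.db path is assumed from the DATA_DIR=/data mapping in the templates below, and it requires the sqlite3 CLI inside (or against a copy of) the volume:

```shell
# Hypothetical diagnostic (not from the thread): inspect the Hoarder db
# to see whether the migrations ever created the `tasks` table.
DB="${DB:-/data/db.db}"   # path assumed from the DATA_DIR=/data mapping
if [ -f "$DB" ]; then
  # List every table the migration script has created so far.
  sqlite3 "$DB" ".tables"
  # An empty result here means the `tasks` table is missing, i.e. the
  # schema on disk is older than what the web container expects.
  sqlite3 "$DB" "SELECT name FROM sqlite_master WHERE type='table' AND name='tasks';"
else
  echo "No database file at $DB - check the volume mapping."
fi
```

If the second query prints nothing, the error above is expected: the web container is querying a schema the worker-side migrations never produced.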

Full docker log:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service init-db-migration: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-db-migration successfully started
s6-rc: info: service svc-web: starting
s6-rc: info: service svc-web successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
Running db migration script
   ▲ Next.js 14.1.4
   - Local:        http://localhost:3000
   - Network:      http://0.0.0.0:3000

 ✓ Ready in 364ms
SqliteError: no such table: tasks
    at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
    ... 5 lines matching cause stack trace ...
    at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: SqliteError: no such table: tasks
      at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
      at v.prepareQuery (/app/apps/web/.next/server/chunks/917.js:51:40669)
      at v.prepareOneTimeQuery (/app/apps/web/.next/server/chunks/917.js:51:39590)
      at m._prepare (/app/apps/web/.next/server/chunks/917.js:51:66625)
      at m.all (/app/apps/web/.next/server/chunks/917.js:51:66807)
      at m.execute (/app/apps/web/.next/server/chunks/917.js:51:66919)
      at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
    code: 'SQLITE_ERROR'
  }
}
SqliteError: no such table: tasks
    at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
    ... 5 lines matching cause stack trace ...
    at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: SqliteError: no such table: tasks
      at Database.prepare (/app/node_modules/better-sqlite3/lib/methods/wrappers.js:5:21)
      at v.prepareQuery (/app/apps/web/.next/server/chunks/917.js:51:40669)
      at v.prepareOneTimeQuery (/app/apps/web/.next/server/chunks/917.js:51:39590)
      at m._prepare (/app/apps/web/.next/server/chunks/917.js:51:66625)
      at m.all (/app/apps/web/.next/server/chunks/917.js:51:66807)
      at m.execute (/app/apps/web/.next/server/chunks/917.js:51:66919)
      at m.then (/app/apps/web/.next/server/chunks/917.js:51:45895) {
    code: 'SQLITE_ERROR'
  }
}

My docker templates (redacted secrets) for hoarder:
Web:

<?xml version="1.0"?>
<Container version="2">
  <Name>hoarder</Name>
  <Repository>ghcr.io/hoarder-app/hoarder-web:latest</Repository>
  <Registry/>
  <Network>docker</Network>
  <MyIP/>
  <Shell>sh</Shell>
  <Privileged>false</Privileged>
  <Support/>
  <Project/>
  <Overview/>
  <Category/>
  <TemplateURL/>
  <Icon>https://avatars.githubusercontent.com/u/170265186?s=48&amp;v=4</Icon>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1724009096</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="Host Port 1" Target="3000" Default="" Mode="tcp" Description="" Type="Port" Display="always" Required="false" Mask="false">3000</Config>
  <Config Name="REDIS_HOST" Target="REDIS_HOST" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">redis</Config>
  <Config Name="MEILI_ADDR" Target="MEILI_ADDR" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">http://meilisearch:7700</Config>
  <Config Name="Data" Target="/data" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/dockerData/hoarder/data/</Config>
  <Config Name="DATA_DIR" Target="DATA_DIR" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">/data</Config>
  <Config Name="NEXTAUTH_SECRET" Target="NEXTAUTH_SECRET" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">#####</Config>
  <Config Name="NEXTAUTH_URL" Target="NEXTAUTH_URL" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">http://localhost:3000</Config>
  <Config Name="traefik.enable" Target="traefik.enable" Default="" Mode="" Description="" Type="Label" Display="always" Required="false" Mask="false">true</Config>
  <Config Name="traefik.http.services.hoarder.loadbalancer.server.port" Target="traefik.http.services.hoarder.loadbalancer.server.port" Default="" Mode="" Description="" Type="Label" Display="always" Required="false" Mask="false">3000</Config>
  <Config Name="traefik.http.routers.hoarder.rule" Target="traefik.http.routers.hoarder.rule" Default="" Mode="" Description="" Type="Label" Display="always" Required="false" Mask="false">Host(`hoarder.houseoftsang.io`)</Config>
  <Config Name="MEILI_MASTER_KEY" Target="MEILI_MASTER_KEY" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">#####</Config>
  <Config Name="REDIS_PASSWORD" Target="REDIS_PASSWORD" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">#####</Config>
</Container>

Worker:

<?xml version="1.0"?>
<Container version="2">
  <Name>hoarder-workers</Name>
  <Repository>ghcr.io/hoarder-app/hoarder-workers</Repository>
  <Registry/>
  <Network>docker</Network>
  <MyIP/>
  <Shell>sh</Shell>
  <Privileged>false</Privileged>
  <Support/>
  <Project/>
  <Overview/>
  <Category/>
  <WebUI/>
  <TemplateURL/>
  <Icon/>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1724008996</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="REDIS_HOST" Target="REDIS_HOST" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">redis</Config>
  <Config Name="MEILI_ADDR" Target="MEILI_ADDR" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">http://meilisearch:7700</Config>
  <Config Name="BROWSER_WEB_URL" Target="BROWSER_WEB_URL" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">http://chrome:9222</Config>
  <Config Name="Data" Target="/data" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/dockerData/hoarder/data/</Config>
  <Config Name="OLLAMA_BASE_URL" Target="OLLAMA_BASE_URL" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">http://ollama:11434</Config>
  <Config Name="INFERENCE_TEXT_MODEL" Target="INFERENCE_TEXT_MODEL" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">llama3.1</Config>
  <Config Name="INFERENCE_IMAGE_MODEL" Target="INFERENCE_IMAGE_MODEL" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">llava</Config>
  <Config Name="MEILI_MASTER_KEY" Target="MEILI_MASTER_KEY" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">#####</Config>
  <Config Name="REDIS_PASSWORD" Target="REDIS_PASSWORD" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">#####</Config>
  <Config Name="DISABLE_SIGNUPS" Target="DISABLE_SIGNUPS" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">false</Config>
</Container>
kerem closed this issue 2026-03-02 11:47:50 +03:00

@MohamedBassem commented on GitHub (Aug 18, 2024):

This means that the workers container either hasn't been restarted or hasn't been updated. Try restarting the worker container first, if it didn't work, try re-pulling its latest version and restart it one more time. That should run the migrations and mitigate the errors.


@dannytsang commented on GitHub (Aug 18, 2024):

I tried restarting the worker, then stopped, removed the image and re-pulled it but I'm afraid it still has the same error.

The worker log is showing the below. Is it stuck at starting the search indexing worker?

Corepack is about to download https://registry.npmjs.org/pnpm/-/pnpm-9.7.1.tgz.
(node:35) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)

> @hoarder/workers@0.1.0 start:prod /app/apps/workers
> tsx index.ts

2024-08-18T20:18:27.633Z info: Workers version: 0.15.0
2024-08-18T20:18:27.641Z info: [Crawler] Connecting to existing browser instance: http://chrome:9222
2024-08-18T20:18:27.655Z info: [Crawler] Successfully resolved IP address, new address: http://172.18.0.57:9222/
2024-08-18T20:18:29.380Z info: Starting crawler worker ...
2024-08-18T20:18:29.381Z info: Starting inference worker ...
2024-08-18T20:18:29.381Z info: Starting search indexing worker ...

I also noticed I was originally using the nightly build because I didn't specify a tag for the worker image. Now (and you can see above) I'm using the release tag.


@MohamedBassem commented on GitHub (Aug 18, 2024):

> I also noticed I was originally using the nightly build because I didn't specify a tag for the worker image. Now (and you can see above) I'm using the release tag.

That's your problem. You're using nightly (latest) for web, but release tag for workers. Hoarder assumes that both containers are using the same version. It's not safe to go back in versions, so you'll need to use nightly (latest) for workers for now, and then after the next release, you can go back to the release tag.
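The fix above amounts to keeping both image tags in lockstep. A minimal sketch of what that looks like, assuming the image names from the Unraid templates in this thread (the single TAG variable is the point; the docker commands themselves are shown commented out since they need a running daemon):

```shell
# Sketch: pin web and workers to one shared tag so their schemas match.
# Image names come from the templates above; the TAG value is illustrative.
TAG="latest"   # or a specific release tag once both sides have caught up
WEB_IMAGE="ghcr.io/hoarder-app/hoarder-web:${TAG}"
WORKER_IMAGE="ghcr.io/hoarder-app/hoarder-workers:${TAG}"

# Both references end in the same tag by construction:
echo "$WEB_IMAGE"
echo "$WORKER_IMAGE"

# Then re-pull and restart both containers (requires docker):
# docker pull "$WEB_IMAGE" && docker pull "$WORKER_IMAGE"
# docker restart hoarder hoarder-workers
```

Deriving both references from one variable makes the "same version on both containers" assumption impossible to violate by accident, which is exactly the mismatch that caused this issue.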


@MohamedBassem commented on GitHub (Aug 18, 2024):

The next release will simplify the whole thing by merging the two containers in one :)


@dannytsang commented on GitHub (Aug 18, 2024):

> > I also noticed I was originally using the nightly build because I didn't specify a tag for the worker image. Now (and you can see above) I'm using the release tag.
>
> That's your problem. You're using nightly (latest) for web, but release tag for workers. Hoarder assumes that both containers are using the same version. It's not safe to go back in versions, so you'll need to use nightly (latest) for workers for now, and then after the next release, you can go back to the release tag.

I've just started, so no big data loss. I stopped hoarder, removed the db files in the data volume, added the release tag, and started it up from scratch. I then restarted the worker that's already on the release version, and all is good now.

Thank you for your very quick response @MohamedBassem and look forward to the next version.
