[GH-ISSUE #666] "no such table: tasks" when adding a new item #429

Closed
opened 2026-03-02 11:49:45 +03:00 by kerem · 17 comments
Owner

Originally created by @Guillaume-Bignon on GitHub (Nov 16, 2024).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/666

Describe the Bug

Hi,

I just installed the latest version of Hoarder (v0.19.0) following the steps here: https://docs.hoarder.app/Installation/docker/

Installation, sign-up, and log-in worked just fine, but when I try to add a new item from the home page, I get a red alert saying "no such table: tasks". When I refresh the page, though, the item appears.

By the way, I got a similar error when trying to delete an item, with a red alert box saying "Something went wrong. There was a problem with your request." I mention it because I assume both errors could be linked.

I found another bug report with the same error and found out that my queue.db is also empty. This might come from a database migration error, but I have no idea how to fix that since I'm pretty new to all of this.

Thanks for your help.
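For readers hitting the same symptom, one quick way to confirm whether queue.db was ever migrated is to list its tables with Python's standard-library sqlite3 module. This is a diagnostic sketch, not part of Hoarder itself; the path to queue.db is an assumption and depends on where your compose file mounts the data volume:

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the table names in a SQLite file.

    An empty list means the schema was never created, i.e. the
    migration that should have created the "tasks" table never ran.
    """
    with sqlite3.connect(db_path) as con:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# Hypothetical path; adjust to your DATA_DIR mount on the host:
# print(list_tables("/path/to/data/queue.db"))
```

If the printed list is empty, the "no such table: tasks" error is expected: the app is querying a table that was never created. (Note that sqlite3.connect creates an empty file if the path does not exist, so check the path first.)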

Steps to Reproduce

  1. Install a brand new Hoarder with docker compose
  2. Create user and log in
  3. Try to add a new item

Expected Behaviour

The item should be immediately added and visible, with no error shown.

Screenshots or Additional Context

No response

Device Details

No response

Exact Hoarder Version

v0.19.0

kerem 2026-03-02 11:49:45 +03:00
  • closed this issue
  • added the
    bug
    label

@MohamedBassem commented on GitHub (Nov 16, 2024):

Hey, can you try deleting queue.db and restarting the container? That should recreate it


@Guillaume-Bignon commented on GitHub (Nov 17, 2024):

Hi, thanks for your answer. I tried that fix, but unfortunately it did not work: queue.db was indeed recreated but is still empty. I don't know if it's important, but the database was not recreated during the container restart, only after I tried to add a new item.


@stayupthetree commented on GitHub (Nov 17, 2024):

I am getting this error as well on a fresh install. I tried removing queue.db and the result is the same.


@MohamedBassem commented on GitHub (Nov 18, 2024):

hmmm, that's unexpected. The queue.db file should be created on container startup. Can you share the logs immediately after starting the container?


@MohamedBassem commented on GitHub (Nov 18, 2024):

Can you also share your compose file and redacted env file?


@Guillaume-Bignon commented on GitHub (Nov 18, 2024):

Sure. I deleted queue.db once again and here are the logs of "hoarder-web-1" and "hoarder-chrome-1" (I don't know which one is relevant here).

hoarder-web-1:

```
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service init-db-migration: starting
Running db migration script
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-db-migration successfully started
s6-rc: info: service svc-workers: starting
s6-rc: info: service svc-web: starting
s6-rc: info: service svc-workers successfully started
s6-rc: info: service svc-web successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
  ▲ Next.js 14.2.13
  - Local:        http://localhost:3000
  - Network:      http://0.0.0.0:3000

 ✓ Starting...
 ✓ Ready in 403ms
```

hoarder-chrome-1:

```
[1118/212522.742137:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743160:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743311:ERROR:bus.cc(407)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[1118/212522.743528:WARNING:dns_config_service_linux.cc(427)] Failed to read DnsConfig.
[1118/212522.755910:INFO:policy_logger.cc(145)] :components/policy/core/common/config_dir_policy_loader.cc(118) Skipping mandatory platform policies because no policy file was found at: /etc/chromium/policies/managed
[1118/212522.755935:INFO:policy_logger.cc(145)] :components/policy/core/common/config_dir_policy_loader.cc(118) Skipping recommended platform policies because no policy file was found at: /etc/chromium/policies/recommended

DevTools listening on ws://0.0.0.0:9222/devtools/browser/bdfa091e-772d-4305-ac47-65a31c16276c
[1118/212522.760146:WARNING:bluez_dbus_manager.cc(248)] Floss manager not present, cannot set Floss enable/disable.
[1118/212522.781625:WARNING:sandbox_linux.cc(418)] InitializeSandbox() called with multiple threads in process gpu-process.
[1118/212522.845023:WARNING:dns_config_service_linux.cc(427)] Failed to read DnsConfig.
```

Once again, I don't know if it's important, but a few seconds after the restart I get this error several times in the hoarder-web-1 container:

```
/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21609
    throw new Error(
          ^

Error: Error when performing the request to https://registry.npmjs.org/pnpm/latest; for troubleshooting help, see https://github.com/nodejs/corepack#troubleshooting
    at fetch (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21609:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async fetchAsJson (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21623:20)
    ... 4 lines matching cause stack trace ...
    at async Object.runMain (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:23096:5) {
  [cause]: TypeError: fetch failed
      at node:internal/deps/undici/undici:13392:13
      at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
      at async fetch (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21603:16)
      at async fetchAsJson (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21623:20)
      at async fetchLatestStableVersion (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21550:20)
      at async fetchLatestStableVersion2 (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:21672:14)
      at async Engine.getDefaultVersion (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:22292:23)
      at async Engine.executePackageManagerRequest (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:22390:47)
      at async Object.runMain (/usr/local/lib/node_modules/corepack/dist/lib/corepack.cjs:23096:5) {
    [cause]: Error: getaddrinfo EAI_AGAIN registry.npmjs.org
        at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26) {
      errno: -3001,
      code: 'EAI_AGAIN',
      syscall: 'getaddrinfo',
      hostname: 'registry.npmjs.org'
    }
  }
}

Node.js v22.11.0
```

Here is docker-compose.yml (I didn't change anything from the file linked in the installation doc):

```
version: "3.8"
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      - data:/data
    ports:
      - 3000:3000
    env_file:
      - .env
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      # OPENAI_API_KEY: ...
      DATA_DIR: /data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - meilisearch:/meili_data

volumes:
  meilisearch:
  data:
```

And here is .env (I just removed the NextAuth URL and both secret keys):

```
HOARDER_VERSION=release
NEXTAUTH_SECRET=#secret1
MEILI_MASTER_KEY=#secret2
NEXTAUTH_URL=#nexturl
```

As of now, the container has been restarted for a few minutes, I have not tried to connect to Hoarder, and queue.db is still missing from the data directory.


@MohamedBassem commented on GitHub (Nov 18, 2024):

Yeah, clearly there's something wrong here. There are no logs coming from the worker job, which explains why the queue db was not getting initialized. The corepack errors do seem relevant. I'll need to debug this further, and I think I have some idea of what might be going wrong here.


@MohamedBassem commented on GitHub (Nov 18, 2024):

I have a repro, will send a fix.


@MohamedBassem commented on GitHub (Nov 19, 2024):

Ok, managed to fix it in https://github.com/hoarder-app/hoarder/commit/ae78ef50b284335dc77877b69f58af5613ee3c28. If you can't wait for the next release, you can use the nightly image by changing HOARDER_VERSION from release to latest.
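For anyone following along, switching to the nightly image is a one-line change in the .env file used by the compose setup shown earlier:

```
HOARDER_VERSION=latest
```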


@Guillaume-Bignon commented on GitHub (Nov 19, 2024):

Just tried with the latest version and it works perfectly fine. Thank you very much!


@stayupthetree commented on GitHub (Nov 19, 2024):

Tested with latest and the error persists. My docker compose is below. Worth noting I also attempted with the latest docker-compose.yml from here, wiping out everything and starting fresh.

```
services:
  web:
    image: ghcr.io/hoarder-app/hoarder-web:${HOARDER_VERSION:-release}
    container_name: web
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/hoarder/data:/data
    ports:
      - 3111:3000
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      DATA_DIR: /data
    labels:
      traefik.enable: "true"
      traefik.http.routers.web.entrypoints: http
      traefik.http.routers.web.rule: Host(`[REDACTED]`)
      traefik.http.middlewares.web-https-redirect.redirectscheme.scheme: https
      traefik.http.routers.web.middlewares: web-https-redirect
      traefik.http.routers.web-secure.entrypoints: https
      traefik.http.routers.web-secure.rule: Host(`[REDACTED]`)
      traefik.http.routers.web-secure.tls: "true"
      traefik.http.routers.web-secure.service: web
      traefik.http.services.web.loadbalancer.server.port: "3000"
      traefik.docker.network: speedforce
      kuma.mygroup.group.name: Hoarder
      kuma.web.http.name: Web
      kuma.web.docker.docker_container: web
      kuma.web.docker.name: web
      kuma.web.docker.interval: 60
      kuma.web.docker_host: unix:///var/run/docker.sock
      kuma.web.http.url: http://web:3000
      kuma.web.http.interval: 60
      kuma.web.http.max_redirects: 5
      kuma.web.http.status_code: 200
  redis:
    image: redis:7.2-alpine
    container_name: redis
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - redis:/data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    container_name: chrome
    network_mode: speedforce
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    container_name: meilisearch
    network_mode: speedforce
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - /mnt/user/appdata/hoarder/meilisearch:/meili_data
  workers:
    image: ghcr.io/hoarder-app/hoarder-workers:${HOARDER_VERSION:-release}
    container_name: workers
    network_mode: speedforce
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/hoarder/:/data
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
      OPENAI_API_KEY: [REDACTED]
    depends_on:
      web:
        condition: service_started
volumes:
  redis: null
  meilisearch: null
  data: null
networks:
  speedforce:
    external: true
```

@MohamedBassem commented on GitHub (Nov 19, 2024):

@stayupthetree did you try the nightly build? Also can you share the logs of the web container on startup?


@stayupthetree commented on GitHub (Nov 19, 2024):

[web-2024-11-19T23-09-11.log](https://github.com/user-attachments/files/17822525/web-2024-11-19T23-09-11.log)

The error doesn't appear until I try to add a bookmark or go to the admin settings.

My .env:

```
HOARDER_VERSION=latest
NEXTAUTH_SECRET=redacted
MEILI_MASTER_KEY=redacted
NEXTAUTH_URL=http://localhost:3000
CRAWLER_FULL_PAGE_SCREENSHOT=true
CRAWLER_FULL_PAGE_ARCHIVE=true
CRAWLER_VIDEO_DOWNLOAD=true
CRAWLER_VIDEO_DOWNLOAD_MAX_SIZE=-1
```

@MohamedBassem commented on GitHub (Nov 19, 2024):

Ah, found the problem. You're using the `hoarder-app/hoarder-web` image while you should be using `hoarder-app/hoarder` instead. The hoarder-web image doesn't contain the workers, so it doesn't run the db migration.


@MohamedBassem commented on GitHub (Nov 19, 2024):

Actually, you seem to be using the old docker compose, and some things have changed since then; check the upgrade guide for [version 0.16](https://github.com/hoarder-app/hoarder/releases/tag/v0.16.0). We don't need redis anymore, and we don't need a separate container for the workers.


@MohamedBassem commented on GitHub (Nov 19, 2024):

And your main problem in the old docker compose is that the web and worker containers had different mount paths for `/data`, while they should both have the same one.

You have `/mnt/user/appdata/hoarder/data:/data` for web and `/mnt/user/appdata/hoarder/:/data` for workers, which is incorrect. Fixing that should be enough to fix your problem. However, I still recommend you upgrade to the new docker compose.
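A sketch of the corrected volume stanzas, keeping the host path from the compose file above (this is a fragment, not a complete compose file; the key point is that both services bind the same host directory to `/data`):

```
services:
  web:
    volumes:
      - /mnt/user/appdata/hoarder/data:/data
  workers:
    volumes:
      - /mnt/user/appdata/hoarder/data:/data
```

With different host paths, the web container's SQLite files and the worker's migrated database are two unrelated directories, which matches the "no such table" symptom.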


@stayupthetree commented on GitHub (Nov 19, 2024):

Nailed it! Apologies for missing the upgrade guide! Fully upgraded to the new docker compose and it seems to be working splendidly. Thank you sir!
