[GH-ISSUE #949] Docker compose example using OLLAMA #627

Closed
opened 2026-03-02 11:51:26 +03:00 by kerem · 4 comments
Owner

Originally created by @sujansujan on GitHub (Jan 30, 2025).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/949

Describe the feature you'd like

An alternative docker compose example using Ollama instead of an OpenAI API key would be much appreciated.

Describe the benefits this would bring to existing Hoarder users

Ease of setting up a local LLM for tagging purposes.
Private

Can the goal of this request already be achieved via other means?

Yes, but I have not been able to do it.

Have you searched for an existing open/closed issue?

  • I have searched for existing issues and none cover my fundamental request

Additional context

No response

kerem closed this issue 2026-03-02 11:51:26 +03:00

@slavid commented on GitHub (Jan 30, 2025):

The information is in the docs: https://docs.hoarder.app/Installation/docker#4-setup-openai under the blue dropdown that says: "If you want to use Ollama (https://ollama.com/) instead for local inference.":

-   Make sure ollama is running.
-   Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
-   Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
-   Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
-   Make sure that you `ollama pull`-ed the models that you want to use.
-   You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.
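The steps above map onto compose configuration roughly like this (a minimal sketch; the service name `ollama`, the model tags, and the context length are illustrative assumptions, not values from the docs):

```yaml
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    environment:
      # Assumes an "ollama" service reachable on the same compose network.
      OLLAMA_BASE_URL: http://ollama:11434
      INFERENCE_TEXT_MODEL: llama3.1   # text-tagging model; `ollama pull` it first
      INFERENCE_IMAGE_MODEL: llava     # image-tagging model; `ollama pull` it first
      INFERENCE_CONTEXT_LENGTH: 4096   # larger = better tags, costlier inference
```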

@sujansujan commented on GitHub (Jan 31, 2025):

I tried that before posting this feature request. I couldn't get it to work. I also did some googling and looked into issues raised by other Ollama users; it did not work for them either. Here is the compose file I am using:

    services:
      web:
        image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
        restart: unless-stopped
        volumes:
          - data:/data
        ports:
          - 3000:3000
        env_file:
          - .env
        environment:
          MEILI_ADDR: http://meilisearch:7700
          BROWSER_WEB_URL: http://chrome:9222
          OLLAMA_BASE_URL: http://ollama:11434/
          INFERENCE_TEXT_MODEL: deepseek-r1:1.5b
          #INFERENCE_IMAGE_MODEL: llava:7b
          INFERENCE_CONTEXT_LENGTH: 2048
          INFERENCE_LANG: english
          INFERENCE_JOB_TIMEOUT_SEC: 60
          DATA_DIR: /data
      chrome:
        image: gcr.io/zenika-hub/alpine-chrome:123
        restart: unless-stopped
        command:
          - --no-sandbox
          - --disable-gpu
          - --disable-dev-shm-usage
          - --remote-debugging-address=0.0.0.0
          - --remote-debugging-port=9222
          - --hide-scrollbars
      meilisearch:
        image: getmeili/meilisearch:v1.11.1
        restart: unless-stopped
        env_file:
          - .env
        environment:
          MEILI_NO_ANALYTICS: "true"
        volumes:
          - meilisearch:/meili_data

    volumes:
      meilisearch:
      data:

@kamtschatka commented on GitHub (Jan 31, 2025):

You'll have to post some logs from the hoarder container; otherwise we don't know what doesn't work.


@erikgoldenstein commented on GitHub (Jan 31, 2025):

I got this one; it is working perfectly fine (on Linux), but I had to pull the models manually using `docker exec -it`. Also, to get GPUs working from the container you need to install `nvidia-container-toolkit`. With this setup you can then simply add Traefik labels to the web service and it's ready to be hosted.

The general problem is that it is not trivial to reach the host's `localhost` from inside a container. On Mac and Windows you can use the `host.docker.internal` method to reach localhost, and on Linux you would create a dedicated network; this looked like a nice blog post on the matter: https://huzaima.io/blog/connect-localhost-docker. I found that just having Ollama run in a container as well was the cleanest and most self-contained solution, and it avoided the routing issues.
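The manual model pull mentioned above would look roughly like this (a sketch; it assumes the container is named `ollama`, as in the `container_name` of the compose file below, and that the daemon is already running):

```shell
# Pull the models inside the running ollama container.
# Container name and model tags are assumptions; adjust to your setup.
docker exec -it ollama ollama pull phi3:3.8b
docker exec -it ollama ollama pull moondream
```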

services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    container_name: mind
    restart: unless-stopped
    ports:
      - "3000:3000" # use this for running on localhost, can be removed when using traefik
    volumes:
      - ./data:/data
    env_file:
      - .env
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
    networks:
      - hoarder-net

  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
    networks:
      - hoarder-net
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - ./meilisearch:/meili_data
    networks:
      - hoarder-net

  ollama:
    volumes:
      - ./ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    # entrypoint: /bin/bash -c "ollama pull moondream && ollama pull phi3:3.8b && ollama pull snowflake-arctic-embed2 && tail -f /dev/null" # this did sadly not work
    restart: unless-stopped
    image: ollama/ollama:latest
    networks:
      - hoarder-net
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]


networks:
  hoarder-net:
    name: hoarder-net
    driver: bridge
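Note that the compose file above does not set the inference variables on the `web` service; to actually have Hoarder use this ollama container for tagging, something like the following would presumably also be needed under `web`'s `environment` (a sketch; the model tags are taken from the commented-out entrypoint above and must match models you have pulled):

```yaml
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
      INFERENCE_TEXT_MODEL: phi3:3.8b    # assumption; must match a pulled model
      INFERENCE_IMAGE_MODEL: moondream   # assumption; must match a pulled model
```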