[GH-ISSUE #273] Running in production on a subdomain /app/ possible? #203

Closed
opened 2026-02-25 23:41:34 +03:00 by kerem · 14 comments

Originally created by @netopsengineer on GitHub (Aug 9, 2019).
Original GitHub issue: https://github.com/healthchecks/healthchecks/issues/273

Hello,

I have this fantastic app running in a venv with mod_wsgi set up on Apache, and I need to run it under a subdirectory of my main URL. In my case I mapped it to myURL/hc/, and I'm noticing that several pages don't stay under /hc/ and instead revert to the myURL root of my existing site.

For instance, if I click on an individual "check" it takes me to myURL/checks/ee19e9f1-f727-4a8e-96fd-601cc26d85f6/details/ instead of myURL/hc/checks/UUID/. Not every page is like this, though – projects, for example, work as intended: myURL/hc/projects/8195d2d5-af87-4552-a3cd-34886363e7b8/checks/

This is my first shot at standing up a Django app in production (not in debug mode), so I'm just wondering what I'm missing, or whether the app is not designed to be used this way. Please let me know if I can post any other info relevant to troubleshooting. Thank you for taking the time to create such a cool project – looking forward to using it.

kerem closed this issue 2026-02-25 23:41:34 +03:00

@cuu508 commented on GitHub (Aug 12, 2019):

Hello, thank you for reporting this!

To be honest, I've always run the app directly from a domain or a subdomain (not from a subdirectory), and so had not run into this bug myself. But this particular issue should be an easy fix: in static/js/checks.js (https://github.com/healthchecks/healthchecks/blob/master/static/js/checks.js#L135) the URL is constructed on the client side, without knowledge of the "/hc/" subdirectory.

I intend to fix this, but it may take a couple of days until I get a chance to do it properly.

<!-- gh-comment-id:520320896 -->

@cuu508 commented on GitHub (Aug 12, 2019):

@coxoperationsengineer could you please share your Apache configuration – the part with mod_wsgi and URL rewrites (if any)? I want to make sure my testing setup matches yours.

<!-- gh-comment-id:520434856 -->

@netopsengineer commented on GitHub (Aug 12, 2019):

Good morning,

Sure – is this the part that you need? Let me know and I can get whatever else you need. Admittedly, this is the first time I have tried to set up something with mod_wsgi and Django in a venv.

/etc/httpd/conf.d $ cat healthcheck.conf
WSGIDaemonProcess hc
WSGIProcessGroup hc
WSGIApplicationGroup %{GLOBAL}

WSGIScriptAlias /hc /webapps/healthchecks/hc/wsgi.py
    
Alias /static /webapps/healthchecks/static-collected
<Directory /webapps/healthchecks/static-collected>
    Require all granted
</Directory>

<Directory /webapps/healthchecks/hc>
    <Files wsgi.py>
        Require all granted
    </Files>
</Directory>
    
#ErrorLog /webapps/healthchecks/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#LogLevel warn

#CustomLog /webapps/healthchecks/access.log combined

There is this file as well, since I'm running it in a venv:

/etc/httpd/conf.modules.d $ cat 02-wsgi.conf 
LoadModule wsgi_module "/usr/lib64/httpd/modules/mod_wsgi-py36.cpython-36m-x86_64-linux-gnu.so"
WSGIPythonHome "/webapps/hc-venv"
<!-- gh-comment-id:520447967 -->

@cuu508 commented on GitHub (Aug 12, 2019):

Perfect, thanks – just got Healthchecks working under Apache & mod_wsgi and can reproduce the issue.

<!-- gh-comment-id:520457974 -->

@netopsengineer commented on GitHub (Aug 12, 2019):

@cuu508 While I have you, and while I'm sorting out the deployment: do you have any recommendations for handling 'sendalerts' in production? I haven't been able to find any examples of how that is done. I tried running it inside the venv, but noticed it only ran while the venv was activated.

We are really enjoying this app, and it's changing the way we view future deployments – the idea of these self-contained Django venv-style apps is really making us rethink our traditional HTML/PHP-only use of our web server.

<!-- gh-comment-id:520535140 -->

@cuu508 commented on GitHub (Aug 12, 2019):

@coxoperationsengineer your virtualenv's bin folder has a python binary. When you run Python scripts using that binary, they work as if the virtualenv were activated.

For example, let's say, your venv is at /webapps/hc-venv and the healthchecks project is at /webapps/healthchecks. You can run the sendalerts command like so:

/webapps/hc-venv/bin/python /webapps/healthchecks/manage.py sendalerts
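A quick way to confirm which environment a given interpreter belongs to (a generic Python sketch, nothing Healthchecks-specific): inside a venv, sys.prefix differs from sys.base_prefix.

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the venv directory,
    # while sys.base_prefix still points at the base installation.
    return sys.prefix != sys.base_prefix

# Which interpreter is running, and is it a venv interpreter?
print(sys.executable)
print(in_virtualenv())
```

Running this with /webapps/hc-venv/bin/python should print True, while the system python prints False.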

You can use any task runner / process manager you like. I'm currently using systemd services. An example service file:

[Unit]
Description=sendalerts
After=network-online.target
Wants=network-online.target

[Service]
Slice=machine.slice
Restart=always
RestartSec=20
StartLimitInterval=10
StartLimitBurst=5

User=hc
Group=hc

ExecStart=/webapps/hc-venv/bin/python -u /webapps/healthchecks/manage.py sendalerts --no-threads

PrivateDevices=true
ProtectHome=true
ProtectSystem=full

[Install]
WantedBy=multi-user.target

(the -u flag is for unbuffered output)

  • put it in /etc/systemd/system/sendalerts.service
  • run systemctl daemon-reload so systemd notices the new service
  • run systemctl restart sendalerts.service to start it up
  • run systemctl enable sendalerts.service to make it start automatically on boot
  • run journalctl -f -u sendalerts to see live logs
<!-- gh-comment-id:520560304 -->

@cuu508 commented on GitHub (Aug 12, 2019):

Just committed a fix for all the instances of incorrect URL construction in JS that I could find. Please let me know if you notice any other breakage – I had not tested running from a subdirectory before and might have missed something.

<!-- gh-comment-id:520583964 -->

@netopsengineer commented on GitHub (Aug 13, 2019):

@cuu508 Thank you for the info on the alerts, I got it working perfectly with systemd.

I pulled your new files over, stashed my own changes, brought back my settings files, and most things worked.

A few observations: I had to modify the wsgi.py file because "hc.settings" couldn't be found; the "fix" was as follows:

sys.path.append('/webapps/healthchecks')
sys.path.append('/webapps/healthchecks/hc')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hc.settings")

There are several places where images don't load because they are referenced like this:
src="{% site_root %}/static/img/logo-full@2x.png"

The email template is one of those
https://github.com/healthchecks/healthchecks/blob/master/templates/emails/base.html

Also, the badges are double-appending the URL root (/hc/hc/) onto the "tags" and not displaying images either. I haven't been able to figure out where that comes from yet, but I'm getting something like:
https://myURL.com/hc/hc/badge/2bcc6bd2-5381-4caa-a295-ce55a811423e/3Kr54ARM/RH_Poller-TPE_Router_OSPF.svg

<!-- gh-comment-id:521015396 -->

@cuu508 commented on GitHub (Aug 14, 2019):

I think if you add a WSGIPythonPath directive to your Apache configuration, you will not need to edit wsgi.py and adjust paths there.

Here's my configuration:

WSGIScriptAlias /hc /home/cepe/repos/healthchecks/hc/wsgi.py
WSGIPythonHome /home/cepe/venvs/healthchecks
WSGIPythonPath /home/cepe/repos/healthchecks

    
Alias /static /home/cepe/repos/healthchecks/static-collected
<Directory /home/cepe/repos/healthchecks/static-collected>
    Require all granted
</Directory>

<Directory /home/cepe/repos/healthchecks/hc>
    <Files wsgi.py>
        Require all granted
    </Files>
</Directory>

The paths are different but I'm sure you get the idea. I was referencing https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/modwsgi/

To fix src="{% site_root %}/static/img/logo-full@2x.png", I adjusted the SITE_ROOT variable in $PROJECT_DIR/hc/local_settings.py:

SITE_ROOT = "http://mydomain/hc"
PING_ENDPOINT = "http://mydomain/hc/ping/"
<!-- gh-comment-id:521260585 -->

@immanuelfodor commented on GitHub (Sep 28, 2019):

I'm trying to run the project under a domain + folder combination like this one, without success: https://api.xyz.tld/k8s/healthchecks

In my local_settings.py I have, among other settings:

SITE_ROOT = "https://api.xyz.tld/k8s/healthchecks"
STATIC_URL = "https://api.xyz.tld/k8s/healthchecks/static/"

CSS/JS loads fine, but the in-template URLs still fall back to / (e.g. /accounts/login in the HTML), which means I end up at https://api.xyz.tld/accounts/login.

I thought of setting a <base> tag, but it doesn't affect links starting with a slash. I also tried injecting a small JS snippet that rewrites all a.href and form.action attributes, which feels hacky and incomplete – some minified JS files still escape it.

Oh, and I also tried injecting some Python into the hc/urls.py file to prefix all URLs with k8s/healthchecks, but it didn't work: this way the app becomes available at https://api.xyz.tld/k8s/healthchecks/k8s/healthchecks, which is also not what I want.

echo -e "\nurlpatterns = [path('k8s/healthchecks/', include(urlpatterns))]" >> /app/healthchecks/hc/urls.py

The entry point of my Kubernetes cluster is at https://api.xyz.tld/k8s and the /healthchecks/ path is managed by Ambassador, so the app's root should be https://api.xyz.tld/k8s/healthchecks – it only starts receiving HTTP requests below this point.

And by the way, my Dockerfile is from https://github.com/linuxserver/docker-healthchecks if that helps, but I don't think the issue is there: the app works fine, it's just the links generated on the frontend that are problematic.

What do you suggest I try next to effectively change the base path without too many source-code hacks? :)

<!-- gh-comment-id:536168271 -->

@cuu508 commented on GitHub (Sep 30, 2019):

Ideally we want this to just work, with no hacks! A couple questions:

  • Are you running the latest Dockerfile version from linuxserver? The fixes for running from a subdirectory went into v1.9.0
  • Would it be possible to make a minimal environment that demonstrates the issue? I'm not familiar with Ambassador/Envoy, can they run outside K8S with just a static configuration file?
<!-- gh-comment-id:536477685 -->

@immanuelfodor commented on GitHub (Oct 1, 2019):

Of course, that would be great! 👍

  • Yes, I'm running the latest image: linuxserver/healthchecks:v1.9.0-ls37
  • I can't make the deployment public, but I can show you all my files. I first created a simple docker-compose.yml, then converted it to k8s configs with kompose convert, then fine-tuned it to my environment (needed to pin the PV to node 4, for example). The config below also contains the latest JS hack I tried; it renders the page more or less usable, but the forms and Django's internal redirects for login/logout still don't work, and many manual URL overwrites are needed in the browser.
cat <<EOF > healthchecks-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: healthchecks
  name: healthchecks
  namespace: healthchecks
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: healthchecks
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - sed -i -E "/<\/head>/i <script type='text/javascript'>window.addEventListener('load',function(){var l=document.links;for(var i=0,len=l.length;i<len;i++){if(l[i].href.includes('api.domain.tld')&&!l[i].href.includes('k8s/healthchecks')){l[i].href=l[i].href.replace('api.domain.tld','api.domain.tld/k8s/healthchecks')}}})</script>" /app/healthchecks/templates/base.html 
          && sleep infinity
        env:
        - name: ALLOWED_HOSTS
          value: '["*"]'
        - name: DEFAULT_FROM_EMAIL
          value: health@api.domain.tld
        - name: EMAIL_HOST
          value: mail.domain.tld
        - name: EMAIL_HOST_PASSWORD
          value: ''''''
        - name: EMAIL_HOST_USER
          value: ''''''
        - name: EMAIL_PORT
          value: "25"
        - name: EMAIL_USE_TLS
          value: "False"
        - name: PGID
          value: "1100"
        - name: PUID
          value: "1100"
        - name: SITE_NAME
          value: Healthchecks
        - name: SITE_ROOT
          value: https://api.domain.tld/k8s/healthchecks
        image: linuxserver/healthchecks:v1.9.0-ls37
        name: healthchecks
        ports:
        - containerPort: 8000
        resources: {}
        volumeMounts:
        - mountPath: /config
          name: healthchecks
      restartPolicy: Always
      volumes:
      - name: healthchecks
        persistentVolumeClaim:
          claimName: healthchecks
status: {}
EOF

cat <<EOF > healthchecks-namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: healthchecks
EOF

cat <<EOF > healthchecks-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: healthchecks
  name: healthchecks
  namespace: healthchecks
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: healthchecks
  volumeName: node-4-healthchecks-pv
status: {}
EOF

cat <<EOF > healthchecks-persistentvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: node-4-healthchecks-pv
  labels:
    type: local
  namespace: healthchecks
spec:
  storageClassName: healthchecks
  capacity:
    storage: 1Gi
  local:
    path: "/mnt/healthchecks"
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node4
EOF

cat <<EOF > healthchecks-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: healthchecks
  name: healthchecks
  namespace: healthchecks
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind:  Mapping
      name:  healthchecks-mapping
      prefix: /healthchecks/
      service: healthchecks.healthchecks
      timeout_ms: 0
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 8000
  selector:
    io.kompose.service: healthchecks
status:
  loadBalancer: {}
EOF

cat <<EOF > healthchecks-storageclass.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: healthchecks
  namespace: healthchecks
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

Applied all of them, then manually edited the local_settings.py on the PV to have the content below (empty user and pass for SMTP, but that's unrelated, plus the static URL), then deleted the pod to apply it:

EMAIL_PORT = "25"
EMAIL_HOST = "mail.domain.tld"
EMAIL_USE_TLS = False
EMAIL_HOST_USER = ""
SITE_ROOT = "https://api.domain.tld/k8s/healthchecks"
SITE_NAME = "Healthchecks"
DEBUG = False
EMAIL_HOST_PASSWORD = ""
DEFAULT_FROM_EMAIL = "health@api.domain.tld"
ALLOWED_HOSTS = ["*"]
STATIC_URL = "https://api.domain.tld/k8s/healthchecks/static/"

I tried setting only https://api.domain.tld as SITE_ROOT and then adding the aforementioned echo -e "\nurlpatterns = [path('k8s/healthchecks/', include(urlpatterns))]" >> /app/healthchecks/hc/urls.py command to the deployment args as another hack, but as I said, it also didn't work out well.

At the root of api.domain.tld there is an nginx reverse proxy, which has a configuration that any /k8s/ route should be passed to the cluster, then the Ambassador annotation on the service YAML takes care of the /healthchecks/ part to let me access the deployment in the cluster. I don't have any other config set up related to Healthchecks.

As the pod only handles <this part> of the URL https://api.domain.tld/k8s/healthchecks/<this part>, I feel like the SITE_ROOT setting is not being honored somewhere in Django/templates/etc. This is why I think this is an app issue rather than a linuxserver Docker issue.

<!-- gh-comment-id:536929310 -->

@cuu508 commented on GitHub (Oct 1, 2019):

Thanks for the extra details!

The author of the original ticket is using Apache + mod_wsgi. In this setup, mod_wsgi sets a SCRIPT_NAME environment variable containing the URL prefix. Django knows about this variable and handles things automatically: it takes the URL prefix into account when resolving URLs, and it adds the prefix when generating URLs.
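To make the mechanics concrete, here is a minimal stdlib sketch (the host example.com and the paths are made-up values): the WSGI server splits the request URL into SCRIPT_NAME (the mount prefix) and PATH_INFO (the part the app routes on), and absolute URLs are rebuilt from both parts, so generated links carry the prefix automatically.

```python
from wsgiref.util import request_uri

# Simulated WSGI environ, roughly what mod_wsgi builds for
# "WSGIScriptAlias /hc ..." when /hc/projects/ is requested:
environ = {
    "wsgi.url_scheme": "http",
    "HTTP_HOST": "example.com",
    "SCRIPT_NAME": "/hc",        # the mount prefix
    "PATH_INFO": "/projects/",   # the part the app routes on
}

# Frameworks reconstruct absolute URLs from SCRIPT_NAME + PATH_INFO:
print(request_uri(environ, include_query=False))
# -> http://example.com/hc/projects/
```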

The things I needed to fix there were making sure that

  • the URL prefix is also used whenever a URL is generated in JavaScript – there were a few such places
  • the templates don't use hardcoded URLs that start with "/"

As I understand it, this is a different situation: we're not using a CGI-like interface, we're using a reverse proxy. The reverse proxy does not set the SCRIPT_NAME variable, so Django doesn't handle URLs correctly. I think this problem is not specific to Healthchecks – you would run into the same issue with any Django app, or possibly any WSGI app. I think the fix is to use Django's FORCE_SCRIPT_NAME setting (https://docs.djangoproject.com/en/2.2/ref/settings/#force-script-name).

As an experiment, I set up a nginx site like so:

```
server {
        ....
        location /foo/static/ {
            alias /path/to/healthchecks/static-collected/;
        }

        location /foo/ {
            proxy_pass http://localhost:8000/;
        }
}
```

In local_settings.py I added:

```python
FORCE_SCRIPT_NAME = "/foo"
STATIC_URL = "/foo/static/"
```

And with that I had the site running at http://localhost/foo/.

This was with the development server, not with the uWSGI server that linuxserver's Dockerfile uses. Now, I'm not sure what's the least painful way to get your stack to do something similar. You would need a way to set the `FORCE_SCRIPT_NAME` and `STATIC_URL` settings, and you might also need to tweak the `static-map` bit in `uwsgi.ini`.
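Why `FORCE_SCRIPT_NAME` is needed here: with a trailing slash in `proxy_pass http://localhost:8000/;`, nginx replaces the matched `location` prefix with the `proxy_pass` URI, so the upstream app never sees the prefix. A minimal sketch of that rewrite (illustrative only, not nginx source):

```python
# Sketch of nginx's prefix substitution when proxy_pass has a URI part ("/"):
# the matched location prefix is replaced, so the upstream sees a bare path.
def proxy_rewrite(request_path, location="/foo/", upstream_path="/"):
    if not request_path.startswith(location):
        raise ValueError("location did not match")
    return upstream_path + request_path[len(location):]


print(proxy_rewrite("/foo/checks/"))   # /checks/  -- the /foo prefix is gone
print(proxy_rewrite("/foo/projects/")) # /projects/
```

Since the app only ever sees `/checks/`, it has no way to know links should start with `/foo/` – unless a setting like `FORCE_SCRIPT_NAME` tells it so.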


@immanuelfodor commented on GitHub (Oct 1, 2019):

Wow, that's it, you're a genius! Or at least you know Django better than me 😀

I simply added `FORCE_SCRIPT_NAME = "/k8s/healthchecks"` to the end of the previously posted `local_settings.py`, removed the script hack from the deployment, and it works out of the box as expected. Thank you!

For me this issue is resolved :)


In the meantime, I managed to enable Apprise as well with `APPRISE_ENABLED = True` in the local settings and the command below in the deployment YAML. I found out that `requirements.txt` doesn't include Apprise by default, so the flag falls back to False in `transports.py`:

https://github.com/healthchecks/healthchecks/blob/ba886e90cb4773a0dc6b8a7e2835ab5ba8d81e14/hc/api/transports.py#L11-L15

```yaml
...
      containers:
      - args:
        - /bin/sh
        - -c
        - echo "Adding apprise from container args..."
          && pip3 install apprise
          && echo "Reloading uwsgi to discover apprise..."
          && killall -HUP uwsgi
          && sleep infinity
        env:
...
```

It might help others as well :)
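For context, the fallback behavior described above is the standard optional-import pattern: if the package is missing, a feature flag is forced off instead of crashing at import time. A minimal sketch of that pattern (illustrative; variable names are not taken from the actual `transports.py`):

```python
# Optional-dependency pattern: downgrade a feature instead of failing hard
# when a package that isn't in requirements.txt is absent.
try:
    import apprise  # optional dependency, installed separately
    have_apprise = True
except ImportError:
    have_apprise = False

# Code paths that need Apprise can then check the flag:
if not have_apprise:
    print("Apprise not installed; Apprise notifications disabled")
```

This is why `pip3 install apprise` plus a uWSGI reload (so workers re-import the module) is enough to flip the feature on.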
