mirror of
https://github.com/healthchecks/healthchecks.git
synced 2026-04-25 23:15:49 +03:00
[GH-ISSUE #835] Turning off debug mode results in 500 error - 2.9-dev #586
Originally created by @jl-678 on GitHub (May 31, 2023).
Original GitHub issue: https://github.com/healthchecks/healthchecks/issues/835
The subject says it all. I have installed Healthchecks via the bare metal instructions in a cloud VM. It works well with the standard configuration: I can log in, view all settings, and make changes. However, I also get the banner that says "Running in debug mode, do not use in production," which is to be expected. Obviously, I want to disable debug mode when I go into production, and that is where the problem arises.
To disable debug, I go into local_settings.py and set DEBUG = False. After the change, the server starts up fine in the CLI, but when I try to visit the healthchecks URL, I get a 500 error. (This is the same URL that works perfectly in debug mode.) Here are the two entries that I see in the CLI:
I am not sure if there is some other setting change needed to turn off debug. Regarding version, I am running 2.9-dev
I appreciate any suggestions on how to fix or troubleshoot this issue. TIA!
@cuu508 commented on GitHub (May 31, 2023):
Start by setting the `EMAIL_*` settings – see https://github.com/healthchecks/healthchecks#sending-emails

Also set the `ADMINS` setting in local_settings.py. You will then receive error reports by email, and will be able to see precisely what is causing the HTTP 500 response.
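For illustration, a minimal local_settings.py fragment showing the two settings mentioned above, using Django's documented `EMAIL_*` and `ADMINS` formats. The hostnames and addresses here are placeholders, not values from the thread:

```python
# local_settings.py -- placeholder SMTP values, substitute your own
EMAIL_HOST = "smtp.example.com"          # assumed hostname
EMAIL_PORT = 587
EMAIL_HOST_USER = "healthchecks@example.com"
EMAIL_HOST_PASSWORD = "secret"           # placeholder
EMAIL_USE_TLS = True

# Django's ADMINS setting: these addresses receive error reports
# for unhandled exceptions when DEBUG = False.
ADMINS = [("Ops", "ops@example.com")]
```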
Sometimes the problem is missing static files – when `manage.py compress` and `manage.py collectstatic` have not been run.

@jl-678 commented on GitHub (May 31, 2023):
Thank you for the rapid response! I will do that now. I had set up email, but missed the admin requirement.
Update for future visitors: @cuu508 had the right answer – the solution was running `manage.py compress` and `manage.py collectstatic`.

@cuu508 commented on GitHub (Jun 1, 2023):
Good stuff!
I'll keep this issue open for now, and see if the default behavior can be improved here. Perhaps the server should refuse to start and print an error message to stdout if static files are missing, instead of starting up and generating HTTP 500 on every request.
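For reference, the fix that resolved this thread can be sketched as two commands, assuming a bare-metal install, run from the Healthchecks project directory with its virtualenv active:

```shell
# Collect static files into STATIC_ROOT, then pre-compress them
# with django-compressor; both steps are needed when DEBUG = False,
# and must be repeated after upgrading Healthchecks.
./manage.py collectstatic --noinput
./manage.py compress
```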
@jl-678 commented on GitHub (Jun 1, 2023):
I think that that is a great idea! Thank you.
@cuu508 commented on GitHub (Jun 8, 2023):
I looked into using system checks to detect when `manage.py collectstatic` or `manage.py compress` has not been run.

I wrote a simple system check that checks for the existence of specific directories. It seemed to work: with the static files missing, `manage.py runserver` would refuse to run, and would print a human-friendly error to the console. But there was a subtle issue with it: when templates change (for example, when upgrading to a newer Healthchecks version), the collected static files are out of date, and `collectstatic` and `compress` must be run again. My naive check was not able to detect that, as it just checks for the existence of a couple of directories. So `manage.py runserver` would run, the server would appear to work, but some pages (where templates had changed) would still return HTTP 500.

I dropped the system check idea, and instead extended the default Django logging configuration to log unhandled exceptions to the console even when DEBUG=False. So now, when the server returns HTTP 500, you can check the server logs and see the exception message and traceback (and you will still receive an email report if ADMINS and email are configured correctly).
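The limitation described above can be seen in a framework-free sketch of such a naive check (a hypothetical helper, not the actual Healthchecks code): it can report directories that are missing outright, but it cannot tell when existing files are stale.

```python
from pathlib import Path


def missing_static_dirs(static_root: str, compress_cache: str) -> list[str]:
    """Report which expected static-file output directories do not exist.

    Mirrors the naive existence-only approach: stale-but-present files
    left over from a template change pass this check undetected.
    """
    expected = {
        "collectstatic output": static_root,
        "compress output": compress_cache,
    }
    return [name for name, path in expected.items() if not Path(path).exists()]
```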
@jl-678 commented on GitHub (Jun 8, 2023):
Thank you for implementing this! It sounds like a great solution.