mirror of
https://github.com/healthchecks/healthchecks.git
synced 2026-04-25 06:55:53 +03:00
[GH-ISSUE #860] Gitlab Alerts #602
Originally created by @mtesch-um on GitHub (Jul 14, 2023).
Original GitHub issue: https://github.com/healthchecks/healthchecks/issues/860
[Draft: still not sure what the requirements should be here, but I wanted a place to gather and share notes. Maybe this just turns into documentation about how to set this up, to save others (or me!) from working through it and ending up with a suboptimal setup.]
It would be nice to have an integration with GitLab Alerts. It can (sort of) be done manually right now with webhooks, but it's not obvious how to do it, and it may not have full feature support(?)
The Alert webhook interface documentation: https://docs.gitlab.com/ee/operations/incident_management/integrations.html#http-endpoints
To set up a GitLab webhook integration:

1. In https://gitlab.com/<path-to-project>/-/settings/operations, take note of the "Webhook URL" and "Authorization key".
2. In https://healthchecks.io/integrations/<uuid>/edit/:
   - Execute when a check goes down: select POST, url = https://gitlab.com/<path-to-project>/alerts/notify/<alert-slug>/<some-numbers>.json (the "Webhook URL" from above), Request Body = , Request Headers =
   - Execute when a check goes up: select POST, url = https://gitlab.com/<path-to-project>/alerts/notify/<alert-slug>/<some-numbers>.json (the "Webhook URL" from above), Request Body = , Request Headers =
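For reference, here is a rough sketch of the request the webhook integration would send, as I understand the GitLab HTTP-endpoint alert API linked above. The URL, authorization key, and payload fields are placeholders/assumptions, not values from a working setup:

```python
import json

# Placeholders for the values from GitLab's Settings > Monitor > Alerts page
WEBHOOK_URL = "https://gitlab.com/<path-to-project>/alerts/notify/<alert-slug>/<some-numbers>.json"
AUTH_KEY = "<authorization-key>"


def build_alert_request(payload: dict) -> dict:
    """Assemble the POST request the webhook integration would send to GitLab."""
    return {
        "url": WEBHOOK_URL,
        "headers": {
            "Authorization": f"Bearer {AUTH_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }


# Check goes DOWN -> opens (or re-triggers) an alert in GitLab
down = build_alert_request({"title": "Check $NAME is DOWN", "severity": "critical"})

# Check goes UP -> per the GitLab docs, sending end_time resolves the matching alert
up = build_alert_request(
    {"title": "Check $NAME is DOWN", "end_time": "2023-07-14T12:00:00+00:00"}
)
```

($NAME here is the healthchecks webhook template variable for the check's name.)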
One thing that appears to be missing (I haven't figured it out yet, anyway) is a per-failure "fingerprint", which I think would allow each healthcheck failure to map 1:1 with an Alert and Incident in GitLab.
@cuu508 commented on GitHub (Jul 14, 2023):

Pagerduty webhook payloads have an incident_key field, which I think is similar to the fingerprint; it is used for grouping notifications about the "same thing" together. In the Pagerduty integration we use the check's code as the incident key.

@mtesch-um commented on GitHub (Jul 14, 2023):
👍 I'll let it run as is for a few days and see how it works in relation to the Alerts/Incident management built into gitlab.
I suspect we might want separate Alerts for separate failures, even for the same check code. The fingerprint could maybe be the last good rid before the failure, or some event identifier for the webhook-down event, or the cron schedule time that triggered the last failure (even if it's an UP event)?