[GH-ISSUE #445] Slack handler not throttling to match their requirements #152

Closed
opened 2026-03-04 02:12:40 +03:00 by kerem · 1 comment

Originally created by @PLaRoche on GitHub (Oct 30, 2014).
Original GitHub issue: https://github.com/Seldaek/monolog/issues/445

Slack only allows one message per second, and if you exceed that rate for a sustained period of time they will cut you off. It would be nice if the Slack handler could throttle itself to stay under the limit, or at least detect when Slack reports that you have been cut off.

From Slack:

Since Slack is primarily a tool for humans to communicate with one another, this is a preventative measure to ensure that an out-of-control script doesn't compromise the quality of your archive or make it difficult for you to talk to one another.

To prevent this from happening in the future, you'll want to implement error checking in your integration. When we detect that one of your integrations might be running out of control (current thresholds are more than one message per second over a sustained period of time), we'll return an HTTP 429 and a chunk of JSON with some more details.

```
{
    "ok": false,
    "count_hour_ago": 400,
    "count_minute_ago": 100,
    "count_second_ago": 5
}
```

When we send one of these error messages back, it also means that your message did not make it into Slack — you'll want to queue it up and try again in a few seconds.

Also, as noted on the API Rate limits page, you may wish to explore alternatives for log aggregation: https://api.slack.com/docs/rate-limits
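For illustration, here is a minimal sketch of the error checking Slack describes, assuming a plain incoming-webhook POST via cURL. This is not Monolog code; `postWithRetry()` and `$webhookUrl` are hypothetical names:

```
<?php
// Hypothetical sketch, not part of Monolog: POST a message to a Slack
// incoming webhook and back off when Slack answers HTTP 429.
function postWithRetry(string $webhookUrl, string $text, int $maxRetries = 3): bool
{
    $body = json_encode(['text' => $text]);

    for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
        $ch = curl_init($webhookUrl);
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $body,
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_exec($ch);
        $status = (int) curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status !== 429) {
            // Delivered (2xx), or failed for a non-throttling reason.
            return $status >= 200 && $status < 300;
        }

        // 429 means the message did NOT make it into Slack; wait a bit and
        // retry, per Slack's advice ("try again in a few seconds").
        sleep($attempt + 1);
    }

    return false; // still throttled; the caller should queue the message
}
```

A smarter version could also inspect the JSON counts above to decide how long to wait before retrying.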

kerem 2026-03-04 02:12:40 +03:00
  • closed this issue
  • added the Bug label

@Seldaek commented on GitHub (Dec 28, 2014):

Thanks for the warning. I guess we should implement a handleBatch method that allows sending all records as one message instead of one API call per record. At least if users configure it correctly after a BufferHandler or so.
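For what it's worth, a rough sketch of that batching idea (not Monolog's actual SlackHandler; `CombinedSlackHandler` and `sendToSlack()` are made-up names), collapsing a batch of records into a single API call:

```
<?php
// Rough sketch of the handleBatch idea above, not Monolog's implementation.
// CombinedSlackHandler and sendToSlack() are illustrative names only.
class CombinedSlackHandler
{
    /** @param array[] $records Monolog-style records, each with a 'message' key. */
    public function handleBatch(array $records): void
    {
        if ($records === []) {
            return;
        }

        // Join every record into one text block so the whole batch costs a
        // single request against Slack's one-message-per-second limit.
        $lines = [];
        foreach ($records as $record) {
            $lines[] = $record['message'];
        }

        $this->sendToSlack(implode("\n", $lines));
    }

    private function sendToSlack(string $text): void
    {
        // one webhook/API call, e.g. via the postWithRetry() sketch above
    }
}
```

Placed behind a BufferHandler, which collects records and flushes them through handleBatch(), this turns N records into one Slack request.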
