[GH-ISSUE #510] scheduler entries ttl maybe too short #2258

Closed
opened 2026-03-15 19:52:33 +03:00 by kerem · 3 comments
Owner

Originally created by @xuyang2 on GitHub (Jul 8, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/510

Originally assigned to: @hibiken on GitHub.

Currently, the tick interval of `Scheduler.runHeartbeater()` and the TTL used by `WriteSchedulerEntries()` are both `5 * time.Second`.

Because the TTL equals the tick interval, scheduler entries are prone to expiring whenever Redis is slow to respond.

Would it be acceptable to increase the TTL used by `WriteSchedulerEntries()`?

Relevant source locations:

- https://github.com/hibiken/asynq/blob/c70ff6a335eed05735427567e8e9549745f43204/scheduler.go#L271
- https://github.com/hibiken/asynq/blob/c70ff6a335eed05735427567e8e9549745f43204/scheduler.go#L302
kerem 2026-03-15 19:52:33 +03:00: closed this issue and added the `bug` label
Author
Owner

@hibiken commented on GitHub (Jul 9, 2022):

@xuyang2 thank you for reporting this issue!
I agree. I think it makes sense to increase the TTL to 10s to allow for a missed heartbeat.

Author
Owner

@tschaub commented on GitHub (Jul 11, 2022):

I'm seeing behavior that looks like tasks are not getting scheduled. I'm having trouble putting together a reproducible test case - only seeing the issue in a production environment and not in tests or running locally. I'm wondering if this TTL issue or some other known issue might be responsible.

Are others seeing cases where tasks are sometimes not getting run or scheduled (no error from enqueue, but the handler is never called with the task)?

Author
Owner

@lamhieo02 commented on GitHub (Jul 1, 2024):

> @xuyang2 thank you for reporting this issue! I agree. I think it makes sense to increase the TTL to 10s to allow for a miss in heartbeat.

Do you have a new version to handle this?
