Mirror of https://github.com/hibiken/asynq.git (synced 2026-04-25 23:15:51 +03:00)
[GH-ISSUE #820] Why are we cleaning the cron entries on shutdown? #2426
Originally created by @Kenan7 on GitHub (Feb 4, 2024).
Original GitHub issue: https://github.com/hibiken/asynq/issues/820
Originally assigned to: @hibiken on GitHub.
We have a few different types of periodic jobs running (monthly, weekly, daily). But every time our services (pods) restart, the progress for these jobs is lost and they have to start from the beginning.
While researching this I came across this code, and I wanted to know how relevant it is and why it was designed this way. Maybe resuming was difficult to handle, so it was cut until it could be fixed.
github.com/hibiken/asynq@6529a1e0b1
In general, how can we avoid losing the progress of these entries?
@kamikazechaser commented on GitHub (Feb 4, 2024):
clearHistory ultimately clears the enqueue event history of all scheduler entries. I don't think this should be an issue, as it just prevents the DB from growing unnecessarily. The heartbeater does clear the scheduler entries during shutdown. I am assuming the task entries are somehow utilized within your code?
Could you describe how you expected the scheduler to work? Are these long running jobs?
@Kenan7 commented on GitHub (Feb 4, 2024):
I understand this, but maybe we should add some kind of condition to avoid removing queued and unfinished jobs, since we lose the progress?
Also, what do you mean by this?
So we are leveraging this usage in asynq
Let's say that for the daily periodic job there is a run due in 55 minutes. The pod restarts, and the scheduler starts counting a fresh 24 hours, so the job is no longer triggered after those 55 minutes.
The alternative is to use fixed times, like every day at 6:00 PM, but that just does not suit the business needs.
@kamikazechaser commented on GitHub (Feb 5, 2024):
Thanks for clarifying. I'll investigate and try to reproduce this issue.
@Haji-sudo commented on GitHub (Feb 27, 2024):
Did you find a way to prevent deletion? @Kenan7 @kamikazechaser
@Kenan7 commented on GitHub (Feb 27, 2024):
Unfortunately not. @Haji-sudo
@lsgndln commented on GitHub (Feb 27, 2024):
Same issue here; we have exactly the same use case with monthly/weekly jobs. We don't mind losing the progress of ongoing jobs, since we track progress at the job level. But we would expect the job to run again on startup, as it has not finished successfully.
This is especially visible with long-duration jobs scheduled with `@every week` or `@every month`. We have jobs running for hours or even days. Say the server restarts after one day: we would then have to wait another week or month before the job runs again.