[GH-ISSUE #369] [BUG] Unique tasks before TTL should not be run #160
Originally created by @gmhafiz on GitHub (Dec 26, 2021).
Original GitHub issue: https://github.com/hibiken/asynq/issues/369
Originally assigned to: @hibiken on GitHub.
Describe the bug
The second of two identical tasks (same UUID and payload) should not be accepted into Redis before the uniqueness TTL expires, because the second message could be a retry of the first.
This is related to issue https://github.com/hibiken/asynq/issues/275, with the difference that in #275 repeated tasks are allowed to be processed before the TTL expires because the previous task has been deleted. In my case, asynq must respect the TTL even when the task has completed.
To Reproduce
Steps to reproduce the behavior (code snippets if applicable): using the Unique option, enqueue a task, let the worker complete it, then enqueue an identical task (same type and payload) before the uniqueness TTL expires; the second enqueue is accepted.
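A minimal reproduction sketch based on the description above (the original snippets did not survive); the Redis address, task type, and payload are assumed placeholders:

```go
package main

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// Hypothetical task type and payload.
	task := asynq.NewTask("email:welcome", []byte(`{"user_id": 42}`))

	// First enqueue: accepted, and holds the uniqueness lock for 5 minutes.
	if _, err := client.Enqueue(task, asynq.Unique(5*time.Minute)); err != nil {
		log.Fatal(err)
	}

	// ... the worker processes and completes the task here ...

	// Second enqueue, still within the 5-minute TTL. Expected: rejected with
	// ErrDuplicateTask. Observed: accepted, because completing the first task
	// releases the uniqueness lock early.
	_, err := client.Enqueue(task, asynq.Unique(5*time.Minute))
	if errors.Is(err, asynq.ErrDuplicateTask) {
		log.Println("duplicate rejected (expected)")
	} else {
		log.Println("duplicate accepted (the reported behavior)")
	}
}
```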
Expected behavior
The second enqueue is rejected until the uniqueness TTL expires, even though the first task has already completed.
Environment (please complete the following information):
asynq package version: v0.20.0
Additional context
Another way to achieve a distributed lock is to manage the key manually: before enqueueing, set the task's UUID as a Redis key with a TTL, and skip the enqueue if the key already exists (see the sketch below).
Additionally, I could delete the UUID key if a task is explicitly deleted (not completed).
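A minimal sketch of such a manual lock, assuming go-redis and a hypothetical key prefix and UUID; SETNX is atomic and only sets the key if it does not already exist, so a retry within the TTL is turned away:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/go-redis/redis/v8"
)

// tryAcquire is a hypothetical helper: it claims the task's UUID for ttl and
// reports whether this caller won the claim.
func tryAcquire(ctx context.Context, rdb *redis.Client, uuid string, ttl time.Duration) (bool, error) {
	return rdb.SetNX(ctx, "task-lock:"+uuid, "1", ttl).Result()
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	ok, err := tryAcquire(ctx, rdb, "request-uuid-1234", 5*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		log.Println("duplicate within TTL; not enqueueing")
		return
	}
	// ... enqueue the asynq task here; on explicit deletion (not completion),
	// also delete "task-lock:"+uuid so the task can be re-submitted early.
}
```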
The other way I can think of is to use retention (sketched below).
This way, the UUID is still kept in Redis for another 5 minutes after the task has completed, which prevents the retried HTTP request from being accepted.
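A sketch of the retention approach, assuming a version of asynq that provides the TaskID and Retention options; the task type, payload, and ID are placeholders:

```go
package main

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// Placeholder task; the ID would come from the incoming request's UUID.
	task := asynq.NewTask("email:welcome", []byte(`{"user_id": 42}`))
	_, err := client.Enqueue(task,
		asynq.TaskID("request-uuid-1234"),
		asynq.Retention(5*time.Minute), // keep the completed task for 5 more minutes
	)
	switch {
	case errors.Is(err, asynq.ErrTaskIDConflict):
		log.Println("duplicate within retention window; dropped") // the HTTP retry case
	case err != nil:
		log.Fatal(err)
	}
}
```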
@hibiken commented on GitHub (Dec 26, 2021):
@gmhafiz Thank you for reporting an issue!
The intention behind the Unique option is to prevent a duplicate task from being enqueued to the same queue. The duration you pass to the Unique option is there to avoid a situation where a stale task in the queue blocks new tasks from being enqueued (or other similar situations).
Reading your use case, I think the alternative you suggested sounds perfect: the approach of using the TaskID and Retention options so that completed tasks still remain in the queue to prevent other duplicate tasks from being enqueued.
Thank you again for the detailed issue report, and let me know if this approach doesn't work for you :)
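To make the distinction concrete, a small side-by-side sketch of the two options under the semantics described in this thread (placeholder names, not code from the issue):

```go
package example

import (
	"time"

	"github.com/hibiken/asynq"
)

// Option A: Unique. The duplicate guard lasts until the task leaves the
// queue or the TTL elapses, whichever comes first, so a completed task
// stops guarding.
func enqueueUnique(client *asynq.Client, task *asynq.Task) error {
	_, err := client.Enqueue(task, asynq.Unique(5*time.Minute))
	return err
}

// Option B: TaskID + Retention. The guard outlives completion, because the
// finished task (and its ID) is retained for the retention period.
func enqueueRetained(client *asynq.Client, task *asynq.Task) error {
	_, err := client.Enqueue(task,
		asynq.TaskID("request-uuid-1234"),
		asynq.Retention(5*time.Minute))
	return err
}
```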
@hibiken commented on GitHub (Jan 6, 2022):
This issue came up in discussion #376. I'm re-opening this issue to revisit the semantics of the Unique option.
@hibiken commented on GitHub (Mar 1, 2022):
Started a discussion at #409