[GH-ISSUE #369] [BUG] Unique tasks before TTL should not be run #1176

Open
opened 2026-03-07 22:06:50 +03:00 by kerem · 3 comments

Originally created by @gmhafiz on GitHub (Dec 26, 2021).
Original GitHub issue: https://github.com/hibiken/asynq/issues/369

Originally assigned to: @hibiken on GitHub.

Describe the bug
The second of two identical tasks with the same UUID and payload should not be sent to Redis before the TTL expires, when the message could have come from a retry.

This is related to issue https://github.com/hibiken/asynq/issues/275, with the difference that in #275 repeated tasks are allowed to be processed before the TTL because the previous task is deleted. In my case, asynq must respect the TTL even when the task has completed.

To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):

  1. An external API sends an HTTP request to my asynq microservice through an endpoint.
  2. This external API retries up to 3 times, sleeping 10 seconds in between.
  3. The first request could be delayed for more than 10 seconds, prompting a first retry (a second, identical request).
  4. That second request reaches asynq before the original first request, starts processing, and completes in one second.
  5. The original first request finally reaches asynq and starts processing, even though it is identical to the one that has already been processed and its TTL has not expired.

Using

```go
info, err := u.redis.EnqueueContext(ctx, task, asynq.Unique(5*time.Minute))
```

Expected behavior

  • Unique() respects the TTL of completed tasks
  • the original first request (which arrives second chronologically) should have been rejected, because it is identical to the first retry (the second request) and arrived within the 5-minute TTL

Environment (please complete the following information):

  • OS: Linux
  • Version of asynq package: v0.20.0

Additional context

Another way to achieve a distributed lock is to do the following manually:

  • include a UUID on every request
  • extract the UUID from the request
  • check if the UUID key is present in Redis
    • If not present, create a new key/value entry:

      ```go
      client.Set(r.Context(), requestID, true, 0).Err()
      ```

    • If present, reject
  • Do not delete the UUID key from Redis even after the task has been completed. This ensures retry requests fail.
  • Once the task has been completed, update the key with a TTL:

    ```go
    redis.Set(ctx, requestID, true, 5*time.Minute).Err()
    ```

  • Once the TTL has passed, Redis automatically deletes the key, allowing an identical request to be processed.

Additionally, I could delete the UUID key if a task is explicitly deleted (not completed).

The other way I can think of is to use retention.

```go
asynqClient.EnqueueContext(ctx, task, asynq.TaskID(request.UUID), asynq.Retention(5*time.Minute))
```

This way, the UUID is kept in Redis for another 5 minutes after the task is completed, which prevents the HTTP retry request from being accepted.


@hibiken commented on GitHub (Dec 26, 2021):

@gmhafiz Thank you for reporting an issue!

The intention behind the Unique option is to prevent a duplicate task from being enqueued to the same queue. The duration you pass to the Unique option is there to avoid a situation where a stale task in the queue blocks new tasks from being enqueued (or other similar situations).

Reading your use case, I think the alternative you suggested sounds perfect: the approach of using the TaskID and Retention options so that completed tasks still remain in the queue to prevent other duplicate tasks from being enqueued.

```go
asynqClient.EnqueueContext(ctx, task, asynq.TaskID(request.UUID), asynq.Retention(5*time.Minute))
```

Thank you again for the detailed issue report, and let me know if this approach doesn't work for you :)


@hibiken commented on GitHub (Jan 6, 2022):

This issue came up in discussion #376. I'm re-opening this issue to revisit the semantics of the Unique option.


@hibiken commented on GitHub (Mar 1, 2022):

Started a discussion at #409
