[GH-ISSUE #590] [FEATURE REQUEST] Ability to hook lock check before dequeueing a task #2312

Open
opened 2026-03-15 20:03:04 +03:00 by kerem · 5 comments

Originally created by @ajatprabha on GitHub (Dec 23, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/590

Originally assigned to: @hibiken on GitHub.

Is your feature request related to a problem? Please describe.
I have worked on a [distributed semaphore](https://github.com/hibiken/asynq/blob/master/x/rate/semaphore.go) in the past. One concern I have is that the lock count is checked only after the task is dequeued from the queue; if the lock can't be acquired, the task is re-queued with a delay set by retryBackoff.

This should be okay under normal circumstances, but during downtime on dependent upstreams I am worried that this puts unnecessary pressure on Redis, and the repeated de-queueing/re-queueing may also disturb the order of tasks in the queue.

Is there a way for the dequeue routine to take this lock count into account before the dequeue even happens?
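To make the concern concrete, here is a minimal, self-contained sketch (not asynq's real internals — the types and names below are invented for illustration) of the check-after-dequeue pattern described above: the task leaves the queue first, the semaphore is consulted second, and on failure the task is pushed back with a delay. Every failed acquire therefore costs one dequeue plus one requeue round trip.

```go
// Sketch of the check-after-dequeue flow; fakeSemaphore stands in for a
// Redis-backed distributed semaphore.
package main

import "fmt"

type fakeSemaphore struct {
	held, max int
}

func (s *fakeSemaphore) Acquire() bool {
	if s.held < s.max {
		s.held++
		return true
	}
	return false
}

// processOnce models one pass of a worker over the queue. It returns how
// many dequeue operations were issued, how many tasks had to be requeued,
// and which tasks went back on the queue.
func processOnce(queue []string, sem *fakeSemaphore) (dequeues, requeues int, requeued []string) {
	for _, task := range queue {
		dequeues++ // the task is removed *before* the lock check
		if !sem.Acquire() {
			requeues++ // lock unavailable: push back with a retry delay
			requeued = append(requeued, task)
		}
	}
	return
}

func main() {
	sem := &fakeSemaphore{max: 2}
	queue := []string{"a", "b", "c", "d", "e"}
	d, r, left := processOnce(queue, sem)
	fmt.Println(d, r, left) // 5 dequeues, 3 requeues; c, d, e cycle back
}
```

With a saturated semaphore (e.g. during upstream downtime), every pass over the queue repeats this full dequeue/requeue churn, which is the Redis pressure the issue is about.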

Describe the solution you'd like
Instead of checking the lock after the dequeue, I would like to hook a lock checker in before the dequeue begins.
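One possible shape for such a hook — purely hypothetical, since asynq exposes no such option today — is a predicate consulted before the worker issues a dequeue. If it returns false, the worker skips the fetch entirely, so no task is removed and nothing needs requeueing. The names `PreDequeueCheck` and `fetchTask` below are illustrative, not part of the asynq API:

```go
// Hypothetical pre-dequeue hook: the check runs first, and the (simulated)
// dequeue happens only when the hook allows it.
package main

import "fmt"

// PreDequeueCheck reports whether a dequeue from the given queue should
// proceed. A Redis-backed implementation would inspect the semaphore's
// token count here.
type PreDequeueCheck func(qname string) bool

// fetchTask models the worker's fetch loop with the hook applied.
func fetchTask(qname string, queue *[]string, check PreDequeueCheck) (string, bool) {
	if check != nil && !check(qname) {
		return "", false // lock unavailable: no dequeue, queue order preserved
	}
	if len(*queue) == 0 {
		return "", false
	}
	task := (*queue)[0]
	*queue = (*queue)[1:]
	return task, true
}

func main() {
	queue := []string{"t1", "t2"}
	denyAll := func(string) bool { return false }
	_, ok := fetchTask("default", &queue, denyAll)
	fmt.Println(ok, len(queue)) // false 2 — nothing was dequeued
}
```

Note the trade-off this sketch glosses over: a check done outside the dequeue script is advisory (another worker may consume the token between the check and the fetch), so a real implementation would likely fold the check into the dequeue's Lua script to keep it atomic.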

Describe alternatives you've considered
I tried to see if I could use the Pause queue feature to help with this; however, it feels like a workaround, and I am not sure it is okay to do so.


@ajatprabha commented on GitHub (Jan 22, 2023):

@hibiken Please take a look at this and suggest an approach for how it should be done. I can make a PR.


@ajatprabha commented on GitHub (Aug 2, 2023):

@hibiken Pinging again, as this got lost in time.


@ajatprabha commented on GitHub (Feb 26, 2024):

@kamikazechaser Would it be possible for you to triage this issue?


@kamikazechaser commented on GitHub (Feb 26, 2024):

I assume this is in the context of the x/rate semaphore pkg, e.g. https://github.com/hibiken/asynq/blob/master/x/rate/example_test.go#L25, and not the exec() of the main processor (which also dequeues and uses a semaphore to control the max number of goroutines)?


@ajatprabha commented on GitHub (Feb 26, 2024):

Yes, this is in the context of a custom locking mechanism. Instead of simply controlling max goroutines, such a solution would allow controlling concurrency in many more ways. My use case is to control the concurrency of queues across worker nodes.

Example: say I want to cap max concurrency at 30. I could limit max goroutines to 5 and run 6 workers, but with auto-scaling workers that effective max concurrency keeps varying.

I'm currently using `gocraft/work`, [which allows max_concurrency](https://github.com/gocraft/work/blob/5959e69ad211c5ca37ffdf3ede02e35a5ae41d98/redis.go#L71-L73) via Redis. The earlier Semaphore work was a step in that direction and mostly fulfills the goal; however, I'm concerned about the excessive dequeue & re-queue. [Reference](https://github.com/gocraft/work/blob/5959e69ad211c5ca37ffdf3ede02e35a5ae41d98/redis.go#L148) for the dequeue process of gocraft.
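For readers unfamiliar with the gocraft/work approach referenced above: its dequeue script compares a per-queue "in progress" counter against a configured max_concurrency and only pops a job when there is headroom, so the check and the pop are a single atomic step (a Redis Lua script in gocraft). The in-memory stand-in below mimics only that shape; it is a sketch, not gocraft's actual code:

```go
// In-memory stand-in for a max_concurrency-gated dequeue: the pop is
// refused when the in-progress count has reached the cap, so refused
// jobs never leave the queue.
package main

import "fmt"

type gatedQueue struct {
	jobs           []string
	inProgress     int
	maxConcurrency int
}

// pop atomically checks headroom and dequeues; it returns false when the
// queue is empty or the concurrency cap is reached.
func (q *gatedQueue) pop() (string, bool) {
	if q.inProgress >= q.maxConcurrency || len(q.jobs) == 0 {
		return "", false
	}
	job := q.jobs[0]
	q.jobs = q.jobs[1:]
	q.inProgress++
	return job, true
}

// done releases one unit of concurrency after a job finishes.
func (q *gatedQueue) done() { q.inProgress-- }

func main() {
	q := &gatedQueue{jobs: []string{"j1", "j2", "j3"}, maxConcurrency: 2}
	a, _ := q.pop()
	b, _ := q.pop()
	_, ok := q.pop() // cap reached: third pop is refused, j3 stays queued
	fmt.Println(a, b, ok, len(q.jobs))
}
```

Because the cap lives in Redis rather than per-process, it holds across however many workers are running, which is exactly what makes it robust under auto-scaling.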
