mirror of
https://github.com/hibiken/asynq.git
synced 2026-04-25 23:15:51 +03:00
[GH-ISSUE #590] [FEATURE REQUEST] Ability to hook lock check before dequeueing a task #2312
Originally created by @ajatprabha on GitHub (Dec 23, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/590
Originally assigned to: @hibiken on GitHub.
Is your feature request related to a problem? Please describe.
I have worked on a distributed semaphore in the past. However, one concern I have is that the lock count is checked only after the task has been dequeued from the queue; if the lock can't be acquired, the task is re-enqueued with a delay set by retryBackoff.
This should be fine under normal circumstances, but during downtime on dependent upstreams I worry this can put unnecessary pressure on Redis, and the dequeue/re-enqueue cycle may also disturb the order of tasks in the queue.
Is there a way for the dequeue routine to take this lock count into account before the dequeue even happens?
Describe the solution you'd like
Instead of checking the lock after the dequeue, I would like to hook a lock checker in before the dequeue begins.
Describe alternatives you've considered
I looked at whether the Pause queue feature could help with this; however, it feels like a workaround, and I'm not sure it's an appropriate use of it.
@ajatprabha commented on GitHub (Jan 22, 2023):
@hibiken Please take a look into this and suggest some approach on how this should be done. I can make a PR.
@ajatprabha commented on GitHub (Aug 2, 2023):
@hibiken Pinging again, as this got lost in time.
@ajatprabha commented on GitHub (Feb 26, 2024):
@kamikazechaser Would it be possible for you to triage this issue?
@kamikazechaser commented on GitHub (Feb 26, 2024):
I assume this is in the context of the x/rate semaphore package, e.g. https://github.com/hibiken/asynq/blob/master/x/rate/example_test.go#L25, and not the exec() of the main processor (which also dequeues and uses a semaphore to cap max goroutines)?
@ajatprabha commented on GitHub (Feb 26, 2024):
Yes, this is in the context of a custom locking mechanism. Instead of simply capping max goroutines, such a solution would allow controlling concurrency in many more ways. My use case is controlling the concurrency of queues across worker nodes.
Example: say I want to cap max concurrency at 30; I could limit max goroutines to 5 and run 6 workers. But with auto-scaling workers, the effective max concurrency keeps varying.