mirror of
https://github.com/hibiken/asynq.git
synced 2026-04-26 07:25:56 +03:00
[GH-ISSUE #831] [BUG] Running only 1 worker out of 800 #405
Originally created by @ludovit-ubrezi on GitHub (Feb 27, 2024).
Original GitHub issue: https://github.com/hibiken/asynq/issues/831
Originally assigned to: @hibiken on GitHub.
Describe the bug
Hello. The problem is that even though Concurrency is set to 800 workers, there is always only 1 worker processing the queue. Because of this, the Pending queue is not processed correctly and the whole system has delays (screenshots below).
The code of the function tasks.HandleDeliveryTask works correctly and fast: when I load all the data, it processes a Task with a group of 5000 items and loads them into RabbitMQ -> 1 million items in e.g. 1 minute.
But when a Task has only 1 item, it slows down because only 1 worker processes this queue; even though 800 workers are configured for the server, it only runs 1.
Expected behavior
All 800 workers should be processing the queue.
To Reproduce
Run a worker with this configuration.
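The configuration itself was not preserved in the mirror. As a minimal sketch of what a worker with Concurrency set to 800 typically looks like in asynq (the Redis address, queue name "delivery:task", and the inline handler are assumptions standing in for the reporter's actual values, not taken from the issue):

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // assumed Redis address
		asynq.Config{
			Concurrency: 800, // max number of concurrent workers, as reported in the issue
		},
	)

	mux := asynq.NewServeMux()
	// "delivery:task" is a hypothetical task type; the reporter's
	// tasks.HandleDeliveryTask would be registered here instead.
	mux.HandleFunc("delivery:task", func(ctx context.Context, t *asynq.Task) error {
		return nil
	})

	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```

This is a configuration sketch only; it requires a running Redis instance and the github.com/hibiken/asynq module to actually start.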
Screenshots
(Screenshots of the queue dashboard were attached to the original GitHub issue but are not preserved here.)
Additional context
Could it be that, because that 1 worker finishes tasks so fast, the server never spins up more workers?
@abdeljalil09 commented on GitHub (Feb 28, 2024):
I have the same issue; no answer from the maintainer.
@kamikazechaser commented on GitHub (Mar 21, 2024):
From your screenshot you have 800k tasks queued, 1 active, and 0 completed/archived/retried. This most likely points to an issue with how the handler is implemented. Without any additional code, this is difficult to reproduce.
Also, 800 is the maximum. There is no guarantee that 800 workers are immediately spawned; the number will fluctuate between 0 and 800 as the semaphore is acquired and released when dequeuing tasks in the processor.
Related: #558
@abdeljalil09 commented on GitHub (Mar 21, 2024):
@kamikazechaser How do you make the number of workers scale with the number of jobs? BullMQ handles this perfectly: if we set Concurrency to 10 and we have 100 jobs, 10 workers will be active at once.
@kamikazechaser commented on GitHub (Mar 22, 2024):
You cannot with this library.
Also, there is usually no need for that in Go because goroutines are lightweight. Internally we use a counting semaphore to limit the maximum number of goroutines that can be spawned by the processor.