[GH-ISSUE #831] [BUG] Running only 1 worker from 800 #1420

Closed
opened 2026-03-07 22:09:24 +03:00 by kerem · 4 comments
Owner

Originally created by @ludovit-ubrezi on GitHub (Feb 27, 2024).
Original GitHub issue: https://github.com/hibiken/asynq/issues/831

Originally assigned to: @hibiken on GitHub.

Describe the bug
Hello. The problem is that even though Concurrency is set to 800 workers, there is always only 1 worker processing the queue. As a result, the Pending queue is not drained correctly and the whole system has delays (screenshots below).

The tasks.HandleDeliveryTask function itself works correctly and fast: when I load all the data, it processes a task carrying a group of 5,000 items and loads them into RabbitMQ, e.g. 1 million items in about 1 minute.

But when a task carries only 1 item, processing slows down because only 1 worker handles the queue; even with 800 workers configured for the server, only 1 runs.

Expected behavior
All 800 configured workers should be processing the queue.

To Reproduce
Running the worker with this configuration:

  • go 1.21.4
  • github.com/hibiken/asynq v0.24.1
  • github.com/hibiken/asynqmon v0.7.2
  • github.com/redis/go-redis/v9 v9.3.0 // indirect
```go
package main

import (
	"log"
	"runtime"

	"github.com/hibiken/asynq"
)

// redisHost, redisPort, redisPassword and the tasks package
// come from the reporter's own codebase.
func main() {
	concurrency := runtime.NumCPU() * 200

	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: redisHost + ":" + redisPort, Password: redisPassword, DB: 1},
		asynq.Config{
			Concurrency:    concurrency,
			Queues:         map[string]int{tasks.QueueName: 9},
			StrictPriority: true,
		},
	)

	// close the publisher connection on shutdown
	defer tasks.Publisher.Channel.Close()

	// Use the asynq.HandlerFunc adapter for a plain handler function
	if err := srv.Run(asynq.HandlerFunc(tasks.HandleDeliveryTask)); err != nil {
		log.Fatal(err)
	}
}
```

Screenshots
If applicable, add screenshots to help explain your problem.

Screenshot 2024-02-27 at 12 30 06
Screenshot 2024-02-27 at 12 30 38

Additional context
Could it be that, because that 1 worker processes tasks quickly, the server never spins up more workers?

kerem 2026-03-07 22:09:24 +03:00
  • closed this issue
  • added the
    bug
    label

@abdeljalil09 commented on GitHub (Feb 28, 2024):

I have the same issue; no answer from the maintainer.


@kamikazechaser commented on GitHub (Mar 21, 2024):

From your screenshot you have 800k tasks queued, 1 active, and 0 completed/archived/retried. This most likely points to an issue with how the handler is implemented. Without any additional code, this is difficult to reproduce.

Also, 800 is the maximum. There is no guarantee that 800 workers are immediately spawned; the active count will fluctuate between 0 and 800 as the semaphore is acquired and released while the processor dequeues tasks.

Related: #558


@abdeljalil09 commented on GitHub (Mar 21, 2024):

@kamikazechaser how can we get N workers based on the number of jobs? BullMQ handles this perfectly: if we set Concurrency to 10 and we have 100 jobs, 10 workers will be active at once.


@kamikazechaser commented on GitHub (Mar 22, 2024):

> how to make N number of workers

You cannot with this library.

Also, there is usually no need for that in Go because goroutines are lightweight. Internally we use a counting semaphore to limit the maximum number of goroutines the processor can spawn.
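The counting-semaphore mechanism described above can be sketched as follows. This is a minimal stdlib-only illustration of the idea using a buffered channel, not asynq's actual internals:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// process runs one goroutine per task but allows at most `capacity` of
// them to be active at once, gated by a channel-based counting semaphore.
// It returns the observed peak number of concurrent workers.
func process(tasks []int, capacity int) int64 {
	sema := make(chan struct{}, capacity) // counting semaphore
	var wg sync.WaitGroup
	var active, maxActive atomic.Int64

	for range tasks {
		sema <- struct{}{} // acquire a slot; blocks once capacity is reached
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer func() { <-sema }() // release the slot

			cur := active.Add(1)
			defer active.Add(-1)
			// record the high-water mark of concurrent workers
			for {
				old := maxActive.Load()
				if cur <= old || maxActive.CompareAndSwap(old, cur) {
					break
				}
			}
		}()
	}
	wg.Wait()
	return maxActive.Load()
}

func main() {
	peak := process(make([]int, 1000), 10)
	fmt.Println("peak concurrent workers:", peak) // never exceeds 10
}
```

Note that the peak is only an upper bound: when each task finishes almost instantly, the observed concurrency can stay far below capacity, which matches the behavior discussed in this thread.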
