mirror of
https://github.com/hibiken/asynq.git
synced 2026-04-26 15:35:55 +03:00
[GH-ISSUE #558] [BUG] Execution speed with more worker configured than CPU core for I/O bound tasks #2292
Originally created by @cs-satish-mishra on GitHub (Oct 14, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/558
Originally assigned to: @hibiken on GitHub.
I am new to Go and asynq; I am coming from a Celery background and want to try it.
I have configured my server with 32 concurrent workers on an 8-core CPU machine.
I created 5000 tasks from a simple client. Each task makes an HTTP call to an external endpoint that sleeps for 3 seconds and then returns a successful response.
Since there are enough tasks in Redis, I expect 32 tasks to be processed concurrently, but it looks like only 8-10 are being processed.
What am I missing here? I am doing a POC to see whether asynq performs better than Python Celery, since most of my tasks are I/O-bound calls to an external API.
My server configuration:
worker := asynq.NewServer(redisConnection, asynq.Config{
	// Specify how many concurrent workers to use.
	Concurrency: 32,
})
The task handler makes an API call as below:
client := &http.Client{}
req, err := http.NewRequest(method, url, payload)
if err != nil {
	return err
}
res, err := client.Do(req)
if err != nil {
	return err // res is nil on error, so check before deferring Close
}
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
@kamikazechaser commented on GitHub (Oct 23, 2022):
Are the 5k tasks all loaded before the processor server starts, or are loading and processing done at the same time?
I suggest loading all 5k first, then running the processor.
Also, I'm not sure how you are measuring the 8-10 tasks. From the Asynqmon UI? A better way to test would be to keep track of active connections at the endpoint (I assume this is your endpoint).
@ioannidesalex commented on GitHub (Jul 8, 2023):
Increase the number of workers to more than the number of cores. I have tested this and found that for non-intensive tasks (e.g. just an HTTP request), it works much faster.