[GH-ISSUE #535] [FEATURE REQUEST] Support BatchEnqueue for client #251
Originally created by @Percivalll on GitHub (Sep 2, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/535
Originally assigned to: @hibiken on GitHub.
When a lot of tasks need to be enqueued, the current method is slow because every Redis operation costs at least one network round trip (RTT).
For example: if I want to enqueue 1,000,000 tasks and each client.Enqueue takes 13ms in my environment, then enqueueing them sequentially takes 1,000,000 × 13ms = 13,000,000ms, almost 3.6 hours. I could use many goroutines to shorten the wall-clock time, but that costs a lot of CPU and opens many Redis connections.
I think we should provide a BatchEnqueue method so users can enqueue many tasks at once. For the Redis broker, we can use pipelining to reduce the network and CPU overhead.
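To make the round-trip cost concrete, here is a minimal standalone sketch of the pipelining idea (assumptions: go-redis v9 and a Redis at localhost:6379; this is not asynq code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Sequential: one round trip per command.
	start := time.Now()
	for i := 0; i < 10_000; i++ {
		rdb.LPush(ctx, "demo:seq", i)
	}
	fmt.Println("sequential:", time.Since(start))

	// Pipelined: commands are buffered client-side and flushed in
	// (roughly) one round trip when Exec is called.
	start = time.Now()
	pipe := rdb.Pipeline()
	for i := 0; i < 10_000; i++ {
		pipe.LPush(ctx, "demo:pipe", i)
	}
	if _, err := pipe.Exec(ctx); err != nil {
		panic(err)
	}
	fmt.Println("pipelined:", time.Since(start))
}
```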
@Percivalll commented on GitHub (Sep 2, 2022):
Related discussions: https://github.com/hibiken/asynq/issues/339#issuecomment-985507125
https://github.com/hibiken/asynq/issues/352
@KillianH commented on GitHub (Sep 6, 2022):
I need that too :) I have several million tasks to enqueue in my workflow. Overall I really love the lib; I can handle 7 million tasks in 44 minutes (with some computation and database requests).
@hibiken commented on GitHub (Sep 10, 2022):
Thank you @Serinalice for creating this feature request!
This feature makes a lot of sense and the package should support this use case.
We should probably discuss the API first (What should it look like? How should we handle partial errors?)
@Percivalll commented on GitHub (Sep 11, 2022):
No problem, I'll describe my preliminary ideas!
@Percivalll commented on GitHub (Sep 13, 2022):
How about this:
If error is nil, all tasks have been successfully enqueued. If not, the returned array of task info contains only the tasks that were successfully enqueued.
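A plausible shape for that API, given the semantics described above (the signature below is an assumption for discussion, not asynq's actual API):

```go
// BatchEnqueue enqueues all tasks in a single Redis pipeline.
//   - err == nil: every task was enqueued; infos has one entry per task.
//   - err != nil: infos contains entries only for the tasks that were
//     successfully enqueued.
func (c *Client) BatchEnqueue(ctx context.Context, tasks []*Task, opts ...Option) (infos []*TaskInfo, err error)
```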
@xuyang2 commented on GitHub (Sep 13, 2022):
How to configure/default batch size?
https://redis.io/docs/manual/pipelining/
@Percivalll commented on GitHub (Sep 13, 2022):
By the length of the tasks slice.
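The linked pipelining docs do recommend sending very large batches as several reasonably sized pipelines (e.g. ~10k commands each) so the server doesn't have to buffer an unbounded number of replies. A minimal chunking helper for that (a sketch, not asynq code) might look like:

```go
package main

import "fmt"

// chunk splits items into slices of at most size elements, so a huge
// batch can be sent as several bounded pipelines instead of one.
func chunk[T any](items []T, size int) [][]T {
	var out [][]T
	for len(items) > size {
		out = append(out, items[:size])
		items = items[size:]
	}
	return append(out, items)
}

func main() {
	tasks := make([]int, 25_000)
	for _, batch := range chunk(tasks, 10_000) {
		fmt.Println("pipeline of", len(batch), "commands")
	}
	// Prints: 10000, 10000, 5000
}
```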
@yousifh commented on GitHub (Sep 14, 2022):
Would this new batch API pipeline the existing EVALSHA enqueue scripts, or will it use a new Lua script that takes the whole batch of tasks and enqueues them all at once?
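The first option needs no new Lua code: the existing per-task script can be invoked once per task inside a single pipeline. A minimal standalone sketch of that approach (assumptions: go-redis v9 and a stand-in script, not asynq's real enqueue script):

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

// Stand-in for asynq's real enqueue script, for illustration only.
var enqueueScript = redis.NewScript(`
	redis.call("LPUSH", KEYS[1], ARGV[1])
	return 1
`)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Load the script once so the pipelined EVALSHA calls below
	// cannot fail with NOSCRIPT.
	if err := enqueueScript.Load(ctx, rdb).Err(); err != nil {
		log.Fatal(err)
	}

	pipe := rdb.Pipeline()
	for _, msg := range []string{"task1", "task2", "task3"} {
		// Queued locally; one EVALSHA per task, all sent together.
		enqueueScript.EvalSha(ctx, pipe, []string{"demo:pending"}, msg)
	}
	// One round trip for the whole batch.
	if _, err := pipe.Exec(ctx); err != nil {
		log.Fatal(err)
	}
}
```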
@5idu commented on GitHub (Nov 8, 2022):
Are there any new developments on this issue?
@developersam1995 commented on GitHub (Mar 5, 2023):
Should this API support a combination of initial task states among "aggregating", "pending", and "scheduled"?
@thanhps42 commented on GitHub (May 25, 2024):
any update?