Mirror of https://github.com/hibiken/asynq.git, synced 2026-04-26 15:35:55 +03:00
[GH-ISSUE #483] [FEATURE REQUEST] Enqueue multiple tasks at once, in an all-or-nothing fashion #1237
Originally created by @tannerallison on GitHub (Jun 3, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/483
Originally assigned to: @hibiken on GitHub.
I have a situation where I would like to enqueue a group of tasks and I want to ensure that they all get placed in the queue or none of them are enqueued. In other words, I don't want to run into a situation where half of the tasks are enqueued and then the next enqueue call returns an error. I'm not concerned with the actual handling of the task at this point, only that all of the tasks are successfully put into the queue.
Potential Solution
I would like to be able to send a list of tasks to the Enqueue method and have it attempt to enqueue them, with the understanding that none of them will begin running until they have all been enqueued, and that if any of them fails to enqueue, they will all come off the queue.

Alternative
One alternative I'm considering is to enqueue a "Batch Task" with a payload containing the info for all the other tasks, and then have that batch job enqueue each of the others. It doesn't provide an all-or-nothing result, but it would allow picking up where a failure left off.
@lovgrandma commented on GitHub (Jun 3, 2022):
You would have to generate IDs for each job, loop through the jobs you want to create, and provision each as a task. Have each enqueue return a boolean and collect those results, then check whether every value is true; if not, loop through your job IDs again and remove them from Redis. You could do this very quickly.
If there's no way to stop a job once asynq has already begun it during that loop, or to prevent asynq from automatically running jobs, you could create a channel in the calling function and, I'd imagine, send an update on that channel for the in-process jobs to receive and terminate.
Or better yet, maybe just have each job hang until it receives a go-ahead from the main function on that channel before beginning.
Or better yet, both.
@tschaub commented on GitHub (Jul 6, 2022):
I agree it would be useful to have a method to enqueue a batch of tasks atomically.
With the current API, I think this could be implemented either by pausing the queue, adding all tasks, and unpausing if they all succeed (or removing them if any fail), or by enqueuing tasks with some future processing time and then calling RunAllScheduledTasks on an inspector once they are all successfully enqueued.

Ideally, the library could enqueue a batch of tasks in an atomic way.