[GH-ISSUE #483] [FEATURE REQUEST] Enqueue multiple tasks at once, in an all-or-nothing fashion #1237

Open
opened 2026-03-07 22:07:51 +03:00 by kerem · 2 comments
Owner

Originally created by @tannerallison on GitHub (Jun 3, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/483

Originally assigned to: @hibiken on GitHub.

I have a situation where I would like to enqueue a group of tasks and I want to ensure that they all get placed in the queue or none of them are enqueued. In other words, I don't want to run into a situation where half of the tasks are enqueued and then the next enqueue call returns an error. I'm not concerned with the actual handling of the task at this point, only that all of the tasks are successfully put into the queue.

**Potential Solution**
I would like to be able to pass a list of tasks to the `Enqueue` method and have it enqueue them with the guarantee that none of them begins running until all have been enqueued, and that if any fails to enqueue, they are all removed from the queue.

**Alternative**
One alternative I'm considering is to enqueue a "Batch Task" whose payload describes all the other tasks, and have that batch job enqueue each of them. This doesn't provide an all-or-nothing result either, but it would allow picking up where a failure left off.


@lovgrandma commented on GitHub (Jun 3, 2022):

You would have to generate IDs for each job, loop through the jobs you want to create, and provision each as a task. Have each enqueue return a boolean and collect those results. Then check whether every value is true; if not, loop through your job IDs again and remove them from Redis. You could do this very quickly.

If there's no way to stop a job once asynq has already begun it during that loop, or to prevent asynq from automatically running jobs, you could create a channel in the calling function and send an update on it for the in-process jobs to receive and terminate.

Or better yet, maybe just have the job hang until it receives a go-ahead from the main function on that channel before beginning.

Or better yet, both.


@tschaub commented on GitHub (Jul 6, 2022):

I agree it would be useful to have a method to enqueue a batch of tasks atomically.

```go
func (c *Client) EnqueueBatch(tasks []*Task, opts ...Option) ([]*TaskInfo, error)
```

With the current API, I think this could be implemented either by pausing the queue, adding all tasks, and unpausing if they all succeed (or removing them if any fail), or by enqueuing tasks with some future processing time and then calling `RunAllScheduledTasks` on an inspector once they are all successfully enqueued.

Ideally, the library could enqueue a batch of tasks in an atomic way.
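The pause-based workaround described above can be sketched as below. Again a hedged, library-agnostic sketch rather than asynq's API: the callbacks are hypothetical stand-ins which, with asynq, could wrap `Inspector.PauseQueue`/`UnpauseQueue`, `Client.Enqueue`, and `Inspector.DeleteTask`. One caveat the sketch makes visible: pausing stalls every consumer of the queue, not just this batch.

```go
package main

import "fmt"

// enqueueBatchPaused pauses the queue so nothing starts running, enqueues
// every task, then unpauses. On a failed enqueue it removes the tasks that
// were already added before the queue resumes.
func enqueueBatchPaused(
	pause, unpause func() error,
	enqueue, remove func(id string) error,
	ids []string,
) error {
	if err := pause(); err != nil {
		return err
	}
	// Resume the queue whether or not the batch succeeded; the unpause
	// error is dropped in this sketch.
	defer unpause()

	var done []string
	for _, id := range ids {
		if err := enqueue(id); err != nil {
			for _, d := range done {
				_ = remove(d) // best-effort cleanup before unpausing
			}
			return fmt.Errorf("enqueue %s: %w", id, err)
		}
		done = append(done, id)
	}
	return nil
}
```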
