[GH-ISSUE #273] [FEATURE REQUEST] Returning Task Back to Queue #106

Closed
opened 2026-03-02 05:18:43 +03:00 by kerem · 9 comments

Originally created by @andrey-ryoo on GitHub (May 18, 2021).
Original GitHub issue: https://github.com/hibiken/asynq/issues/273

Originally assigned to: @hibiken on GitHub.

Hi, I suggest this feature might be useful.
Problem: the worker runs out of resources while handling a heavy task, and it would be useful to return the task to the queue without counting the attempt as a failure.

Example:

```go
func HandleCreateHeavyTask(ctx context.Context, t *asynq.Task) error {
	if noResourcesLeft {
		// SendBack is the proposed signal: return the task to the queue
		// without counting this attempt against the retry limit.
		return asynq.SendBack{}
	}
	sessionStartTime := time.Now()
	id, readIdErr := t.Payload.GetString("id")
	someHeavyCode()
	return nil
}
```

This shouldn't be counted as an error, and this shape avoids breaking the API, but honestly I'd prefer handlers to return `(map[string]interface{}, error)`.


@hibiken commented on GitHub (May 20, 2021):

@andrey-ryoo Thank you for opening this issue!

(Let me make sure I understand your issue.) You want to return a sentinel error value to indicate that the retry count should not be incremented, is that correct?

Alternatively, if you expect the task to fail, you can increase the number of retries using the `MaxRetry` option; would that approach work for you?
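For context, `MaxRetry` is a per-task option passed at enqueue time. A hedged fragment (the Redis address, task type, and payload are illustrative; `NewTask` taking a raw `[]byte` payload matches recent asynq versions):

```go
// Fragment: enqueue a task with a larger retry budget so that expected
// transient failures do not exhaust it.
client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
defer client.Close()

task := asynq.NewTask("heavy:create", []byte(`{"id": "123"}`))
if _, err := client.Enqueue(task, asynq.MaxRetry(50)); err != nil {
	log.Fatal(err)
}
```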


@andrey-ryoo commented on GitHub (May 21, 2021):

My point is that a task should have 3 statuses: 1. Success, 2. Unable to handle (a precondition says don't even start), 3. Error during execution.
1 and 3 are well implemented, and I suggest implementing 2.


@hibiken commented on GitHub (May 21, 2021):

Ok, thanks for clarifying.

Here's my initial solution.

Proposal

We can introduce a sentinel error `ErrFailedPrecondition`, which indicates to the asynq server not to increment the retry counter but to enqueue the task back for later processing.

Example

```go
func MyTaskHandler(ctx context.Context, t *asynq.Task) error {
	// pre-check
	if noResourceAvailable {
		return fmt.Errorf("no resource available: %w", asynq.ErrFailedPrecondition)
	}
	// ... do actual task processing
	return nil
}
```

The asynq server should check the returned error, and if

```go
errors.Is(err, ErrFailedPrecondition)
```

is true, not increment the retry count but enqueue the task back for later processing (the task will be in the retry state).


Let me know if you have thoughts on this.


@andrey-ryoo commented on GitHub (May 22, 2021):

That’s exactly what I meant. The error needs to be built into the package as a sentinel value.


@flamedmg commented on GitHub (Jun 9, 2021):

One possible use I encounter is API rate limits: some API endpoints have call limits, say 100 calls per minute. Will this help solve situations like that?


@gopkg-dev commented on GitHub (Jun 9, 2021):

> One of the possible uses i encounter is API limits. Say some api endpoints have call limits, say 100 call per minute. Will this help to solve situations like this?

Me too. I need to put the task back in the queue.


@crossworth commented on GitHub (Aug 19, 2021):

> One of the possible uses i encounter is API limits. Say some api endpoints have call limits, say 100 call per minute. Will this help to solve situations like this?

Would be cool if we could specify a ProcessIn (or ProcessAt) as well, since some APIs return a `retry after x` when hitting the rate limit, and some libraries surface this as [a custom error](https://github.com/tucnak/telebot/blob/93c5ba7136a6f82d848be1775db3350b266079f3/errors.go#L15).

Maybe something like:

```go
res, err := externalAPI()

var floodErr tb.FloodError
if errors.As(err, &floodErr) {
	// ErrRetryIn is proposed here; it is not an existing asynq API.
	return fmt.Errorf("flood error: %w", asynq.ErrRetryIn(floodErr.RetryAfter*time.Second))
}

if err != nil {
	return err
}

return nil
```

@sdklab007 commented on GitHub (Aug 24, 2021):

@hibiken I am also looking for a solution like the one proposed by @crossworth.


@hibiken commented on GitHub (Aug 25, 2021):

To answer @crossworth's comment:

We have `RetryDelayFunc`, which can be configured when you create a Server instance via the config.
You could return a custom error and handle it in your RetryDelayFunc like so:

```go
type CustomError struct {
	RetryIn time.Duration
}

func (e *CustomError) Error() string { /* return error message */ }

// ---

srv := asynq.NewServer(redisConn, asynq.Config{
	// Check if the error is a custom error. If so, return the retry duration
	// specified in the error; otherwise use the default behavior (exponential backoff).
	RetryDelayFunc: func(n int, e error, t *asynq.Task) time.Duration {
		customErr, ok := e.(*CustomError)
		if ok {
			return customErr.RetryIn
		}
		return asynq.DefaultRetryDelayFunc(n, e, t)
	},
})
```

Going back to the original question from @andrey-ryoo, we could use a similar approach: add an `IsFailure(err error) bool` hook to the server's config and let the user provide that function.

Example:

```go
srv := asynq.NewServer(redisConn, asynq.Config{
	IsFailure: func(err error) bool {
		if /* should not increment retry count */ {
			return false
		}
		return true
	},
})
```