[GH-ISSUE #338] [BUG] Redis produces an error "ERR invalid expire time in set" when tasks enqueued with properties ProcessIn & Unique < 1 second. #1163

Closed
opened 2026-03-07 22:06:43 +03:00 by kerem · 7 comments

Originally created by @johnha on GitHub (Nov 8, 2021).
Original GitHub issue: https://github.com/hibiken/asynq/issues/338

Originally assigned to: @hibiken on GitHub.

Describe the bug

Redis produces an error "ERR invalid expire time in set" when tasks enqueued with properties ProcessIn & Unique < 1 second.

To Reproduce

		asynq.ProcessIn(time.Millisecond * 500),
		asynq.Unique(time.Millisecond * 500))

Returns error:

UNKNOWN: redis eval error: ERR Error running script (call to f_8640afea14b4dc438522b83eab3523efbf54ba9d): @user_script:2: ERR invalid expire time in set

Environment (please complete the following information):

Asynq: github.com/hibiken/asynq v0.19.0
MacOS: Monterey 12.1 beta
redis:6.2.5

Additional context

This may be just a change to the documentation to state that timing < 1 second is not an appropriate use case for the library. The code (rdb.go) shows that ttl is converted to seconds as below. If the configured duration is < 1 second, the int(ttl.Seconds()) statement can return a negative number, which produces the error as stated. A check around this value being >= 0 would resolve it, plus a documentation update.

rdb.go
argv := []interface{}{
msg.ID,
int(ttl.Seconds()),
processAt.Unix(),
encoded,
msg.Timeout,
msg.Deadline,
}
n, err := r.runScriptWithErrorCode(op, scheduleUniqueCmd, keys, argv...)

Cheers

kerem closed this issue and added the "bug" label (2026-03-07 22:06:43 +03:00).

@hibiken commented on GitHub (Nov 8, 2021):

Thank you for opening an issue!

I will update the docs to say that duration >= 1s is required for ProcessIn and Unique options. And I will update the code to return an error from Enqueue if those conditions are not met.

Thank you so much for spotting this :)


@johnha commented on GitHub (Nov 8, 2021):

Cheers. And many thanks for the library - it looks the best of those available.

A 1 second minimum is OK I think in this case. FYI, Redis does support the PX option, where the time is specified in milliseconds, as opposed to EX. Obviously there is some lag in getting to that point, so more demanding timing requirements may have expired before the request reaches Redis - but I doubt anything above 50ms would be an issue. The EX/PX value being floored at zero would prevent a Redis error.

Thanks again.



@lmikolajczak commented on GitHub (Nov 8, 2021):

Hi @johnha & @hibiken,

Do I understand correctly that:

var d time.Duration
d = time.Millisecond * 500
int(d.Seconds()) -> 0

and then, we pass that 0 to redis. Could you please elaborate on this part:

If the configuration is < 1 second - in the failure cases the int(ttl.Seconds()) statement can return a negative number which produces the error as stated.

how do we end up with a negative value? What do I miss? I'm just trying to understand what exactly happens here.

Thanks!


@hibiken commented on GitHub (Nov 9, 2021):

@Luqqk thanks for the correction there! Yes, this error happens when Unique option is specified to be less than 1s and int(d.Seconds()) evaluates to zero, which we pass to redis SET command as EX option and that results in an error.

I'm hesitant to round this up to 1 so I'll document that the duration has to be greater than or equal to 1s and return an error otherwise 👍


@johnha commented on GitHub (Nov 9, 2021):

Apologies for the confusion - when I reproduced the error I was executing in debug with a breakpoint, at which point my duration had become negative for the unique expiry. In the non-debug case this must have been 0, but I had no visibility into that.

I misused the interface regarding the de-duplication of tasks. To clarify my problem statement, in short: I am testing for use in an event-driven system, where many events can happen on the same component in a small timeframe, and there is possible contention on the DB side in re-evaluating a component, so I need to de-dupe those evaluations.

In order to prevent immediate execution to avoid contention - I specified the following:

asynq.ProcessIn(time.Second),
asynq.Unique(time.Second)

My thinking was: wait for up to 1 second, and discard any duplicates in that 1 second. That was my mistake. From looking at the code, the unique property adds an interval to drop dupes after the scheduled execution time:
e.g.
9:00:00 AM: enqueue with processIn -> processAt = 9:00:01 AM, Unique = 1 sec
schedule(processAt = 9:00:01, uniqueTTL = 1 sec)

    ttl := t.Add(uniqueTTL).Sub(time.Now())

which returns ScheduleUnique(..., t = 9:00:01, ttl = (9:00:01 + 1 sec - 9:00:00) = 2 sec)

From this, it looks like the task will execute at 9:00:01 (the component's DB operations take circa 30ms), and if I receive a duplicate operation at 9:00:01.5 (half a second after starting the operation), it will be dropped as a duplicate (and I lose the last re-evaluation).

However, from the code, I don't need to use asynq.Unique for this use case - if I specify a task ID, then this would automatically de-dupe and not miss anything. So I will just set my own key.

I am unsure yet whether my use case requires a simpler mechanism, but from looking through the code it looks neat and performant, and I need the same Redis functions to let multiple instances of the process coordinate the processing. I think the task-ID approach should be fine.

Many thanks for the help.


@hibiken commented on GitHub (Nov 9, 2021):

@johnha Thank you for the clarification.
As you described, the behavior of Unique option in conjunction with ProcessIn (or ProcessAt) is a bit nuanced (maybe I need to review/update the docs around that).

If you can generate a unique task ID to deduplicate tasks, that would be the simplest and most straightforward approach. As long as the task ID is still in the queue, another task with the same ID won't be added to the queue (i.e. client.Enqueue will return an ErrTaskIDConflict).

Thanks again for spotting this edge case/bug and please let me know if you have questions/feedback!


@johnha commented on GitHub (Nov 9, 2021):

Cheers. I may be using the library for a slightly odd case - essentially I am debouncing work on the server side when I see a repeat, which is not exactly what the library was created for. However, the dedupe feature of the library makes this style of task possible. Given its ability to maintain shared state for multiple consumers, the ability to ensure a process gets executed at some point after an event while de-duping those events is extremely useful, and I don't see any other library that supports that case effectively.

The issue here is, as you note, that this de-dupe feature is not the Unique switch in the model itself, which has a different meaning. The current model covers the case where it is OK not to execute a task for some defined period after the last one has been scheduled, whereas my requirement is to de-dupe until the point of execution. I am testing for errors.Is(err, asynq.ErrTaskIDConflict) as you note, and just scheduling, and it seems to work fine.

Anyway thanks for the help. I will close the issue.
