Mirror of https://github.com/hibiken/asynq.git (synced 2026-04-26 07:25:56 +03:00)
[GH-ISSUE #338] [BUG] Redis produces an error "ERR invalid expire time in set" when tasks enqueued with properties ProcessIn & Unique < 1 second. #2173
Originally created by @johnha on GitHub (Nov 8, 2021).
Original GitHub issue: https://github.com/hibiken/asynq/issues/338
Originally assigned to: @hibiken on GitHub.
Describe the bug
Redis produces the error "ERR invalid expire time in set" when tasks are enqueued with the ProcessIn and Unique options set to less than 1 second.
To Reproduce
Returns error:
UNKNOWN: redis eval error: ERR Error running script (call to f_8640afea14b4dc438522b83eab3523efbf54ba9d): @user_script:2: ERR invalid expire time in set
Environment (please complete the following information):
Asynq: github.com/hibiken/asynq v0.19.0
macOS: Monterey 12.1 beta
redis:6.2.5
Additional context
This may just be a documentation change, stating that timings under 1 second are not an appropriate use case for the library. The code in rdb.go shows that the TTL is converted to seconds, as below. If the configured duration is under 1 second, in the failure cases the int(ttl.Seconds()) statement can return a negative number, which produces the error as stated. A check that this value is >= 0 would resolve it, plus a documentation update.
rdb.go
argv := []interface{}{
	msg.ID,
	int(ttl.Seconds()),
	processAt.Unix(),
	encoded,
	msg.Timeout,
	msg.Deadline,
}
n, err := r.runScriptWithErrorCode(op, scheduleUniqueCmd, keys, argv...)
Cheers
@hibiken commented on GitHub (Nov 8, 2021):
Thank you for opening an issue!
I will update the docs to say that a duration >= 1s is required for the ProcessIn and Unique options. And I will update the code to return an error from Enqueue if those conditions are not met. Thank you so much for spotting this :)
@johnha commented on GitHub (Nov 8, 2021):
Cheers. And many thanks for the library - looks the best of those available.
A 1 second minimum is OK in this case, I think. FYI, Redis does support PX, where the time is specified in milliseconds, as opposed to EX. Obviously there is a lag in getting to that point, so some more performance-sensitive requirements may have expired before they hit the request to Redis, but I doubt anything above 50ms would be an issue. Flooring the EX/PX value at zero would prevent a Redis error.
Thanks again.
@lmikolajczak commented on GitHub (Nov 8, 2021):
Hi @johnha & @hibiken,
Do I understand correctly that:
and then we pass that 0 to Redis. Could you please elaborate on this part: how do we end up with a negative value? What am I missing? I'm just trying to understand what exactly happens here.
Thanks!
@hibiken commented on GitHub (Nov 9, 2021):
@Luqqk thanks for the correction there! Yes, this error happens when the Unique option is specified to be less than 1s and int(d.Seconds()) evaluates to zero, which we pass to the Redis SET command as the EX option, and that results in an error. I'm hesitant to round this up to 1, so I'll document that the duration has to be greater than or equal to 1s and return an error otherwise 👍
@johnha commented on GitHub (Nov 9, 2021):
Apologies for the confusion: when I reproduced the error I was executing in debug with a breakpoint, at which point my duration had become negative relative to the unique expiry. In the non-debug case this must have been 0, but I had no visibility into that.
I misused the interface with regard to de-duplicating tasks. To clarify my problem statement, in short:
testing for use in event driven system, where many events can happen on the same component in small timeframe, and there is possible contention on the db side in re-evaluating a component so require to de-dupe those evaluations.
In order to prevent immediate execution to avoid contention - I specified the following:
My thinking was that it would wait for up to 1 second and discard any duplicates within that 1 second. That was my mistake. From looking at the code, the Unique option adds an interval to drop duplicates after the scheduled execution time:
e.g.
9.00.00 AM enqueue with processIn=9.00.01 AM, Unique=1 sec
schedule(processAt=9.00.01, uniqueTTL=1 sec)
From this, it looks like a task will execute at 9.00.01 (the component's DB operations take circa 30ms), and if I receive a duplicate operation at 9.00.015 (half a second after starting the operation), then that will be dropped as a duplicate (and I lose the last re-evaluation).
However, from the code I don't need to use asynq.Unique for this use case: if I specify a task ID, then this would automatically de-dupe and not miss anything. So I will just set my own key.
I am unsure yet whether my use case calls for a simpler mechanism, but from looking through the code it looks neat and performant, and I need the same Redis functions to let multiple instances of the process coordinate the processing. I think the task-ID approach should be fine.
Many thanks for the help.
@hibiken commented on GitHub (Nov 9, 2021):
@johnha Thank you for the clarification.
As you described, the behavior of
Uniqueoption in conjunction withProcessIn(orProcessAt) is a bit nuanced (maybe I need to review/update the docs around that).If you could generate a unique task ID to deduplicate tasks, it'd be the simplest and more straight-forward. As long as task ID is still in the queue, another task with the same ID won't be added to the queue (i.e.
client.Enqueuewill return anErrTaskIDConflict).Thanks again for spotting this edge case/bug and please let me know if you have questions/feedback!
@johnha commented on GitHub (Nov 9, 2021):
Cheers. I may be using the library for a slightly odd case: essentially I am debouncing work on the server side when I see a repeat, which is not exactly what the library was created for. However, the dedupe feature of the library makes this style of task possible. Given its ability to maintain shared state for multiple consumers, the ability to ensure a process gets executed at some point after an event while de-duping those events is extremely useful, and I don't see any other library that supports that case effectively. The issue here is, as you note, that this de-dupe feature does not use the Unique switch in the model itself, which has a different meaning. The current model covers the case where it is OK not to execute a task for some defined period after the last one has been scheduled to process, whereas my requirement is to de-dupe until the point of execution. I am testing for errors.Is(err, asynq.ErrTaskIDConflict), as you note, and just scheduling, and it seems to work fine.
Anyway thanks for the help. I will close the issue.