mirror of
https://github.com/hibiken/asynq.git
synced 2026-04-25 23:15:51 +03:00
[GH-ISSUE #90] [BUG] Same task received/processed by more than one worker #2049
Originally created by @gunnsth on GitHub (Feb 18, 2020).
Original GitHub issue: https://github.com/hibiken/asynq/issues/90
Originally assigned to: @hibiken on GitHub.
Describe the bug
The problem is that I spawned a task queuer that enqueued tasks scheduled from 10 minutes in the past to 10 minutes in the future, with 4 workers running (each at concurrency 1).
The output I got was:
So tasks 3 and 4 were received twice, which could lead to problems, although I admit my use case is a bit unusual (scheduling tasks minutes in the past, etc.).
To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):
Start 4 taskrunners
As per the output above, tasks 3 and 4 were received by two workers. It would be good if we could guarantee that each task is processed only once.
Note that given the way I am spawning the workers and the queuer, tasks may be queued before some workers have started; everything starts at the same time. Ideally that would not matter.
Expected behavior
Expected that each task could only be received by one worker / processed once.
Screenshots
N/A
Environment (please complete the following information):
Additional context
If needed, I can clean up my docker compose environment and provide a fully contained example.
@hibiken commented on GitHub (Feb 19, 2020):
Thanks for filing this bug report!
I ran the same code on my machine with 4 worker processes reading from the same redis instance but could not reproduce the bug.
It could be that you've run the client code multiple times and there were duplicate tasks in Redis. Would you mind trying this again with a clean Redis DB?
You can run asynqmon stats to make sure that there are no tasks in Redis. You can flush Redis by running redis-cli flushdb.
@gunnsth commented on GitHub (Feb 19, 2020):
@hibiken I prepared an environment where this can be reproduced (uses docker, docker-compose):
https://github.com/hibiken/asynq/compare/master...gunnsth:issue90-reproduce?expand=1
to reproduce it:
It will not always give exactly the same results; I guess there is some randomness in which task runner picks up each task.
Example outputs:
We see that taskrunner1 and taskrunner3 both processed task 2, and taskrunner3 and taskrunner2 both processed task 1.
Is there any way we can ensure that this does not happen?
There should not be any duplicate tasks since this creates a fresh redis instance.
(to be sure, it's easy to clean all the docker images).
Would it be possible to clear all tasks programmatically before starting to create tasks? Just to make sure?
@hibiken commented on GitHub (Feb 20, 2020):
@gunnsth
Could you try adding this to your taskqueue.go to flush the DB and start from a clean slate? Let me know if you are still seeing duplicate tasks.
Otherwise, we can close this issue 👍
@hibiken commented on GitHub (Jun 13, 2020):
@gunnsth I finally figured out what's causing this. Fix is in #170