[GH-ISSUE #420] [BUG] Task stuck in active state forever #1200
Originally created by @mailbaoer on GitHub (Mar 17, 2022).
Original GitHub issue: https://github.com/hibiken/asynq/issues/420
Originally assigned to: @hibiken on GitHub.
Describe the bug
I have some tasks that don't set a timeout and run for a long time. I found that some of them stay in the running state forever. I tried to cancel them in the Web UI and the CLI, but they can't be canceled; the state changes back to running after canceling.
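One mitigation for this symptom is to bound a task's run time at enqueue time. A minimal sketch using the asynq client options, assuming a local Redis at localhost:6379 and an illustrative report:generate task type:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// Without an explicit Timeout or Deadline, a long-running handler can
	// sit in the active state for a very long time if anything goes wrong.
	task := asynq.NewTask("report:generate", nil)
	info, err := client.Enqueue(task, asynq.Timeout(30*time.Minute))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("enqueued task id=%s queue=%s", info.ID, info.Queue)
}
```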
To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):
Sorry, I don't know how to reproduce it.
@hibiken commented on GitHub (Mar 17, 2022):
@mailbaoer Thank you for opening an issue!
Would you mind providing the version of the asynq package you are using? :)
@mailbaoer commented on GitHub (Mar 17, 2022):
I'm using 0.22.1 now, but this bug may have existed before this version. I've seen it in other versions too, maybe as far back as 0.18, when I first used asynq.
@hibiken commented on GitHub (Mar 17, 2022):
I see.
We've made some improvements around orphaned task recovery in v0.22. If you are using the latest version of the Web UI (v0.6.0), you'll see the status of these tasks shown as "Orphaned". This happens when a worker starts working on a task but crashes before completing the processing.
If you run a server against the same queue, orphaned tasks will be recovered automatically after some time (i.e., after a few missed heartbeats). Once a task is orphaned, it is no longer cancelable (the latest Web UI disables the cancel button).
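A related point on cancelability: asynq delivers cancellation by canceling the context passed to the handler, so a handler that never checks ctx.Done() cannot be stopped from the UI or CLI. A minimal sketch of a context-aware handler, assuming a local Redis and the illustrative report:generate task type from above:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// handleLongTask does its work in small steps and watches ctx.Done(), so a
// cancel request (or timeout/deadline expiry) actually stops it. A handler
// that ignores ctx keeps running no matter what the UI or CLI says.
func handleLongTask(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 1000; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // canceled or timed out
		case <-time.After(time.Second):
			// one unit of work
		}
	}
	return nil
}

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		asynq.Config{Concurrency: 10},
	)
	mux := asynq.NewServeMux()
	mux.HandleFunc("report:generate", handleLongTask)
	if err := srv.Run(mux); err != nil {
		log.Fatal(err)
	}
}
```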
Follow-up questions: would you mind running this command and sharing the output?

ZRANGE asynq:{default}:lease 0 -1 WITHSCORES

@mailbaoer commented on GitHub (Mar 18, 2022):
[screenshot of the Web UI showing the stuck active tasks]
@hibiken commented on GitHub (Mar 18, 2022):
Ok, thanks for providing that info.
Would you mind running this command:
@mailbaoer commented on GitHub (Mar 18, 2022):
All the tasks have stayed in running for many days; they're still running now. I've also upgraded asynqmon to 0.6.1.
@hibiken commented on GitHub (Mar 18, 2022):
That's very strange. I thought you'd have entries in either asynq:{default}:deadlines (used by v0.21.x or below) or asynq:{default}:lease (used by v0.22.x). These zsets are used to recover orphaned tasks in case of a worker crash; the fact that there are no entries there suggests something unexpected happened. I'll keep this bug open to see if others have encountered a similar issue and to get more context.
Please let me know if you can reproduce this; I'd like to know how to reproduce this bug.
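For readers debugging the same symptom, those zsets can also be inspected programmatically. A minimal sketch using the go-redis client (an assumption; any Redis client works), checking both key generations for the default queue:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// An active task with no entry in the applicable zset cannot be
	// recovered by the orphan-recovery logic, which matches the symptom
	// described in this thread.
	for _, key := range []string{
		"asynq:{default}:lease",     // v0.22.x
		"asynq:{default}:deadlines", // v0.21.x and below
	} {
		entries, err := rdb.ZRangeWithScores(ctx, key, 0, -1).Result()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d entries\n", key, len(entries))
		for _, z := range entries {
			fmt.Printf("  member=%v score=%.0f\n", z.Member, z.Score)
		}
	}
}
```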
If you need to address this manually, you can get a list of "active" tasks and put their IDs back in the pending list (note: the IDs you see in the image above are just prefixes, so make sure to click into each row to get the full ID).
Once you have the IDs, you can run:

LREM asynq:{default}:active 1 <task_id>
LPUSH asynq:{default}:pending <task_id>
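The same manual recovery can be scripted. A minimal sketch with go-redis, assuming the default queue and that taskIDs holds full task IDs collected from the UI (the value below is made up):

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Full task IDs collected from the Web UI (hypothetical value).
	taskIDs := []string{"d8b8cbbe-0337-4dc5-9b60-2b7e40e5b5e8"}

	for _, id := range taskIDs {
		// Remove the ID from the active list...
		if err := rdb.LRem(ctx, "asynq:{default}:active", 1, id).Err(); err != nil {
			log.Fatal(err)
		}
		// ...and push it back onto pending so a worker picks it up again.
		if err := rdb.LPush(ctx, "asynq:{default}:pending", id).Err(); err != nil {
			log.Fatal(err)
		}
	}
}
```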
@mailbaoer commented on GitHub (Mar 21, 2022):
Thank you very much for your patience in answering. If I encounter this problem again, I will check whether I can reproduce it.
@namhq1989 commented on GitHub (Mar 22, 2022):
Any update? I hit this bug too: asynq v0.22.1, Redis v5.0.7.
@hibiken commented on GitHub (Mar 22, 2022):
@namhq1989 Thanks for the comment. We're looking for a way to reproduce this.
For anyone who has experienced this bug, please provide the output of the following commands, replacing <qname> with your queue name (e.g. asynq:{default}:lease for the default queue):

ZRANGE asynq:{<qname>}:lease 0 -1 WITHSCORES
ZRANGE asynq:{<qname>}:deadlines 0 -1 WITHSCORES

@dokudoki commented on GitHub (Mar 31, 2022):
@piperck commented on GitHub (May 4, 2022):
I met this too. If it happens again, I will try to investigate it.
@paveljanda commented on GitHub (May 12, 2022):
Hi, we've never had this problem, as we've never used asynq before. But if anyone could describe how to reproduce this bug and it gets fixed, that would help us choose the right distributed task queue for our projects. We're currently choosing from ~5 candidates.
Thanks a lot to everyone!
@zijiwork commented on GitHub (Jul 15, 2022):
I encountered this problem in the following versions: the task is always in the active state after the worker restarts, and it cannot be canceled.
@KrokoYR commented on GitHub (Oct 8, 2024):
We accidentally ran into this kind of issue. What we did:
- one service was configured with redis.Options{Addr: "{addr}", DB: 10}
- another service was configured with redis.Options{Addr: "{addr}", DB: 0}
- both used the same queue name, some_queue

This led to a problem where those services started "stealing" each other's tasks. Our mistake was that we didn't pass the DB number into asynq.RedisClientOpt. Maybe this will help someone.
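For anyone hitting the same pitfall: the DB number must be set on asynq.RedisClientOpt itself; setting it only on the service's own redis.Options is not enough. A minimal sketch with illustrative values:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// Set DB here, not just on the service's own go-redis client; otherwise
	// every service's asynq client lands in DB 0, and services sharing a
	// queue name will steal each other's tasks.
	client := asynq.NewClient(asynq.RedisClientOpt{
		Addr: "localhost:6379",
		DB:   10,
	})
	defer client.Close()

	task := asynq.NewTask("email:send", nil) // illustrative task type
	if _, err := client.Enqueue(task, asynq.Queue("some_queue")); err != nil {
		log.Fatal(err)
	}
}
```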