mirror of
https://github.com/hibiken/asynq.git
synced 2026-04-26 07:25:56 +03:00
[GH-ISSUE #952] [BUG] handler did not run and this task hang out from then. #1478
Originally created by @cobain on GitHub (Oct 31, 2024).
Original GitHub issue: https://github.com/hibiken/asynq/issues/952
Originally assigned to: @hibiken on GitHub.
Describe the bug
The task was enqueued successfully, but I found that the handler did not execute.
Once this happened, all other tasks stopped executing as well.
When I restart the process, all the pending tasks run one by one.
@cobain commented on GitHub (Oct 31, 2024):
I use the default config; I don't know if it's related to this. When I had 10 kinds of tasks, it worked well.
But after increasing to 15, it suddenly stopped working. After a restart, it works again.
asynq.Config{
	// Specify how many concurrent workers to use.
	Concurrency: 10,
	// Optionally specify multiple queues with different priority.
	Queues: map[string]int{
		"critical": 6,
		"default":  3,
		"low":      1,
	},
}
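For context, a minimal sketch of where this config is used, following asynq's documented server setup. The Redis address and the task type name are placeholders, not from the original report:

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // placeholder address
		asynq.Config{
			// Ten workers pull from the queues below, weighted 6:3:1.
			Concurrency: 10,
			Queues: map[string]int{
				"critical": 6,
				"default":  3,
				"low":      1,
			},
		},
	)

	mux := asynq.NewServeMux()
	// Register one handler per task type; the reporter had ~14 of these.
	mux.HandleFunc("email:send", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing %s", t.Type())
		return nil
	})

	// Run blocks until a TERM/INT signal arrives, then shuts down gracefully.
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```

This sketch needs a running Redis instance, so it is illustrative rather than directly runnable here.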
@kamikazechaser commented on GitHub (Oct 31, 2024):
With this info alone, it is very difficult to tell where this issue is originating from, but it is unlikely to be the library. Use the inspector or the CLI to try to debug, and provide more info.
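As a sketch of that suggestion: asynq ships an Inspector API that can report what is sitting in each queue. The Redis address below is a placeholder, and the interpretation comment reflects this issue's symptom rather than a general rule:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	insp := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"}) // placeholder address
	defer insp.Close()

	queues, err := insp.Queues()
	if err != nil {
		log.Fatal(err)
	}
	for _, q := range queues {
		info, err := insp.GetQueueInfo(q)
		if err != nil {
			log.Fatal(err)
		}
		// A growing Pending count with Active == 0 would match the symptom
		// described in this issue: tasks enqueued but never processed.
		fmt.Printf("queue=%s pending=%d active=%d\n", q, info.Pending, info.Active)
	}
}
```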
@cobain commented on GitHub (Oct 31, 2024):
Yes, I know. It seems the workers died. When I kill the process, I don't know what gets triggered, but then the handler executes.
@cobain commented on GitHub (Oct 31, 2024):
When I kill the process, it sends signals, which then trigger the handler according to the docs. But I am not clear about the worker.
Note: If you send a TERM or INT signal without sending a TSTP signal, the Server will start a timer of 8 seconds to allow all workers to finish (to customize this timeout duration, use the ShutdownTimeout config). If there are workers that didn't finish within that time frame, the task will be transitioned back to the pending state and will be processed once the program restarts.
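The timeout mentioned in that note corresponds to asynq's ShutdownTimeout config field. A minimal sketch of setting it; the 20-second value and the Redis address are arbitrary examples:

```go
package main

import (
	"time"

	"github.com/hibiken/asynq"
)

func newServer() *asynq.Server {
	return asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // placeholder address
		asynq.Config{
			Concurrency: 10,
			// On TERM/INT, wait up to 20s (instead of the default 8s) for
			// in-flight tasks before moving unfinished ones back to pending.
			ShutdownTimeout: 20 * time.Second,
		},
	)
}
```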
@cobain commented on GitHub (Oct 31, 2024):
I found that it works before 00:00; after 00:00, all tasks hang.
Before this, it had worked well for half a year.
@cobain commented on GitHub (Oct 31, 2024):
Another clue is that it happened after I added several task types. Before that, I had 10 task types;
now I have 14.
@cobain commented on GitHub (Oct 31, 2024):
Is it related to the Concurrency: 10 config? I don't know whether it would work after increasing the concurrency to something bigger, like 20.
@kamikazechaser commented on GitHub (Nov 1, 2024):
Try v0.25.0 and report back
@cobain commented on GitHub (Nov 2, 2024):
Do you have any guess about the reason? I don't know why it happened after 00:00. Is there any code that executes cleanup or runs tasks after 00:00?
@cobain commented on GitHub (Nov 2, 2024):
OK, I will try it in a test environment first.
@cobain commented on GitHub (Nov 2, 2024):
Anyway, I have upgraded to 0.25.0 and will see if it works tomorrow.
@cobain commented on GitHub (Nov 3, 2024):
The bug still happens on 0.25.0. @kamikazechaser
@cobain commented on GitHub (Nov 4, 2024):
@hibiken @appleboy @pior
@cobain commented on GitHub (Nov 4, 2024):
Does it relate to the previous Redis cache? I now need a way to fix this issue; I cannot restart the service every night. 😂
@pior commented on GitHub (Nov 4, 2024):
If you post a simplified app, it will be easier to fix this issue.
@cobain commented on GitHub (Nov 4, 2024):
My app is the same as the demo code, and it has run for 2 years.
Recently, I added several task types; then it started failing after 00:00.
I have upgraded to 0.25.0 and it still doesn't work.
Today I set the log level to debug; let me check the log tonight.
@cobain commented on GitHub (Nov 5, 2024):
Update:
There seem to be no useful debug logs so far.
I have switched the Redis DB to a new one and will try again.
BTW, I am investigating the source code; I hope I can fix it ASAP.
@pior @hibiken @kamikazechaser @appleboy, I hope you can provide some help or more details.
@kamikazechaser commented on GitHub (Nov 5, 2024):
There are many possible reasons why you are encountering this issue. It's very difficult to reproduce without some sample code and/or Redis info. I'd suggest you look at existing issues around the scheduler and the archiving of tasks.
@cobain commented on GitHub (Nov 5, 2024):
My code and config are the same as the provided demo; I just added 10+ task types with handlers and registered them.
Last night, when I reproduced this issue after 00:00, I used the web tool and saw that the task status was pending.
@cobain commented on GitHub (Nov 5, 2024):
I finally found the root cause. It was caused by a third-party script: it triggered a restart signal, but the restart did not succeed.
In that case, asynq received the signal and shut down the server, so asynq would not work anymore.
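For anyone hitting a similar symptom: srv.Run installs its own TERM/INT handling, so a stray signal from another process silently shuts the server down. A hedged sketch of logging exactly which signal arrives, using asynq's Start/Shutdown pair instead of Run (the Redis address and setup are illustrative):

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // placeholder address
		asynq.Config{Concurrency: 10},
	)
	mux := asynq.NewServeMux()
	// ... register task handlers on mux ...

	// Start (unlike Run) does not block waiting for OS signals, so we can
	// observe and log exactly which signal triggers the shutdown.
	if err := srv.Start(mux); err != nil {
		log.Fatal(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	sig := <-sigs
	log.Printf("received signal %v; shutting down asynq server", sig)
	srv.Shutdown()
}
```

If the log shows a signal arriving around 00:00, the sender (e.g. a cron job or deploy script, as it turned out to be here) is the culprit rather than the library.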