mirror of
https://github.com/007revad/Synology_docker_cleanup.git
synced 2026-04-25 12:35:57 +03:00
[GH-ISSUE #4] Unable to remove orphan images, un-updated images not able to be restarted #1
Originally created by @talz13 on GitHub (Mar 24, 2024).
Original GitHub issue: https://github.com/007revad/Synology_docker_cleanup/issues/4
Originally assigned to: @007revad on GitHub.
Carrying on the discussion on the new repo!
I reviewed and ran the new `syno_docker_cleanup.sh`, and it was successful in removing the orphan docker btrfs subvolumes, all 594 of them on my NAS. However, I'm getting an error on deleting the orphan images:
I ran the nested command on its own, and it produced the desired output.
Also, after running the cleanup, some of my containers cannot be restarted (though some still work! It might depend on which images were updated since the migration). It looks like their associated subvolumes were removed by the script; I'm not sure why they didn't come up as active.
Anyway, I'm trying to get those images back up and running, but I'm not sure how to get past that.
For example, unifi-controller was one container that's failing to start/restart, so I tried:
Here's my Container Manager logs from the affected time:
Any advice on this state?
@007revad commented on GitHub (Mar 25, 2024):
The only thing I can think of for why `docker rmi "$(docker images -f "dangling=true" -q)"` failed in the script but worked when you ran it via SSH is that maybe docker was busy scanning the subvolumes after the script deleted the orphan subvolumes.
Were the affected containers running when the script was run?
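A hedged sketch of how that one-liner could be hardened: `docker rmi` fails when the dangling-image list is empty (it receives a blank argument) and can also fail transiently while the daemon is busy. This is not the script's actual code, just one way to guard the call:

```shell
#!/bin/sh
# Sketch: remove dangling images only when some exist, retrying once
# if the docker daemon is transiently busy.

remove_dangling_images() {
    ids="$(docker images -f "dangling=true" -q)"
    if [ -z "$ids" ]; then
        echo "No dangling images to remove."
        return 0
    fi
    # Word-splitting of $ids is intentional: one argument per image ID.
    if ! docker rmi $ids; then
        echo "docker rmi failed, retrying in 10 seconds..." >&2
        sleep 10
        docker rmi $ids
    fi
}

if command -v docker >/dev/null 2>&1; then
    remove_dangling_images
else
    echo "docker not found; skipping." >&2
fi
```

Wrapping the removal in an emptiness check also avoids the "image name cannot be blank" style of error when there is nothing to delete.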
@007revad commented on GitHub (Mar 25, 2024):
This script was supposed to solve issues from running the syno_app_mover script, and not create more issues.
@007revad commented on GitHub (Mar 27, 2024):
It looks like Container Manager's .json exports are useless when the image and/or subvolume no longer exist.
@talz13 commented on GitHub (Mar 28, 2024):
edit: I just manually made a `@docker.bak` folder and moved the whole contents of the `@docker` folder to it after uninstalling Container Manager. I reinstalled it, reinstalled Portainer, and it still had the stacks I previously set up. A couple of quick stack updates later, everything is back up and running. I was able to deploy the remaining containers without issue!
Original post:
So I've started moving all my images to Portainer stacks, to be able to re-create them much more easily, but I'm still having issues with the couple of remaining images. It seems like as long as I'm using the same image as before (in this case, tonesto7/echo-speaks-server, which hasn't been updated in a couple of years), it cannot "refresh" it, or it is still referencing those deleted IDs.
All my containers use external volumes, so I have no concern about clearing out Container Manager / Docker and starting over. I'd just like to get rid of everything aside from my docker shared folder and start fresh, hopefully getting rid of the issues.
Since I'm not sure whether trying to remove the `/volume2/@docker` folder is the best course of action, I was trying to move it to `@docker.bak`, but I get a busy error:
Any ideas with that?
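A "resource busy" error on `mv` usually means a process still has files open under `@docker`, or a btrfs subvolume is still mounted inside it. A minimal diagnostic sketch, to be run as root (the helper name is hypothetical; `synopkg` is Synology's package CLI, and the default path is just an example):

```shell
#!/bin/sh
# Hypothetical helper: show what is keeping an @docker directory busy.
diagnose_busy_docker_dir() {
    dir="${1:-/volume2/@docker}"
    synopkg stop ContainerManager    # ask the package to release its files
    findmnt -R "$dir"                # btrfs/overlay mounts still inside?
    lsof +D "$dir" 2>/dev/null       # processes with files open there
    echo "If both lists are empty, 'mv $dir ${dir}.bak' should succeed."
}
```

If `findmnt` shows mounts under the directory, they need to be unmounted (or the subvolumes handled) before a rename can work.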
@007revad commented on GitHub (Mar 28, 2024):
Try creating the `@docker.bak` folder, then copying the contents of `@docker` to `@docker.bak` instead of moving `@docker`.
Edit: I just saw your edit where you did something similar.
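The copy-instead-of-move advice above can be sketched as follows (the helper name and default paths are illustrative; `cp -a` preserves ownership and permissions, though note that on btrfs it copies subvolume contents as plain directories):

```shell
#!/bin/sh
# Sketch of the copy-instead-of-move approach: duplicate the contents
# of @docker into @docker.bak without disturbing the original.
backup_docker_dir() {
    src="${1:-/volume2/@docker}"
    dst="${2:-/volume2/@docker.bak}"
    mkdir -p "$dst"
    # "$src/." copies the directory's contents, including dotfiles.
    cp -a "$src/." "$dst/" && echo "Copied $src to $dst"
}
```

Copying avoids the busy error because nothing inside `@docker` needs to be unmounted or released first.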
@quansiji9 commented on GitHub (Jan 7, 2025):
Hi, I got to know this script from https://github.com/007revad/Synology_app_mover/issues/149 while trying to reset one container after moving Container Manager, and it DID solve my problem, but I also found that none of the containers were really working. They show as running in Container Manager, but the logs inside report system files not found.
I restarted Container Manager, and then no container could start, all showing the same errors as @talz13. Luckily I did choose to back up before moving Container Manager, but somehow Synology_app_mover could not restore the @docker folder.
I manually copied the @docker folder and "everything is back up and running", so nothing is lost. But if I had no backup, everything would have broken.
DSM is 7.2.2; Synology_docker_cleanup and Synology_app_mover are both the latest version.
@007revad commented on GitHub (Jan 8, 2025):
@quansiji9
It looks like syno_app_mover was missing the "extras" folder. The path it should have used is
`/volume1/homes/xxx//syno_app_mover/ContainerManager/extras/@docker`
and not
`/volume1/homes/xxx//syno_app_mover/ContainerManager/@docker`
I'll have a look at it and see why it missed the extras folder.
@quansiji9 commented on GitHub (Jan 8, 2025):
The `@docker` folder I manually copied IS in the "extras" folder, `/volume1/homes/xxx//syno_app_mover/ContainerManager/extras/@docker`. I also tried moving @docker to `/volume1/homes/xxx//syno_app_mover/ContainerManager/@docker` and got a different error report. Although it says 'Restoring @docker to /volume5', the restore is not happening. So I guess this is some script path bug?
@007revad commented on GitHub (Jan 8, 2025):
After checking, I've found that "/extras" is only missing from the error message. When backing up or restoring any of the `/volumeX/@folders`, it never should have got to that error message.
I just tried replicating your issue but was unable to. I first tried backing up and restoring Presto File Server because it also has a `/volumeX/@folder`. When that worked okay, I thought maybe it only affects the `@docker` folder, so I backed up and then restored Container Manager, which also worked.
Are all your volumes btrfs or all ext4?
Did you backup with an older version of the script?
The only thing I did differently was answering N when the script asked if I wanted to back up `@docker`. I'll try answering yes and see if I can reproduce your issue.
@quansiji9 commented on GitHub (Jan 8, 2025):
All btrfs; all scripts are the latest version.
As I'm getting this problem (https://github.com/007revad/Synology_app_mover/issues/149) again, I uninstalled Container Manager and used syno_app_mover to restore again. This time I chose not to back up `@docker`, and everything works great.
@quansiji9 commented on GitHub (Jan 8, 2025):
Ah, another problem. This time it hurts.
Although the @docker folder was restored successfully, I found that a shared folder named 'docker' on /volume4 was deleted, and a new, empty shared folder 'docker' was created on /volume5. This time I have no backup, because I didn't expect shared folders to be modified. Sad.
And now I cannot delete or modify this empty docker shared folder, because DSM tells me Container Manager needs it.
@007revad commented on GitHub (Jan 8, 2025):
That does hurt. The docker shared folder is where the script saves the exported config files so you don't even have the json exports to import into Container Manager.
Apart from saving the exported config files in the docker share, the script does not touch the docker share.
You should set up a Hyper Backup task that backs up the docker shared folder.
@quansiji9 commented on GitHub (Jan 8, 2025):
Yeah, a lesson learned.
I saw the exported config files in the docker shared folder after the backup but didn't understand what was going on, because the docker folder is a config mount folder for me.
Maybe force-creating a shared folder causes an existing shared folder with the same name on another volume to be deleted? In my case, the `@docker` backups are on volume1, the new docker on volume5, and the docker shared folder was on volume4. Now there is this empty docker on volume5.
@alexagrippin commented on GitHub (Jan 23, 2025):
Good to see there is already a discussion going on. After running my containers for a week completely perfectly, with increased performance on the NVMe, I tried updating one container, only to observe the next day a new container with either .bak or part of the subvolume name prefixed, altogether unable to run, with the errors described.
After trying to prune and running the cleaner script, I was ultimately left with no containers working at all, yet still unable to delete all the orphans.
I recovered from the app_mover backup and now have working containers. I tried to update one container as a test, which resulted in "failed to destroy btrfs snapshot".
At this point, I'd be willing to start all my projects from scratch. So I started a new project, with new names for the containers, which works fine. I will set up all my projects "greenfield" now.
Any way to "wipe" all the existing files for good before setting up everything clean?
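One hedged sketch of such a wipe, mirroring what worked for @talz13 earlier in this thread: stop and uninstall Container Manager, move the leftover `@docker` directory aside rather than deleting it, then reinstall. The helper name is hypothetical, `synopkg` is Synology's package CLI, and the path is just an example; run at your own risk.

```shell
#!/bin/sh
# Hypothetical helper: reset Container Manager while keeping a fallback
# copy of the old @docker data. External volumes on the docker shared
# folder are not touched by this.
reset_container_manager() {
    synopkg stop ContainerManager
    synopkg uninstall ContainerManager   # leaves /volumeX/@docker behind
    # Move the leftover data aside rather than deleting it outright,
    # so it can be recovered if anything is still needed.
    mv "$1" "$1.bak" && \
        echo "Moved $1 aside; reinstall Container Manager from Package Center."
}
```

After reinstalling, redeploying via Portainer stacks (as described above) recreates the containers against fresh subvolumes.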
@007revad commented on GitHub (Jan 27, 2025):
@alexagrippin