[GH-ISSUE #4] Unable to remove orphan images, un-updated images not able to be restarted #1

Open
opened 2026-03-04 19:25:37 +03:00 by kerem · 15 comments

Originally created by @talz13 on GitHub (Mar 24, 2024).
Original GitHub issue: https://github.com/007revad/Synology_docker_cleanup/issues/4

Originally assigned to: @007revad on GitHub.

Carrying on the discussion on the new repo!

I reviewed and ran the new syno_docker_cleanup.sh, and it successfully removed the orphan Docker btrfs subvolumes, all 594 of them on my NAS.

However, I'm getting an error on deleting the orphan images:

Deleting 6 orphan images...
Error: No such image: b4d108121738
2f48543fad4f
6fd099c65bce
c29b2a13b349
d874c386dd44
16cb8800d474

I ran the nested command on its own, and it produced the desired output.

Also, after running the cleanup, some of my containers cannot be restarted (though some still work! It might have to do with which images were updated since the migration?). It looks like their associated subvolumes were removed by the script; I'm not sure why they didn't come up as active.

Anyway, I'm trying to get those images back up and running, but not sure how to get past that.

For example, unifi-controller was one container that's failing to start/restart, so I tried:

  • Removing the container
  • Importing the json backup for that container
    • No luck
  • Removed the underlying image (linuxserver/unifi-controller)
  • Re-downloaded (linuxserver/unifi-controller)
  • Tried importing container json again
    • No luck

Here's my Container Manager logs from the affected time:

![image](https://github.com/007revad/Synology_docker_cleanup/assets/3825115/4837a3ab-9e87-442d-a789-13d90bedc3a1)

Any advice on this state?


@007revad commented on GitHub (Mar 25, 2024):

The only reason I can think of why docker rmi "$(docker images -f "dangling=true" -q)" failed in the script but worked when you ran it via SSH is that Docker may have been busy scanning the subvolumes after the script deleted the orphan subvolumes.
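A more defensive version of that one-liner would also sidestep the case where the dangling-image list is empty (in which case `docker rmi ""` errors out) and retry once if the daemon is momentarily busy. A minimal sketch, not the script's actual code; `DOCKER` is overridable purely so the function can be exercised without a real Docker daemon:

```shell
# Hedged sketch (not syno_docker_cleanup.sh's actual code): delete dangling
# images only when the list is non-empty, and retry once in case Docker is
# still busy re-scanning subvolumes after the orphan cleanup.
DOCKER="${DOCKER:-docker}"

remove_dangling() {
    ids="$($DOCKER images -f "dangling=true" -q)"
    if [ -z "$ids" ]; then
        echo "no dangling images"
        return 0
    fi
    # shellcheck disable=SC2086  # splitting the ID list into arguments is intended
    $DOCKER rmi $ids || { sleep 5; $DOCKER rmi $ids; }
}
```

Calling `remove_dangling` instead of the bare one-liner avoids the empty-list failure mode and gives the daemon a second chance when it is briefly busy.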

Also after running the cleanup, some of my containers are not able to be restarted (but some still work! might have to do with which images were updated since the migration already?), and it looks like their associated subvolumes were removed with the script, not sure why they didn't come up as active.

Were the affected containers running when the script was run?


@007revad commented on GitHub (Mar 25, 2024):

This script was supposed to solve issues from running the syno_app_mover script, and not create more issues.


@007revad commented on GitHub (Mar 27, 2024):

It looks like Container Manager's .json exports are useless when the image and/or subvolume no longer exist.


@talz13 commented on GitHub (Mar 28, 2024):

edit: I just manually made a "@docker.bak" folder and moved the whole contents of the @docker folder into it after uninstalling Container Manager. I reinstalled it, reinstalled Portainer, and it still had the stacks I previously set up. A couple of quick "Update the stack" operations later, and everything was back up and running. I was able to deploy the remaining containers without issue!

original post
So I've started moving all my images to Portainer stacks, to be able to re-create them much more easily, but I'm still having issues with the couple of remaining images. It seems that as long as I'm using the same image as before (in this case tonesto7/echo-speaks-server, which hasn't been updated in a couple of years), it cannot "refresh" it, or it is still referencing those deleted IDs.

All my containers use external volumes, so I have no concern about clearing out Container Manager / Docker and starting over. I'd just like to get rid of everything aside from my docker shared folder and start fresh, hopefully getting rid of the issues.

Since I'm not sure whether removing the /volume2/@docker folder is the best course of action, I tried moving it to @docker.bak, but got a busy error:

$ sudo mv \@docker \@docker.bak
Password: 
mv: cannot move '@docker' to '@docker.bak': Device or resource busy

Any ideas on that?
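One way to narrow down a "Device or resource busy" error like the one above: the directory may itself be a mount point (Docker's btrfs storage driver mounts things under @docker) rather than merely having open files. A minimal sketch of the mount-point check, assuming GNU coreutils `stat` is available (an assumption; DSM's busybox variants may differ). `lsof +D <dir>` or `fuser -vm <dir>` would cover the open-files case:

```shell
# Hedged sketch: a directory is a mount point exactly when it sits on a
# different device than its parent directory. This only checks the
# mount-point cause of "busy"; open file handles are the other usual cause.
is_mount_point() {
    dir="$1"
    [ "$(stat -c %d "$dir")" != "$(stat -c %d "$dir/..")" ]
}

# e.g. is_mount_point /volume2/@docker && echo "still mounted; stop Docker first"
```

If the check succeeds, stopping Container Manager (so Docker unmounts its btrfs layers) should let the `mv` go through.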


@007revad commented on GitHub (Mar 28, 2024):

Try creating the @docker.bak folder, then copying the contents of @docker into @docker.bak instead of moving @docker:

if mkdir -m 710 "/volume2/@docker.bak"; then
    cp -prf "/volume2/@docker/." "/volume2/@docker.bak"
fi

Edit: I just saw your edit where you did something similar.


@quansiji9 commented on GitHub (Jan 7, 2025):

Hi, I got to know this script from https://github.com/007revad/Synology_app_mover/issues/149 while trying to reset one container after moving Container Manager, and it DID solve my problem. But I also found that my containers weren't really working: they show as running in Container Manager, but the logs inside report missing system files.
I restarted Container Manager and then no container could start, all showing the same errors as @talz13. Luckily I did choose to back up before moving Container Manager, but somehow Synology_app_mover could not restore the @docker folder.

Do you want to backup the @docker folder on /volume5? [y/n]
y
WARNING Backing up @docker could take a long time
  Backing up @docker to @docker_backup....
Backing up @docker to @docker_backup
Restoring @appconf/ContainerManager to /volume5
Restoring @appdata/ContainerManager to /volume5
Restoring @apphome/ContainerManager to /volume5
Restoring @appshare/ContainerManager to /volume5
Restoring @appstore/ContainerManager to /volume5
Restoring @apptemp/ContainerManager to /volume5
No /volume1/homes/xxx//syno_app_mover/ContainerManager/@docker to restore

I manually copied the @docker folder and "everything is back up and running", so nothing was lost. But if I had had no backup, everything would have broken.

DSM is 7.2.2; Synology_docker_cleanup and Synology_app_mover are both the latest version.


@007revad commented on GitHub (Jan 8, 2025):

@quansiji9

It looks like syno_app_mover was missing the "extras" folder. The path it should have used is /volume1/homes/xxx//syno_app_mover/ContainerManager/extras/@docker

and not
/volume1/homes/xxx//syno_app_mover/ContainerManager/@docker

I'll have a look at it and see why it missed the extras folder.


@quansiji9 commented on GitHub (Jan 8, 2025):

missing the "extras" folder

The @docker folder I manually copied IS in the "extras" folder: /volume1/homes/xxx//syno_app_mover/ContainerManager/extras/@docker. I also tried moving @docker to /volume1/homes/xxx//syno_app_mover/ContainerManager/@docker and got a different error:

Do you want to backup the @docker folder on /volume5? [y/n]
n
Restoring @appconf/ContainerManager to /volume5
Restoring @appdata/ContainerManager to /volume5
Restoring @apphome/ContainerManager to /volume5
Restoring @appshare/ContainerManager to /volume5
Restoring @appstore/ContainerManager to /volume5
Restoring @apptemp/ContainerManager to /volume5
cp: cannot stat '/volume1/homes/xxx//syno_app_mover/ContainerManager/extras/@docker': No such file or directory
Restoring @docker to /volume5

Although it says 'Restoring @docker to /volume5', no restore is actually happening. So I guess this is some script path bug?


@007revad commented on GitHub (Jan 8, 2025):

After checking, I've found that "/extras" is only missing from the error message. When backing up or restoring any of the /volumeX/@folders, it never should have got to that error message.

I just tried replicating your issue but was unable to. I first tried backing up and restoring Presto File Server, because it also has a /volumeX/@folder. When that worked okay, I thought maybe it only affects the @docker folder, so I backed up and then restored Container Manager, which also worked.

![image](https://github.com/user-attachments/assets/55669d04-1543-40a3-9876-87aba50a38f4)

Are all your volumes btrfs or all ext4?

Did you backup with an older version of the script?

The only thing I did different was answering N when the script asked if I wanted to backup @docker. I'll try answering yes and see if I can reproduce your issue.


@quansiji9 commented on GitHub (Jan 8, 2025):

All btrfs; all scripts are the latest version.

As I was getting this problem https://github.com/007revad/Synology_app_mover/issues/149 again, I uninstalled Container Manager and used syno_app_mover to restore again. This time I chose not to back up @docker, and everything works great.

Do you want to backup the @docker folder on /volume5? [y/n]
n
Restoring @appconf/ContainerManager to /volume5
Restoring @appdata/ContainerManager to /volume5
Restoring @apphome/ContainerManager to /volume5
Restoring @appshare/ContainerManager to /volume5
Restoring @appstore/ContainerManager to /volume5
Restoring @apptemp/ContainerManager to /volume5
  Restoring @docker to /volume5......

@quansiji9 commented on GitHub (Jan 8, 2025):

Ah, another problem. This time it hurts.

Although the @docker folder was restored successfully, I found that a shared folder named 'docker' on /volume4 was deleted, and a new, empty 'docker' shared folder was created on /volume5. This time I have no backup, because I didn't expect shared folders to be modified. Sad.

And now I cannot delete or modify this empty docker shared folder, because DSM tells me Container Manager needs it.


@007revad commented on GitHub (Jan 8, 2025):

That does hurt. The docker shared folder is where the script saves the exported config files, so you don't even have the json exports to import into Container Manager.

Apart from saving the exported config files in the docker share, the script does not touch the docker share.

You should set up a Hyper Backup task that backs up the docker shared folder.


@quansiji9 commented on GitHub (Jan 8, 2025):

You should setup a hyper backup task that backs up the docker shared folder.

Yeah, a lesson learned.

I saw the exported config files in the docker shared folder after the backup but didn't understand what was going on, because the docker folder is my config mount folder.

Maybe force-creating a shared folder causes an existing shared folder with the same name on another volume to be deleted? In my case, the @docker backups are on volume1, the new @docker is on volume5, and the docker shared folder was on volume4. Now there's this empty docker on volume5.


@alexagrippin commented on GitHub (Jan 23, 2025):

Good to see there is already a discussion going on. After running my containers perfectly for a week, with increased performance on the NVMe, I tried updating one container, only to observe the next day that there was a new container with either ".bak" or part of the subvolume name prefixed, altogether unable to run, with the described errors.

After trying to prune and run the cleaner script, I was ultimately left with no containers working at all, but still unable to delete all the orphans.

I recovered from the app_mover backup and now have working containers. I tried updating one container as a test, which resulted in "failed to destroy btrfs snapshot".

At this point, I'd be willing to start all my projects from scratch. So I started a new project with new names for the containers, which works fine. I will now set up all my projects "greenfield".

Any way to "wipe" all the existing files for good before setting up everything clean?


@007revad commented on GitHub (Jan 27, 2025):

@alexagrippin

  1. Backup your docker shared folder.
  2. Then uninstall Container Manager (and tick the box that says "Delete the items listed above when uninstalling the package").
  3. Reinstall Container Manager on the NVMe volume.
  4. Restore the data you backed up in step 1 to the docker shared folder.
  5. Recreate your projects/containers.
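Step 1 above can be sketched as a simple tar archive. `backup_share` and the example paths are hypothetical, not from the thread; substitute the volume your docker share actually lives on:

```shell
# Hedged sketch of step 1: archive the docker shared folder before
# uninstalling Container Manager.
backup_share() {
    src="$1"   # e.g. /volume1/docker
    dest="$2"  # e.g. /volume1/homes/you/docker-share.tar.gz
    # -C keeps archive entries relative to the share's parent directory
    tar -czf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# e.g. backup_share /volume1/docker "/volume1/homes/you/docker-$(date +%F).tar.gz"
```

Restoring in step 4 is the reverse: extract the archive back into the recreated docker shared folder.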