mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #514] DSM 7.3 - works without major issues #677
Originally created by @juliansteenbakker on GitHub (Oct 8, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/514
Originally assigned to: @007revad on GitHub.
I just wanted to state that I can confirm this script is still running on DSM 7.3 without any issues! After the upgrade, the drive shows as missing. After running the script and rebooting, everything is back to normal 😄 Tested on a DS920+.
@witchlord32 commented on GitHub (Oct 8, 2025):
Thank you for testing that. Any issues, as far as you know, when using an M.2 SSD as a storage pool?
@juliansteenbakker commented on GitHub (Oct 8, 2025):
The only issue I had is that the packages I installed (Container Manager, Web Station) on the NVMe volume were not reachable, and after the update they got reinstalled on the wrong volume. This made it look like all my containers were gone. But after removing Container Manager and Web Station and installing them on the NVMe drive, everything worked without a problem.
@witchlord32 commented on GitHub (Oct 8, 2025):
That's exactly my situation; I've got Container Manager and ALL my containers on the NVMe volume. Thank you for sharing your experience, you saved me a big headache.
@jodewee commented on GitHub (Oct 8, 2025):
Did you have the script defined as a start-up script?
@juliansteenbakker commented on GitHub (Oct 8, 2025):
I haven't, so that's probably why it didn't work straight out of the box after the update.
@MAngel666 commented on GitHub (Oct 8, 2025):
Unfortunately, I'm having trouble with a DS1819+: the drive is being recognized as a cache drive in Storage Manager, and it also says "Not supported by the current DSM version." I've run the script several times and restarted the device several times. I use the drive in a storage pool with RAID type "basic". It is a Samsung SSD 990 PRO 2TB.
I haven't used the --force option yet. Should I?
@juliansteenbakker commented on GitHub (Oct 8, 2025):
@MAngel666 I ran the following as root:
/volume4/homes/x/scripts/Synology_HDD_db-main/syno_hdd_db.sh -nr --autoupdate=3
Then, after a reboot, everything worked.
@jodewee commented on GitHub (Oct 8, 2025):
I have had the force option enabled for a year now, without issues, but I have not done the update to 7.3 yet:
/volume1/scripts/Synology_HDD_db-main/syno_hdd_db.sh -nr -f --autoupdate=3
@MAngel666 commented on GitHub (Oct 8, 2025):
Sorry, I was on the wrong Syno (I have 2 of them) LOOOOOL. Everything works as expected!
@Staticznld commented on GitHub (Oct 8, 2025):
Upgraded to 7.3 without any issues.
Running the scheduled task at startup as mentioned here.
Running Docker and Mailstation plus from Volume2 (NVMe volume WD red) without issues.
DS923+ BTW.
@pergolafabio commented on GitHub (Oct 8, 2025):
I have read that Synology now allows third-party HDDs again since 7.3, so there is no more need for this tool?
https://nascompares.com/news/synology-reverse-the-hard-drive-policy-in-dsm-7-3-we-win/
@principalarchivist commented on GitHub (Oct 8, 2025):
Totally irrelevant. Pergo, you're referring to whether or not the Synology will accommodate drives that haven't been certified and labeled by Synology as compatible. The script on this GitHub page is all about using M.2 drives for storage, rather than as cache.
@principalarchivist commented on GitHub (Oct 8, 2025):
My experience is similar - after updating DSM to 7.3, I had to run the script and reboot. Container manager (which had been installed on the volume created by the script initially) had been reinstalled to Volume 1 (hard drive) in the course of the DSM update. After the update, DSM invited me to "repair" container manager, which was of course futile. After "repairing," I uninstalled container manager and reinstalled to the drive created by the script, and all was well. Obviously, when uninstalling container manager, do NOT check the box that would uninstall your containers and content.
@mswildma77 commented on GitHub (Oct 8, 2025):
Hoping you all can clear something up for me. I was on the fence about upgrading from my old 1817 to the 1825 due to the limitations on drives, but with the 7.3 change that concern seems to have passed, although I had planned to use the DB updater. My question is about using non-Synology M.2 drives as cache in the 1825. I can't tell if that will work, or if I will still need this tool, or if there is an M.2-only tool, assuming my IronWolf drives work without attitude on 7.3?
Thanks all!!!!
@007revad commented on GitHub (Oct 8, 2025):
@mswildma77
Yes, according to the latest version of this Synology page: https://kb.synology.com/en-us/DSM/tutorial/Drive_compatibility_policies, you'd still need this script.
That page says Plus Series models cannot create a volume or cache if the NVMe drive is not on the compatibility list:
@mswildma77 commented on GitHub (Oct 8, 2025):
Thanks for that. Good to know that nothing has really changed yet: I will still save a ton of money buying Samsung SSDs that are better than the Synology ones, and still have to use a workaround to run them. Good times....
@MAngel666 commented on GitHub (Oct 8, 2025):
BTW: Where should I actually save the script so that it "survives" an update? I noticed that the "/scripts" directory I created for it was removed during the update.
@KorperICT commented on GitHub (Oct 8, 2025):
I happen to have received my 1825+ today with 8x 24TB Seagates. I was going to use your great script but ended up not needing it. The disks were recognized. I did have a Synology drive lying around and did the updates and first install with that and one Seagate in slot 2. It gave me the option right away to use the Seagate in a two-disk array. After the initial updates and wizard, I shut it down (without creating volumes), removed the Synology drive, and installed the 7 other Seagates. I created the volume and it's now doing its thing. I used the Synology drive to give it the best chance of working, but in hindsight I think it would have worked with just the Seagates.
@principalarchivist commented on GitHub (Oct 8, 2025):
Maybe I’m the stupid one, but I didn’t think the script had anything to do with enabling the use of uncertified hard drives. I use it to use M.2 drives as storage, when Synology only intends them to be used as a cache. If the script can be used to allow running of uncertified hard drives, that’s news to me.
@principalarchivist commented on GitHub (Oct 8, 2025):
I store the script somewhere in the homes file structure, because Synology never messes with that structure when updating.
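Storing scripts under the homes share works because on DSM /var/services/homes is a symlink to the homes share on whichever volume currently hosts it, so a schedule that references the symlink keeps working after the share moves. A minimal sketch of that idea, run in a temp directory (the volume layout here is a stand-in, not the real DSM filesystem):

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)
mkdir -p "$tmp/volume1/homes/Dave/.scripts" "$tmp/volume2/homes/Dave/.scripts"
printf 'echo hello\n' > "$tmp/volume1/homes/Dave/.scripts/task.sh"

# The stable path is a symlink, like /var/services/homes -> /volumeX/homes
ln -s "$tmp/volume1/homes" "$tmp/services_homes"
out1=$(sh "$tmp/services_homes/Dave/.scripts/task.sh")   # runs from volume1

# "Move" the homes share to another volume; only the symlink target changes
mv "$tmp/volume1/homes/Dave/.scripts/task.sh" "$tmp/volume2/homes/Dave/.scripts/task.sh"
rm "$tmp/services_homes"
ln -s "$tmp/volume2/homes" "$tmp/services_homes"
out2=$(sh "$tmp/services_homes/Dave/.scripts/task.sh")   # same path still works

echo "$out1 / $out2"
rm -rf "$tmp"
```

The scheduled task never needs editing because it only ever sees the symlinked path.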
@007revad commented on GitHub (Oct 8, 2025):
@MAngel666 The script should be in a shared folder that is on a HDD or 2.5 inch SSD so it survives DSM updates. And NOT on an NVMe volume as it may not be available until after the script has run.
Or you could create a root level /opt folder and store the script in there. DSM updates normally do not delete anything that is in /opt
I like to keep my scheduled scripts in /var/services/homes/Dave/.scripts/. Then I can schedule /var/services/homes/Dave/.scripts/<script-name>.sh instead of /volume1/homes/Dave/.scripts/<script-name>.sh. This way, if I ever move the homes shared folder to a different volume, the schedule still works.
@007revad commented on GitHub (Oct 8, 2025):
@KorperICT Yes, with DSM 7.3 the initial setup would have worked with just the Seagates. But with DSM 7.2.2 you would have needed the Synology drives, or to use telnet to run a couple of commands.
If you decide you want an NVMe cache or volume with non-Synology NVMe drives, you will need syno_hdd_db.
@007revad commented on GitHub (Oct 8, 2025):
@principalarchivist Synology_HDD_db was originally written for models that have had annoying warnings about unverified drives since DSM 7.0.1. These warnings used to only appear on business and enterprise models, but you could still use any drive.
Since DSM 7.2 was released the script also allows you to use non-Synology NVMe drives as a volume, and other things.
Allowing the use of uncertified hard drives (in models and DSM versions with restrictions) is just a bonus of how the script does what it does.
@MAngel666 commented on GitHub (Oct 9, 2025):
I place it in /opt/own_scripts. It seems to me to be a more suitable place for global scripts. Thank you.
@jodewee commented on GitHub (Oct 9, 2025):
without any issues here, scheduled task for the script at startup.
DS923+, NVMe pool 4TB
@MAngel666 commented on GitHub (Oct 9, 2025):
Same problem here... my M.2 is volume11, and after the update most apps are still there, but:
22) /volume1 Antivirus by McAfee
23) /volume1 Log Center
24) /volume1 Node.js v20
25) /volume1 Snapshot Replication
26) /volume1 Synology Application Service
27) /volume1 Synology Drive Server
28) /volume1 Synology Photos
29) /volume1 Universal Viewer
It's possible that this is because the HDD script was deleted during the update to 7.3 (it was located in /scripts). If it's copied to a location that isn't deleted (e.g., /opt/...) and started via the Task Scheduler during startup and shutdown, as described in the documentation, then the problem probably doesn't occur. Can anyone confirm this?
@principalarchivist commented on GitHub (Oct 9, 2025):
My HDD script was NOT deleted during the update to 7.3, I have it set to run at every bootup, and I had the same problem. I believe it's because the reboot from the update doesn't run the script for some reason, or perhaps runs it unsuccessfully. Following my update, I got notifications that DSM couldn't find Volume 2 or somesuch. I wish I had written down the exact wording, but I didn't. In Storage Manager, the M.2 drive was a red box. I found that I had to reboot after the update to get DSM to recognize the M.2 drive as a storage device. YMMV.
@mswildma77 commented on GitHub (Oct 9, 2025):
I have always noticed that on the more major build updates, like 7.2.x to 7.3, that first reboot runs a more in-depth startup, something like how Microsoft after major updates will run through the startup questionnaire all over again even though you have answered it a million times and have said you don't want Edge to be your default browser....;). Maybe that type of initial startup skips some of those scheduled tasks or mount points? Mind you, I haven't gotten my 1825+ yet, so I am just speculating. I was also thinking of maybe trying to keep the script on a permanently installed old thumb drive stuck in a USB port; maybe that will work better and not risk getting wiped during an update, plus it's always outside of any RAID volume. Lord knows I have a bazillion useless slow USB2 thumb drives sitting around collecting dust.
@reserve85 commented on GitHub (Oct 9, 2025):
Same here: No problems after the update with the NVMEs on a DS1522+, script task running on bootup.
But I needed to reconnect the UPS, that was not recognized after first bootup...
Thank you for this great script!
@mbuzina commented on GitHub (Oct 11, 2025):
One additional info:
DSM 7.3 will show the status of your drives as healthy, but Synology Migration Assistant will not recognize 3rd-party drives as healthy. It will prevent migration with a notice that you should check the abnormal storage status (I waited out the verification of the pool, as I initially assumed it was caused by that).
I then ran your script (thanks!) and now the migration is running.
@mswildma77 commented on GitHub (Oct 11, 2025):
In prep for the new 1825+, I ran the script on my older 1817+ since I had some IronWolf 20TB drives that, honestly, for the longest time I couldn't figure out why the IronWolf tests didn't work on, until I realized, based on your tool, that the version still in DSM before they dropped it didn't support those drives, just the older 8TB ones. Anyway, I ran the script and poof, now all my drives get their IronWolf tests. Will you be updating the script to support and download version 3 of IHM so that doesn't happen again when they release newer, bigger drives that aren't part of the 2.5.1 testing?
@007revad commented on GitHub (Oct 12, 2025):
@mbuzina Thanks. I saw that mentioned on reddit.
@007revad commented on GitHub (Oct 12, 2025):
@mswildma77 I wasn't aware that there was a version 3 of IHM.
This script uses the x86_64 IronWolf Health binary from QNAP's QuTScloud firmware (which hasn't been updated since March 2025).
I've not been able to successfully extract QNAP QuTS Hero firmware to see if the latest version includes a newer version of IronWolf Health. I can unpack Asustor's latest ADM but I've not found any binary file that could be IHM. Maybe Asustor included IHM in another binary or library.
@mswildma77 commented on GitHub (Oct 12, 2025):
Thanks for the info. I downloaded the latest TerraMaster software for a device and, using 7-Zip, looked at the file package and found a file called IHM. It was small, so I'm not sure if that is it, since I have never seen the other file. Anyway, maybe a package from them might be easier for you to extract from.
@007revad commented on GitHub (Oct 12, 2025):
Thanks. Unfortunately the latest TerraMaster TOS contains the same x86_64 IronWolf Health version as QNAP. But at least I can easily check TOS each time there's a new update.
I just discovered that with Asustor, IronWolf Health Management is a package you can install. But it contains SeaDragon_DHMr 2.5.1 and IHM 3.1.
But the dhm_tool in DSM, IHM in TOS and stx_ihm in QuTS are all SeaDragon_DHMr 2.5.1.
So I'm wondering what IHM_x86_64-redhat-linux is used for.

@Samt563 commented on GitHub (Oct 24, 2025):
Hello everyone,
First of all, thank you for the awesome script. I've got a DS918+ and a DS920+ (not yet updated), but I test on the DS918+ first. At first, everything worked as expected: manual DSM update (because of EOL after that update for the DS918+); the script runs on bootup as "/volume1/scripts/Synology_HDD_db/syno_hdd_db.sh -nr" -> NVMes were missing -> ran the script manually -> reboot -> NVMes back online. I also had to uninstall/reinstall Container Manager to the NVMe volume. After that I noticed that Synology Office doesn't work (repair is offered but doesn't work) because Synology AI Console wasn't started and refuses to start. Uninstalling doesn't work for any of the three apps (Drive, AI, Office) because they are dependent on each other and cannot be uninstalled due to an error without any further info. Synology AI Console just doesn't start, with "System error. ...Try again later.". All the mentioned apps were and are installed on the NVMes.
Does anyone have ideas on how to proceed?
@MAngel666 commented on GitHub (Oct 24, 2025):
Done on my second Synology (DS1621+) without problems. Before the update, I installed the HDD script in /opt/own_scripts and it all worked without problems. Thx!
@007revad commented on GitHub (Oct 25, 2025):
@mswildma77
I tried renaming IHM_x86_64-redhat-linux to dhm_tool and replaced dhm_tool in DSM but it didn't work.
@007revad commented on GitHub (Oct 25, 2025):
@Samt563
I would try stopping Drive and Office. Then delete or rename:
Finally install them again, making sure to select your NVMe volume as the destination volume.
The alternative is deleting and recreating 20 or more symlinks, and editing a few files, so they point to your NVMe volume.
@reserve85 commented on GitHub (Oct 28, 2025):
Anyone tried 7.3.1? I guess no issues?
@007revad commented on GitHub (Oct 28, 2025):
I'm running DSM 7.3.1 on my DS1821+ and have not found any issues.
@mswildma77 commented on GitHub (Oct 28, 2025):
Just installed it on my DS1817+, DS916+ & DS920+ and all good. I just reran the script to update the drive database and reinstall the IronWolf tools, so I know the install location for that was kept. I also had to rerun the Video Station install, as I had to manually remove Video Station before the update, and then re-disable the AME tools.
@MAngel666 commented on GitHub (Oct 30, 2025):
With my DS1819+ and version 7.3.1, I had the same problem as with version 7.3. The script was located in /opt/own_scripts, but after rebooting with 7.3.1, /opt was suddenly gone and the M.2 drive wasn't recognized. After running the HDD script again, everything was fine: /opt and all its contents were back. I just had to use the application script to move the database again.
So, /opt doesn't seem to be the ideal location for scripts. Does anyone have a suggestion for where to place the scripts so they're always there after a reboot?
@MAngel666 commented on GitHub (Oct 30, 2025):
On my DS1621+ with 7.3.1 no problems, all works as expected (no /opt problem).
@Samt563 commented on GitHub (Oct 30, 2025):
Hello again,
@007revad thanks for those hints, they were useful.
I had to manually remove several things to get it working again, with data lost. (Test NAS, so it doesn't matter as much, and I could restore from backup after I removed everything manually, because it wouldn't allow me to overwrite it from Hyper Backup.)
Now, on the next upgrade from 7.3.x to 7.3.1, the same problem occurred (screenshot attached).
The script is at /volume1/scripts/Synology_HDD_db/syno_hdd_db.sh and scheduled on bootup as mentioned in the wiki.
I really don't know why that is happening. Any thoughts on that?
Greetings Sam
@Samt563 commented on GitHub (Oct 30, 2025):
And hello again,
After doing nothing, just restarting, the volume2 NVMe is back online, but:
Edit:
repair did work this time.
@mswildma77 commented on GitHub (Oct 30, 2025):
I can't remember which thread I read it in, but I placed all my scripts (this one and the Video Station one) in the admin folder under homes. I didn't put them a level down within a scripts folder, just straight up in admin, and they're the only thing there. That folder seems to stay after an update, and you can get to it with the path /var/services/homes/admin, so it is "volume" independent and you don't have to worry about volume numbers changing, which happened to me years ago when I placed drives back in the cage in the wrong order; I lost volume1 and now start at 2. That path means you don't have to worry about which volume the folder is on, as long as it is still there. And so far it has stayed.
@ElTonno79 commented on GitHub (Nov 5, 2025):
Upgraded DS1522+ from 7.2.2-72806 to 7.3.1-86003 without issues. Set task at startup.
@AHrubik commented on GitHub (Nov 29, 2025):
Hey all. Thanks so very much for the work on this. I wanted to report that migrating from an 1819 running DSM 7.2 to a 2422 running 7.3 flagged all the drives as "unverified". The script ran without issues, but drives 7 and 8 reported system partition errors afterwards. A reboot did not change this status. I was able to successfully repair the system partition using the DSM function, and now everything shows as green. Thought someone would want to know about that odd behavior.
@MAngel666 commented on GitHub (Nov 30, 2025):
On my DS1819+ here, the last update to DSM 7.3.1-86003 Update 1 worked perfectly.
@TorbenSchreiter commented on GitHub (Dec 3, 2025):
On a DS920+ did an upgrade to 7.3.2-86009. Script was on a separate Shared Drive located on the HDD volume with autorun on shutdown and startup. Despite that, the NAS actually came up with the M.2s not being recognized upon first boot:
A simple reboot immediately fixed the problem and the M.2s re-appeared working after that.
The Container Manager package needed a repair, which apparently moved it from the M.2 volume to the HDD volume. Some other packages such as HyperBackup, Log Center and @database also appear to have been moved to the HDD volume as part of the update process. Have moved these packages back to the M.2 volume w/ Synology_app_mover separately.
How can I check again, which volume the System Partition is on?
@007revad commented on GitHub (Dec 3, 2025):
@TorbenSchreiter The system partition is mirrored (RAID 1) across all HDD and 2.5 inch SSDs in the NAS.
I'm investigating ways to prevent DSM from repairing missing packages back to a HDD volume when the NVMe volume is missing.
@mswildma77 commented on GitHub (Dec 5, 2025):
Wanted to give an update: I finally got around to upgrading from the DS1817+ to the DS1825+. The script had been working fine in the old unit, and after the HDD migration everything was detected in the new unit, including all the IHM testing now available for all the drives (as it had been since I started running the script) and the two new 2TB Samsung 990 Pro SSDs I installed for SSD cache, because they were way cheaper than the slower ones from Synology, and let's be honest, those Samsung SSDs even as consumer models generally have lifespans longer than most enterprise models. Now if I can just figure out why some network settings got stupid on the upgrade, but that will be a task for a different day. Just happy the drives work, all of them. Thanks again!!!!!!!
@mswildma77 commented on GitHub (Dec 5, 2025):
Okay, I take back my earlier comment. Something did happen. It looks like at 12am the system automatically updated the drive compatibility list, something I don't recall ever happening before, and the script has been run on the new unit. So I instantly got an auto email that the Samsung 990s were not on the list, and it killed my SSD cache. I manually ran the script and the drives came back online along with the cache, but I'm not sure how to stop it from auto-updating the drive database, as I am already running the script with the -nrI options: "/var/services/homes/admin/Synology_HDD_db/syno_hdd_db.sh -nrI --autoupdate=3". Any thoughts?
@007revad commented on GitHub (Dec 5, 2025):
After comparing DSM 7.2.2, 7.3, 7.3.1 and 7.3.2 I see that since DSM 7.3 Synology have stopped checking synoinfo.conf for the drive_db_test_url setting. So the method the script uses to prevent drive database updates has been removed.
I'll look for a new way to prevent drive database updates.
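For context on what was lost: as I understand it from this thread, the script blocked drive database updates by setting a drive_db_test_url key in synoinfo.conf so DSM's update check had nowhere valid to go, and DSM 7.3 simply stopped reading that key. A minimal sketch of that kind of key=value edit, run here against a temporary stand-in file rather than the real /etc.defaults/synoinfo.conf (set_conf_key is a hypothetical helper, not part of the script):

```shell
#!/bin/sh
set -eu

# Set key="value" in a synoinfo.conf-style file: replace the line if the
# key exists, otherwise append it. Idempotent by construction.
set_conf_key() {
    file=$1 key=$2 value=$3
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=\"${value}\"|" "$file"
    else
        printf '%s="%s"\n' "$key" "$value" >> "$file"
    fi
}

conf=$(mktemp)                          # stand-in for /etc.defaults/synoinfo.conf
printf 'support_m2_pool="no"\n' > "$conf"
set_conf_key "$conf" drive_db_test_url 127.0.0.1
set_conf_key "$conf" drive_db_test_url 127.0.0.1   # second call adds no duplicate
grep drive_db_test_url "$conf"
```

On a real NAS the edit would target the actual synoinfo.conf files, and since DSM 7.3 this particular key reportedly no longer has any effect.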
@mswildma77 commented on GitHub (Dec 5, 2025):
Ruh roh....
At least I am not crazy. For now I will keep an eye out for when it runs automatically, and then, if that is consistent, make a second task to run right after it. If I can't find a pattern, I'll just make another task that runs every hour or so, so if the DB gets updated it will get fixed in short order. But hopefully you find a more permanent solution.
@007revad commented on GitHub (Dec 5, 2025):
DSM should only check for a drive database update:
@mswildma77 commented on GitHub (Dec 5, 2025):
After some googling myself, it looks like on the 2025 models they made checking for updates automatic, whereas on 2024 and older models it was scheduled, and yes, it appears midnight is the time. I am going to clock it over the next few days myself, and until (and unless) you can find a way to stop the update again, or they stop this new mess they made, I'll set a task to run at like 12:01am so it instantly brings (in my case) the cache SSDs back online. At least it still likes all the IronWolf drives and didn't start throwing up alerts and stuff. That would not have been fun.
@witchlord32 commented on GitHub (Dec 10, 2025):
Just updated my DS923+ to DSM 7.3.2-86009 without issues (volume2 on M.2 SSDs).
@007revad commented on GitHub (Dec 11, 2025):
DSM updates the database (when there's a newer version available) via a hidden system package installed in /var/packages/SynoOnlinePack_v2.
I assume that DSM uses Package Center to update this package, in which case disabling Package Center auto updates may prevent the drive databases from auto updating. See https://github.com/007revad/Synology_Information_Wiki/blob/main/pages/disable_auto_updates.md
On my DS925+ I already had package center auto updates disabled before I updated to DSM 7.3.2, so this could explain why all my HDDs and NVMe drives are healthy and the NVMe volume survived the DSM update.
Before I updated to DSM 7.3.2 I also ran a script to stop all packages that are installed on an NVMe volume, so package center wouldn't "repair" them to the HDD volume. https://github.com/007revad/Synology_HDD_db/blob/main/syno_hdd_shutdown.sh
syno_hdd_shutdown.sh creates a log of the packages it stopped. I intend to update syno_hdd_db.sh to check this log and if the NVMe volume is available then start the packages that syno_hdd_shutdown.sh stopped.
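The stop-and-log round trip described above can be sketched as follows. This is a simplified illustration of the idea, not the actual syno_hdd_shutdown.sh: synopkg is stubbed with a shell function so the sketch runs anywhere, and the package names are hypothetical (on a real NAS you would call /usr/syno/bin/synopkg and derive the list from what is installed on the NVMe volume).

```shell
#!/bin/sh
set -eu
log=$(mktemp)

# Stub stand-in for DSM's synopkg; echoes the call instead of acting
synopkg() { echo "synopkg $*" >&2; }

# Shutdown side: stop each NVMe-volume package and record its name
for pkg in ContainerManager WebStation; do   # hypothetical package list
    synopkg stop "$pkg"
    echo "$pkg" >> "$log"
done

# Boot side: once the NVMe volume is back, restart the logged packages
started=""
while read -r pkg; do
    synopkg start "$pkg"
    started="$started $pkg"
done < "$log"
echo "restarted:$started"
```

Logging the names at stop time means the boot-time script needs no knowledge of which packages live on the NVMe volume; it just replays the log.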
@Ang3rst commented on GitHub (Dec 12, 2025):
DS920+
From: 7.2.2-72806 Update 5
To: 7.3.2-86009
The script did run, since I did see 1 or 2 lines indicating it executed something. Although I thought it was not right, I pushed forward anyway.
To give others that might be in the same sort of starting situation an indication / what I had to do:
Docker
Hyperbackup
All up and running again! Thanks @007revad [Time, Effort, Knowledge, and SHARING]
@007revad commented on GitHub (Dec 12, 2025):
@mswildma77
Where did you find that information? Was it Google's AI response like this one from when I searched for "when does synology dsm check for drive database updates":
That "Newer NAS models (2025+) check daily and install automatically" is the same behaviour as older models which check at midnight, and when a new drive is inserted, and update the SynoOnlinePack_v2 or SynoOnlinePack package silently and automatically.
@007revad commented on GitHub (Dec 12, 2025):
@Ang3rst When fixing Hyper Backup did you relink to the original backup task, like in step 5 here: https://kb.synology.com/en-uk/DSM/tutorial/How_to_relink_to_existing_backup_tasks_of_Hyper_Backup
@007revad commented on GitHub (Dec 12, 2025):
I changed version="1002" to version="1001" in the SynoOnlinePack_v2 package, then checked if there was an update available with synopkg checkupdate SynoOnlinePack_v2, and it said no (returned 1).
EDIT: Oops! I changed the SynoOnlinePack_v2 version on my DS925+ but ran synopkg checkupdate SynoOnlinePack_v2 on my DS1821+.
When I run synopkg checkupdate SynoOnlinePack_v2 on the correct Synology, it returns 0 (a package update is available).
@mswildma77 commented on GitHub (Dec 13, 2025):
It started with the Google AI (and I do think the computers will take over; has nobody seen a sci-fi movie?), but I ended up at a KB article from Synology after continuing down the rabbit hole. Thankfully my browsing history found it.
https://kb.synology.com/en-global/DSM/tutorial/stop_system_package_auto_update
@mswildma77 commented on GitHub (Dec 24, 2025):
Just wanted to give an update. Things have been fine since my last post a couple of weeks back, after making sure the auto updating was disabled, but today just before 11am EST (10:58:51 to be exact) the drive database updated itself and said my Samsung NVMe SSDs were bad bad drives, as they were not on the nice list...... I manually ran the script and the cache came back online. Nothing else seems to have updated that I can tell.
@007revad commented on GitHub (Dec 24, 2025):
My DS925+ also updated its drive database today, at 7:09 am, though the shell shows 6:09 am (no DST).
I also got an email at 7:09 am warning that my NVMe storage pool was critical because the drive was not on the compatible drive list.
All my shared folders on the NVMe volume were still accessible so it really wasn't a problem.

Interestingly storage manager's HDD/SSD section does not show the drive's storage pool allocation, temperature or serial number.
So my method of blocking auto drive database updates only blocked the "Update Now" button from updating.
@007revad commented on GitHub (Dec 24, 2025):
Actually now I'm thinking that I never ran v3.6.115 or v3.6.116 on my DS925+. It still has v3.6.112 which did not have the bug fix for disabling compatible drive database auto update in DSM 7.3.
My DS1821+, which has v3.6.116, didn't update its drive database this morning.
@007revad commented on GitHub (Dec 24, 2025):
FYI The only change in this new drive database is Synology have added HAT5310-16T
@mswildma77 commented on GitHub (Dec 25, 2025):
Thanks, I just checked: I was still running 3.6.112, since I hadn't rebooted in a bit and it hadn't tried to update itself in a few weeks, requiring me to manually run it. I guess I will see what happens now. Maybe I will set a weekly run to catch any updates that I wouldn't otherwise get without rebooting. Thanks and happy holidays.
@AHrubik commented on GitHub (Dec 29, 2025):
DS2422+
Script version: 3.6.111 ---> 3.6.116
Reporting an update. I can confirm that they are updating the DB while the device is running now. I was running version 111 of the script as of my last reboot, and I woke up this morning to find all my drives unverified. I ran the script manually and it ran as expected, updating to version 116 and re-adding my industry-standard NAS drives to their bullshit database.
@TorbenSchreiter commented on GitHub (Jan 17, 2026):
Confirming that my Hyper Backup also lost all of its config. When creating a new backup task with "multiple versions" to be backed up, it is possible to relink. But nonetheless, in the wizard you still need to manually set all the settings from the _Syno_TaskConfig file again, as described by @Ang3rst. I also had my encryption key file. It will relink, but you have to manually enter all the backup job's settings, despite the fact that they are contained on the backup volume you are relinking to... :(
Do we know why this package lost its settings in the first place?
@witchlord32 commented on GitHub (Jan 31, 2026):
Hi there, just to let you know that the script is perfectly working with DSM 7.3.2-86009 Update 1 (DS923+ with storage pool and containers on m.2 drives)
Thank you again for the great work!
@007revad commented on GitHub (Jan 31, 2026):
@TorbenSchreiter
Needing to remember what settings you previously used is annoying. Two weeks ago I looked at getting the settings used in the wizard from the existing Hyper Backup tasks, but I got sidetracked and forgot about it.
@catalans1980 commented on GitHub (Feb 26, 2026):
Tried with my DS720+, 18GB RAM and 2 NVMe drives running as a volume.
The script didn't run on boot, I don't know why; I have it in Task Scheduler, as root: /volume1/[user]/scripts/syno_hdd_db.sh -nr
Should I add -f?
I had to SSH into the NAS, run it, and reboot.
After that, I had to uninstall Container Manager and reinstall it on volume2 (NVMe).
After that, everything works perfectly, but I would like to solve the issue of the script not running on boot.
@007revad commented on GitHub (Feb 26, 2026):
Did you mean /volume1/homes/[user]/scripts/syno_hdd_db.sh -nr
@catalans1980 commented on GitHub (Feb 26, 2026):
No, I don't use the "home" / "homes" volume. My script is in /volume1/[user]. With SSH I run it from that directory and it works fine.
@007revad commented on GitHub (Feb 26, 2026):
@catalans1980
When you say "The script didn't run on boot" do you mean it didn't run or it ran but didn't do anything?
Can you provide a screenshot of the scheduled task's "Action > View Result", which should look like this:

You need to have "Settings > Save output results" setup for task scheduler to save the results of user-defined scheduled tasks.

Also make sure you have the scheduled task set to send you an email when it runs.

@catalans1980 commented on GitHub (Feb 26, 2026):
I didn't have the setting active to save results. I will from now on.
The thing is, when it runs, it normally sends me an email. It didn't this time.
Anyway, the next time I use it, I'll come back here if it doesn't work :) Thanks!
@cgoudie commented on GitHub (Feb 28, 2026):
To those who come here just before an upgrade to 7.3: as a heads up, the upgrade wipes the /root folder completely, so if you were running the script from there, it won't be there after the restart.
@cgoudie commented on GitHub (Feb 28, 2026):
I have a storage pool on NVMe SSD. Once I re-ran the script (after it was wiped as above), put it in a better folder, and restarted, the system came up just fine. RS3618xs.
@mswildma77 commented on GitHub (Feb 28, 2026):
If you read the earlier entries, there is a recommended location in the homes folder that doesn't get wiped during upgrades, which is a better place for the script to live. Also, if you have a volume with regular HDDs, it is probably better to keep the script on that volume regardless of the folder, as it stands a better shot of still being there after an upgrade, as opposed to SSDs (except maybe Synology-branded ones). Just my 2 cents.....:)
@cgoudie commented on GitHub (Mar 1, 2026):
Yup. I put it on a new share on volume1, which in mine is spinning disks, so it won't happen again.