mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #550] StorageManager latency and missing /run/synostorage/disks/*/compatibility_action files after using Synology_HDD_db #697
Originally created by @aferende on GitHub (Jan 6, 2026).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/550
Hi @007revad,
I’m reporting an issue that I’m trying to better understand and possibly fix, related to Storage Manager latency and repeated errors about missing files under /run/synostorage/disks/*. I’m not sure if this is expected behavior or a side effect of how I’m using the scripts, so I’d appreciate some guidance.
System information
Expansion / cards
Expansion unit
Drives
HDDs (internal + DX517):
NVMe:
Scripts executed at every NAS boot
1. Enable E10M20-T1 (NIC + NVMe)
Output (expected, no errors):
2. Synology_HDD_db
Version used:
All disks and devices are detected correctly and already present in the DB files.
Errors observed
During the execution of syno_hdd_db.sh, I consistently get many repeated errors like:

This happens for:
The script completes successfully, but these errors are always present.
Observable side effects on the system
After this setup, I notice significant latency in DSM:
This suggests that Storage Manager is retrying or blocking while checking disk compatibility, possibly due to the missing compatibility_action files.

Why I’m asking
I’m trying to understand:
Are the /run/synostorage/disks/*/compatibility_action files expected in this setup?
If useful, I can collect:
Just let me know what would help most.
Thanks a lot for your work on this project and for any guidance you can provide.
Here full log:
@007revad commented on GitHub (Jan 6, 2026):
I have a E10M20-T1 in my DS1821+ and have both syno_enable_m2_card and syno_hdd_db scheduled to run at boot up. I've not seen the issue you are having... though my DS1821+ is still using DSM 7.3.1
I never use the -p or --pcie option and I had forgotten what it does. Looking at the script I see it's only used to force "Enable creating pool on drives in M.2 adaptor card" if a PCIe M.2 card was not found.
The fact that the script printed "Creating pool in UI on drives in M.2 adaptor card already enabled." means that whatever is causing the error message you are seeing comes after that section of code.
The script does not read or change anything in /run except to echo 1 to /run/synostorage/disks/nvmeN/m2_pool_support to enable creating a storage pool and volume on NVMe drives in a PCIe card.

On my DS925+ with DSM 7.3.2, /run/synostorage/disks/nvme0n1/compatibility_action contains:

The compatibility_action files for my SATA SSD, Synology HDD, Seagate HDD and WD HDD all show exactly the same thing, with the same values.
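For reference, that single /run write can be sketched as below. A temporary directory stands in for the real path, since /run/synostorage/disks/nvmeN/m2_pool_support only exists on DSM:

```shell
# Mock of the one write syno_hdd_db makes under /run.
# On DSM the real path is /run/synostorage/disks/nvmeN/m2_pool_support.
disk_dir="$(mktemp -d)/nvme0n1"
mkdir -p "$disk_dir"

# Enable creating a storage pool/volume on the NVMe drive:
echo 1 > "$disk_dir/m2_pool_support"

cat "$disk_dir/m2_pool_support"   # prints: 1
```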
Does jq . /run/synostorage/disks/sata1/compatibility_action return anything, or the same file-not-found error?

Does ls /run/synostorage/disks/sata1 | grep 'compatibility' return the following:

@007revad commented on GitHub (Jan 6, 2026):
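A quick way to run that presence check across every disk at once is a loop like the one below. This is only a sketch: a mock directory stands in for /run/synostorage/disks, and the disk names and JSON content are made up.

```shell
# List disks that are missing their compatibility_action file.
# BASE mocks /run/synostorage/disks so the loop can be shown off-NAS.
BASE="$(mktemp -d)"
mkdir -p "$BASE/sata1" "$BASE/nvme0n1"
echo '{"mock":"content"}' > "$BASE/sata1/compatibility_action"   # nvme0n1 left without one

missing=""
for d in "$BASE"/*/; do
    if [ ! -f "${d}compatibility_action" ]; then
        missing="$missing $(basename "$d")"
    fi
done
echo "missing:$missing"   # prints: missing: nvme0n1
```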
I suspect the line with synostgdisk --check-all-disks-compatibility is where the error messages occur.

If you run this, do you get those 3 lines of errors for each drive?
@aferende commented on GitHub (Jan 6, 2026):
Thanks, here are the results of the checks you asked me to run.
Checking compatibility_action files

SATA disk example (sata1)

Command:
Output:
Command:
Output:

NVMe disk example (nvme0n1)

Command:
Output:
Command:
Output:
The same result applies to all SATA disks, NVMe disks, and the DX517 expansion unit.
The compatibility_action file is missing everywhere.

synostgdisk test

Command:
Output (repeated for each disk):
The errors appear 2–3 times per disk and the command takes ~90 seconds to complete.
During this time:
On DSM 7.3.2 it looks like:

is not being created, but synostgdisk still expects it and retries internally.

Your scripts do not appear to be involved in this behaviour.
This looks like a DSM Storage Manager / synostgdisk issue rather than a Synology_HDD_db issue.
I don't know what could have caused these problems, but today I checked the output of the startup scripts and noticed this situation, so I linked them to possible latencies I've been experiencing on the NAS for a few weeks.
Do you have any idea what could have happened and how I can fix it?
Thanks in advance.
@007revad commented on GitHub (Jan 6, 2026):
Possible Solution 1
Try the following commands:
Then run the following command:
And check if the errors for the nvme drives are now gone. If the nvme errors are gone reboot and check with:
If they are still okay, then repeat those same commands for all 18 SATA drives.
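The exact commands were not captured above, but the shape of Solution 1 is: copy a known-good compatibility_action onto each drive that lacks one. A mock sketch (the directory tree and JSON payload are stand-ins, not the real DSM schema):

```shell
# Seed a compatibility_action file for each NVMe drive that lacks one,
# copying it from a disk that still has a valid file.
# DISKS mocks /run/synostorage/disks; the JSON is a placeholder.
DISKS="$(mktemp -d)"
mkdir -p "$DISKS/sata1" "$DISKS/nvme0n1" "$DISKS/nvme1n1"
echo '{"placeholder":"known-good"}' > "$DISKS/sata1/compatibility_action"

good="$DISKS/sata1/compatibility_action"
for d in "$DISKS"/nvme*/; do
    [ -f "${d}compatibility_action" ] || cp "$good" "${d}compatibility_action"
done
```

Note that on Linux /run is a tmpfs, so anything created there is gone after a reboot, which is consistent with the files disappearing at restart.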
Possible Solution 2
If the errors returned after a reboot then we can force DSM to let you do an update of the drives databases:
Run the following command:
Then:
@aferende commented on GitHub (Jan 6, 2026):
Hi Dave, thank you for your help and for the detailed suggestions.
As requested, I collected the results of both proposed solutions.
Both solutions were tested in the suggested order, and each test included a full reboot.
Below are the exact commands executed and their outputs.
Solution 1 – Manually creating compatibility_action

Command executed (NVMe example)
Check before reboot
Output:
After reboot
Result: compatibility_action files are removed at reboot and are not recreated by DSM.

Solution 2 – Forcing Drive Database update
Command executed
GUI steps
Reboot performed after update
Post-reboot verification
Same result applies to all SATA disks and NVMe disks.
Summary

compatibility_action files exist only until reboot
synostgdisk works correctly only while the files exist

Please let me know if there are other diagnostic steps or alternative approaches you would recommend to further investigate or resolve this behavior.
Thank you again for your time and support.
@aferende commented on GitHub (Jan 6, 2026):
In trying to figure out what might have caused this state on my NAS, I found this script I'd run a while back (during a week when I'd removed syno_hdd_db for testing and reverted to using NVMe drives as Synology's native cache).
Then, at least in my intentions, I cleaned everything up and reinstalled syno_hdd_db and enable_m2_card, but something might have gone wrong at this point.
I'm posting the script's contents here in case you think it might be helpful in addressing the problem:
Thanks always.
@007revad commented on GitHub (Jan 6, 2026):
I'm assuming this is a real DS1821+ and not Xpenology.
I've seen that script, on reddit I think, a few weeks ago. It's definitely not needed for an E10M20-T1. And it would only work for older, pre-20-series models that don't use device tree, and only for generic 10G cards that use the AQC107 chip.
I don't think that would cause your problem.
You can either edit /usr/syno/etc.defaults/adapter_cards.conf and /usr/syno/etc/adapter_cards.conf with vi or vim (or WinSCP which is much easier) to remove:
Or restore the 2 adapter_cards.conf files from the backup syno_enable_m2_card would have created.
First check that the backup and current files are different:
Then restore the files from the backup.
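The check-then-restore step can be sketched like this. The file contents and the .bak suffix are assumptions (use whatever backup name syno_enable_m2_card actually wrote), and the mock paths stand in for /usr/syno/etc/adapter_cards.conf and /usr/syno/etc.defaults/adapter_cards.conf:

```shell
# Compare the current adapter_cards.conf with its backup and restore
# the backup if they differ. Mock files stand in for the real DSM paths.
tmp="$(mktemp -d)"
printf '[AQC107_sup_nic]\nDS1821+=yes\n' > "$tmp/adapter_cards.conf"      # current (edited)
printf '[AQC107_sup_nic]\n'              > "$tmp/adapter_cards.conf.bak"  # backup (original)

if ! diff -q "$tmp/adapter_cards.conf" "$tmp/adapter_cards.conf.bak" >/dev/null; then
    cp -p "$tmp/adapter_cards.conf.bak" "$tmp/adapter_cards.conf"   # restore original
fi
```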
Check the disk-compatibility rules exist
This command:
Should return:
Check support_disk_compatibility is set to yes
If the following return "no":
Run these commands:
Then reboot
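The author's exact commands are not shown above; on DSM the usual tools for this are synogetkeyvalue/synosetkeyvalue against /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. A portable sketch of the same check-and-fix logic against a mock conf file:

```shell
# Check support_disk_compatibility and flip it to "yes" if needed.
# conf mocks /etc.defaults/synoinfo.conf (repeat for /etc/synoinfo.conf).
conf="$(mktemp)"
echo 'support_disk_compatibility="no"' > "$conf"

value="$(sed -n 's/^support_disk_compatibility="\(.*\)"/\1/p' "$conf")"
if [ "$value" != "yes" ]; then
    sed -i 's/^support_disk_compatibility=.*/support_disk_compatibility="yes"/' "$conf"
fi
grep support_disk_compatibility "$conf"   # prints: support_disk_compatibility="yes"
```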
Reinstall DSM 7.3.2
If still no good I would try reinstalling DSM 7.3.2 (which is just like upgrading DSM, so no data loss).
https://github.com/007revad/Synology_DSM_reinstall
You will need https://global.synologydownload.com/download/DSM/release/7.3.2/86009/DSM_DS1821%2B_86009.pat
@aferende commented on GitHub (Jan 6, 2026):
Hi Dave, thanks for the detailed reply and for sticking with me on this.
To answer your points and report back with results:
Hardware / Platform confirmation
adapter_cards.conf
I have already restored both adapter_cards.conf files from the backups created by syno_enable_m2_card. So at this point:

/usr/syno/etc.defaults/adapter_cards.conf
/usr/syno/etc/adapter_cards.conf

are back to their original backed-up state, and the [AQC107_sup_nic] DS1821+=yes entry has effectively been reverted.

Disk compatibility rules
As suggested, I checked for the rules directory.
Command executed
Result
The /var/lib/disk-compatibility/ directory exists, but it only contains the various *_v7.db, .version, .release, and .bak files. There is no rules/ subdirectory at all on this system.

support_disk_compatibility flag
I verified both locations:
Output:
Output:
So support_disk_compatibility is already enabled in both configs.

Current situation summary

compatibility_action files can be created manually and everything works until reboot
After a reboot the compatibility_action files are gone, and synostgdisk --check-all-disks-compatibility then becomes slow and causes storage-related latency
adapter_cards.conf has been restored
support_disk_compatibility is enabled
The /var/lib/disk-compatibility/rules directory is missing entirely

At this point I’m not seeing any other configuration changes left to undo.
Question on next steps
Before proceeding with a DSM 7.3.2 reinstall using the method you linked (which I understand should preserve data and volumes):
Given the missing rules/ directory and the behavior after reboot, would you agree that a DSM reinstall is the next sensible step?

Thanks again for your time and guidance.
@007revad commented on GitHub (Jan 6, 2026):
I'm surprised that updating the drive database didn't restore the missing rules files.
I just compared the rules files between my DS1821+ DSM 7.3.1 and my DS925+ DSM 7.3.2 and they are exactly the same files.
These are from my DS1821+
rules.zip
Unzip and copy the rules_N folders with their eunit_rule_db and host_rule.db files to:
Then make sure the owner and permissions are correct:
Then run:
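The owner/permissions step above can be sketched as follows. root:root with 0755 directories and 0644 files is an assumption about what DSM expects (compare against an untouched system), and a mock tree stands in for /var/lib/disk-compatibility:

```shell
# Normalize permissions on the copied rules_N folders.
# DC mocks /var/lib/disk-compatibility.
DC="$(mktemp -d)"
mkdir -p "$DC/rules_1"
touch "$DC/rules_1/host_rule.db" "$DC/rules_1/eunit_rule_db"

find "$DC" -type d -exec chmod 755 {} +
find "$DC" -type f -exec chmod 644 {} +
# chown -R root:root "$DC"   # needs root on the real NAS; shown for completeness
```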
@aferende commented on GitHub (Jan 6, 2026):
Okay, I've done everything, thank you very much. I'm lucky we have the same NAS model :-)
Now the command:

no longer returns any errors.
As soon as I can, I'll reboot to verify that everything is OK.
In the meantime, thank you very much.
@aferende commented on GitHub (Jan 13, 2026):
Hi Dave,
I finally managed to reboot the NAS, but unfortunately the error persists.
At this point, I think the only solution is to try reinstalling with your Synology_DSM_reinstall script.
Do you have any specific suggestions, or should I just proceed as directed?
Can you confirm that all settings are retained, including:
Thanks as always for the fantastic support.
@007revad commented on GitHub (Jan 13, 2026):
Reinstalling DSM is basically the same as doing a DSM update.
Your start-up scheduled tasks will survive (and run as long as the scripts are not stored on an NVMe storage pool).
The only thing to be aware of is that your NVMe volume might not get mounted until after a 2nd reboot. And DSM 7.3 automatically "repairs" any packages that were running before the reboot, and if they depend on a missing volume it "repairs" them to the first volume it finds.
So to prevent that, before reinstalling DSM I would:
To see which volume each package is installed on run https://github.com/007revad/Synology_SMART_info and enter 1 for Move.
To find packages that depend on your NVMe volume run this (replace /volume1 with your NVMe volume).
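The command itself was not captured here, but on DSM each package's install location is the /var/packages/<name>/target symlink into /volumeN/@appstore, so a loop over those links can find volume-dependent packages. A mock sketch (the package names are invented):

```shell
# Find packages whose install target points into a given volume.
# PKGROOT mocks /var/packages; VOL is the volume to search for.
PKGROOT="$(mktemp -d)"
VOL="/volume2"
mkdir -p "$PKGROOT/PlexMediaServer" "$PKGROOT/ContainerManager"
ln -s /volume1/@appstore/PlexMediaServer  "$PKGROOT/PlexMediaServer/target"
ln -s /volume2/@appstore/ContainerManager "$PKGROOT/ContainerManager/target"

on_vol=""
for t in "$PKGROOT"/*/target; do
    case "$(readlink "$t")" in
        "$VOL"/*) on_vol="$on_vol $(basename "$(dirname "$t")")" ;;
    esac
done
echo "on $VOL:$on_vol"   # prints: on /volume2: ContainerManager
```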
To find out if VMM has storage set to your NVMe volume:
@aferende commented on GitHub (Jan 14, 2026):
Great, thanks, Dave.
I managed to do everything; I had to do a double reboot, as expected, and finally the problem is solved.
Thanks so much for the excellent support.