mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #540] "WARNING Don't store this script on an NVMe volume!" but I'm only using SSD cache #694
Originally created by @lnlyssg on GitHub (Dec 13, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/540
Originally assigned to: @007revad on GitHub.
I tried to run the latest version of the script but get the above error. I've run the script with -x and get the below output:
volume1 is my HDD volume (with an SSD cache), and I think the cache is giving a false reading; the regex needs to be amended....
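A minimal sketch of how such a false positive can happen, assuming the check greps an array/device listing for "nvme" (the mdstat-style line and device names below are invented for illustration, not taken from the script):

```shell
# Invented /proc/mdstat-style line: an HDD RAID volume (md2) with an
# NVMe SSD cache device attached. All device names are hypothetical.
mdstat_line="md2 : active raid5 sata1p5[0] sata2p5[1] nvme0n1p1[2](C)"

# A naive check that greps the whole line for "nvme" also matches the
# cache device, so an HDD volume is misreported as an NVMe volume.
if echo "$mdstat_line" | grep -q 'nvme'; then
    echo "WARNING Don't store this script on an NVMe volume!"
fi
```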
@tievolu commented on GitHub (Dec 13, 2025):
I have the same problem. With the latest version I see this warning and the script exits without doing anything, even though the script isn't stored on my NVMe volume.
I've been using the script since June, and the warning has always been displayed incorrectly. I just ignored it and assumed it was a warning that was always displayed regardless of where the script was stored - I didn't realise the script actually thought the script was located on an NVMe drive...
It's a problem now because it causes the script to exit.
@007revad commented on GitHub (Dec 13, 2025):
I've commented out the "exit" in v3.6.115
I'll find a more reliable way to detect if the script is located on an NVMe volume.
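One possible direction (an assumption on my part, not necessarily the fix that later shipped): resolve the block device backing the directory the script runs from, and test only that one device name, instead of scanning every array:

```shell
# Find the filesystem source device for the current directory (a
# stand-in here for the script's own directory). On a Synology NAS
# this is typically something like /dev/md2 or a vg1 logical volume.
# df -P gives POSIX single-line output, so awk can read field 1 safely.
script_dir=$(pwd)
src_dev=$(df -P "$script_dir" | awk 'NR==2 {print $1}')
echo "script is on $src_dev"

# Only now test that single device name for "nvme":
case "$src_dev" in
    *nvme*) echo "stored on an NVMe device" ;;
    *)      echo "not stored on an NVMe device" ;;
esac
```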
@tievolu commented on GitHub (Dec 13, 2025):
Just to clarify, unlike the OP I don't have an NVMe cache. I have an HDD RAID volume and a non-RAID NVMe volume.
Let me know if you need any specific info from my NAS.
@tievolu commented on GitHub (Dec 13, 2025):
One other thing - I had to manually comment out the exit in order to autoupdate to the new version, because the exit occurred before the autoupdate. Maybe the autoupdate should happen first?
@007revad commented on GitHub (Dec 13, 2025):
@lnlyssg
I understand why grep returned false (exit code 1). But I don't understand why the following:
is returning:
@lnlyssg and @tievolu
What does the following command return?
@lnlyssg commented on GitHub (Dec 13, 2025):
@007revad commented on GitHub (Dec 13, 2025):
@lnlyssg
I just realised you would be using SHR and have replaced 2 of the 4 drives with larger drives. So your storage pool 1 has 2 partitions in the vg1 volume group, which are:
@tievolu probably is also using SHR and has previously replaced drives with larger drives.
I can assign
pvs --noheadings --select=vg_name=vg1 --options=pv_name
to an array, then check each md# in the array. Or maybe just check the first one, because they are both in the same volume group.
I just tried to run some tests on my old DS1812+ with DSM 6.2.4, but I remembered it is using RAID 6 and not SHR2. It did show me there are other potential issues with DSM 6, or with drives migrated from DSM 6, because an older Synology NAS can have partitions created directly on md2 or md3 etc., so there is no volume group, like:
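The array approach described above can be sketched as follows. The pvs output is simulated with sample md device names, since the real `pvs --noheadings --select=vg_name=vg1 --options=pv_name` call needs an actual LVM setup with a vg1 volume group:

```shell
# Simulated output of:
#   pvs --noheadings --select=vg_name=vg1 --options=pv_name
# The device names are examples only, not taken from either NAS.
sample_pvs="  /dev/md2
  /dev/md3"

# Check each physical volume in the volume group (or just the first,
# since they all belong to the same volume group anyway).
for pv in $sample_pvs; do
    echo "checking $pv for nvme"
done
```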
@007revad commented on GitHub (Dec 13, 2025):
@lnlyssg, it looks like you have 2x 8TB HDDs and 2x 10TB HDDs in SHR?
@lnlyssg @tievolu What does this command return?
@007revad commented on GitHub (Dec 14, 2025):
@lnlyssg and @tievolu
Can you try this test script? You will probably need to
chmod +x
the check_script_location.sh file to make it executable.
If the script is located on an NVMe volume, the output should look like this:
If the script is not located on an NVMe volume the output should look like this:
@tievolu commented on GitHub (Dec 14, 2025):
For me it returns:
I do use SHR and I have two 6TB and two 12TB HDDs. The volume started out with just the two 6s, and the 12s were added later.
My NVMe drive is /dev/md3.
@tievolu commented on GitHub (Dec 14, 2025):
It indicates that it's on my NVMe drive, which isn't correct:
My main SHR HDD volume is volume1, and my NVMe is volume2.
@007revad commented on GitHub (Dec 14, 2025):
@tievolu
In check_script_location.sh can you change line 40 from:
To this and run it again:
@tievolu commented on GitHub (Dec 14, 2025):
That seems to work:
@tievolu commented on GitHub (Dec 14, 2025):
And if I run it from the NVMe drive, it seems to work as well:
@007revad commented on GitHub (Dec 14, 2025):
v3.6.116 should work correctly.
@lnlyssg commented on GitHub (Dec 14, 2025):
Working OK with the latest version.