[GH-ISSUE #540] "WARNING Don't store this script on an NVMe volume!" but I'm only using SSD cache #694

Closed
opened 2026-03-11 13:11:37 +03:00 by kerem · 16 comments
Owner

Originally created by @lnlyssg on GitHub (Dec 13, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/540

Originally assigned to: @007revad on GitHub.

I tried to run the latest version of the script but get the above error. I've run the script with -x and get the below output:

Running from: /volume1/homes/jim/scripts/syno_hdd_db.sh
+ get_script_vol
+ local script_root vol_num vg_name
+ script_root=volume1/homes/jim/scripts
+ script_root=volume1
+ [[ volume1 =~ ^volume ]]
+ vol_num=1
++ lvs --noheadings --select=lv_name=volume_1 --options=vg_name
+ vg_name='  vg1 '
+ vg_name=vg1
++ pvs --noheadings --select=vg_name=vg1 --options=pv_name
+ vol_name='  /dev/md2
  /dev/md5  '
+ vol_name='/dev/md2
/dev/md5'
+ grep -qE '^md2
/dev/md5 .+ nvme' /proc/mdstat
+ ding
+ printf '\a'
+ echo -e '\n\e[0;33mWARNING\e[0m Don'\''t store this script on an NVMe volume!'

volume1 is my HDDs (with SSD cache), and I think the cache is giving a false reading, so the regex needs to be amended...

❯ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sata1p5[0] sata4p5[3] sata3p5[2] sata2p5[1]
      23427586368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md5 : active raid1 sata3p6[0] sata4p6[1]
      1946506624 blocks super 1.2 [2/2] [UU]

md4 : active raid1 nvme1n1p5[0]
      966035584 blocks super 1.2 [1/1] [U]

md3 : active raid1 nvme0n1p1[0]
      488381952 blocks super 1.2 [1/1] [U]

md1 : active raid1 sata1p2[0] sata3p2[3] sata4p2[2] sata2p2[1]
      2097088 blocks [4/4] [UUUU]

md0 : active raid1 sata1p1[0] sata4p1[3] sata3p1[2] sata2p1[1]
      2490176 blocks [4/4] [UUUU]
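The trace above can be reproduced in isolation. `grep -E` treats an embedded newline in its pattern as separating two alternate patterns, so the bare `^md2` half matches the HDD array line even though that line never mentions nvme. A minimal sketch (the sample file and variable names here are illustrative, standing in for /proc/mdstat and the script's internals):

```shell
#!/bin/bash
# Hypothetical sample standing in for /proc/mdstat.
mdstat_sample=$(mktemp)
printf '%s\n' \
  'md2 : active raid5 sata1p5[0] sata4p5[3] sata3p5[2] sata2p5[1]' \
  'md4 : active raid1 nvme1n1p5[0]' > "$mdstat_sample"

# pvs returned two PVs for the SHR pool, so the variable holds a newline.
vol_name='/dev/md2
/dev/md5'

# The pattern becomes "^md2<newline>/dev/md5 .+ nvme". grep treats the
# newline as separating two patterns, and the bare "^md2" alternative
# matches the SATA array line, so the NVMe check misfires.
if grep -qE "^${vol_name#/dev/} .+ nvme" "$mdstat_sample"; then
  result="false positive: script thinks it is on NVMe"
else
  result="no match"
fi
echo "$result"
rm -f "$mdstat_sample"
```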
kerem closed this issue 2026-03-11 13:11:43 +03:00

@tievolu commented on GitHub (Dec 13, 2025):

I have the same problem. With the latest version I see this warning and the script exits without doing anything, even though the script isn't stored on my NVMe volume.

I've been using the script since June, and the warning has always been displayed incorrectly. I just ignored it and assumed it was displayed regardless of where the script was stored; I didn't realise the script actually thought it was located on an NVMe drive...

It's a problem now that it causes the script to exit.


@007revad commented on GitHub (Dec 13, 2025):

I've commented out the "exit" in v3.6.115 (https://github.com/007revad/Synology_HDD_db/releases/tag/v3.6.115).

I'll find a more reliable way to detect if the script is located on an NVMe volume.


@tievolu commented on GitHub (Dec 13, 2025):

Just to clarify, unlike the OP I don't have an NVMe cache. I have an HDD RAID volume and a non-RAID NVMe volume.

Let me know if you need any specific info from my NAS.


@tievolu commented on GitHub (Dec 13, 2025):

One other thing - I had to manually comment out the exit in order to autoupdate to the new version, because the exit occurred before the autoupdate. Maybe the autoupdate should happen first?


@007revad commented on GitHub (Dec 13, 2025):

@lnlyssg

++ pvs --noheadings --select=vg_name=vg1 --options=pv_name
+ vol_name='  /dev/md2
  /dev/md5  '
+ vol_name='/dev/md2
/dev/md5'
+ grep -qE '^md2
/dev/md5 .+ nvme' /proc/mdstat

I understand why grep returned false (exit code 1). But I don't understand why the following:

pvs --noheadings --select=vg_name=vg1 --options=pv_name

returning:

'  /dev/md2
  /dev/md5  '

@lnlyssg and @tievolu
What does the following command return?

sudo pvs

@lnlyssg commented on GitHub (Dec 13, 2025):

❯ sudo pvs
  PV         VG               Fmt  Attr PSize   PFree
  /dev/md2   vg1              lvm2 a--   21.82t      0
  /dev/md3   shared_cache_vg1 lvm2 a--  465.75g  65.74g
  /dev/md4   vg2              lvm2 a--  921.28g  21.27g
  /dev/md5   vg1              lvm2 a--    1.81t 620.00m

@007revad commented on GitHub (Dec 13, 2025):

@lnlyssg

I just realised you would be using SHR and have replaced 2 of the 4 drives with larger drives. So your storage pool 1 has 2 partitions in the vg1 volume group, which are:

  • a RAID 5 partition using sata1, sata2, sata3 and sata4.
  • a RAID 1 partition using sata3 and sata4.

@tievolu probably is also using SHR and has previously replaced drives with larger drives.

I can assign pvs --noheadings --select=vg_name=vg1 --options=pv_name to an array then check each md# in the array. Or maybe just check the first one because they are both in the same volume group.
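The per-PV idea could be sketched like this (variable and function names are illustrative, not the script's real ones; a fabricated mdstat file stands in for /proc/mdstat):

```shell
#!/bin/bash
# Sketch: test every PV in the volume group individually, so a
# multi-line pvs result can never produce a multi-line grep pattern.
script_on_nvme() {
    local mdstat_file="$1"; shift
    local pv md
    for pv in "$@"; do
        md="${pv#/dev/}"
        # Only a genuine "mdN ... nvme" match on one line counts.
        if grep -qE "^${md} .+ nvme" "$mdstat_file"; then
            return 0
        fi
    done
    return 1
}

# Example with a fabricated mdstat: md2/md5 are SATA, md4 is NVMe.
sample=$(mktemp)
printf '%s\n' \
  'md2 : active raid5 sata1p5[0] sata2p5[1]' \
  'md5 : active raid1 sata3p6[0] sata4p6[1]' \
  'md4 : active raid1 nvme1n1p5[0]' > "$sample"

if script_on_nvme "$sample" /dev/md2 /dev/md5; then
  verdict="on NVMe"
else
  verdict="not on NVMe"
fi
echo "$verdict"
rm -f "$sample"
```

In a real run the PV list would come from something like `mapfile -t pv_list < <(pvs --noheadings --select=vg_name="$vg_name" --options=pv_name | awk '{print $1}')` before calling the check.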

I just tried to run some tests on my old DS1812+ with DSM 6.2.4, but then I remembered it is using RAID 6 and not SHR2. It did show me there are other potential issues with DSM 6, or with drives migrated from DSM 6, because older Synology NAS models can have partitions created directly on md2, md3, etc., so there is no volume group, like:

root@WEBBER:~# pvs --all
  PV         VG   Fmt Attr PSize PFree
  /dev/md2            ---     0     0

@007revad commented on GitHub (Dec 13, 2025):

lnlyssg, it looks like you have 2x 8TB HDDs and 2x 10TB HDDs in SHR?

@lnlyssg @tievolu What does this command return?

sudo pvs --noheadings --select=vg_name=vg1 --options=pv_name | awk '{print $1}'

@007revad commented on GitHub (Dec 14, 2025):

@lnlyssg and @tievolu

Can you try this test script (https://github.com/007revad/Synology_HDD_db/blob/test/check_script_location.sh)? You will probably need to chmod +x the check_script_location.sh file to make it executable.

If the script is located on an NVMe volume the output should look like this:

Running from: /volume3/scripts_nvme/check_script_location.sh

WARNING Don't store this script on an NVMe volume!

If the script is not located on an NVMe volume the output should look like this:

Running from: /volume1/scripts/check_script_location.sh

check_script_location.sh is not on an NVMe volume

@tievolu commented on GitHub (Dec 14, 2025):

> What does this command return?
>
> sudo pvs --noheadings --select=vg_name=vg1 --options=pv_name | awk '{print $1}'

For me it returns:

/dev/md2
/dev/md4

I do use SHR and I have two 6TB and two 12TB HDDs. The volume started out with just the two 6s, and the 12s were added later.

My NVMe drive is /dev/md3.


@tievolu commented on GitHub (Dec 14, 2025):

> @lnlyssg and @tievolu
>
> Can you try this test script.

It indicates that it's on my NVMe drive, which isn't correct:

Running from: /volume1/scripts/disks/Synology_HDD_db-main/check_script_location.sh

WARNING Don't store this script on an NVMe volume!

My main SHR HDD volume is volume1, and my NVMe is volume2.


@007revad commented on GitHub (Dec 14, 2025):

@tievolu

In check_script_location.sh can you change line 40 from:

        vol_name=$(pvs --noheadings --select=vg_name="$vg_name" --options=pv_name | awk '{print $1}')

To this and run it again:

        vol_name=$(pvs --noheadings --select=vg_name="$vg_name" --options=pv_name | awk '{print $1}' | head -n 1)
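The effect of the extra `head -n 1` can be shown on the two-PV output reported earlier in the thread. Both PVs belong to the same volume group, so one device name is enough to identify the media type (a standalone sketch with the pvs output simulated as a string):

```shell
#!/bin/bash
# Simulated pvs output for vg1 under SHR (two PVs, as reported above).
pvs_output='  /dev/md2
  /dev/md5  '

# Old: awk keeps both lines, leaving a multi-line variable that later
# breaks the grep pattern built from it.
vol_name_old=$(printf '%s\n' "$pvs_output" | awk '{print $1}')

# New: head -n 1 keeps only the first PV, so the variable (and any
# pattern built from it) stays on a single line.
vol_name_new=$(printf '%s\n' "$pvs_output" | awk '{print $1}' | head -n 1)

echo "old: [$vol_name_old]"
echo "new: [$vol_name_new]"
```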

@tievolu commented on GitHub (Dec 14, 2025):

That seems to work:

Running from: /volume1/scripts/disks/Synology_HDD_db-main/check_script_location.sh

check_script_location.sh is not on an NVMe volume

@tievolu commented on GitHub (Dec 14, 2025):

And if I run it from the NVMe drive it seems to work as well:

Running from: /volume2/docker/check_script_location.sh

WARNING Don't store this script on an NVMe volume!

@007revad commented on GitHub (Dec 14, 2025):

v3.6.116 (https://github.com/007revad/Synology_HDD_db/releases/tag/v3.6.116) should work correctly.


@lnlyssg commented on GitHub (Dec 14, 2025):

Working OK with the latest version.
