[GH-ISSUE #204] M2 drive missing after update to 7.2.2-72806 Update 2 #473

Open
opened 2026-03-12 15:51:58 +03:00 by kerem · 25 comments
Owner

Originally created by @morphias2004 on GitHub (Jan 9, 2025).
Original GitHub issue: https://github.com/007revad/Synology_enable_M2_volume/issues/204

I have a 918+. This happens with every update, but this time I can't get it back.

Normally, a combination of running a restore with Synology_HDD_db and Synology_enable_M2_volume and then letting Synology_enable_M2_volume run as a scheduled task resolves it.

In the past I logged an issue where disabling Synology_HDD_db stopped the issue from happening following updates, but that is not the case this time.

![image](https://github.com/user-attachments/assets/1f770a7e-e065-4ef0-b413-18de8ed023f7)

![image](https://github.com/user-attachments/assets/1d029ca4-0aeb-466c-bbe3-105079c10dc9)
![image](https://github.com/user-attachments/assets/9f2f24f5-b6b0-4449-8932-979cbd18c14d)
![image](https://github.com/user-attachments/assets/43e843c7-fee8-49fc-bbfd-7e99cf94aa56)

@morphias2004 commented on GitHub (Jan 9, 2025):

I have just found moving Synology_enable_M2_volume from a boot task to a shutdown task has resolved the issue, but on the subsequent reboot, the issue returns.

Synology_HDD_db task remains disabled.


@morphias2004 commented on GitHub (Jan 9, 2025):

Further update:

Re-enabling the Synology_HDD_db task on boot, and changing the user for both tasks from root to the admin account I created for normal administration of the NAS (not named "admin", because Synology stopped allowing that for security reasons), has resolved the issue.


@morphias2004 commented on GitHub (Jan 10, 2025):

Another update:

Rebooted again and the issue has returned.

Tried running a restore with both scripts and rebooting, but the issue persists.

I changed the user back to root and rebooted again, as I found the scripts were erroring because they can only be run via sudo or as root.

I'm trying to work out the exact set of steps that gets it back online.

I managed to get it back online by manually running Synology_HDD_db and then rebooting to let Synology_enable_M2_volume automatically run at shutdown from the scheduled task. But when I edited the task so that it ran Synology_HDD_db before Synology_enable_M2_volume on shutdown, the issue returned.

I removed Synology_HDD_db from the task again and rebooted, and the issue persisted.

I manually ran Synology_HDD_db from SSH and rebooted, letting Synology_enable_M2_volume automatically run at shutdown and the drive returned.

I'm at a bit of a loss. I have to run Synology_HDD_db manually from SSH before a shutdown to get the drive to come back online after a reboot.


@KAMYSHAN commented on GitHub (Jan 20, 2025):

I also upgraded to version 7.2.2-72806 Update 2 and after launching, the NVMe disk that was working as a cache disappeared. I have tried everything mentioned above, but nothing seems to help get it back. The disk is a Samsung 980 and the Synology is a 918+. Is there any hope for getting the disk back up and running?


@KAMYSHAN commented on GitHub (Jan 20, 2025):

![Image](https://github.com/user-attachments/assets/87e756c6-54ab-4e68-ad2d-8705253ef7ae)
![Image](https://github.com/user-attachments/assets/050cf0be-e53a-413b-94ea-b0c9003e2156)

@007revad commented on GitHub (Jan 22, 2025):

@KAMYSHAN
Try removing the missing cache drive in storage manager. Then reboot and see if it is visible in storage manager.

If that doesn't solve it, you can run the following script and reply with the result.
https://github.com/007revad/Synology_HDD_db/blob/test/nvme_check.sh


@007revad commented on GitHub (Jan 22, 2025):

@morphias2004
I'm not sure what's going on there. Some people have had issues with missing NVMe drives when using both syno_hdd_db and syno_enable_m2_volume. Usually the solution is:

  1. Disable the syno_enable_m2_volume schedule.
  2. Leave syno_hdd_db scheduled.
  3. Run syno_enable_m2_volume --restore
  4. Reboot.

This works with '20 series and newer models because they only need syno_hdd_db and don't need syno_enable_m2_volume. But I don't know if it will work with a DS918+.
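For anyone following along over SSH, steps 3 and 4 can be sketched as a shell session. The script path below is an assumption (wherever you saved the script); `--restore` is the option named above.

```shell
# Steps 1-2 are done in DSM > Control Panel > Task Scheduler.
# Assumed install path -- adjust to wherever you keep the script:
script=/volume1/scripts/syno_enable_m2_volume.sh

# Step 3: the restore command to run over SSH.
cmd="sudo $script --restore"
echo "$cmd"

# Step 4: after the restore finishes, reboot:
#   sudo reboot
```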


@KAMYSHAN commented on GitHub (Jan 22, 2025):

> @KAMYSHAN Try removing the missing cache drive in storage manager. Then reboot and see if it is visible in storage manager.
>
> If that doesn't solve it can run the following script and reply with the result. https://github.com/007revad/Synology_HDD_db/blob/test/nvme_check.sh

![Image](https://github.com/user-attachments/assets/99dc22f9-86ae-4274-8788-4a909df7ca0d)

The disk is different from what was mentioned in the previous post, and it is also not visible.


@KAMYSHAN commented on GitHub (Jan 22, 2025):

For reference: The Arc Loader version 1.4.8 is currently installed on the computer. Two NVMe disk add-ons are connected to it. Today I plan to update the loader to the latest version 1.5.1.

**Update**
I have installed the new 1.5.1 evo bootloader, but the situation has not changed. I tried using two different drives: Samsung 980 and KingSpec NX-256, but the result was the same.


@007revad commented on GitHub (Jan 22, 2025):

Arc Addons includes syno_hdd_db (as hdddb) and a cut down version of syno_enable_m2_volume (as nvmevolume).


@KAMYSHAN commented on GitHub (Jan 23, 2025):

> Arc Addons includes syno_hdd_db (as hdddb) and a cut down version of syno_enable_m2_volume (as nvmevolume).

These add-ons are enabled.


@AuxXxilium commented on GitHub (Jan 25, 2025):

We are also running into an issue if we use hdddb and nvmevolume. Do you have an idea, @007revad? I guess it's related to hdddb.


@007revad commented on GitHub (Jan 25, 2025):

I've known for a while that syno_hdd_db and syno_enable_m2_volume together cause an issue. The solution is normally to only use syno_hdd_db. I personally don't use syno_enable_m2_volume any more.

If hdddb and nvmevolume also cause the same issue, then it must be caused by hdddb, because you've pared down nvmevolume to only editing libhwcontrol.so.1.

I suspect the issue started with one of these changes:

v3.5.90 - May 11, 2024
- Changed to enable creating storage pools/volumes on NVMe drives in a PCIe M.2 adaptor in DSM 7.2
  - Previously only supported DSM 7.2.1
- Changed to enable creating storage pools/volumes on NVMe drives in a PCIe M.2 adaptor even if PCIe M.2 adaptor not found.
  - This may allow creating NVMe volumes on 3rd party PCIe M.2 adaptors.

v3.3.70 - Dec 17, 2023
- Now enables creating storage pools in Storage Manager for M.2 drives in PCIe adaptor cards:
  - E10M20-T1, M2D20, M2D18 and M2D17.

v3.0.55 - Jun 20, 2023
- Fixed error if /run/synostorage/disks/nvme0n1/m2_pool_support doesn't exist yet (for DSM 7.2).

v2.0.35 - Apr 8, 2023
- Now allows creating M.2 storage pool and volume all from Storage Manager.

I'll have to search through all the issues to find when this issue was first reported.

The only change from v1.0.34 to v2.0.35 was that I added:

```
# Enable creating M.2 storage pool and volume in Storage Manager
echo 1 > /run/synostorage/disks/$(basename -- "$d")/m2_pool_support
```

In v3.0.55 that was changed to:

```
if [[ -f /run/synostorage/disks/"$(basename -- "$1")"/m2_pool_support ]]; then  # GitHub issue #86, #87
    echo 1 > /run/synostorage/disks/"$(basename -- "$1")"/m2_pool_support
fi
```

After comparing v3.5.90 to v3.5.89, and v3.3.70 to v3.3.69, the changes in those versions can't be the cause, because the script only edits storage_panel.js if an E10M20-T1, M2D20, M2D18 or M2D17 is found, or the `-p, --pci` option is used.

I suspect the script should only `echo 1 > /run/synostorage/disks/nvme0n1/m2_pool_support` if libhwcontrol.so.1 has not been edited?
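If that suspicion is right, the guard could look something like this minimal sketch. Nothing here is in the current script: `lib_is_stock`, `enable_m2_pool_support`, and the stock-checksum file are all hypothetical; only the m2_pool_support path comes from the snippets above.

```shell
# Hypothetical guard: only set m2_pool_support when libhwcontrol.so.1
# still matches its stock (unpatched) checksum.
lib_is_stock() {
    # $1 = path to libhwcontrol.so.1, $2 = file holding the stock md5 hash
    [ "$(md5sum "$1" | awk '{print $1}')" = "$(cat "$2")" ]
}

enable_m2_pool_support() {
    # $1 = drive name e.g. nvme0n1, $2 = lib path, $3 = stock md5 file
    flag="/run/synostorage/disks/$1/m2_pool_support"
    if lib_is_stock "$2" "$3" && [ -f "$flag" ]; then
        echo 1 > "$flag"
    fi
}
```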


@AuxXxilium commented on GitHub (Jan 25, 2025):

Thanks for answering. Let me check; I know we use -p in the hdddb script. I will do some tests and report back.


@AuxXxilium commented on GitHub (Jan 25, 2025):

One more thing: the issue only exists on models without a device tree, so I guess there is something missing, or we patch more than we need. I'll try to figure it out with different versions and by doing the patches manually.


@KAMYSHAN commented on GitHub (Feb 6, 2025):

![Image](https://github.com/user-attachments/assets/e3755d50-a5fe-4cf6-81cd-2b8c9aca7f0d)
![Image](https://github.com/user-attachments/assets/b80d67ed-27ba-4d6b-860d-a7531a56c15f)
![Image](https://github.com/user-attachments/assets/ba0ae2c3-3e91-49ee-8681-2d499f8fdbbc)

@KAMYSHAN commented on GitHub (Feb 6, 2025):

It still doesn't see the NVMe SSD. The latest loader is installed. I even tried an officially supported disk... I have tried many different configuration options.


@AuxXxilium commented on GitHub (Feb 6, 2025):

> He still doesn't see the NVME SSD. The latest loader is installed. I also put the disk officially supported... I have tried many different configuration options.

It's working for DSM 7.2.1 but not 7.2.2, since Synology changed multiple things with this release; I can't find the related parts in DSM. So it's not loader dependent.


@AuxXxilium commented on GitHub (Feb 6, 2025):


Slot mapping only works for device-tree models, and your 918 isn't one.


@007revad commented on GitHub (Feb 6, 2025):

I'm comparing 7.2.1 to 7.2.2 to see what's changed. So far none of the changes relate to M.2 drives, except maybe this change:

Line 3 has changed in both:
/usr/lib/systemd/system/syno-detected-pool-scan.service
/usr/lib/systemd/system/syno-bootup-done.target.wants/syno-detected-pool-scan.service

7.2.1
`After=syno-space.target syno-bootup-done.service`

7.2.2
`After=syno-space.target syno-bootup-done.service syno-check-disk-compatibility.service`
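A quick way to pull that ordering line out of a unit file for side-by-side comparison (a generic sed one-liner, not something from either script):

```shell
# Print the After= dependency list of a systemd unit file.
unit_after() {
    sed -n 's/^After=//p' "$1"
}
# e.g. on a 7.2.2 box:
#   unit_after /usr/lib/systemd/system/syno-detected-pool-scan.service
```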


@AuxXxilium commented on GitHub (Feb 6, 2025):

> I'm comparing 7.2.1 to 7.2.2 to see what's changed. So far none of the changes relate to M.2 drives, except maybe this change:
>
> Line 3 has changed in both: /usr/lib/systemd/system/syno-detected-pool-scan.service /usr/lib/systemd/system/syno-bootup-done.target.wants/syno-detected-pool-scan.service
>
> 7.2.1 `After=syno-space.target syno-bootup-done.service`
>
> 7.2.2 `After=syno-space.target syno-bootup-done.service syno-check-disk-compatibility.service`

I have different checksums for various lib files, including libhwcontrol.so.1, which we need to patch. I haven't checked the differences at all, and haven't tried using the 7.2.1 libhwcontrol.so.1 in 7.2.2.


@KAMYSHAN commented on GitHub (Feb 7, 2025):

Hello there,
I'm curious about the possibility of adding an NVME cache to my current configuration. I currently have a 918+ setup with a valid pair, but I'm wondering if it's possible to change the device without losing any data. Can you please advise me on which option would be best and whether my valid pair would still function on a different setup?
My current DSM version is DSM 7.2.2 Update 2, and my processor is an Intel(R) Xeon(R) E3-1265L v3 clocked at 2.50GHz, with 16 GB of RAM and integrated Intel HD graphics.
Thank you for your assistance!


@KAMYSHAN commented on GitHub (Feb 10, 2025):

![Image](https://github.com/user-attachments/assets/555b6b93-48c7-43d5-bc03-5dab7c910b86)
![Image](https://github.com/user-attachments/assets/e2f50426-0212-4e10-a6a2-14a836243a27)
![Image](https://github.com/user-attachments/assets/f0fadf5d-01c5-4285-a57f-f6408adaf72b)
![Image](https://github.com/user-attachments/assets/0c7ce25b-9025-4eac-8aa6-bbd92fde2246)

@KAMYSHAN commented on GitHub (Feb 10, 2025):

After updating the bootloader, a new disk appeared in the system. However, it is not possible to use it as a cache at this time.


@AuxXxilium commented on GitHub (Feb 10, 2025):

> After updating the bootloader, a new disk appeared in the system. However, it is not possible to use it as a cache at this time.

This is not a help channel for Arc!
