[GH-ISSUE #204] M2 drive missing after update to 7.2.2-72806 Update 2 #473
Originally created by @morphias2004 on GitHub (Jan 9, 2025).
Original GitHub issue: https://github.com/007revad/Synology_enable_M2_volume/issues/204
I have a 918+. This happens every update, but this time I can't get it back.
Normally, a combination of running a restore with Synology_HDD_db and Synology_enable_M2_volume and then letting Synology_enable_M2_volume run as a scheduled task resolves it.
In the past I logged an issue where disabling Synology_HDD_db stopped the issue from happening following updates, but that is not the case this time.
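For readers following along, the restore-then-rerun sequence described above might look roughly like this over SSH (a sketch only: the script filenames, their location, and the `--restore` flag are assumptions based on the two repos' conventions, and both scripts need root):

```bash
# hedged sketch: script paths and the --restore flag are assumptions,
# not confirmed in this thread; both scripts refuse to run without root
sudo ./syno_hdd_db.sh --restore             # undo Synology_HDD_db's edits
sudo ./syno_enable_m2_volume.sh --restore   # undo Synology_enable_M2_volume's edits
sudo ./syno_enable_m2_volume.sh             # re-apply, or let the scheduled task do it
```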
@morphias2004 commented on GitHub (Jan 9, 2025):
I have just found that moving Synology_enable_M2_volume from a boot task to a shutdown task resolved the issue, but on the subsequent reboot the issue returned.
Synology_HDD_db task remains disabled.
@morphias2004 commented on GitHub (Jan 9, 2025):
Further update:
Re-enabling the Synology_HDD_db task on boot, and changing the user for both tasks from root to the account I created for normal administration of the NAS (not named admin, because Synology disabled that name for security reasons), has resolved the issue.
@morphias2004 commented on GitHub (Jan 10, 2025):
Another update:
Rebooted again and the issue has returned.
Tried running a restore with both scripts and rebooting, but the issue persists.
Changed the user back to root and rebooted again, because the scripts were erroring; they can only be run as root or via sudo.
I'm trying to work out the exact set of steps that gets it back online.
I managed to get it back online by manually running Synology_HDD_db and then rebooting to let Synology_enable_M2_volume run automatically at shutdown from the scheduled task. So I edited the task so that it ran Synology_HDD_db before Synology_enable_M2_volume on shutdown, but the issue returned.
I removed Synology_HDD_db from the task again and rebooted, and the issue persisted.
I manually ran Synology_HDD_db from SSH and rebooted, letting Synology_enable_M2_volume run automatically at shutdown, and the drive returned.
I'm at a bit of a loss. I have to run Synology_HDD_db manually from SSH before a shutdown to get the drive to come back online after a reboot.
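In command form, the only sequence that reliably works here seems to be (a sketch; the script's location is an assumption):

```bash
# the manual workaround described above; the script path is an assumption.
# syno_enable_m2_volume then runs from the scheduled shutdown task on reboot.
sudo /volume1/scripts/syno_hdd_db.sh
sudo reboot
```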
@KAMYSHAN commented on GitHub (Jan 20, 2025):
I also upgraded to version 7.2.2-72806 Update 2, and after it booted, the NVMe disk that had been working as a cache disappeared. I have tried everything mentioned above, but nothing seems to help get it back. The disk is a Samsung 980 and the Synology is a 918+. Is there any hope of getting the disk back up and running?
@007revad commented on GitHub (Jan 22, 2025):
@KAMYSHAN
Try removing the missing cache drive in Storage Manager. Then reboot and see if it is visible in Storage Manager.
If that doesn't solve it, you can run the following script and reply with the result.
https://github.com/007revad/Synology_HDD_db/blob/test/nvme_check.sh
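One way to fetch and run that script over SSH (the raw-file URL below is inferred from the blob link above, so verify it before running):

```bash
# fetch the diagnostic script and run it as root; the raw.githubusercontent.com
# path is an inference from the blob URL, not confirmed in this thread
curl -L -o nvme_check.sh https://raw.githubusercontent.com/007revad/Synology_HDD_db/test/nvme_check.sh
sudo bash nvme_check.sh
```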
@007revad commented on GitHub (Jan 22, 2025):
@morphias2004
I'm not sure what's going on there. Some people have had issues with missing NVMe drives when using both syno_hdd_db and syno_enable_m2_volume. Usually the solution is to use only syno_hdd_db.
This works with '20 series and newer models because they only need syno_hdd_db and don't need syno_enable_m2_volume. But I don't know if it will work with a DS918+.
@KAMYSHAN commented on GitHub (Jan 22, 2025):
The disk is different from what was mentioned in the previous post, and it is also not visible.
@KAMYSHAN commented on GitHub (Jan 22, 2025):
For reference: Arc Loader version 1.4.8 is currently installed on the machine, and two NVMe disks are connected to it. Today I plan to update the loader to the latest version, 1.5.1.
Update
I have installed the new 1.5.1 evo bootloader, but the situation has not changed. I tried two different drives, a Samsung 980 and a KingSpec NX-256, but the result was the same.
@007revad commented on GitHub (Jan 22, 2025):
Arc Addons includes syno_hdd_db (as hdddb) and a cut-down version of syno_enable_m2_volume (as nvmevolume).
@KAMYSHAN commented on GitHub (Jan 23, 2025):
these add-ons are enabled
@AuxXxilium commented on GitHub (Jan 25, 2025):
and we are running into an issue if we use hdddb and nvmevolume together. do you have an idea @007revad? i guess it's related to hdddb
@007revad commented on GitHub (Jan 25, 2025):
I've known for a while that syno_hdd_db and syno_enable_m2_volume together cause an issue. The solution is normally to only use syno_hdd_db. I personally don't use syno_enable_m2_volume any more.
If hdddb and nvmevolume also cause the same issue, then it must be caused by hdddb, because you've pared down nvmevolume to only editing libhwcontrol.so.1.
I suspect the issue started with one of these changes:
I'll have to search through all the issues to find when this issue was first reported.
The only change from v1.0.34 to v2.0.35 was that I added:
In v3.0.55 that was changed to:
After comparing v3.5.90 to v3.5.89, and v3.3.70 to v3.3.69, the changes in those versions can't be the cause, because the script only edits storage_panel.js if an E10M20-T1, M2D20, M2D18 or M2D17 is found, or the `-p, --pci` option is used.
I suspect the script should only `echo 1 > /run/synostorage/disks/nvme0n1/m2_pool_support` if libhwcontrol.so.1 has not been edited?
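As a sketch, that guard might look like the following, assuming a stock-checksum comparison (the checksum value is a placeholder, not a real DSM value):

```bash
# sketch of the suggested guard: only flag M.2 pool support when libhwcontrol.so.1
# is still stock; the md5 value below is a placeholder, not a real DSM checksum
lib=/usr/lib/libhwcontrol.so.1
stock_md5="PLACEHOLDER_STOCK_MD5"
if [ "$(md5sum "$lib" | awk '{print $1}')" = "$stock_md5" ]; then
    for disk in /run/synostorage/disks/nvme*; do
        echo 1 > "$disk/m2_pool_support"
    done
fi
```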
@AuxXxilium commented on GitHub (Jan 25, 2025):
thanks for answering, let me check. i know we use -p in the hdddb script, i will do some tests and report back
@AuxXxilium commented on GitHub (Jan 25, 2025):
one more thing: the issue only exists on models without a device tree, so i guess there is something missing, or we patch more than we need. i'll try to figure it out with different versions and by doing the patches manually.
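For readers unsure which camp their model falls into, a generic Linux check (not a Synology-specific API) is whether the kernel exposes a device tree:

```bash
# generic Linux check, not a Synology API: the kernel exposes /proc/device-tree
# only when the model booted with a device tree
if [ -d /proc/device-tree ]; then
    echo "devicetree model"
else
    echo "non-devicetree model (e.g. the DS918+ discussed here)"
fi
```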
@KAMYSHAN commented on GitHub (Feb 6, 2025):
It still doesn't see the NVMe SSD. The latest loader is installed. I also put in an officially supported disk... I have tried many different configuration options.
@AuxXxilium commented on GitHub (Feb 6, 2025):
it's working for dsm 7.2.1 but not 7.2.2, since syno changed multiple things with this release. i can't find the related parts in dsm, so it's not loader dependent.
@AuxXxilium commented on GitHub (Feb 6, 2025):
slot mapping only works for devicetree models, and your 918 isn't one.
@007revad commented on GitHub (Feb 6, 2025):
I'm comparing 7.2.1 to 7.2.2 to see what's changed. So far none of the changes relate to M.2 drives, except maybe this change:
Line 3 has changed in both:
/usr/lib/systemd/system/syno-detected-pool-scan.service
/usr/lib/systemd/system/syno-bootup-done.target.wants/syno-detected-pool-scan.service
7.2.1:
```
After=syno-space.target syno-bootup-done.service
```
7.2.2:
```
After=syno-space.target syno-bootup-done.service syno-check-disk-compatibility.service
```
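To confirm which variant a given install has, the unit's After= line can be inspected directly:

```bash
# check the ordering dependencies of the pool-scan unit on a live DSM install
grep '^After=' /usr/lib/systemd/system/syno-detected-pool-scan.service
```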
@AuxXxilium commented on GitHub (Feb 6, 2025):
i have different checksums for different lib files, and also for libhwcontrol.so.1, which we need to patch. haven't checked the differences yet, and haven't tried using the 7.2.1 libhwcontrol.so.1 in 7.2.2.
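A quick way to compare the library across the two releases (the extraction directories below are placeholders for wherever the two DSM file sets were unpacked):

```bash
# compare the shipped library between two extracted DSM versions; the
# dsm-7.2.1/ and dsm-7.2.2/ directories are placeholders, not real mount points
md5sum dsm-7.2.1/usr/lib/libhwcontrol.so.1 dsm-7.2.2/usr/lib/libhwcontrol.so.1
```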
@KAMYSHAN commented on GitHub (Feb 7, 2025):
Hello there,
I'm curious about the possibility of adding an NVMe cache to my current configuration. I currently have a 918+ setup with a valid pair, but I'm wondering if it's possible to change the device without losing any data. Can you please advise me on which option would be best, and whether my valid pair would still function on a different setup?
My current DSM version is DSM 7.2.2 Update 2, and my processor is an Intel(R) Xeon(R) E3-1265L v3 clocked at 2.50GHz, with 16 GB of RAM and integrated Intel HD graphics.
Thank you for your assistance!
@KAMYSHAN commented on GitHub (Feb 10, 2025):
After updating the bootloader, a new disk appeared in the system. However, it is not possible to use it as a cache at this time.
@AuxXxilium commented on GitHub (Feb 10, 2025):
this is not a help channel for arc!