mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #171] Failed to re-add Samsung SSD after upgrading to DSM 7.2.1-69057 Update 3 and ran the script #574
Originally created by @jk1z on GitHub (Dec 13, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/171
Originally assigned to: @007revad on GitHub.
After upgrading to DSM 7.2.1-69057 Update 3 and re-running the script, the SSD is somehow no longer included in the storage pool. How can I re-add the SSD?

@007revad commented on GitHub (Dec 13, 2023):
Did you do the Online Assemble that the warning mentioned?
See how to do an online assemble
@jk1z commented on GitHub (Dec 17, 2023):
@007revad No, because Online Assemble is only available for an "available storage pool", and that option wasn't there.
@007revad commented on GitHub (Dec 18, 2023):
Try running the script again then rebooting.
@007revad commented on GitHub (Dec 23, 2023):
@jk1z not responding.
@jk1z commented on GitHub (Dec 23, 2023):
@007revad Hi, I have tried downloading the new binary and running it. Still no luck. I have even taken the NVMe drive out and formatted it.
@jk1z commented on GitHub (Dec 23, 2023):
I think it's stuck in a state where it's "in" the storage pool config. However, because it's not shown in the UI, I cannot remove it and then repair it.
@jk1z commented on GitHub (Dec 23, 2023):
Is there a way you know of that I can SSH in and remove the SSD's storage config?
@007revad commented on GitHub (Dec 23, 2023):
As you have no data on the drive now, you could try https://github.com/007revad/Synology_M2_volume which will create the DSM system and swap partitions and create a storage pool. After a reboot, the Online Assemble option should appear.
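For context, the linked Synology_M2_volume tool automates roughly the following steps (an assumed outline based on the description above, not a verified transcript of that script; the outline is printed rather than executed because the real commands destroy data on the drive):

```shell
# Assumed outline of what Synology_M2_volume automates (see that repo for the
# real steps). Printed only -- the actual commands would destroy data.
steps='partition the NVMe drive with the DSM system and swap layout
create a single-drive md RAID array on the data partition
format the array (btrfs or ext4)
reboot, then use Online Assemble in Storage Manager'
echo "$steps"
```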
@jk1z commented on GitHub (Dec 23, 2023):
@007revad I'm getting this error :'(
(screenshot in the original issue)

@jk1z commented on GitHub (Dec 23, 2023):
(screenshot in the original issue)
@007revad commented on GitHub (Dec 23, 2023):
DSM thinks that NVMe drive is part of a cache group. Maybe a read/write cache with 1 NVMe drive missing.
Did you previously have a cache setup for volume 1?
If you go to "Storage Manager > Storage" and click on "Create > Volume" is the NVMe drive available?
@jk1z commented on GitHub (Dec 24, 2023):
Yes, I had one in DSM 7.2.1 Update 2, but once I upgraded to Update 3 the NVMe drive disappeared from the cache group.
@jk1z commented on GitHub (Dec 24, 2023):
(screenshot in the original issue)
@007revad commented on GitHub (Dec 24, 2023):
A couple of people have reported that they needed to run the script and reboot 2 or 3 times to get their NVMe drives back.
Try the following command:
sudo -i synostorage --unlock-disk /dev/nvme0
Then reboot.
Apparently it can take a few hours for things to appear normal.
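The unlock step can be wrapped with a quick guard so it is only attempted when the device node actually exists (a hedged sketch; `synostorage --unlock-disk` is quoted from the comment above and not independently verified):

```shell
# Hedged sketch: only request the unlock if the device node exists.
# `synostorage --unlock-disk` is taken from the comment above, not verified.
unlock_if_present() {
    if [ -e "$1" ]; then
        synostorage --unlock-disk "$1" && echo "unlock requested for $1; now reboot"
    else
        echo "no $1 present" >&2
        return 1
    fi
}

unlock_if_present /dev/nvme0 || true
```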
@jk1z commented on GitHub (Dec 25, 2023):
When you refer to the script, which script is it? The one to use NVMe as an SSD drive, or the one for adding third-party NVMe drives to the db?
@jk1z commented on GitHub (Dec 25, 2023):
Still no luck but I will perform a data scrubbing to see if it does any good
@007revad commented on GitHub (Dec 25, 2023):
The syno_hdd_db script.
@jk1z commented on GitHub (Dec 26, 2023):
(screenshot in the original issue)
@007revad commented on GitHub (Dec 26, 2023):
You could try shutting down the NAS, remove the NVMe drive, bootup, shut down, insert NVMe drive and boot up to see if it clears the error.
What do the following commands return?
sudo nvme list
udevadm info /dev/nvme0n1
cat /proc/mdstat | grep -E -A 2 'nvme|unused'
ls /run/synostorage/disk_cache_target
for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done

@PeterSuh-Q3 commented on GitHub (Dec 29, 2023):
Yesterday, a user with a DS918+ who was already using two Micron 1100 2TB SSDs, which are included in Synology's compatibility list, reported that a problem occurred after the drive database information was updated. I will follow up on the details in a separate issue.
It seems to me that this issue is related: the problem appears to have been caused by a merge update to DB information that was already included in the compatibility list.
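The diagnostic commands requested earlier in the thread can be bundled into a single labeled report (a hedged sketch; the /run/synostorage paths are Synology-specific and will simply be absent on other systems):

```shell
#!/bin/sh
# Hedged sketch: run each diagnostic from the earlier comment and label its
# output. Missing commands or paths (e.g. on non-Synology systems) are
# reported inline rather than being fatal.
run() {
    echo "### $*"
    "$@" 2>&1
    echo
}

run nvme list
run udevadm info /dev/nvme0n1
run grep -E -A 2 'nvme|unused' /proc/mdstat
run ls /run/synostorage/disk_cache_target
if [ -d /run/synostorage/disks/nvme0n1 ]; then
    for f in /run/synostorage/disks/nvme0n1/*; do
        printf '%s: ' "${f##*/}" && cat "$f" && echo
    done
fi
```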
@jk1z commented on GitHub (Dec 29, 2023):
(screenshot in the original issue)
@007revad commented on GitHub (Dec 31, 2023):
These 2 stand out to me:
Have you previously run Synology_enable_M2_volume?
What do these commands return?
ls -l /usr/lib/libhwcontrol.so.*
md5sum -b /usr/lib/libhwcontrol.so.1

The last command should return:
afdcbf2ca3aa188cd363e276a1f89754 */usr/lib/libhwcontrol.so.1

Also try the following:
sudo -i syno_hdd_db.sh --restore
then reboot.

@jk1z commented on GitHub (Jan 1, 2024):
I don't think so. Should I?
@jk1z commented on GitHub (Jan 1, 2024):
Looks like this file has been modified
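The checksum comparison can be scripted so it reports clearly whether the library is still pristine (a hedged sketch; the expected hash is the DSM value quoted earlier in this thread and should be treated as version-specific):

```shell
# Hedged sketch: compare a file's md5 against a known-good value.
# The expected hash below is the one quoted earlier in this thread for
# DSM's /usr/lib/libhwcontrol.so.1; treat it as version-specific.
check_md5() {
    actual=$(md5sum -b "$1" 2>/dev/null | awk '{print $1}')
    [ "$actual" = "$2" ]
}

if check_md5 /usr/lib/libhwcontrol.so.1 afdcbf2ca3aa188cd363e276a1f89754; then
    echo "libhwcontrol.so.1 matches the known-good hash"
else
    echo "libhwcontrol.so.1 differs or is missing -- consider: sudo -i syno_hdd_db.sh --restore"
fi
```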
@jk1z commented on GitHub (Jan 1, 2024):
I will restore all of the files and try v3.3.74
@jk1z commented on GitHub (Jan 1, 2024):
I did the following. It's still stuck.
(screenshot in the original issue)
@jk1z commented on GitHub (Jan 1, 2024):
I ran the debug commands again. Here is the output:
(output attached in the original issue)
@007revad commented on GitHub (Jan 2, 2024):
I should have asked if you were using Xpenology. Hopefully PeterSuh-Q3 can help you.
@jk1z commented on GitHub (Jan 4, 2024):
Ah, OK, I see. I might replace the NVMe drive with another one. It looks like this config is permanently stuck.