[GH-ISSUE #171] Failed to re-add Samsung SSD after upgrading to DSM 7.2.1-69057 Update 3 and running the script #574

Closed
opened 2026-03-11 12:16:31 +03:00 by kerem · 29 comments

Originally created by @jk1z on GitHub (Dec 13, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/171

Originally assigned to: @007revad on GitHub.

After upgrading to DSM 7.2.1-69057 Update 3 and re-running the script, somehow the SSD is not included in the storage pool. How can I re-add the SSD?
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/131ab84d-a559-4980-8cc5-34110453de5d)
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/cf14cc7a-cbf8-43b7-a0eb-0e2afd370fbb)
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/b38d7b7a-c899-4356-aad0-777d14d581d8)
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/3022c5eb-4e83-4105-a098-55fb5d5cfd93)

kerem closed this issue 2026-03-11 12:16:36 +03:00

@007revad commented on GitHub (Dec 13, 2023):

Did you do the Online Assemble that the warning mentioned?

See [how to do an online assemble](https://github.com/007revad/Synology_M2_volume/blob/main/images/create_m2_volume_online_assemble.png)


@jk1z commented on GitHub (Dec 17, 2023):

@007revad No, because Online Assemble is only available for an "available storage pool", and that option wasn't there.


@007revad commented on GitHub (Dec 18, 2023):

Try running the script again then rebooting.


@007revad commented on GitHub (Dec 23, 2023):

@jk1z not responding.


@jk1z commented on GitHub (Dec 23, 2023):

@007revad Hi, I have tried downloading the new binary and running it. Still no luck. I have even taken the NVMe drive out and formatted it.


@jk1z commented on GitHub (Dec 23, 2023):

I think it's stuck in a state where it's "in" the storage pool config. However, because it's not in the UI, I cannot remove it and then repair it.


@jk1z commented on GitHub (Dec 23, 2023):

Is there a way you know of to SSH in and remove the SSD's storage config?


@007revad commented on GitHub (Dec 23, 2023):

> I have even taken the nvme drive out and format it.

As you have no data on the drive now, you could try https://github.com/007revad/Synology_M2_volume, which will create the DSM system and swap partitions and create a storage pool. After a reboot the Online Assemble option should appear.


@jk1z commented on GitHub (Dec 23, 2023):

@007revad I'm getting this error :'(
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/ad406b82-4bc5-4557-8746-7d1043434807)


@jk1z commented on GitHub (Dec 23, 2023):

![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/89853f93-2a94-44ba-82de-e01dc0b9987e)
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/9da2c557-8dde-4ee1-96f1-05016239578e)
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/5275c1ff-b069-4786-b7eb-f0207b0dbf04)

I have successfully executed the script but did not get the new storage pool.

@007revad commented on GitHub (Dec 23, 2023):

DSM thinks that NVMe drive is part of a cache group. Maybe a read/write cache with 1 NVMe drive missing.

Did you previously have a cache setup for volume 1?

If you go to "Storage Manager > Storage" and click on "Create > Volume" is the NVMe drive available?


@jk1z commented on GitHub (Dec 24, 2023):

Yes, I had one in DSM 7.2.1 Update 2, but once I upgraded to Update 3 the NVMe drive disappeared from the cache group.


@jk1z commented on GitHub (Dec 24, 2023):

![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/e825abb4-bf17-48cc-a10a-19ed5594537a)

@007revad commented on GitHub (Dec 24, 2023):

A couple of people have reported that they needed to run the script and reboot, 2 or 3 times to get their NVMe drives back.

Try the following command:
`sudo -i synostorage --unlock-disk /dev/nvme0`

Then reboot.

Apparently it can take a few hours for things to appear normal.


@jk1z commented on GitHub (Dec 25, 2023):

> A couple of people have reported that they needed to run the script and reboot, 2 or 3 times to get their NVMe drives back.
>
> Try the following command: `sudo -i synostorage --unlock-disk /dev/nvme0`
>
> Then reboot.
>
> Apparently it can take a few hours for things to appear normal.

When you refer to the script, which script is it? The "use NVMe as SSD drive" one, or the "add third-party NVMe to the db" one?


@jk1z commented on GitHub (Dec 25, 2023):

Still no luck, but I will perform a data scrub to see if it does any good.


@007revad commented on GitHub (Dec 25, 2023):

> when you referring to the script which script is it? use nvme as sdd drive or adding third party nvme to the db?

The syno_hdd_db script.


@jk1z commented on GitHub (Dec 26, 2023):

> The syno_hdd_db script.

I have tried 5 times. The drive is still stuck in the detected state, and I cannot reassemble it.


@007revad commented on GitHub (Dec 26, 2023):

You could try shutting down the NAS, removing the NVMe drive, booting up, shutting down, inserting the NVMe drive, and booting up to see if that clears the error.

What do the following commands return?

`sudo nvme list`

`udevadm info /dev/nvme0n1`

`cat /proc/mdstat | grep -E -A 2 'nvme|unused'`

`ls /run/synostorage/disk_cache_target`

`for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done`
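The flag-dump one-liner above can also be written as a small function. This is an editor's sketch, not part of the thread or of syno_hdd_db: `dump_disk_flags` is a hypothetical helper name, and on a real NAS the directory it expects would be the DSM-specific `/run/synostorage/disks/nvme0n1` path shown above.

```shell
#!/bin/sh
# Sketch (assumption: a directory of one-value-per-file flag files,
# as DSM keeps under /run/synostorage/disks/<disk>).
# Prints "name: value" for each regular file in the directory.
dump_disk_flags() {
    dir="$1"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        # $(cat ...) strips the trailing newline, so each flag is one line
        printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
    done
}

# On the NAS: dump_disk_flags /run/synostorage/disks/nvme0n1
```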


@PeterSuh-Q3 commented on GitHub (Dec 29, 2023):

Yesterday, a user on a DS918+ who was already using two Micron 1100 2TB SSDs from Synology's compatibility list reported a problem after the drive database information was updated. I will raise the details as a separate issue.

![Screenshot 2023-12-29 11:19:24 AM](https://github.com/007revad/Synology_HDD_db/assets/85427533/84155587-8e5f-4863-a172-9642dda12d32)

It seems to me that this issue is also relevant. The problem appears to have been caused by a merge update to DB information for drives already included in the compatibility list.

![Screenshot 2023-12-29 12:21:22 PM](https://github.com/007revad/Synology_HDD_db/assets/85427533/dc208210-2236-43ac-a0f8-a7515d91037e)


@jk1z commented on GitHub (Dec 29, 2023):

> `for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done`

```
ash-4.4# sudo nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p1   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p2   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p3   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
ash-4.4# udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S3X4NB0K300311M
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=973961

ash-4.4# cat /proc/mdstat | grep -E -A 2 'nvme|unused'
unused devices: <none>
ash-4.4# ls /run/synostorage/disk_cache_target
ash-4.4# for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done
adv_damage_weight: 0
adv_status: not_support
bad_sec_ct: -1
below_remain_life_mail_notify_thr: 0
below_remain_life_show_thr: 0
below_remain_life_thr: 0
compatibility: disabled
compatibility_action: {"alert":false,"hide_alloc_status":false,"hide_is4Kn":false,"hide_remain_life":false,"hide_sb_days_left":false,"hide_serial":false,"hide_temperature":false,"hide_unc":false,"notification":false,"notify_health_status":true,"notify_lifetime":true,"notify_unc":true,"selectable":true,"send_health_report":true,"show_lifetime_chart":true,"ui_compatibility":"support"}
compatibility.lock:
container:
critical_warning: 0
dsl_cmd_support: 0
erase_time: 1
firm: 3B7QCXE7
firm_status_from_db: do_nothing
firm_status_from_db.lock:
force_compatibility: support
id: M.2 Drive 1
ironwolf: 0
is_bundle_ssd: 0
is_syno_drive: 1
low_perf_in_raid: normal
low_perf_in_raid_disk_list:
m2_pool_support: 1
mask_serial: 0
model: Samsung SSD 960 EVO 500GB
predict_status: not_support
predict_weight: 0
read_only: 0
remain_life: 99
remain_life_danger: 0
reset_fail_status: normal
reset_fail_weight: 0
sct_cmd_support: 0
seq_status: normal
serial: S3X4NB0K300311M
smart: normal
smart_attr_ignore: 1
smart_damage_weight: 0
smart_selftest_log_type: 0
smart_test_ignore: 1
smart_test_support: 0
ssd_bad_block_over_thr: 0
temperature: 32
timeout_status: normal
timeout_weight: 0
type: SSD
ui_serial: S3X4NB0K300311M
unc_status: normal
unc_weight: 0
vendor: Samsung
wdda_status: not_support
wdda_support: 0
```

@007revad commented on GitHub (Dec 31, 2023):

These 2 stand out to me:

> compatibility: disabled
> is_syno_drive: 1

Have you previously run [Synology_enable_M2_volume](https://github.com/007revad/Synology_enable_M2_volume)?

What do these commands return?

`ls -l /usr/lib/libhwcontrol.so.*`

`md5sum -b /usr/lib/libhwcontrol.so.1`

The last command should return:
`afdcbf2ca3aa188cd363e276a1f89754 */usr/lib/libhwcontrol.so.1`

Also try the following:

1. Disable any syno_hdd_db schedules you have.
2. Run `sudo -i syno_hdd_db.sh --restore` then reboot.
3. Run this version of syno_hdd_db.sh with the -nr options: https://github.com/007revad/Synology_HDD_db/releases/tag/v3.3.74
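The checksum comparison above can be scripted so a mismatch is obvious at a glance. A minimal sketch from the editor, not part of syno_hdd_db: `check_md5` is a hypothetical helper, and the expected hash is the one quoted in this comment for this DSM build only.

```shell
#!/bin/sh
# Sketch: succeed only when a file's md5 matches an expected hash.
check_md5() {
    # md5sum -b prints "<hash> *<file>"; keep only the hash field
    [ "$(md5sum -b "$1" | awk '{print $1}')" = "$2" ]
}

# On the NAS (expected hash for /usr/lib/libhwcontrol.so.1 quoted above):
# check_md5 /usr/lib/libhwcontrol.so.1 afdcbf2ca3aa188cd363e276a1f89754 \
#     && echo "libhwcontrol.so.1 is stock" \
#     || echo "libhwcontrol.so.1 has been modified"
```

A mismatch would suggest the library was patched (e.g. by Synology_enable_M2_volume) and should be restored before retrying.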

@jk1z commented on GitHub (Jan 1, 2024):

![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/a28d2d0c-50b8-49fb-bde5-1f74ccd1b489)

> Have you previously run [Synology_enable_M2_volume](https://github.com/007revad/Synology_enable_M2_volume)?

I don't think so. Should I?


@jk1z commented on GitHub (Jan 1, 2024):

![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/9586285a-34c2-45b9-ab64-1337e0af908e)
Looks like this file has been modified.


@jk1z commented on GitHub (Jan 1, 2024):

I will restore all of the files and try v3.3.74


@jk1z commented on GitHub (Jan 1, 2024):

> These 2 stand out to me:
>
> > compatibility: disabled
> > is_syno_drive: 1
>
> Have you previously run [Synology_enable_M2_volume](https://github.com/007revad/Synology_enable_M2_volume)?
>
> What do these commands return?
>
> `ls -l /usr/lib/libhwcontrol.so.*`
>
> `md5sum -b /usr/lib/libhwcontrol.so.1`
>
> The last command should return: `afdcbf2ca3aa188cd363e276a1f89754 */usr/lib/libhwcontrol.so.1`
>
> Also try the following:
>
> 1. Disable any syno_hdd_db schedules you have.
> 2. Run `sudo -i syno_hdd_db.sh --restore` then reboot.
> 3. Run this version of syno_hdd_db.sh with the -nr options: https://github.com/007revad/Synology_HDD_db/releases/tag/v3.3.74

I did the following. It's still stuck.
![image](https://github.com/007revad/Synology_HDD_db/assets/18542127/f96a1d6d-41c0-47e4-9c2e-3b31e5bf52b8)


@jk1z commented on GitHub (Jan 1, 2024):

I ran the debug commands again. Here is the output.

```
overlord@Synology:~$ sudo nvme list
Password:
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p1   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p2   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
/dev/nvme0n1p3   S3X4NB0K300311M      Samsung SSD 960 EVO 500GB                1         118.24  GB / 500.11  GB    512   B +  0 B   3B7QCXE7
overlord@Synology:~$ udevadm info /dev/nvme0n1
P: /devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
N: nvme0n1
E: DEVNAME=/dev/nvme0n1
E: DEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0/nvme/nvme0/nvme0n1
E: DEVTYPE=disk
E: MAJOR=259
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:1b.0/0000:01:00.0
E: SUBSYSTEM=block
E: SYNO_ATTR_SERIAL=S3X4NB0K300311M
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: TAGS=:systemd:
E: USEC_INITIALIZED=382185

overlord@Synology:~$ cat /proc/mdstat | grep -E -A 2 'nvme|unused'
unused devices: <none>
overlord@Synology:~$ ls /run/synostorage/disk_cache_target
overlord@Synology:~$ for f in $(ls /run/synostorage/disks/nvme0n1); do echo -n "${f}: " && cat /run/synostorage/disks/nvme0n1/$f && echo; done
adv_damage_weight: 0
adv_status: not_support
bad_sec_ct: -1
below_remain_life_mail_notify_thr: 0
below_remain_life_show_thr: 0
below_remain_life_thr: 0
compatibility: disabled
compatibility_action: {"alert":false,"hide_alloc_status":false,"hide_is4Kn":false,"hide_remain_life":false,"hide_sb_days_left":false,"hide_serial":false,"hide_temperature":false,"hide_unc":false,"notification":false,"notify_health_status":true,"notify_lifetime":true,"notify_unc":true,"selectable":true,"send_health_report":true,"show_lifetime_chart":true,"ui_compatibility":"support"}
compatibility.lock:
container:
critical_warning: 0
dsl_cmd_support: 0
erase_time: 1
firm: 3B7QCXE7
firm_status_from_db: do_nothing
firm_status_from_db.lock:
force_compatibility: support
id: M.2 Drive 1
ironwolf: 0
is_bundle_ssd: 0
is_syno_drive: 1
low_perf_in_raid: normal
low_perf_in_raid_disk_list:
m2_pool_support: 1
mask_serial: 0
model: Samsung SSD 960 EVO 500GB
predict_status: not_support
predict_weight: 0
read_only: 0
remain_life: 99
remain_life_danger: 0
reset_fail_status: normal
reset_fail_weight: 0
sct_cmd_support: 0
seq_status: normal
serial: S3X4NB0K300311M
smart: normal
smart_attr_ignore: 1
smart_damage_weight: 0
smart_selftest_log_type: 0
smart_test_ignore: 1
smart_test_support: 0
ssd_bad_block_over_thr: 0
temperature: 31
timeout_status: normal
timeout_weight: 0
type: SSD
ui_serial: S3X4NB0K300311M
unc_status: normal
unc_weight: 0
vendor: Samsung
wdda_status: not_support
wdda_support: 0
```

@007revad commented on GitHub (Jan 2, 2024):

> I did the following. It still stuck.

I should have asked if you were using Xpenology. Hopefully PeterSuh-Q3 can help you.


@jk1z commented on GitHub (Jan 4, 2024):

Ah, OK, I see. I might replace the NVMe with another one. It looks like this config is permanently stuck.
