[GH-ISSUE #6] Release 1.1.9 not working on RS4021xs+ #717

Closed
opened 2026-03-12 16:05:36 +03:00 by kerem · 49 comments
Owner

Originally created by @jayanty on GitHub (Mar 9, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/6

When I execute the script I get a bunch of errors like this:

sda SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument

I think these come from the getfirmware function. If I replace hdparm -I with smartctl -i, that function no longer returns errors and the script runs normally, reporting that my drives already exist in the db files (because I've run it a few times now). Unfortunately, Synology still shows the drives as not compatible unless I use the -f flag. I want to use the deduplication capability, so I don't want to use the force flag.
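The swap amounts to something like the sketch below (hypothetical helper name, not the script's actual getfirmware; the sample smartctl output is inlined so it runs standalone instead of touching a real drive):

```shell
# Hypothetical sketch of a getfirmware-style helper that parses smartctl -i
# output instead of calling hdparm -I (which fails on these SAS drives).
# SAS drives report "Revision:", SATA drives report "Firmware Version:".
get_firmware() {
    printf '%s\n' "$1" | awk -F': *' '/^Revision:|^Firmware Version:/ {print $2; exit}'
}

# Captured sample instead of "$(smartctl -i /dev/sda)" so the sketch is standalone
sample='Vendor:               SAMSUNG
Product:              P043S3T8 EMC3840
Revision:             ESFA'

get_firmware "$sample"   # prints: ESFA
```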

What could be wrong here? What tests can I do?

The drives I am trying to add:

HDD/SSD models found: 6
P043S3T8EMC3840,Revision: ESFA
P043S7T6EMC7680,Revision: ESV7
PX02SMF040,Revision: MS01
PX02SMF040,Revision: MS02
PX04SMB040,Revision: AM04
PX05SMB040,Revision: 0101

NVMe drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

P043S3T8EMC3840 already exists in rs4021xs+_host_v7.db
P043S3T8EMC3840 already exists in rs4021xs+_host_v7.db.new
P043S7T6EMC7680 already exists in rs4021xs+_host_v7.db
P043S7T6EMC7680 already exists in rs4021xs+_host_v7.db.new
PX02SMF040 already exists in rs4021xs+_host_v7.db
PX02SMF040 already exists in rs4021xs+_host_v7.db.new
PX02SMF040 already exists in rs4021xs+_host_v7.db
PX02SMF040 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
PX05SMB040 already exists in rs4021xs+_host_v7.db
PX05SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

kerem 2026-03-12 16:05:36 +03:00
  • closed this issue
  • added the bug label

@007revad commented on GitHub (Mar 9, 2023):

Can you try running the following command: /usr/syno/sbin/synostgdisk --check-all-disks-compatibility to see if the drives appear with the green tick.

After a quick Google search I found that some SAS drives do not implement the SATA IDENTIFY command, which is what hdparm is sending them.

I've uploaded a develop version that uses smartctl to get the firmware version for SAS drives, which should at least avoid those error messages: https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh
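The develop version's logic is roughly the following sketch (simplified; the probe text here is a captured sample rather than a live smartctl call):

```shell
# Simplified sketch: choose the query tool per drive based on the transport
# that smartctl -i reports, so hdparm is never sent to a SAS drive.
is_sas() {
    printf '%s\n' "$1" | grep -qi '^Transport protocol:.*SAS'
}

# Captured sample of smartctl -i output for one drive
probe='Device type:          disk
Transport protocol:   SAS (SPL-3)'

if is_sas "$probe"; then
    echo "use smartctl -i"    # SAS: hdparm -I would fail
else
    echo "use hdparm -I"      # SATA: hdparm works
fi
```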


@jayanty commented on GitHub (Mar 9, 2023):

The PX04SMB040 drives show as compatible with that command; all others show as incompatible, even though they've been added to the DB.

The develop version of the script results in these errors:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 79 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 01 01 13 00 00 00 00 00 00 00 79 00
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 01 01 13 00 00 00 00 00 00 00 79 00
HDIO_DRIVE_CMD(identify) failed: Input/output error
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 55 04 01 00 00 00 00 00 00 00 78 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 77 03 01 1a 00 00 00 00 00 00 20 00
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 77 03 01 1a 00 00 00 00 00 00 21 00
HDIO_DRIVE_CMD(identify) failed: Input/output error
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 10 00 00 00 00 20 00 00 c0 00 00 00 25 20 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 28 00 00 00 00 20 00 00 00 00 00 00 85 77 02 01 1b 00 00 00 00 00 00 20 00
HDIO_GET_IDENTITY failed: Invalid argument
ERROR: No drives found!


@007revad commented on GitHub (Mar 9, 2023):

It sounds like your SAS drives are sda, sdb, etc. instead of sas1, sas2, etc. Can you run the following command and report back what it shows:
find /dev -maxdepth 1 \( -name "hd*" -o -name "sd*" -o -name "sas*" -o -name "sata*" -o -name "nvme*" \)


@jayanty commented on GitHub (Mar 10, 2023):

That's right. My drives are not named sas1, sas2, etc. Here's the output from that command:

/dev/sdw3
/dev/sdv3
/dev/sdw2
/dev/sdw1
/dev/sdv2
/dev/sdv1
/dev/sdw
/dev/sdx3
/dev/sdx2
/dev/sdx1
/dev/sdx
/dev/sdv
/dev/sdt2
/dev/sdt1
/dev/sdu2
/dev/sdu1
/dev/sdt
/dev/sdu
/dev/sds2
/dev/sds1
/dev/sds
/dev/sdr2
/dev/sdr1
/dev/sdr
/dev/sdq2
/dev/sdq1
/dev/sdq
/dev/sdp2
/dev/sdp1
/dev/sdp
/dev/sdo2
/dev/sdo1
/dev/sdo
/dev/sdn2
/dev/sdn1
/dev/sdn
/dev/sdm2
/dev/sdm1
/dev/sdm
/dev/sdl3
/dev/sdl2
/dev/sdl1
/dev/sdl
/dev/sdk2
/dev/sdk1
/dev/sdk
/dev/sdj3
/dev/sdj2
/dev/sdj1
/dev/sdh3
/dev/sdh2
/dev/sdh1
/dev/sdi3
/dev/sdi2
/dev/sdi1
/dev/sdj
/dev/sdh
/dev/sdi
/dev/sdg3
/dev/sdg2
/dev/sdg1
/dev/sdf3
/dev/sdf2
/dev/sdf1
/dev/sdg
/dev/sde3
/dev/sde2
/dev/sde1
/dev/sdf
/dev/sdd3
/dev/sdd2
/dev/sdd1
/dev/sde
/dev/sdc3
/dev/sdc2
/dev/sdc1
/dev/sdd
/dev/sdb3
/dev/sdb2
/dev/sdb1
/dev/sdc
/dev/sda3
/dev/sda2
/dev/sda1
/dev/sdb
/dev/sda
/dev/nvme1n1p1
/dev/nvme1n1
/dev/nvme0n1p1
/dev/nvme0n1
/dev/nvme1
/dev/nvme0
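That listing mixes whole disks with their partitions, so a script walking it has to filter the partition nodes out. A minimal sketch over sample names rather than live /dev:

```shell
# Minimal sketch: drop partition suffixes (sda1, nvme0n1p1) and keep only
# whole-disk device names. Sample input stands in for scanning /dev.
devices='/dev/sda
/dev/sda1
/dev/nvme0n1
/dev/nvme0n1p1
/dev/nvme0'

printf '%s\n' "$devices" | grep -Ev '(sd[a-z]+[0-9]+|nvme[0-9]+n[0-9]+p[0-9]+)$'
# prints:
#   /dev/sda
#   /dev/nvme0n1
#   /dev/nvme0
```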


@007revad commented on GitHub (Mar 10, 2023):

Does the following command return anything?
smartctl -i /dev/sda | grep -i sas


@007revad commented on GitHub (Mar 10, 2023):

Assuming that you are going to say that smartctl -i /dev/sda | grep -i sas returned either:

  • Transport protocol: SAS (SPL-3)
  • Transport protocol: SAS

I've edited the develop branch's syno_hdd_db.sh file which should now work for you.
https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh


@jayanty commented on GitHub (Mar 10, 2023):

Does the following command return anything?
smartctl -i /dev/sda | grep -i sas

Transport protocol: SAS (SPL-3)

The new script works without error and returns the output below. Unfortunately, other than the PX04SMB040 drives, all others still show up as unsupported even after this.

HDD/SSD models found: 3
P043S3T8EMC3840,ESFA
P043S7T6EMC7680,ESV7
PX04SMB040,AM04

NVMe drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

P043S3T8EMC3840 already exists in rs4021xs+_host_v7.db
P043S3T8EMC3840 already exists in rs4021xs+_host_v7.db.new
P043S7T6EMC7680 already exists in rs4021xs+_host_v7.db
P043S7T6EMC7680 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

Re-enabled support disk compatibility.

DSM successfully checked disk compatibility.


@007revad commented on GitHub (Mar 11, 2023):

Can you run the following test script and report back what it outputs.
https://github.com/007revad/Synology_HDD_db/blob/test/drive_info.sh

Also, can you run the following command, which will copy your rs4021xs+_host_v7.db file to your home folder as rs4021xs+_host_v7.db.txt:

sudo cp /var/lib/disk-compatibility/rs4021xs+_host_v7.db ~/rs4021xs+_host_v7.db.txt

Then attach the rs4021xs+_host_v7.db.txt file to your reply.


@jayanty commented on GitHub (Mar 11, 2023):

nvme0n1
NVMe Model: 'WD_BLACK SN850X 2000GB '
NVMe Model: 'WD_BLACK SN850X 2000GB'
NVMe Firmware: '620311WD'
NVMe Firmware: '620311WD'

nvme1n1
NVMe Model: 'WD_BLACK SN850X 2000GB '
NVMe Model: 'WD_BLACK SN850X 2000GB'
NVMe Firmware: '620311WD'
NVMe Firmware: '620311WD'

sda
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdb
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdc
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdd
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sde
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdf
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdg
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdh
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdi
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdj
Model: 'P043S3T8EMC3840'
Firmware: 'ESFA'

sdk
Model: 'PX04SMB040'
Firmware: 'AM04'

sdl
Model: 'PX04SMB040'
Firmware: 'AM04'

sdm
Model: 'PX04SMB040'
Firmware: 'AM04'

sdn
Model: 'PX04SMB040'
Firmware: 'AM04'

sdo
Model: 'PX04SMB040'
Firmware: 'AM04'

sdp
Model: 'PX04SMB040'
Firmware: 'AM04'

sdq
Model: 'PX04SMB040'
Firmware: 'AM04'

sdr
Model: 'PX04SMB040'
Firmware: 'AM04'

sds
Model: 'PX04SMB040'
Firmware: 'AM04'

sdt
Model: 'PX04SMB040'
Firmware: 'AM04'

sdu
Model: 'PX04SMB040'
Firmware: 'AM04'

sdv
Model: 'PX04SMB040'
Firmware: 'AM04'

sdx
Model: 'P043S7T6EMC7680'
Firmware: 'ESV7'

sdw
Model: 'PX04SMB040'
Firmware: 'AM04'

rs4021xs+_host_v7.db.txt


@007revad commented on GitHub (Mar 11, 2023):

Everything looks okay in the rs4021xs+_host_v7.db file.

  1. Are all 13 PX04SMB040 drives in the RS4021xs+?
  2. Are the 10 P043S3T8EMC3840 drives and the P043S7T6EMC7680 in expansion units?
  3. What brand are the P043S3T8EMC3840 and P043S7T6EMC7680 drives?

Google finds nothing for P043S3T8EMC3840 or P043S7T6EMC7680.


@jayanty commented on GitHub (Mar 12, 2023):

Hey, appreciate all the investigation so far. Let me know if I can help beyond supplying the info you need. I'm migrating all data off this NAS and am open to all sorts of testing, even if it results in data loss.

Answers to your questions below, starting with the drive types:
P043S3T8EMC3840 = Samsung PM1643 (EMC Firmware): 3.84TB SAS 12G SSD
=== START OF INFORMATION SECTION ===
Vendor: SAMSUNG
Product: P043S3T8 EMC3840
Revision: ESFA
Compliance: SPC-5
User Capacity: 3,840,774,504,448 bytes [3.84 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: 0x5002538b088b8010
Serial number: ZWNY0K815764
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Sun Mar 12 12:33:09 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled
Read Cache is: Enabled
Writeback Cache is: Enabled

P043S7T6EMC7680 = Samsung PM1643 (EMC Firmware): 7.68TB SAS 12G SSD
=== START OF INFORMATION SECTION ===
Vendor: SAMSUNG
Product: P043S7T6 EMC7680
Revision: ESV7
Compliance: SPC-5
User Capacity: 7,680,475,267,072 bytes [7.68 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: 0x5002538b0028ef30
Serial number: KPNA0N200850
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Sun Mar 12 12:32:19 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled
Read Cache is: Enabled
Writeback Cache is: Enabled

PX04SMB040 = Toshiba PX04SMB040 (Dell Firmware): 400GB SAS 12G SSD
=== START OF INFORMATION SECTION ===
Vendor: TOSHIBA
Product: PX04SMB040
Revision: AM04
Compliance: SPC-4
User Capacity: 400,088,457,216 bytes [400 GB]
Logical block size: 512 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: 0x50000396dc88e551
Serial number: 2670A00AT2XD
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Sun Mar 12 12:37:45 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported
Read Cache is: Enabled
Writeback Cache is: Disabled

The NVMe drives are on a Synology M2D20 SSD adapter. The drives themselves are Western Digital SN850X, 2TB variants.

All drives are in the same machine. This is a redpill (arpl) build of DSM 7.1.1-42962 Update 2 on a custom-built server with a 24-bay Supermicro SAS3 backplane and a Supermicro X11SSF-CF motherboard with a built-in LSI SAS3 3008 controller.


@007revad commented on GitHub (Mar 12, 2023):

Now we're getting somewhere.

The problem with the SAMSUNG SSDs is that my script is removing the space in P043S7T6 EMC7680, turning it into P043S7T6EMC7680. Can you run these commands, one at a time, and report back what they output:

cat "/sys/block/sda/device/model"

cat "/sys/block/sda/device/model" | xargs

The problem with the NVMe drives may be an indication that I need to add those drives to the M2D20's database file. Though my ds1821+_m2d20_v7.db file has no drives in it at all; it only contains {"disk_compatbility_info":{},"nas_model":"ds1821+"}

I've always been doubtful that drive model names like WD_BLACK SN850X 2000GB would work, because none of the drives in the NAS model's .db files have spaces in the model. But I just found SSD 850 PRO 2TB in the expansion unit .db files, so maybe I can add WD_BLACK SN850X 2000GB to the *_m2d20_v7.db file.

WD's product number for the WD_BLACK SN850X 2000GB is WDS200T2X0E but I can't find any way to get the actual WD product number from the NVMe drives in DSM.

I just discovered yesterday that SuperMicro makes motherboards with built-in SAS ports.


@jayanty commented on GitHub (Mar 13, 2023):

> cat "/sys/block/sda/device/model"
P043S3T8 EMC3840

> cat "/sys/block/sda/device/model" | xargs
P043S3T8 EMC3840

Seems to me the first one returns some trailing spaces but otherwise both are the same.

> The problem with the NVMe drives may be an indication that I need to add those drives M2D20's database file. Though my ds1821+_m2d20_v7.db file has no drives in it at all. It only has {"disk_compatbility_info":{},"nas_model":"ds1821+"}

Attaching my rs4021xs+_m2d20_v7.db as it has a few models defined in it. See if that helps.
rs4021xs+_m2d20_v7.db.txt

> I've always been doubtful that drive model names like WD_BLACK SN850X 2000GB would work because none of the drives in the NAS model's .db files have spaces in the model. But I just found SSD 850 PRO 2TB in the expansion unit .db files. So maybe I can add WD_BLACK SN850X 2000GB to the *_m2d20_v7.db file.
>
> WD's product number for the WD_BLACK SN850X 2000GB is WDS200T2X0E but I can't find any way to get the actual WD product number from the NVMe drives in DSM.

So if we can find a way to get that WD product number from DSM, will that be all you need? Should we try hardcoding to see if it's the product number that's needed vs something else?

I'm having trouble understanding the layout of these db files. Are you just appending to the end, or are you adjusting the entire file somehow and inserting in between?


@007revad commented on GitHub (Mar 13, 2023):

The db files are single-line JSON files. If you want to view the contents in a human readable JSON format run:

jq . /var/lib/disk-compatibility/rs4021xs+_m2d20_v7.db

For larger db files you need to save it to a file:

jq . /var/lib/disk-compatibility/rs4021xs+_m2d20_v7.db > ~/rs4021xs+_m2d20_v7.db.txt

But don't edit it while it's in the human readable format.

In DSM 7 I add the drive model to the end of the file, just before the }},"nas_model":"rs4021xs+"} In DSM 6 I add the drive model at the start, just after the {"success":1,"list":[
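
A minimal sketch of that DSM 7 edit (model name and JSON fields taken from the thread; the db content here is a stand-in string, not a real file):

```shell
# Splice a new drive entry into a single-line DSM 7 db just before the
# closing }},"nas_model":...}. Stand-in db string, not a real file.
db='{"disk_compatbility_info":{"EXISTING":{}},"nas_model":"rs4021xs+"}'
model='WD_BLACK SN850X 2000GB'
entry=',"'"$model"'":{"default":{"compatibility_interval":[{"compatibility":"support","not_yet_rolling_status":"support","fw_dsm_update_status_notify":false,"barebone_installable":true}]}}'
# insert the entry immediately before the trailing },"nas_model" section
newdb=$(printf '%s' "$db" | sed "s/},\"nas_model\"/${entry}},\"nas_model\"/")
printf '%s\n' "$newdb"
```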

I've inserted WD_BLACK SN850X 2000GB in the attached file. Make sure you backup the existing rs4021xs+_m2d20_v7.db file before replacing it with this one:
rs4021xs+_m2d20_v7.db.edited.txt


@jayanty commented on GitHub (Mar 13, 2023):

That file worked. The NVME drives are verified now.


@jayanty commented on GitHub (Mar 13, 2023):

As a test, I also manually modified my rs4021xs+_host_v7.db and changed P043S7T6EMC7680 to P043S7T6 EMC7680 and did the same for P043S3T8EMC3840 => P043S3T8 EMC3840. Now all my drives are verified.

I have a backup of the file available to revert and test with the script whenever you're ready.


@007revad commented on GitHub (Mar 13, 2023):

I've just uploaded the latest script version, v1.1.14, which has some improvements and fixes in it. It will correctly add your P043S7T6 EMC7680 and P043S3T8 EMC3840 so you can restore your rs4021xs+_host_v7.db and try it.

I'll start working on changes to the script to support m2dxx cards. Though I would like to be able to detect which card is installed (rather than editing all the m2dxx.db files).


@jayanty commented on GitHub (Mar 14, 2023):

I only reverted my host db file, not the one for the M2D20 card, and executed the script. Happy to revert both and test again if you want. Everything seems to work fine for now. Script output below

HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

Backed up database to rs4021xs+_host_v7.db.bak

Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db
Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db.new
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

Enabled M.2 volume support.

Changes to rs4021xs+_host_v7.db
},
"P043S3T8 EMC3840": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
},
"P043S7T6 EMC7680": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
}
},
"nas_model": "rs4021xs+"
}

DSM successfully checked disk compatibility.


@jayanty commented on GitHub (Mar 14, 2023):

I noticed this one has a message about enabling M.2 volume support. Is that supposed to work via Storage Manager? I don't see any options in the GUI to create a volume out of my M.2 drives. Not a deal breaker for me because I don't plan on doing this yet but thought I'd let you know.

Completely unrelated question (and happy to open a new issue to make it a separate thread if needed). Do you see a path to somehow changing the vendor name on these drives to make them look like Synology drives? DSM seems to be restricting the ability to enable Deduplication because these are not Synology drives.


@007revad commented on GitHub (Mar 14, 2023):

Can you run the following command and see if it identifies the card model as M2D20

synodisk --m2-card-model-get /dev/nvme0n1

The comment about enabling M.2 volume support refers to the script making sure that "support_m2_pool" is set to "yes" in /etc.defaults/synoinfo.conf. Someone testing the DSM 7.2 beta on a DS918+ lost their SSH-created NVMe volume because the DSM 7.2 beta set "support_m2_pool" to "no".
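
That check can be sketched like this, using a temporary stand-in file rather than the real /etc.defaults/synoinfo.conf:

```shell
# Ensure support_m2_pool="yes" in a synoinfo-style key=value file.
# Stand-in path: on DSM the real file is /etc.defaults/synoinfo.conf.
conf=$(mktemp)
printf 'support_m2_pool="no"\n' > "$conf"
# flip the key to "yes" only if it is not already set that way
if ! grep -q '^support_m2_pool="yes"$' "$conf"; then
    sed -i 's/^support_m2_pool=.*/support_m2_pool="yes"/' "$conf"
fi
grep '^support_m2_pool=' "$conf"
```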

I am working on figuring out how Synology prevents non-Synology NVMe drives from being used to create a volume in the GUI so I can bypass it. After that's done I'll have a look at enabling Deduplication for non-Synology drives. I'm actually surprised that the clever Xpenology developers haven't done it already.

A discussion on changing the vendor name and model, or changing the vendor name and model that DSM sees, would be best in a new thread with an appropriate title so others who may know a solution can join the thread.


@jayanty commented on GitHub (Mar 14, 2023):

Yup, that command returns
M2D20

I'll start a separate issue on enabling deduplication


@007revad commented on GitHub (Mar 14, 2023):

Can you restore your .db file backups then try this version of the script which processes the M2D20 as well:
https://github.com/007revad/Synology_HDD_db/blob/test/syno_hdd_db.sh


@jayanty commented on GitHub (Mar 17, 2023):

Sorry, just saw this. I'm migrating some live VMs off the NAS and will test after that and report back. It will likely be tomorrow by the time I get a chance.


@jayanty commented on GitHub (Mar 17, 2023):

tl;dr: I've just finished testing and the M.2 drives are NOT marked verified.

Detailed steps below

  1. I reverted these two files:
  • rs4021xs+_host_v7.db
  • rs4021xs+_m2d20_v7.db
  2. I executed sudo ./syno_hdd_db.sh -showedits
  3. DSM marked my drives as unverified.
  4. I then downloaded and executed the new script shared here
  5. DSM marks my SSDs as compatible but the M.2 drives still show as unverified
  6. Just to double check I executed this command again: sudo /usr/syno/sbin/synostgdisk --check-all-disks-compatibility but my M.2 drives are marked unverified

Detailed script output below
HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db
P043S3T8 EMC3840 already exists in rs4021xs+_host_v7.db.new
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db
P043S7T6 EMC7680 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

M.2 volume support already enabled.

Changes to rs4021xs+_host_v7.db
},
"P043S3T8 EMC3840": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
},
"P043S7T6 EMC7680": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
}
},
"nas_model": "rs4021xs+"
}

DSM successfully checked disk compatibility.


@jayanty commented on GitHub (Mar 17, 2023):

Just to be fully sure, I copied over your manually edited file rs4021xs+_m2d20_v7.db.edited.txt into /var/lib/disk-compatibility/rs4021xs+_m2d20_v7.db and now my M.2 drives show as verified


@jayanty commented on GitHub (Mar 17, 2023):

Copy of the original file that I am using to revert M.2 to unverified status:
rs4021xs+_m2d20_v7.db.org.txt


@jayanty commented on GitHub (Mar 17, 2023):

rs4021xs+_m2d20_v7.db.script.txt

This is the file after running your script


@007revad commented on GitHub (Mar 18, 2023):

Yes, both rs4021xs+_m2d20_v7.db.org.txt and rs4021xs+_m2d20_v7.db.script.txt are the same.

That script version was just to test if it found your M2D20.

This version will edit the m2d20 db file. https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh


@jayanty commented on GitHub (Mar 19, 2023):

Getting an error with this one

HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

ERROR 3 /var/lib/disk-compatibility/rsrs4021xs+_host_v7.db not found!


@007revad commented on GitHub (Mar 19, 2023):

Can you tell me what these 3 commands return:

printf "sed: " && sed 's/[0-9].*//' "/proc/sys/kernel/syno_hw_version"

printf "cat: " && cat /proc/sys/kernel/syno_hw_version

printf "key: " && get_key_value /etc/synoinfo.conf unique | cut -d'_' -f3
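
For context, the last of these derives the lowercase model from the third underscore-separated field of the `unique` key. A hypothetical reconstruction of how the db filename would then be built (the `unique` value below is an example, not read from a NAS):

```shell
# Hypothetical reconstruction: take the third '_'-separated field of
# synoinfo.conf's `unique` value as the lowercase model name, then
# form the host db path from it. Example value only.
unique='synology_broadwell_rs4021xs+'
model=$(printf '%s' "$unique" | cut -d'_' -f3)
printf '/var/lib/disk-compatibility/%s_host_v7.db\n' "$model"
```

A doubled model prefix in this step would produce exactly the rsrs4021xs+ filename seen in the error above.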


@007revad commented on GitHub (Mar 19, 2023):

I've fixed the develop version of the script. Can you please try this one.

https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh


@jayanty commented on GitHub (Mar 19, 2023):

> printf "sed: " && sed 's/[0-9].*//' "/proc/sys/kernel/syno_hw_version"
sed: RS

> printf "cat: " && cat /proc/sys/kernel/syno_hw_version
cat: RS4021xs+

> printf "key: " && get_key_value /etc/synoinfo.conf unique | cut -d'_' -f3
key: rs4021xs+


@jayanty commented on GitHub (Mar 19, 2023):

> I've fixed the develop version of the script. Can you please try this one.
>
> https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh

Reverted both sets of files and used this version of the script. My M.2 drives still show as unverified.

Script output below
HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

No Expansion Units found

Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db
P043S3T8 EMC3840 already exists in rs4021xs+_host_v7.db.new
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db
P043S7T6 EMC7680 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

M.2 volume support already enabled.

Changes to rs4021xs+_host_v7.db
},
"P043S3T8 EMC3840": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
},
"P043S7T6 EMC7680": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
}
},
"nas_model": "rs4021xs+"
}

DSM successfully checked disk compatibility.


@007revad commented on GitHub (Mar 21, 2023):

Can you attach a copy of the rs4021xs+_host_v7.db and rs4021xs+_m2d20_v7.db files from after the script ran.


@jayanty commented on GitHub (Mar 21, 2023):

rs4021xs+_m2d20_v7.db.3-21.txt
rs4021xs+_host_v7.db.3-21.txt


@007revad commented on GitHub (Mar 23, 2023):

I assume you are still using that develop version of the script?

This output is missing the M.2 card models section:

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

No Expansion Units found

It should look like this:

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

M.2 card models found: 1
M2D20

No Expansion Units found

I've changed v1.2.23 so that, if there are M.2 drives but no M.2 cards were found, it will output like this:

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

No M.2 card models found

No Expansion Units found

At least then we can see if it's not finding the M2D20, or if it is finding the M2D20 but failing to edit the db files.

Please try v1.2.23 https://github.com/007revad/Synology_HDD_db


@jayanty commented on GitHub (Mar 24, 2023):

Tried the latest script, still the same problem. My M.2 drives are still unverified. Output below

Synology_HDD_db v1.2.23
RS4021xs+ DSM 7.1.1

HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

No M.2 cards found

No Expansion Units found

Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db
P043S3T8 EMC3840 already exists in rs4021xs+_host_v7.db.new
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db
P043S7T6 EMC7680 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

Re-enabled support memory compatibility.

M.2 volume support already enabled.

Changes to rs4021xs+_host_v7.db
},
"P043S3T8 EMC3840": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
},
"P043S7T6 EMC7680": {
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true
}
]
}
}
},
"nas_model": "rs4021xs+"
}

DSM successfully checked disk compatibility.


@007revad commented on GitHub (Mar 24, 2023):

Can you try the following command, which should list your M2D20 twice (once for each NVMe drive):

sudo for d in /sys/block/*; do cardmodel=$(synodisk --m2-card-model-get "/dev/$d") && echo "$cardmodel"; done

@jayanty commented on GitHub (Mar 25, 2023):

I get a syntax error
-sh: syntax error near unexpected token ``do'
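For context, the syntax error happens because sudo can only run a simple command, not a shell keyword such as for. Wrapping the whole loop in an explicit shell avoids it; a minimal sketch, with echo standing in for the DSM-only synodisk call:

```shell
# sudo cannot start a compound command directly, so hand the entire
# loop to a shell as one argument (prefix with sudo on the NAS):
sh -c 'for d in /sys/block/*; do echo "$d"; done'
```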

@jayanty commented on GitHub (Mar 25, 2023):

If it helps, I ran these two commands manually:

synodisk --m2-card-model-get /dev/nvme0n1
M2D20

synodisk --m2-card-model-get /dev/dm-10
Not support

@jayanty commented on GitHub (Mar 25, 2023):

Also, here is the output of ls -l /dev/block:

total 0
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-10 -> ../devices/virtual/block/dm-10
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-2 -> ../devices/virtual/block/dm-2
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-3 -> ../devices/virtual/block/dm-3
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-4 -> ../devices/virtual/block/dm-4
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-5 -> ../devices/virtual/block/dm-5
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-6 -> ../devices/virtual/block/dm-6
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-7 -> ../devices/virtual/block/dm-7
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-8 -> ../devices/virtual/block/dm-8
lrwxrwxrwx 1 root root 0 Mar 24 17:45 dm-9 -> ../devices/virtual/block/dm-9
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop0 -> ../devices/virtual/block/loop0
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop1 -> ../devices/virtual/block/loop1
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop2 -> ../devices/virtual/block/loop2
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop3 -> ../devices/virtual/block/loop3
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop4 -> ../devices/virtual/block/loop4
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop5 -> ../devices/virtual/block/loop5
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop6 -> ../devices/virtual/block/loop6
lrwxrwxrwx 1 root root 0 Mar 24 17:45 loop7 -> ../devices/virtual/block/loop7
lrwxrwxrwx 1 root root 0 Mar 13 17:37 md0 -> ../devices/virtual/block/md0
lrwxrwxrwx 1 root root 0 Mar 13 17:37 md1 -> ../devices/virtual/block/md1
lrwxrwxrwx 1 root root 0 Mar 13 17:37 md2 -> ../devices/virtual/block/md2
lrwxrwxrwx 1 root root 0 Mar 13 17:37 md3 -> ../devices/virtual/block/md3
lrwxrwxrwx 1 root root 0 Mar 13 17:37 md4 -> ../devices/virtual/block/md4
lrwxrwxrwx 1 root root 0 Mar 13 17:37 nvme0n1 -> ../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:04.0/0000:06:00.0/nvme/nvme0/nvme0n1
lrwxrwxrwx 1 root root 0 Mar 13 17:37 nvme1n1 -> ../devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:08.0/0000:07:00.0/nvme/nvme1/nvme1n1
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram0 -> ../devices/virtual/block/ram0
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram1 -> ../devices/virtual/block/ram1
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram10 -> ../devices/virtual/block/ram10
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram11 -> ../devices/virtual/block/ram11
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram12 -> ../devices/virtual/block/ram12
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram13 -> ../devices/virtual/block/ram13
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram14 -> ../devices/virtual/block/ram14
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram15 -> ../devices/virtual/block/ram15
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram2 -> ../devices/virtual/block/ram2
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram3 -> ../devices/virtual/block/ram3
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram4 -> ../devices/virtual/block/ram4
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram5 -> ../devices/virtual/block/ram5
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram6 -> ../devices/virtual/block/ram6
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram7 -> ../devices/virtual/block/ram7
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram8 -> ../devices/virtual/block/ram8
lrwxrwxrwx 1 root root 0 Mar 24 17:45 ram9 -> ../devices/virtual/block/ram9
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sda -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:0/end_device-0:0:0/target0:0:0/0:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdb -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:1/end_device-0:0:1/target0:0:1/0:0:1:0/block/sdb
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdc -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:2/end_device-0:0:2/target0:0:2/0:0:2:0/block/sdc
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdd -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:3/end_device-0:0:3/target0:0:3/0:0:3:0/block/sdd
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sde -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:4/end_device-0:0:4/target0:0:4/0:0:4:0/block/sde
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdf -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:5/end_device-0:0:5/target0:0:5/0:0:5:0/block/sdf
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdg -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:6/end_device-0:0:6/target0:0:6/0:0:6:0/block/sdg
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdh -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:7/end_device-0:0:7/target0:0:7/0:0:7:0/block/sdh
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdi -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:8/end_device-0:0:8/target0:0:8/0:0:8:0/block/sdi
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdj -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:9/end_device-0:0:9/target0:0:9/0:0:9:0/block/sdj
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdk -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:10/end_device-0:0:10/target0:0:10/0:0:10:0/block/sdk
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdl -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:11/end_device-0:0:11/target0:0:11/0:0:11:0/block/sdl
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdm -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:12/end_device-0:0:12/target0:0:12/0:0:12:0/block/sdm
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdn -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:13/end_device-0:0:13/target0:0:13/0:0:13:0/block/sdn
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdo -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:14/end_device-0:0:14/target0:0:14/0:0:14:0/block/sdo
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdp -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:15/end_device-0:0:15/target0:0:15/0:0:15:0/block/sdp
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdq -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:16/end_device-0:0:16/target0:0:16/0:0:16:0/block/sdq
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdr -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:17/end_device-0:0:17/target0:0:17/0:0:17:0/block/sdr
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sds -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:18/end_device-0:0:18/target0:0:18/0:0:18:0/block/sds
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdt -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:19/end_device-0:0:19/target0:0:19/0:0:19:0/block/sdt
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdu -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:20/end_device-0:0:20/target0:0:20/0:0:20:0/block/sdu
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdv -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:21/end_device-0:0:21/target0:0:21/0:0:21:0/block/sdv
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdw -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:22/end_device-0:0:22/target0:0:22/0:0:22:0/block/sdw
lrwxrwxrwx 1 root root 0 Mar 13 17:37 sdx -> ../devices/pci0000:00/0000:00:01.1/0000:09:00.0/host0/port-0:0/expander-0:0/port-0:0:23/end_device-0:0:23/target0:0:23/0:0:23:0/block/sdx
lrwxrwxrwx 1 root root 0 Mar 24 17:45 synoboot -> ../devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/host1/target1:0:0/1:0:0:0/block/synoboot

@007revad commented on GitHub (Mar 26, 2023):

Can you please download this test version: https://raw.githubusercontent.com/007revad/Synology_HDD_db/test/syno_hdd_db.sh

And run it with -d or --debug

sudo -i /<path-to-script>/syno_hdd_db.sh -d

@jayanty commented on GitHub (Mar 26, 2023):

Getting an error:

sudo -i ./syno_hdd_db.sh -d
-ash: ./syno_hdd_db.sh: No such file or directory

@jayanty commented on GitHub (Mar 26, 2023):

It worked with different syntax. The M.2 drives are still not verified; the SSDs become verified like before.

sudo -i ~/syno_hdd_db.sh -d

Synology_HDD_db v1.2.27
RS4021xs+ DSM 7.1.1-42962

debug 1: /sys/block/nvme0n1 nvme
debug 2: getcardmodel
debug 3: cardmodel: Not M.2 adapter card
debug 1: /sys/block/nvme1n1 nvme
debug 2: getcardmodel
debug 3: cardmodel: Not M.2 adapter card
HDD/SSD models found: 3
P043S3T8 EMC3840,ESFA
P043S7T6 EMC7680,ESV7
PX04SMB040,AM04

M.2 drive models found: 1
WD_BLACK SN850X 2000GB,620311WD

No M.2 cards found

No Expansion Units found

Added P043S3T8 EMC3840 to rs4021xs+_host_v7.db
P043S3T8 EMC3840 already exists in rs4021xs+_host_v7.db.new
Added P043S7T6 EMC7680 to rs4021xs+_host_v7.db
P043S7T6 EMC7680 already exists in rs4021xs+_host_v7.db.new
PX04SMB040 already exists in rs4021xs+_host_v7.db
PX04SMB040 already exists in rs4021xs+_host_v7.db.new
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db
WD_BLACK SN850X 2000GB already exists in rs4021xs+_host_v7.db.new

M.2 volume support already enabled.

DSM successfully checked disk compatibility.

@007revad commented on GitHub (Mar 26, 2023):

I see the problem. The script is using
synodisk --m2-card-model-get /dev/sys/block/nvme0n1 when it should have been using
synodisk --m2-card-model-get /dev/nvme0n1

This test version should work for you:
https://raw.githubusercontent.com/007revad/Synology_HDD_db/test/syno_hdd_db.sh
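The bug described above reduces to a path mix-up: the glob over /sys/block yields full sysfs paths, while synodisk expects a /dev node. A minimal sketch of the correction (the loop variable and device names are illustrative):

```shell
# Strip the directory part of the sysfs path before building the
# /dev argument; ${d##*/} keeps only the final path component:
for d in /sys/block/nvme0n1 /sys/block/nvme1n1; do
    echo "/dev/${d##*/}"
done
# prints /dev/nvme0n1 and /dev/nvme1n1
```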

@007revad commented on GitHub (Mar 26, 2023):

Someone else has confirmed that the new version of the script now finds M2 cards, and the M2 drives on the M2 card now show as compatible.

https://github.com/007revad/Synology_HDD_db/releases/tag/v1.2.28

@jayanty commented on GitHub (Mar 26, 2023):

The latest version works but seems to have trouble with updating itself. It says a new version is available (Current 1.2.27, New 1.2.28), I say 'y' to get the new version, then say 'y' to stop so I can run the new one. But the newly downloaded one still thinks it's 1.2.27, so it prompts to download again.
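For context, the loop described here follows from how such self-update checks typically work: the script compares its own hardcoded version string against the latest release tag, so a release shipped with a stale string always reports an update. A sketch under that assumption (variable names are illustrative, not the script's actual code):

```shell
scriptver="1.2.27"   # hardcoded string the 1.2.28 release shipped with
latest="1.2.28"      # tag reported for the newest release

# The strings differ, so the script keeps offering the "new" version
# even after it has already been downloaded:
if [ "$scriptver" != "$latest" ]; then
    echo "Update available: $scriptver -> $latest"
fi
```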

@jayanty commented on GitHub (Mar 26, 2023):

Looking at the code, the version number hardcoded in release 1.2.28 is wrong: it says 1.2.27.

@007revad commented on GitHub (Apr 11, 2023):

Sorry @jayanty. I read your comment 2 weeks ago and immediately released a newer version to fix that wrong version number and forgot to reply to let you know.
