[GH-ISSUE #3] Not finding M.2 SSDs attached to PCI card #503

Closed
opened 2026-03-11 11:23:53 +03:00 by kerem · 27 comments

Originally created by @sthulin on GitHub (Feb 27, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/3

While the script was able to detect the HDDs I have, it didn't seem to pick up the two SanDisk M.2 SSDs I have on an M2D17 PCIe card in my DS1817+ running DSM 6.

kerem 2026-03-11 11:23:53 +03:00
  • closed this issue
  • added the bug label
Author
Owner

@007revad commented on GitHub (Feb 27, 2023):

If you run the following command via SSH does it include the missing SanDisk M.2 SSDs?

sudo nvme list


@sthulin commented on GitHub (Feb 27, 2023):

> If you run the following command via SSH does it include the missing SanDisk M.2 SSDs?
>
> sudo nvme list

sudo: nvme: command not found


@007revad commented on GitHub (Feb 28, 2023):

DSM 6.2.4 does have the nvme command. Try sudo /bin/nvme list

Do any of the following commands return anything?

ls /sys/class/nvme/

ls /sys/class/nvme/nvme0/

ls /sys/class/nvme/nvme1/

I've downloaded the DSM 6.2.4 pat file for the DS1817+ and unpacked it to have a look at it. I've found 67 files containing the word nvme. One of them seems promising but it's 500 lines so will take me a bit to understand where it's getting the nvme drive info from.
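The file hunt described above can be reproduced with a one-liner; a sketch, assuming the .pat has been unpacked somewhere (`./unpacked_pat` is a placeholder path, not from the thread):

```shell
# Count how many files in an unpacked DSM .pat tree mention "nvme".
# "./unpacked_pat" is a hypothetical path; substitute your own.
grep -rl 'nvme' ./unpacked_pat 2>/dev/null | wc -l
```

`grep -rl` prints each matching file once, so the count is files, not matches.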


@sthulin commented on GitHub (Feb 28, 2023):

None of those commands work. I think these are SATA M.2 drives and not NVMe.


@007revad commented on GitHub (Mar 1, 2023):

While looking in the M2D17 compatibility file I noticed that it supports SATA M.2 drives.

Also while looking around in the unpacked DSM 6.2.4 pat file for the DS1817+ I found it searches for 6 different partition types: hd*, sd*, sas*, sata*, sysd* and nvme*. My script is currently only searching for sd*, sata* and nvme*.

If you run the following command what does it find?

find /dev \( -name "hd*" -o -name "sd*" -o -name "sas*" -o -name "sata*" -o -name "sysd*" -o -name "nvme*" \)
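As an editorial aside, a quick way to see why the M2D17's drives slip through this pattern: they surface as nvc* devices (as shown later in the thread), which none of the six prefixes match. A minimal sketch with illustrative device names:

```shell
# Filter device names by the six prefixes DSM's tooling reportedly checks.
# Note that nvc1 (an M2D17 SATA SSD) matches none of them, so it is dropped.
match_dsm_prefixes() {
    grep -E '^(hd|sd|sas|sata|sysd|nvme)'
}

printf '%s\n' sda hda nvc1 nvme0n1 loop0 | match_dsm_prefixes
# nvc1 and loop0 are filtered out; sda, hda and nvme0n1 pass
```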


@sthulin commented on GitHub (Mar 1, 2023):

> While looking in the M2D17 compatibility file I noticed that it supports SATA M.2 drives.
>
> Also while looking around in the unpacked DSM 6.2.4 pat file for the DS1817+ I found it searches for 6 different partition types: hd*, sd*, sas*, sata*, sysd* and nvme*. My script is currently only searching for sd*, sata* and nvme*.
>
> If you run the following command what does it find?
>
> find /dev \( -name "hd*" -o -name "sd*" -o -name "sas*" -o -name "sata*" -o -name "sysd*" -o -name "nvme*" \)

/dev/sdjb5
/dev/sdjb2
/dev/sdjb1
/dev/sdjb
/dev/sdje5
/dev/sdje2
/dev/sdje1
/dev/sdje
/dev/sdja5
/dev/sdja2
/dev/sdja1
/dev/sdja
/dev/sdjd5
/dev/sdjd2
/dev/sdjd1
/dev/sdjd
/dev/hda4
/dev/hda3
/dev/hda2
/dev/hda1
/dev/hda
/dev/sdg5
/dev/sdg2
/dev/sdg1
/dev/sdf5
/dev/sdf2
/dev/sdf1
/dev/sdf
/dev/sdg
/dev/sdh5
/dev/sdh2
/dev/sdh1
/dev/sdc5
/dev/sdc2
/dev/sdc1
/dev/sdh
/dev/sdc
/dev/sdd5
/dev/sdd2
/dev/sdd1
/dev/sdjc5
/dev/sdjc2
/dev/sdjc1
/dev/sde5
/dev/sde2
/dev/sde1
/dev/sdd
/dev/sde
/dev/sdjc
/dev/sdb5
/dev/sdb2
/dev/sdb1
/dev/sda5
/dev/sda2
/dev/sda1
/dev/sdb
/dev/sda


@007revad commented on GitHub (Mar 1, 2023):

It looks like you also have a DX517 with 5 drives?

hda appears to be an M.2 drive but there's 1 listed and it has 4 partitions. Does your M.2 have 2 storage pools, or is it a read/write cache?

/dev/hda
/dev/hda1
/dev/hda2
/dev/hda3
/dev/hda4

Does the following command list one of your M.2 drive models?

hdparm -i /dev/hda | grep Model


@sthulin commented on GitHub (Mar 1, 2023):

> find /dev \( -name "hd*" -o -name "sd*" -o -name "sas*" -o -name "sata*" -o -name "sysd*" -o -name "nvme*" \)

So you caught me in the middle of an issue where I have a dying M.2 card that got kicked out of the system last night; it's back now and currently rebuilding (see below). There are 2 of them in a read/write cache (except when 1 gets kicked out and it goes into protection mode). I do also have a DX517 with 5 drives.

/dev/hda4
/dev/hda3
/dev/hda2
/dev/hda1
/dev/hda
/dev/sdf5
/dev/sdf2
/dev/sdf1
/dev/sdjc5
/dev/sdjc2
/dev/sdja5
/dev/sdjc1
/dev/sdja2
/dev/sdja1
/dev/sdjb5
/dev/sdjb2
/dev/sdjd5
/dev/sdjb1
/dev/sdje5
/dev/sdjd2
/dev/sdje2
/dev/sdjd1
/dev/sdje1
/dev/sdg5
/dev/sdg2
/dev/sdh5
/dev/sdg1
/dev/sdh2
/dev/sdh1
/dev/sdc5
/dev/sdc2
/dev/sdc1
/dev/sdf
/dev/sdd5
/dev/sdd2
/dev/sdd1
/dev/sdg
/dev/sde5
/dev/sde2
/dev/sde1
/dev/sdh
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdje
/dev/sdjd
/dev/sdjc
/dev/sdjb
/dev/sdja
/dev/sda5
/dev/sda2
/dev/sda1
/dev/sdb5
/dev/sdb2
/dev/sdb1
/dev/sdb
/dev/sda

hdparm -i /dev/hda | grep Model
HDIO_DRIVE_CMD(identify) failed: Bad address
Model=HGST HDN724040ALE640, FwRev=MJAOA5E0, SerialNo=[REDACTED]

That is not the SSD; that is the HDD in slot 1 of the main unit. The SSDs would be SanDisk.


@007revad commented on GitHub (Mar 1, 2023):

That is strange.

  • The 5 sdj* drives would be the DX517 drives.
  • Then there's the hda drive that you said is in the DS1817+. I assume this is a single drive with 2 volumes.
  • Then there are 8 sd* drives, but only 7 of those should be in the DS1817+. So what is the 8th sd* drive?

@sthulin commented on GitHub (Mar 1, 2023):

So the 1817+ has 8x 3.84TB 7200RPM HDDs.
It also has 2x 1TB SanDisk SSDs in the PCI card.
The DX517 has 5x 3.84TB 7200RPM HDDs as well. It's 1 big Btrfs pool using Synology RAID that can tolerate 2 failures, and then the SSDs are a read/write cache for it.


@007revad commented on GitHub (Mar 3, 2023):

I've uploaded an updated version that includes a 2nd method to disable drive compatibility checks. It won't add your M.2 SATA drives to the drive database, but it should prevent them causing any unsupported-drive messages.


@007revad commented on GitHub (Mar 12, 2023):

Can you please run the following command as user (not root):
sudo cp /etc/space/space_table/$(sudo ls /etc/space/space_table | sort -u | tail -n 1) ~/space_table.txt

Then attach to your reply the space_table.txt file that you'll find in your home folder.


@sthulin commented on GitHub (Mar 12, 2023):

I don't appear to have a /etc/space/space_table/ directory, so I get a bunch of errors.


@007revad commented on GitHub (Mar 12, 2023):

I should have checked DSM 6 first.

Can you try this one:
sudo cp /etc/space/$(sudo ls /etc/space | sort -u | tail -n 1) ~/space_history.txt


@sthulin commented on GitHub (Mar 12, 2023):

6Yqqjy-ZM3y-kxXH-7W93-l7gA-P234-acOzCZ="SPACE:/dev/vg1000/lv,FCACHE:/dev/mapper/cachedev_0,REFERENCE:/volume1


@007revad commented on GitHub (Mar 12, 2023):

That's the contents of the vspace_layer.conf file. This command will get the latest space_history xml file:
sudo cp /etc/space/$(sudo ls /etc/space | sort -u | tail -2 | head -1) ~/space_history.txt

Then attach to your reply the space_history.txt file that you'll find in your home folder.


@sthulin commented on GitHub (Mar 12, 2023):

See attached. The weird thing is it says one of the SSDs is rebuilding, when that finished days ago and everything is marked healthy.
Attachment: https://github.com/007revad/Synology_HDD_db/files/10950924/space_history.txt


@007revad commented on GitHub (Mar 12, 2023):

The space_history_<date>_<time>.xml files only seem to get created when you change something in storage manager. So the latest one your NAS has was probably created when the SSD RAID rebuild started, and you haven't done anything to trigger a new one being created since then.
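For reference, assuming the space_history_<date>_<time>.xml naming holds, the timestamp in the name makes a plain sort chronological, so the newest file can be picked by pattern instead of tailing a mixed directory listing (the helper name here is ours, not from the thread):

```shell
# Print the newest space_history_*.xml in a directory.
# The timestamped filenames sort chronologically, so plain sort works.
latest_space_history() {
    ls "$1"/space_history_*.xml 2>/dev/null | sort | tail -n 1
}

# usage sketch: sudo cp "$(latest_space_history /etc/space)" ~/space_history.txt
```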

Can you try the following commands, 1 line at a time:

cat /sys/block/dev/nvc1p1/device/model

cat /sys/block/dev/nvc1p1/device/firmware_rev

cat /sys/block/dev/nvc2p1/device/model

cat /sys/block/dev/nvc2p1/device/firmware_rev


@sthulin commented on GitHub (Mar 12, 2023):

I don't have a dev folder inside /sys/block:

/sys/block$ ls
dm-0 loop3 md0 nvc2 ram12 ram3 ram8 sdd sdja synoboot
dm-1 loop4 md1 ram0 ram13 ram4 ram9 sde sdjb zram0
loop0 loop5 md2 ram1 ram14 ram5 sda sdf sdjc zram1
loop1 loop6 md3 ram10 ram15 ram6 sdb sdg sdjd zram2
loop2 loop7 nvc1 ram11 ram2 ram7 sdc sdh sdje zram3


@007revad commented on GitHub (Mar 12, 2023):

I'm glad you replied with the ls output because now I know it's nvc1 and not nvc1p1. Try the following commands, 1 line at a time:

cat /sys/block/nvc1/device/model

cat /sys/block/nvc1/device/firmware_rev

cat /sys/block/nvc2/device/model

cat /sys/block/nvc2/device/firmware_rev


@sthulin commented on GitHub (Mar 12, 2023):

So the model comes back with the correct model numbers:

cat /sys/block/nvc2/device/model
SD9SN8W1T00
cat /sys/block/nvc1/device/model
SD9SN8W1T001122

However, firmware_rev is not part of that folder:

/sys/block/nvc1/device$ ls -alh
total 0
drwxr-xr-x 8 root root 0 Mar 1 11:34 .
drwxr-xr-x 4 root root 0 Mar 1 11:34 ..
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 auto_remap
drwxr-xr-x 3 root root 0 Mar 1 11:34 block
drwxr-xr-x 3 root root 0 Mar 12 19:17 bsg
--w------- 1 root root 4.0K Mar 12 19:17 delete
-r--r--r-- 1 root root 4.0K Mar 12 19:17 device_blocked
lrwxrwxrwx 1 root root 0 Mar 12 19:17 driver -> ../../../../../../../../../../bus/scsi/drivers/sd
-r--r--r-- 1 root root 4.0K Mar 12 19:17 evt_media_change
lrwxrwxrwx 1 root root 0 Mar 12 19:17 generic -> scsi_generic/sg2
-r--r--r-- 1 root root 4.0K Mar 12 19:17 iocounterbits
-r--r--r-- 1 root root 4.0K Mar 12 19:17 iodone_cnt
-r--r--r-- 1 root root 4.0K Mar 12 19:17 ioerr_cnt
-r--r--r-- 1 root root 4.0K Mar 12 19:17 iorequest_cnt
-r--r--r-- 1 root root 4.0K Mar 12 19:17 modalias
-r--r--r-- 1 root root 4.0K Mar 1 11:34 model
drwxr-xr-x 2 root root 0 Mar 12 19:17 power
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 queue_depth
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 queue_ramp_up_period
-r--r--r-- 1 root root 4.0K Mar 12 19:17 queue_type
--w------- 1 root root 4.0K Mar 12 19:17 rescan
-r--r--r-- 1 root root 4.0K Mar 12 19:17 rev
drwxr-xr-x 3 root root 0 Mar 12 19:17 scsi_device
drwxr-xr-x 3 root root 0 Mar 12 19:17 scsi_disk
drwxr-xr-x 3 root root 0 Mar 12 19:17 scsi_generic
-r--r--r-- 1 root root 4.0K Mar 12 19:17 scsi_level
-rw-r--r-- 1 root root 4.0K Mar 1 11:34 state
lrwxrwxrwx 1 root root 0 Mar 12 19:17 subsystem -> ../../../../../../../../../../bus/scsi
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 sw_activity
--w--w--w- 1 root root 4.0K Mar 12 19:17 syno_deep_sleep_ctrl
-r--r--r-- 1 root root 4.0K Mar 12 19:17 syno_deep_sleep_support
-r--r--r-- 1 root root 4.0K Mar 12 05:04 syno_disk_latency_other_hist
-r--r--r-- 1 root root 4.0K Mar 12 05:04 syno_disk_latency_read_hist
-r--r--r-- 1 root root 4.0K Mar 12 05:04 syno_disk_latency_stat
-r--r--r-- 1 root root 4.0K Mar 12 05:04 syno_disk_latency_write_hist
-r--r--r-- 1 root root 4.0K Mar 12 19:17 syno_disk_serial
-rw-rw-rw- 1 root root 4.0K Mar 12 19:17 syno_fake_error_ctrl
-rw-r--r-- 1 root root 4.0K Mar 2 04:34 syno_idle_time
-rw-rw-rw- 1 root root 4.0K Mar 12 19:17 syno_pwr_reset_count
--w--w--w- 1 root root 4.0K Mar 12 19:17 syno_sata_error_event_debug
-rw-rw-rw- 1 root root 4.0K Mar 12 19:17 syno_scmd_min_timeout
-r--r--r-- 1 root root 4.0K Mar 2 04:34 syno_spindown
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 syno_standby_syncing
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 syno_wcache
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 timeout
-r--r--r-- 1 root root 4.0K Mar 12 19:17 type
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 uevent
-rw-r--r-- 1 root root 4.0K Mar 12 19:17 unload_heads
-r--r--r-- 1 root root 4.0K Mar 12 19:00 vendor


@007revad commented on GitHub (Mar 12, 2023):

So M.2 SATA SSDs have rev instead of firmware_rev. Try these:

cat /sys/block/nvc1/device/rev

cat /sys/block/nvc2/device/rev
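The firmware_rev-vs-rev split suggests a small fallback when reading firmware from sysfs; a sketch (the path layout is from this thread, the helper name is ours):

```shell
# Read a drive's firmware revision from sysfs: NVMe drives expose
# firmware_rev, while M.2 SATA drives on the M2D17 (nvc*) expose rev instead.
drive_fw() {
    dev="$1"    # e.g. /sys/block/nvc1/device
    if [ -r "$dev/firmware_rev" ]; then
        cat "$dev/firmware_rev"
    elif [ -r "$dev/rev" ]; then
        cat "$dev/rev"
    fi
}
```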


@007revad commented on GitHub (Mar 13, 2023):

Can you try this develop version of the script to make sure it finds and adds your M.2 SATA SSD drives:

https://github.com/007revad/Synology_HDD_db/blob/develop/syno_hdd_db.sh


@sthulin commented on GitHub (Mar 13, 2023):

That appears to be worse: ERROR No drives found!

Your commands above did return "X610", which is the firmware of the drives in question.


@007revad commented on GitHub (Mar 13, 2023):

Sorry, I've fixed it now. Can you download it again?


@sthulin commented on GitHub (Mar 13, 2023):

Much better; this seems to have picked them both up:

HDD/SSD models found: 3
HDN724040ALE640,MJAO
HDN726040ALE614,APGN
HUS724040ALE640,MJAO

M.2 drive models found: 2
SD9SN8W1T001122,X610
SD9SN8W1T00,X610


@007revad commented on GitHub (Mar 13, 2023):

Excellent.
