mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #306] Crucial CT4000P3SSD8 Unrecognized firmware version #611
Originally created by @R-a-K-i on GitHub (Jun 21, 2024).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/306
Originally assigned to: @007revad on GitHub.
Hello,
Thank you very much for your great work. Your scripts are very helpful. Maybe you can help me with the following problem:
Although the M.2 SSD and its firmware version were detected correctly and entered in the db file, Storage Manager still reports the SSD's firmware as unrecognized. Is there a solution for this?
Regards
Ralph
syno_hdd_db.sh -s
CT4000P3SSD8:
{
"P9CR30A": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
},
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
}
}
@R-a-K-i commented on GitHub (Jun 21, 2024):
Synology_HDD_db v3.5.91
DS923+ DSM 7.2.1-69057-5
StorageManager 1.0.0-0017
ds923+_host_v7 version 8041
Using options: -s
Running from: /volume1/homes/adminrknas/scripts/syno_hdd_db.sh
HDD/SSD models found: 1
SSD,W0802A0
M.2 drive models found: 1
CT4000P3SSD8,P9CR30A
No M.2 PCIe cards found
No Expansion Units found
SSD already exists in ds923+_host_v7.db
CT4000P3SSD8 already exists in ds923+_host_v7.db
Support disk compatibility already enabled.
Support memory compatibility already enabled.
NVMe support already enabled.
M.2 volume support already enabled.
Drive db auto updates already enabled.
SSD:
{
"W0802A0": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
},
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
}
}
CT4000P3SSD8:
{
"P9CR30A": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
},
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
}
}
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.
@007revad commented on GitHub (Jun 22, 2024):
If Storage Manager was already open it needs to be closed and reopened.
You should always run the script with the -n option to prevent DSM updating the drive database.
Interesting that your 2.5 inch SATA SSD shows up as just "SSD" for the model. Does it show in Storage Manager as either:
@R-a-K-i commented on GitHub (Jun 22, 2024):
I added the "-n" option (also in the "boot-up" task). After I rebooted the NAS twice, the result is the same. What surprises me is that everything works so far. I can create a volume and also activate deduplication.
Thanks for your help
Ralph
Synology_HDD_db v3.5.91
DS923+ DSM 7.2.1-69057-5
StorageManager 1.0.0-0017
ds923+_host_v7 version 8041
Using options: -n -s
Running from: /volume1/homes/adminrknas/scripts/syno_hdd_db.sh
HDD/SSD models found: 1
SSD,W0802A0
M.2 drive models found: 1
CT4000P3SSD8,P9CR30A
No M.2 PCIe cards found
No Expansion Units found
SSD already exists in ds923+_host_v7.db
CT4000P3SSD8 already exists in ds923+_host_v7.db
Support disk compatibility already enabled.
Support memory compatibility already enabled.
NVMe support already enabled.
M.2 volume support already enabled.
Drive db auto updates already disabled.
SSD:
{
"W0802A0": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
},
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
}
}
CT4000P3SSD8:
{
"P9CR30A": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
},
"default": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
}
}
DSM successfully checked disk compatibility.
The INTENSO SSDs are displayed like this:
@jeprojects commented on GitHub (Jun 29, 2024):
Happening also with my nvme drives: CT1000P3PSSD8
@007revad commented on GitHub (Jul 1, 2024):
In my experience the "Unrecognized firmware version, please update the drive database." warning normally occurs if:
What does the following command return?
Try running Synology_HDD_db with no options then:
@007revad commented on GitHub (Jul 1, 2024):
@R-a-K-i
I have no idea why everything is working when you're still seeing the "Unrecognized firmware version" warning.
Where is that image from? Is that Active Insight?
My HDD/SSD section in Storage Manager looks like this:

@R-a-K-i commented on GitHub (Jul 1, 2024):
All images are from the Storage Manager:
@jeprojects commented on GitHub (Jul 2, 2024):
Running:
ls /var/lib/disk-compatibility/ds1520+_host*.db
Would this mean there is an old v6 db left over?
I also tried your 5 steps listed above and it still shows the warning.
@007revad commented on GitHub (Jul 2, 2024):
Yes, there'd be a lot of v6 db files left over from DSM 6.
Try:
Then close and reopen storage manager.
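The exact commands were posted elsewhere in the thread, but the idea can be sketched as follows. This is an assumption-laden sketch, not the script's actual logic: it assumes the DSM 6 leftovers match *_host.db / *_host.db.new while DSM 7 files end in _v7.db, and it moves files into a backup subdirectory rather than deleting them, so the change can be undone. Verify against your own `ls /var/lib/disk-compatibility/` listing before running anything as root.

```shell
#!/bin/sh
# Hypothetical cleanup sketch for leftover DSM 6 db files.
# Moves (rather than deletes) anything matching the old v6 naming pattern.
archive_v6_dbs() {
    dir="$1"
    mkdir -p "$dir/v6_backup"
    # v6 files look like ds920+_host.db; v7 files end in _v7.db and are skipped
    for f in "$dir"/*_host.db "$dir"/*_host.db.new; do
        [ -e "$f" ] || continue
        mv "$f" "$dir/v6_backup/"   # move, don't delete, so it can be restored
    done
}

# On the NAS (as root) this would be:
# archive_v6_dbs /var/lib/disk-compatibility
```

After that, closing and reopening Storage Manager (as suggested above) would let DSM re-read the remaining v7 db files.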
@cb12tre commented on GitHub (Jul 9, 2024):
I also have the same problem with a SOLIDIGM drive.
In the file syno_hdd_vendor_ids.txt, I had to add a line 0x25e="Solidigm" because my SSD was recognized with this code and not as 0x025e="Solidigm".
Now the brand is recognized, but the message "Unrecognized firmware" still appears.
This is the output of the command syno_hdd_db.sh -s
Regards, Gio.
@007revad commented on GitHub (Jul 9, 2024):
I see a few potential issues.
You didn't run syno_hdd_db with the -n option to prevent DSM updating the drive database.
Your DS920+ appears to have been running DSM 6 in the past. I can't tell if DSM would be using the old ds920+_host.db file or the ds920+_host_v7.db file.
What do the following commands return:
I notice WD60EFZX-68B3FN0 has size_gb. I'll have to check if this was added in DSM 7.2.1 update 5.
@007revad commented on GitHub (Jul 9, 2024):
Okay, it's not DSM 7.2.1 update 5. It's the newer host db version.
I notice there's also:
I'll add size_gb and "barebone_installable_v2": "auto"
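For illustration, a drive entry in the newer host_v7 db would presumably end up looking like the fragment below. Only the last two keys are new; the size_gb value of 4000 is an assumption for the CT4000P3SSD8, not taken from an actual db file:

```json
"P9CR30A": {
  "compatibility_interval": [
    {
      "compatibility": "support",
      "not_yet_rolling_status": "support",
      "fw_dsm_update_status_notify": false,
      "barebone_installable": true,
      "smart_test_ignore": false,
      "smart_attr_ignore": false,
      "barebone_installable_v2": "auto",
      "size_gb": 4000
    }
  ]
}
```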
@cb12tre commented on GitHub (Jul 9, 2024):
I confirm that version 6 of DSM was installed in the past.
This is the result of the two ls commands:
The first time I ran the syno_hdd_db command without the -n option, then later I ran it with the -n option and scheduled it as a shutdown task for the NAS:

The WD60EFZX-68B3FN0 drives are not giving any problems; I've been using them since before using this script.
@007revad commented on GitHub (Jul 10, 2024):
Delete ds920+_host.db and ds920+_host.db.new
Then delete "volume1/homes/scripts/Synology_HDD/syno_hdd-db.sh -n" from the shutdown task.
Next create bootup task with "volume1/homes/scripts/Synology_HDD/syno_hdd-db.sh -n".
Then reboot.
The WD60EFZX-68B3FN0 is fine. It's the WD60EFRX-68L0BN1 and, more importantly, the SOLIDIGM SSDPFKNU512GZH that are missing 2 new lines that Synology seems to have added in host_v7 version 8052:
I've almost finished updating the script to include size_gb and barebone_installable_v2, but I have to test it before releasing it.
@cb12tre commented on GitHub (Jul 10, 2024):
Do I also put syno_enable_m2_volume.sh as a task on boot or leave it on shutdown?
@007revad commented on GitHub (Jul 10, 2024):
Leave syno_enable_m2_volume.sh as a shutdown task.
But also leave the syno_enable_m2_volume.sh shutdown task disabled for now. You should not need it with a DS920+ running DSM 7.2.1.
@cb12tre commented on GitHub (Jul 10, 2024):
I deleted the two files, this is the contents of the directory now:
This is the result of running the script at boot:
Storage manager still reports "Unrecognized firmware"

@007revad commented on GitHub (Jul 10, 2024):
Was storage manager already open? If yes, then close and reopen storage manager.
@cb12tre commented on GitHub (Jul 10, 2024):
I restarted the NAS to have the script run at startup, so Storage Manager also restarted.
@cb12tre commented on GitHub (Jul 10, 2024):
Now the volume and the disk are marked as critical.
I used this disk on a laptop until last week, and it never caused any problems.
Could it be related to the firmware not being recognized?
Later, I'll try to put in another M2 drive to see if the same thing happens.
@007revad commented on GitHub (Jul 10, 2024):
Unrecognized firmware would not cause the drive to be marked as critical.
You could run https://github.com/007revad/Synology_SMART_info to see why DSM is saying it is critical.
@cb12tre commented on GitHub (Jul 10, 2024):
Okay, thanks for the help.
I have now restarted the NAS and enabled write access to the volume, and it seems to be working normally.
The Synology_SMART_info script shows this result.
The strange thing to me is the 25 "Unsafe Shutdowns".
@007revad commented on GitHub (Jul 10, 2024):
Only 615 power on hours. It's been used for less than 4 weeks.
I noticed the 25 "Unsafe Shutdowns" too. My NVMe drives have about 5, but it's because I removed the drives while the NAS was powered on.
5,196 power_cycles is a huge amount for a drive with only 615 power on hours. It's as if the NAS continuously power cycled the NVMe drive trying to connect to it.
The 1,048 controller_busy_time is a lot too.
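As a rough sanity check of those numbers, using the SMART values quoted above and plain shell arithmetic:

```shell
# 5,196 power cycles over 615 power-on hours:
echo $((5196 / 615))        # integer division: prints 8 cycles per hour
echo $((5196 * 10 / 615))   # one decimal place: prints 84, i.e. ~8.4 per hour
```

That is far more than the one or two cycles per day a NAS-installed drive would normally see, which is consistent with the aggressive power saving of a notebook (as discussed below).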
@cb12tre commented on GitHub (Jul 10, 2024):
This storage device has been connected to the NAS for only a week; previously, it was used on a notebook, so most of the SMART data pertains to that.
I believe the 5196 power cycles are related to the notebook's power-saving features.
@007revad commented on GitHub (Jul 10, 2024):
That makes sense.
Though the 25 "Unsafe Shutdowns" are a concern if you didn't remove the drive while it was powered on 25 times; or maybe the notebook shut down 25 times when its battery ran too low.
@cb12tre commented on GitHub (Jul 10, 2024):
Sure, it could have been the laptop running out of battery, as you say.
It might be better to compare these results with those from one or two weeks later to determine if there are any actual problems with the disk.
@cb12tre commented on GitHub (Jul 11, 2024):
I connected the new Samsung drive, and it is recognized without any error messages.

The SOLIDIGM, on the other hand, continues to give a firmware warning message.
If you want to try to understand why this is happening, I can keep the SOLIDIGM drive connected for a few more days to run some tests before disconnecting it.
Could it be because the drive was recognized as 0x25e instead of 0x025e?
@007revad commented on GitHub (Jul 11, 2024):
Yes, definitely. Did the script show you that it was 0x25e ?
Did it always show in Storage Manager as "Solidigm SOLIDIGM SSD SSDPFKNU512GZH"?
Or did it originally show as "Unknown SOLIDIGM SSD SSDPFKNU512GZH"?
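The two vendor codes are the same number but not the same string, which is presumably why a text lookup keyed on "0x025e" in syno_hdd_vendor_ids.txt missed a drive reporting "0x25e". A quick check:

```shell
# 0x25e and 0x025e are numerically identical...
printf '%d %d\n' 0x25e 0x025e        # both print 606
# ...but a string comparison (which is what a text-file lookup amounts to)
# treats them as different keys
[ "0x25e" = "0x025e" ] && echo same || echo different
```

So adding the second spelling as its own line in syno_hdd_vendor_ids.txt, as described above, works around the mismatch without touching the numeric value.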
@007revad commented on GitHub (Jul 11, 2024):
What do the following commands return?
And these:
@cb12tre commented on GitHub (Jul 11, 2024):
Yes, originally the disk was recognized as 0x25e and shown as "Unknown".
This is what I wrote in my first message two days ago. Maybe you didn't notice it.
This is the result of the commands
@cb12tre commented on GitHub (Jul 11, 2024):
I've noticed that the data pertains to Samsung; these should be the ones you're interested in.
@007revad commented on GitHub (Jul 11, 2024):
Try this:
Then close and reopen storage manager.
@cb12tre commented on GitHub (Jul 12, 2024):
I still get the same message, I also tried restarting the NAS.
Here is the content of the two files if it can be helpful:
@007revad commented on GitHub (Jul 12, 2024):
That's disappointing.
Try running the script with the -f or --force option (as well as -n). So:
syno_hdd_db.sh -nf
@cb12tre commented on GitHub (Jul 12, 2024):
Still the same, the firmware message always appears.
@007revad commented on GitHub (Jul 13, 2024):
Can you try the following:
@cb12tre commented on GitHub (Jul 13, 2024):
I tried to update the database from Storage Manager but this message appears:

I also tried to download the update file and do it manually but this message appears:

However, I have loaded and executed your new script:
I disabled the automatic restart scheduling, otherwise it would have run the old script, and I restarted the NAS.
The same message about the firmware still appears. 😞
@007revad commented on GitHub (Jul 14, 2024):
I just downloaded and unpacked the Synology HDD/SSD Offline Update Pack for the DS920+
It contains v7 host db files for 3 different models, each with a different version:
My DS1821+ has v8054
Run these commands:
Then download the latest Synology HDD/SSD Offline Update Pack for the DS920+ near the bottom of this page: https://www.synology.com/en-global/support/download/DS920+?version=7.2#system and do a Manual Install in Package Center.
Finally download this new https://github.com/007revad/Synology_HDD_db/releases/tag/v3.5.94 version and run it with the -ns options.
If Storage Manager is already open, close it and reopen it.
@R-a-K-i commented on GitHub (Jul 14, 2024):
I wanted to reinstall the drive database to start from scratch. The following steps solved my problem by chance:
sudo -s
syno_hdd_db.sh --restore
rm -R /var/lib/disk-compatibility/*
syno_dsm_reinstall.sh
reboot
Reinstall latest firmware with the current settings
After these steps, no more warnings are displayed:
This is really strange. The drives are no longer listed in the compatibility list.
disk-compatibility.zip
Can anyone explain this?
@007revad commented on GitHub (Jul 15, 2024):
@R-a-K-i
This is interesting.
That probably worked for you because the DS923+ already supports M.2 volumes and your NVMe volume already existed.
I notice you previously had ds923+_host_v7.db version 8041 and you now have an older version 8028.
The difference between the 2 db versions is version 8041 has 2 new keys for each drive in the database:
The latest version of the script includes those 2 new keys:
https://github.com/007revad/Synology_HDD_db/releases/tag/v3.5.94
But it won't add those keys if the drive model already exists in the db file.
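Hypothetically, that "already exists" check could be as simple as grepping the db JSON for the quoted model string; the helper below is an illustrative sketch, not the actual logic in syno_hdd_db, and the path in the example comes from earlier in this thread:

```shell
#!/bin/sh
# Sketch: does a drive model already have an entry in a host db file?
# The db files are JSON keyed by model name, so a quoted-string grep suffices
# for illustration (the real script's check may differ).
model_in_db() {
    # $1 = model string, $2 = path to a host db file
    grep -q "\"$1\"" "$2" 2>/dev/null
}

# Example on a DS923+:
# model_in_db CT4000P3SSD8 /var/lib/disk-compatibility/ds923+_host_v7.db \
#     && echo "CT4000P3SSD8 already exists"
```

This is why an entry written by an older script version keeps its old shape: once the model key is present, the script skips it rather than merging in the two new keys.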
@cb12tre commented on GitHub (Jul 15, 2024):
I executed these commands without any problems:
When I try to upload the file, I still get this message:

@007revad commented on GitHub (Jul 15, 2024):
Try this:
Then package center should let you install the downloaded SynoOfflinePack
@R-a-K-i commented on GitHub (Jul 15, 2024):
@007revad
You are right. However, the reinstallation brings the old database with it. Even a manual update does not update to version 8041.
@007revad commented on GitHub (Jul 15, 2024):
What do these commands return?
@R-a-K-i commented on GitHub (Jul 15, 2024):
sh-4.4# cat /var/lib/disk-compatibility/ds923+_host_v7.version && echo
8028
sh-4.4# cat /var/lib/disk-compatibility/ds923+_host_v7.release && echo
20230915
sh-4.4#
@007revad commented on GitHub (Jul 15, 2024):
That's the version from DSM 7.2.1 (with Update 1).
Interesting that storage manager thinks you have the latest version already.
@cb12tre commented on GitHub (Jul 15, 2024):
No, same message:

@R-a-K-i commented on GitHub (Jul 15, 2024):
@rylos commented on GitHub (Jul 15, 2024):
I have the same problem, but dedup is OK; it's just the firmware warning. I'm on a DS923+ with 7.2.1 Update 5:

@rylos commented on GitHub (Jul 17, 2024):
I've made some more tests. If I disable dedupe with syno_enable_dedupe.sh --restore and then reboot, my Crucial M.2 SSDs are NOT marked as firmware unknown; all is OK. If I enable dedupe (I actually use syno_enable_dedupe.sh --hdd), when I reboot my Crucial M.2 SSDs get the firmware warning.

I've also noticed that with automatic dedupe active on both my volume1 (SSD) and volume2 (HDD), in Storage Manager I can only see dedupe data for the SSD and not for the volume2 HDD; see screen:
Dedupe is active for both volumes:

I'll keep using my Docker-based "bees" deduplication (https://github.com/Zygo/bees) as before for now; it has lots of fine-tuning controls. For anyone interested, this is my repo with a Docker setup for Synology: https://github.com/rylos/bees-docker
@007revad
@007revad commented on GitHub (Jul 19, 2024):
@rylos
Nice detective work. Now I have something to investigate.
I've previously suspected that syno_enable_m2_volume was causing the unrecognized firmware issue. Since syno_enable_m2_volume and syno_enable_dedupe are 99% the same, it makes sense that it could cause this issue.
I've known about the missing "Deduplication Status" and "Deduplication Savings" for SATA SSDs and HDDs for a while. I previously spent a week trying to fix it but it was too difficult.
@007revad commented on GitHub (Jul 19, 2024):
@rylos , @jeprojects , @cb12tre , @R-a-K-i
For those that are seeing "Unrecognized firmware version" but everything else works as it should can you try this script:

https://github.com/007revad/Synology_HDD_db/blob/test/edit_extjs-patch.zip
If storage manager is already open, close it and open it again.
And let me know:
If you have a problem you can run it with a --restore option to undo the change.
If this works for everyone I'll add it to syno_hdd_db.
@jeprojects commented on GitHub (Jul 21, 2024):
@007revad thanks for the test script (edit_extjs-patch.zip), I am still getting the "Unrecognized firmware version"
@cb12tre commented on GitHub (Jul 22, 2024):
The script 'edit_extjs-patch.zip' doesn't resolve the issue on my device either.
@007revad commented on GitHub (Jul 24, 2024):
I've got a Crucial NVMe drive arriving tomorrow so I can see if I can reproduce the "Unrecognized firmware version". If I can reproduce it, it will be a lot easier to fix.
@cb12tre commented on GitHub (Jul 30, 2024):
I had the opportunity to replace the disk with a new one, the same brand (SOLIDIGM) and model but with a capacity of 1 TB, and I still get the same message regarding the firmware. Here are the details if they might be useful to you:
SMART DATA:
@covein commented on GitHub (Aug 15, 2024):
I have tested this HDD script on an SA6400 with 7.2-64570 Update 1-4 and the newest 7.2.1-69057-5:
@covein commented on GitHub (Aug 15, 2024):
I have tried this script, https://github.com/007revad/Synology_HDD_db/blob/test/edit_extjs-patch.zip, but nothing changes.
@007revad commented on GitHub (Aug 21, 2024):
While running some tests with syno_hdd_db, syno_enable_m2_drive and syno_enable_dedupe, I remembered from 18 months ago that a few people needed to run syno_hdd_db.sh -n 2 or 3 times before their drives all stopped showing as unverified.
Make sure to run syno_hdd_db with the -n option.
@007revad commented on GitHub (Oct 19, 2024):
Can you try the following:
@NeoXTof commented on GitHub (Nov 6, 2024):
Hello,
I have the same issue (unrecognized firmware) on a DS1520+ (DSM 7.2.2-72806 Update 1) with 2 Crucial P3 Plus 1TB SSDs.
I'm already with latest version of the script.

I've tried restoring all changes and manually updating the drive DB, but I'm still having the issue. It's just cosmetic; everything is working fine otherwise. But if you have a solution, it would be great!
Thanks in advance
@m0tt commented on GitHub (Nov 21, 2024):
I have the same issue with Crucial 1TB M.2 PCIe Gen4 NVMe P3 Plus. Cosmetic but still it would be nice to fix it. :)
@wderuijter commented on GitHub (Dec 11, 2024):
Yeah, I've got the same issue with the Crucial P3 Plus 1TB NVMe PCIe 4.0 x4 M.2 (Micron CT1000P3PSSD8) with firmware version P9CR413.
@hk906801250 commented on GitHub (Mar 15, 2025):
Following what the guy above said, I ran Synology_enable_M2_volume once more with -r added, and it's back to normal.
@Loweack commented on GitHub (May 3, 2025):
Hello. Same for me. My new Micron (ex Crucial) P3 1TB has the message, but my older one doesn't have the warning.
@007revad commented on GitHub (May 3, 2025):
@Loweack Thanks for your screenshot. I think I know what the problem is now. I've ordered a Crucial P3 Plus NVMe drive to test with.
@jdavies565 commented on GitHub (May 14, 2025):
Wow you are single handedly providing such an incredible amount of help to the community, thank you for all the effort you put into this!
@rylos commented on GitHub (May 20, 2025):
@007revad Any news on this issue?
@007revad commented on GitHub (May 21, 2025):
I bought a 500GB Crucial P3 Plus, CT500P3PSSD8 with firmware P9CR413, installed it in my DS925+ with DSM 7.2.2 Update 1 and ran syno_hdd_db. Then closed and reopened storage manager and it shows healthy as it should.
@rylos commented on GitHub (May 21, 2025):
@007revad It's clean for me too until I enable your deduplication script (after which I relaunch syno_hdd_db); while deduplication is active, the warning is visible. If I disable deduplication, the warning goes away.
This is the sequence I run:
This is my situation, clean, with deduplication OFF:
@007revad commented on GitHub (May 22, 2025):
My HDD and NVMe drives stay healthy with deduplication enabled or disabled.
@NeoXTof commented on GitHub (Jun 4, 2025):
Hello,
I've tried uninstalling the scripts, manually updating the Synology db, and reinstalling the script, but I'm still having the warning in Storage Manager.