mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #432] enable creating m.2 storage pool on ds1823xs+ with E10M20-T1 #652
Originally created by @younghoon-na on GitHub (Feb 26, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/432
Originally assigned to: @007revad on GitHub.
Hi,
Thanks for sharing such an amazing script that unlocks(?) many features on Synology NAS units.
I'm posting this issue because I have a problem enabling the creation of an M.2 storage pool on a DS1823xs+ with an E10M20-T1.
More precisely, both M.2 NVMe drives are recognized in Storage Manager and can be used as SSD cache,
but the GUI shows an error message and I cannot create a storage pool on those M.2 NVMe drives in the E10M20-T1.
What I did:
1. script_enable_m2 -> reboot -> run the script in this repo (call it script_syno_hdd) without / with the -n argument.
2. script_enable_m2 -> reboot -> script_syno_hdd without / with -n -> reboot.
3. Restore for script_enable_m2 / restore for script_syno_hdd -> reboot -> repeat 1) and 2) above.

But in all of those trials I saw the same message: "This drive is installed via an adapter card and cannot be used in M.2 SSD storage pools."

If I run script_syno_hdd without the -n argument, after a while the message sometimes becomes "The drive is not verified to use M.2 storage pools", but it changes back to "This drive is installed via an adapter card and cannot be used in M.2 SSD storage pools" if I run script_syno_hdd again.

One other observation: even after I restore with script_syno_hdd, only some of the drives turn to red status (not verified) while others remain green. My expectation was that all drives would turn red, since none of my drives are Synology ones and my NAS is a DS1823xs+.

I also tried to update the drive database manually (downloaded from the Synology website), but it only shows "already up-to-date", so I cannot update it.

I know that your other script, Synology M2 Volume, can create the storage pool and volume. But in my case I want to convert some of my drives into a RAID with M.2, so I really want to do it in Storage Manager.
Could you please help or give me any appropriate trial (order of script...) so that I can try?
Any comment would be very valuable for me.
Thank you very much.
@007revad commented on GitHub (Feb 26, 2025):
Firstly, I never use syno_enable_m2_volume on a '20-series or newer Synology, as it's not needed with syno_hdd_db.
I assume this is a real DS1823xs+ and not xpenology?
What DSM version are you using? Can you reply with the full output from syno_hdd_db?
Syno_enable_m2_card obviously worked because Storage Manager can see the NVMe drives in the E10M20-T1.
Syno_hdd_db should always be run with the -n option. Syno_hdd_db edits a Storage Manager file to make it allow creating storage pools, and volumes, on NVMe drives in a PCIe card.
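As a rough illustration of what that kind of edit looks like, here is a minimal sketch against a mock file. The file contents and the token name are hypothetical, chosen only to show the style of patching a minified boolean in a JS file; the real edit syno_hdd_db makes to Storage Manager is different.

```shell
# Hypothetical storage_panel.js fragment -- the token "supportM2Pool" is
# invented for illustration; the real token patched by syno_hdd_db differs.
js=$(mktemp)
printf 'supportM2Pool:!1,' > "$js"

# Flip the minified boolean from false (!1) to true (!0).
sed -i 's/supportM2Pool:!1/supportM2Pool:!0/' "$js"
cat "$js"
```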
On my DS1821+ I have syno_hdd_db and syno_enable_m2_card scheduled to run at boot up. With syno_enable_m2_card set as a pre-task for the syno_hdd_db schedule, just to make sure syno_enable_m2_card runs before syno_hdd_db.
Note: In the following screenshots the command to run the scripts wraps after a dash, so it looks like "- -model" and "-- autoupdate" when it's really "--model" and "--autoupdate".

[screenshot: syno_enable_m2_card schedule]

[screenshot: syno_hdd_db schedule]
@younghoon-na commented on GitHub (Feb 26, 2025):
Hi,
Thanks for your comment.
You're right: I'm not using xpenology, this is a genuine DS1823xs+.
I also tried with a scheduled task, but the result was the same. In all trials Storage Manager can see the drives, but I'm not able to create a storage pool, only a cache.
One thing that seems strange to me: as I understand it, the reason we schedule 'syno_enable_m2' at boot-up is that if an update makes the E10M20-T1 disappear, the script re-enables it at the next boot.
But on my DS1823xs+, even with 'syno_enable_m2' as a scheduled boot-up task, I need to reboot one more time after boot-up before the card is recognized.
(So it's not visible right after boot-up, and only becomes visible after one more reboot.)
This isn't straightforward to me, because if I need to reboot one more time anyway, there's no strict reason to run the script at boot-up. Is that correct?
I'll be able to upload the output from 'syno_hdd_db' tonight.
Please let me know if there is any other information that would help understand the situation.
Thanks.
@007revad commented on GitHub (Feb 26, 2025):
After any DSM update that replaces the compatible drive database files or the storage manager package you need to run both scripts then reboot. It's annoying waiting for the Synology to boot after a DSM update and then having to reboot it again, but I haven't found a way around that. Thankfully DSM is not updated too often.
@younghoon-na commented on GitHub (Feb 26, 2025):
Yes.
I actually read another post from about a year back where you mentioned you suspect there is a kind of database cache, so the script is not applied directly on an enterprise model.
I'm not sure whether my case is similar.
My DSM version is a recent one: 7.2.2-72806.
@007revad commented on GitHub (Feb 26, 2025):
I just searched the DS1823xs+'s DSM 7.2.2-72806 for "disk_compatibility_info" and it was only in the db files and libhwcontrol.so.1. So I compared the DS1823xs+'s libhwcontrol.so.1 to the DS1821+'s and they are the same file.
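The kind of search described above can be reproduced with a recursive grep. A minimal sketch, run here against a mock directory tree since the real paths only exist on a Synology NAS:

```shell
# Mock directory tree standing in for the DSM root (assumption: on the NAS
# the matching files live under paths such as /usr/lib).
mock=$(mktemp -d)
mkdir -p "$mock/usr/lib"
printf 'binary data ... disk_compatibility_info ...' > "$mock/usr/lib/libhwcontrol.so.1"

# List every file that contains the string, as described in the comment above.
grep -rl "disk_compatibility_info" "$mock"
```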
The included Storage Manager package is also the same for both models. They both have StorageManager-v1000-1.0.0-00502.spk
I also compared /usr/lib between DS1821+ and DS1823xs+ and the only difference is the DS1823xs+ has syno-oob-check-status.service because it has an OOB port. There are other differences in DSM but they mostly relate to the OOB port and different number of 1GbE and 10GbE ports.
Something interesting, but unrelated, I found is that the DS1823xs+ does not have
/var.defaults/lib/smartmontools/drivedb.h

There are differences in synoinfo.conf:

[screenshot: synoinfo.conf differences]

is_business_model="yes" is interesting. I searched all of DSM for "is_business_model" and it was only found in synoinfo.conf. Maybe it's used by Storage Manager, but I also checked in Storage Manager and didn't find "is_business_model". And I'm not sure what support_fc="yes" does.

@younghoon-na commented on GitHub (Feb 26, 2025):
I have no idea about the exact meaning of 'is_business_model'.
But as far as I know and have experienced, the DS1823xs+ was intended to use only Synology HDDs, which are verified by Synology.
If you use others, Storage Manager shows those drives as at risk and assigns them an orange warning flag.
If it serves no other purpose, Storage Manager might use 'is_business_model' for that.
I have been using the same/similar HDDs on models including the DS918, DS1621+, and now the DS1823xs+, and found no other difference in Storage Manager (other than deduplication).
@younghoon-na commented on GitHub (Feb 26, 2025):
Let me attach the output from syno_hdd_db.sh -n. I also attached a screenshot from Storage Manager.
@007revad commented on GitHub (Feb 28, 2025):
Does this return nothing? Or "Not Edited"?
@younghoon-na commented on GitHub (Feb 28, 2025):
I tried two times, but no output at all.
@007revad commented on GitHub (Feb 28, 2025):
That's actually good as it means storage manager has been successfully edited.
@younghoon-na commented on GitHub (Feb 28, 2025):
Yes, that's what I expected from reading the script.
Is there any possibility that 'enabling storage pools on a PCIe adapter' also depends on another file besides 'storage_panel.js'?
I ask because I need to run two scripts, and through some unintended mistake (like running them in the wrong order: syno_hdd_db and then enable_m2) some essential changes might not be applied.
I tried several times repeatedly, so I just want to know if such a case exists.
@007revad commented on GitHub (Feb 28, 2025):
The first time, or after a DSM update, syno_enable_m2_card needs to run before syno_hdd_db (otherwise syno_hdd_db will say "No M.2 PCIe card found").
As a test do you want to try changing "is_business_model" to no? I suspect it's only used by the Active Insight package, but I'm curious if it will make a difference.
Then if you already have storage manager open close and reopen storage manager.
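The test suggested above amounts to flipping one key in synoinfo.conf. A minimal sketch against a mock copy of the file (on a real NAS the keys live in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, and DSM's synosetkeyvalue helper, where present, is the safer way to change them):

```shell
# Mock synoinfo.conf (assumption: on the NAS you would operate on
# /etc/synoinfo.conf and /etc.defaults/synoinfo.conf instead).
conf=$(mktemp)
printf 'is_business_model="yes"\nsupport_fc="yes"\n' > "$conf"

# Flip the key to "no" for the test; flip it back to "yes" the same way.
sed -i 's/^is_business_model=.*/is_business_model="no"/' "$conf"
grep '^is_business_model' "$conf"
```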
@younghoon-na commented on GitHub (Feb 28, 2025):
No change after running the command.
@007revad commented on GitHub (Feb 28, 2025):
Okay, change it back to yes.
Package center on a business model may install a copy of ‘storage_panel.js’ somewhere else.
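One way to check for extra copies is a filesystem-wide find. A minimal sketch, demonstrated here on a mock tree with made-up paths; on the NAS you would search from / instead:

```shell
# Mock tree with two copies of storage_panel.js (the paths below are
# invented for illustration; the real locations on DSM differ).
root=$(mktemp -d)
mkdir -p "$root/usr/syno/synoman" "$root/var/packages/StorageManager"
touch "$root/usr/syno/synoman/storage_panel.js"
touch "$root/var/packages/StorageManager/storage_panel.js"

# List every copy; on a real NAS: find / -name 'storage_panel.js' 2>/dev/null
find "$root" -name 'storage_panel.js'
```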
Does this return nothing? Or "Not Edited"?
@younghoon-na commented on GitHub (Mar 1, 2025):
the output is nothing again.
@younghoon-na commented on GitHub (Mar 1, 2025):
It seems this might not be simple to resolve.
I have an idea that doesn't involve enabling storage pool creation on the PCIe adapter directly.
Let me try it and report back.
@007revad commented on GitHub (Mar 1, 2025):
I searched for this other issue or discussion and could not find it. Do you have a link to it?
@younghoon-na commented on GitHub (Mar 2, 2025):
This one :
https://github.com/007revad/Synology_HDD_db/issues/189
https://github.com/007revad/Synology_HDD_db/issues/189#issuecomment-1868151100
@younghoon-na commented on GitHub (Mar 3, 2025):
I followed the steps there.
The reason I moved the NVMe drives into the E10M20-T1's slots is read/write performance:
on the DS1823xs+, read/write speed was ~680 MB/s in the internal M.2 slot vs ~2400 MB/s in the E10M20-T1.