[GH-ISSUE #432] Enable creating M.2 storage pool on DS1823xs+ with E10M20-T1 #652

Open
opened 2026-03-11 12:54:04 +03:00 by kerem · 19 comments

Originally created by @younghoon-na on GitHub (Feb 26, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/432

Originally assigned to: @007revad on GitHub.

Hi,

Thanks for sharing such an amazing script that unlocks(?) many features on Synology NAS.

I'm posting this issue because I'm having a problem enabling the creation of an M.2 storage pool on a DS1823xs+ with an E10M20-T1.

More precisely, both M.2 NVMe drives are recognized in Storage Manager and can be used as SSD cache, but the GUI shows an error message and I cannot create a storage pool on those drives while they are in the E10M20-T1.

What I tried:

  1. Run the Synology Enable M.2 card script (call it `script_enable_m2`) -> reboot -> run the script from this repo (call it `script_syno_hdd`) with and without the -n argument.
  2. `script_enable_m2` -> reboot -> `script_syno_hdd` with and without -n -> reboot.
  3. Run the restore option for `script_enable_m2` and for `script_syno_hdd` -> reboot -> repeat 1. and 2. above.

In every trial I saw the same message: `this drive is installed via an adapter card and cannot be used in M.2 SSD storage pools`.
If I run `script_syno_hdd` without the -n argument, the message sometimes changes after a while to `the drive is not verified to use m.2 storage pools`, but it reverts to the adapter-card message once I run `script_syno_hdd` again.

Another observation: even after I run the restore option for `script_syno_hdd`, only some of the drives turn to red status (not verified) while others remain green.
I expected all of the drives to turn red, since none of my drives are Synology ones and my NAS is a DS1823xs+.

I also tried updating the drive database manually (downloaded from the Synology website), but it only reports `already up-to-date` and will not update.

I know that your other script, `Synology M2 Volume`, can create the storage pool and volume.
But in my case I want to convert some of my drives into a RAID that includes the M.2 drives, so I really want to do it in Storage Manager.

Could you please help, or suggest any appropriate steps (order of scripts, etc.) that I can try?
Any comment would be very valuable to me.

Thank you very much.


@007revad commented on GitHub (Feb 26, 2025):

Firstly, I never use syno_enable_m2_volume on a '20-series or newer Synology, as it's not needed with syno_hdd_db.

I assume this is a real DS1823xs+ and not xpenology?

What DSM version are you using? Can you reply with the full output from syno_hdd_db?

Syno_enable_m2_card obviously worked because Storage Manager can see the NVMe drives in the E10M20-T1.

Syno_hdd_db should always be run with the -n option. Syno_hdd_db edits a Storage Manager file to allow creating storage pools, and volumes, on NVMe drives in a PCIe card.

On my DS1821+ I have syno_hdd_db and syno_enable_m2_card scheduled to run at boot, with syno_enable_m2_card set as a pre-task of the syno_hdd_db schedule to make sure syno_enable_m2_card runs before syno_hdd_db.

Note: in the following screenshots the command to run the scripts wraps after a dash, so it looks like `- -model` and `-- autoupdate` when it's really `--model` and `--autoupdate`.

syno_enable_m2_card schedule

![Image](https://github.com/user-attachments/assets/4ac40445-aa11-4793-aa18-45018855d514)
![Image](https://github.com/user-attachments/assets/c4ce61a1-017f-47c7-bbbb-0c2616e224c0)

syno_hdd_db schedule

![Image](https://github.com/user-attachments/assets/f55de39e-d6a7-40b9-a6e3-562e7b01c869)
![Image](https://github.com/user-attachments/assets/2d14091a-97a4-4ae1-9005-f6531673f024)
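As an illustration of that boot-time ordering, both scripts could also be combined in a single scheduled task. This is only a sketch: the script directory and helper name are assumptions, and the flags are taken from this thread, so adjust everything to your setup.

```shell
#!/bin/sh
# Hypothetical combined boot-up task: run the card script first, then the
# db script, mirroring the pre-task ordering described above.
# The directory passed in is an assumption; point it at your script location.
run_boot_tasks() {
    dir="$1"
    "$dir"/syno_enable_m2_card.sh --autoupdate || return 1  # card must succeed first
    "$dir"/syno_hdd_db.sh -n --autoupdate                   # then edit the drive db
}
```

Putting both calls in one task means the Task Scheduler cannot run them in the wrong order, which is the same guarantee the pre-task setup provides.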


@younghoon-na commented on GitHub (Feb 26, 2025):

Hi,

Thanks for your comment.
You’re right: I’m not using xpenology, it's a genuine DS1823xs+.

I also tried it as a scheduled task, but the result was the same. In all trials Storage Manager can see the drives but cannot create a storage pool, only a cache.

One strange thing (to me): I understand the reason to schedule ‘syno_enable_m2’ at boot-up is that if a DSM update makes the E10M20-T1 disappear, the script re-enables it at the next boot.
But on my DS1823xs+ I need to reboot one more time after the boot that runs the scheduled ‘syno_enable_m2’ task before the card is recognized.
(So it’s not visible right after boot-up, and only becomes visible after one more reboot.)

This is not straightforward to me, because if I need to reboot one more time anyway, there is no strict reason to run the script at boot-up. Is this correct?

I will be able to upload the output of ‘syno_hdd_db’ tonight.
Please let me know if there is any other information that would help in understanding the situation.

Thanks.


@007revad commented on GitHub (Feb 26, 2025):

After any DSM update that replaces the compatible drive database files or the storage manager package you need to run both scripts then reboot. It's annoying waiting for the Synology to boot after a DSM update and then having to reboot it again, but I haven't found a way around that. Thankfully DSM is not updated too often.


@younghoon-na commented on GitHub (Feb 26, 2025):

Yes..
I actually read another post from about a year back where you mentioned you suspect there is some kind of database cache, so the script is not applied directly on an enterprise model.

I’m not sure whether my case is similar.
My DSM version is a recent one, 7.2.2-72806.


@007revad commented on GitHub (Feb 26, 2025):

I just searched the DS1823xs+'s DSM 7.2.2-72806 for "disk_compatbility_info" and it was only in the db files and libhwcontrol.so.1. So I compared the DS1823xs+'s libhwcontrol.so.1 to the DS1821+'s and they are the same file.

The included Storage Manager package is also the same for both models. They both have StorageManager-v1000-1.0.0-00502.spk

I also compared /usr/lib between the DS1821+ and DS1823xs+ and the only difference is that the DS1823xs+ has syno-oob-check-status.service, because it has an OOB port. There are other differences in DSM but they mostly relate to the OOB port and the different number of 1GbE and 10GbE ports.

Something interesting, but unrelated, that I found is that the DS1823xs+ does not have `/var.defaults/lib/smartmontools/drivedb.h`.

There are differences in synoinfo.conf:

![Image](https://github.com/user-attachments/assets/de5424e0-ec80-40b9-a32f-59fc5ba98df5)

`is_business_model="yes"` is interesting. I searched all of DSM for "is_business_model" and it was only found in synoinfo.conf. Maybe it's used by Storage Manager, but I also checked in Storage Manager and didn't find "is_business_model".

And I'm not sure what `support_fc="yes"` does.
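That kind of DSM-wide search can be sketched as a recursive grep. This is just an illustration; the helper name is made up, and the path in the final line is only one of the places DSM keeps configuration, not an exhaustive list.

```shell
# Sketch of searching a DSM tree for a config key. The helper wraps
# grep -rl, which lists the files containing the pattern.
search_key() {
    grep -rl "$1" "$2" 2>/dev/null
}

# Example call; /etc.defaults is an assumption about where to look.
search_key 'is_business_model' /etc.defaults
```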


@younghoon-na commented on GitHub (Feb 26, 2025):

I have no idea about the exact meaning of ‘is_business_model’.

But as far as I know and have experienced, the DS1823xs+ was intended to use only Synology HDDs, which are verified by Synology.
If the drives are not Synology ones, Storage Manager shows them as at risk and assigns an orange flag (warning).

If it has no other purpose, Storage Manager might use ‘is_business_model’ for this.

I have been using the same/similar HDDs in a DS918, a DS1621+ and now the DS1823xs+, and found no other difference in Storage Manager (other than deduplication).


@younghoon-na commented on GitHub (Feb 26, 2025):

Let me attach the output of `syno_hdd_db.sh -n`.
I also attached a screenshot from Storage Manager.

```
Synology_HDD_db v3.6.110
DS1823xs+ x86_64 DSM 7.2.2-72806-3
StorageManager 1.0.0-00502

ds1823xs+_host_v7 version 8034

Using options: -n
Running from: /volume2/Scripts/syno_hdd_db/syno_hdd_db.sh

HDD/SSD models found: 6
SSD 870 EVO 2TB,SVT03B6Q,2000 GB
ST14000VN0008-2JG101,SC60,14000 GB
WD80EFAX-68KNBN0,81.00A81,8001 GB
WUH721414ALE604,LDGSW2L0,14000 GB
WUH721818ALE6L4,PCGNW232,18000 GB
WUH721818ALE6L4,PCGNW680,18000 GB

M.2 drive models found: 3
Samsung SSD 990 PRO 2TB,4B2QJXD7,2000 GB
Samsung SSD 990 PRO with Heatsink 2TB,4B2QJXD7,2000 GB
WD_BLACK SN770 2TB,731100WD,2000 GB

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

SSD 870 EVO 2TB already exists in ds1823xs+_host_v7.db
ST14000VN0008-2JG101 already exists in ds1823xs+_host_v7.db
WD80EFAX-68KNBN0 already exists in ds1823xs+_host_v7.db
WUH721414ALE604 already exists in ds1823xs+_host_v7.db
Updated WUH721818ALE6L4 in ds1823xs+_host_v7.db
Updated WUH721818ALE6L4 in ds1823xs+_host_v7.db
Samsung SSD 990 PRO 2TB already exists in ds1823xs+_host_v7.db
Samsung SSD 990 PRO 2TB already exists in ds1823xs+_e10m20-t1_v7.db
Samsung SSD 990 PRO with Heatsink 2TB already exists in ds1823xs+_host_v7.db
Samsung SSD 990 PRO with Heatsink 2TB already exists in ds1823xs+_e10m20-t1_v7.db
WD_BLACK SN770 2TB already exists in ds1823xs+_host_v7.db
WD_BLACK SN770 2TB already exists in ds1823xs+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1823xs+
E10M20-T1 NVMe already enabled for DS1823xs+
E10M20-T1 already exists in model.dtb

Re-enabled support disk compatibility.

Support memory compatibility already enabled.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already disabled.

Creating pool in UI on drives in M.2 adaptor card already enabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
```
![Image](https://github.com/user-attachments/assets/70140d2b-c371-4582-8c06-4df7010f648c)

@007revad commented on GitHub (Feb 28, 2025):

Does this return nothing, or "Not Edited"?

```
grep -q 'notSupportM2Pool_addOnCard' /var/packages/StorageManager/target/ui/storage_panel.js && echo "Not Edited"
```
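The check works because `grep -q` prints nothing and only sets the exit status, so the `&& echo` fires only when the marker string is still present. A throwaway demo of the mechanics (the file and its contents below are made up; only the marker string comes from the command above):

```shell
# Demo of the grep -q && echo pattern against a throwaway file.
f=/tmp/panel_demo_652.js
printf 'already edited contents\n' > "$f"
grep -q 'notSupportM2Pool_addOnCard' "$f" && echo "Not Edited"
# No output here: the marker is absent, so the file counts as edited.
```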

@younghoon-na commented on GitHub (Feb 28, 2025):

I tried it twice, but there was no output at all.


@007revad commented on GitHub (Feb 28, 2025):

That's actually good, as it means Storage Manager has been successfully edited.


@younghoon-na commented on GitHub (Feb 28, 2025):

Yes, that's what I expected from reading the script.
Is there any possibility that ‘enabling storage pool on a PCIe adapter’ also depends on a file other than ‘storage_panel.js’?
I ask because I need to run two scripts, and through some unintended mistake (like running them in the wrong order, syno_hdd_db and then enable_m2) some of the essential changes might not be applied.

I tried several times repeatedly, so I just want to know whether any such case exists.


@007revad commented on GitHub (Feb 28, 2025):

The first time, or after a DSM update, syno_enable_m2_card needs to run before syno_hdd_db (otherwise syno_hdd_db will say "No M.2 PCIe card found").

As a test, do you want to try changing "is_business_model" to no? I suspect it's only used by the Active Insight package, but I'm curious whether it will make a difference.

```
sudo synosetkeyvalue /etc.defaults/synoinfo.conf is_business_model no
```

Then, if you already have Storage Manager open, close and reopen it.
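synosetkeyvalue is a Synology-specific helper, so for anyone reading along, the key edit it performs can be sketched portably with sed. This demo runs against a throwaway copy, never the real /etc.defaults/synoinfo.conf, and the file contents are made up:

```shell
# Portable sketch of what the synosetkeyvalue command above does.
conf=/tmp/synoinfo_demo_652.conf
printf 'is_business_model="yes"\nsupport_fc="yes"\n' > "$conf"

# Flip the key, quoting the value the way synoinfo.conf does.
sed -i 's/^is_business_model=.*/is_business_model="no"/' "$conf"

grep '^is_business_model=' "$conf"
```

On the real file, always use synosetkeyvalue itself so DSM's own tooling handles the edit.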


@younghoon-na commented on GitHub (Feb 28, 2025):

No change after the command:

```
sudo synosetkeyvalue /etc.defaults/synoinfo.conf is_business_model no
```

@007revad commented on GitHub (Feb 28, 2025):

Okay, change it back to yes.

```
sudo synosetkeyvalue /etc.defaults/synoinfo.conf is_business_model yes
```

> Is there any possibility that ‘enabling storage pool on a PCIe adapter’ also depends on a file other than ‘storage_panel.js’?

It is possible, but when I compared the DSM of the DS1823xs+ to the DS1821+ I did not find any other candidate files.

Package Center on a business model may install a copy of ‘storage_panel.js’ somewhere else.

Does this return nothing, or "Not Edited"?

```
grep -q 'isConditionInvalid:0<this.pciSlot' /var/packages/StorageManager/target/ui/storage_panel.js && echo "Not Edited"
```
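The "a copy somewhere else" possibility can be checked directly. This is only a sketch: `/var/packages` is the one path taken from the commands above, and whether a second copy exists anywhere is exactly the open question.

```shell
# Sketch: list every copy of storage_panel.js under the packages tree,
# to see whether a business model ships a second copy somewhere unexpected.
find /var/packages -name 'storage_panel.js' 2>/dev/null
```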

@younghoon-na commented on GitHub (Mar 1, 2025):

The output is nothing again.

```
grep -q 'isConditionInvalid:0<this.pciSlot' /var/packages/StorageManager/target/ui/storage_panel.js && echo "Not Edited"
```

@younghoon-na commented on GitHub (Mar 1, 2025):

It seems this might not be resolvable simply..

I have an idea that avoids enabling storage pool creation on the PCIe adapter directly.
Let me try it and report back.


@007revad commented on GitHub (Mar 1, 2025):

> I actually read another post from about a year back where you mentioned you suspect there is some kind of database cache, so the script is not applied directly on an enterprise model.

I searched for this other issue or discussion and could not find it. Do you have a link to it?

@younghoon-na commented on GitHub (Mar 2, 2025):

This one: https://github.com/007revad/Synology_HDD_db/issues/189
Specifically: https://github.com/007revad/Synology_HDD_db/issues/189#issuecomment-1868151100

@younghoon-na commented on GitHub (Mar 3, 2025):

I followed these steps:

  1. First, I confirmed that after creating an M.2 storage pool in an internal slot and moving the drives into the E10M20-T1, the storage pool remains intact.
  2. So I moved the NVMe drives into the internal slots to create the storage pool, and after creating it, moved them back into the E10M20-T1.

The reason I moved the NVMe drives back into the E10M20-T1's slots is read/write performance.
On the DS1823xs+, read/write speed was ~680 MB/s in an internal M.2 slot versus ~2400 MB/s in the E10M20-T1.