mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #108] M2D20 or FireCuda 530 not detected #753
Originally created by @7Uppers on GitHub (Jul 1, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/108
Inserted an M2D20 cache card and 2x Seagate FireCuda 530 ZP500GM3A013 drives.
The FireCudas seem to be undetected by the syno_hdd_db script;
however, they are detected by the syno_create_m2_volume script.
The end result is that I cannot use them.
@7Uppers commented on GitHub (Jul 1, 2023):
The latest RC doesn't help.
@007revad commented on GitHub (Jul 1, 2023):
DSM only lets you use NVMe drives in an M.2 PCIe card as a cache, so to use them as a volume you will need the syno_create_m2_volume.sh script. But first we need to get the DS1621xs+ to use the M2D20 card.
What does the following command return:
cat /proc/sys/kernel/syno_hw_version
@007revad commented on GitHub (Jul 2, 2023):
If you're feeling brave you could try extracting the model.dtb from the attached zip file, copying it to /etc.defaults/model.dtb, then running:
chmod 644 "/etc.defaults/model.dtb"
and rebooting. Finally, check whether Storage Manager can now see the M2D20.
(Attachment: DS1621xs+_model.zip)
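The replacement steps can be sketched as a short shell sequence. This is a dry run in a temp directory, assuming the attached DS1621xs+_model.zip has already been extracted to a local model.dtb; on the real NAS the target directory would be /etc.defaults itself, and the backup step is an added precaution, not part of the original instructions.

```shell
#!/bin/sh
# Dry-run sketch of the model.dtb swap. On the NAS, replace "$work/etc.defaults"
# with the real /etc.defaults (and extract DS1621xs+_model.zip first).
set -e
work=$(mktemp -d)
mkdir -p "$work/etc.defaults"
printf 'original-dtb' > "$work/etc.defaults/model.dtb"  # stand-in for DSM's blob
printf 'patched-dtb'  > "$work/model.dtb"               # stand-in for the extracted file

# 1. Back up the existing device-tree blob before overwriting it.
cp "$work/etc.defaults/model.dtb" "$work/etc.defaults/model.dtb.bak"

# 2. Copy the extracted model.dtb into place and set the expected permissions.
cp "$work/model.dtb" "$work/etc.defaults/model.dtb"
chmod 644 "$work/etc.defaults/model.dtb"

# 3. On the real NAS: reboot, then check whether Storage Manager sees the M2D20.
ls -l "$work/etc.defaults"
```

On the NAS this must run as root over SSH, and since a bad model.dtb could make the unit unbootable, keeping the .bak copy gives an easy way to revert.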
I can think of 3 possible outcomes:
To mitigate against outcome 3 I would suggest you:
Assuming step 4 went okay, shut down the NAS, remove the test drive and reinstall your original drives.
@7Uppers commented on GitHub (Jul 2, 2023):
It works now, unexpectedly.
This morning, after the nightly shutdown, the M2D20, the M.2 drives, and the storage pool became visible.
I think I rebooted after almost every script run, so I am surprised it works.
It may be something about the chronological order, where one script should run after the other, or maybe I missed a reboot; I don't know.
The only extra command I tried is:
sudo set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf M2D20_sup_nvme DS1621xs+ yes
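For reference, adapter_cards.conf is an ini-style file, and the set_section_key_value call above effectively writes DS1621xs+=yes under the [M2D20_sup_nvme] section. The sketch below (not DSM's actual tool; the file content is a minimal assumed sample) shows that resulting layout and one way to read the value back for a quick check:

```shell
#!/bin/sh
# Illustration of the adapter_cards.conf entry the command above creates,
# using a temp file with an assumed minimal layout.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
[M2D20_sup_nvme]
DS1621xs+=yes
EOF

# Read the key back: enter the section, stop at the next section header,
# and print the value for the DS1621xs+ key.
val=$(awk -F= '/^\[M2D20_sup_nvme\]/{s=1;next} /^\[/{s=0} s && $1=="DS1621xs+"{print $2}' "$conf")
echo "$val"
```

The same awk one-liner, pointed at /usr/syno/etc.defaults/adapter_cards.conf over SSH, would confirm whether the setting stuck.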
I created a new volume and defined new storage in VMM, migrated the virtual machines to the volume on the M2D20 card, and did some speed tests in Windows:
The result is good and in line with what I expected.
When the M.2 drives were installed in the internal cache slots, they only ran at PCIe 2.0 speed, so no better than SATA-600 and not using the speed capabilities of NVMe.
Now that the M.2 drives are installed on the M2D20 card, they can run at the much higher PCIe 3.0 speed.
I ran a quick scan in Windows Defender: previously it took 8 minutes; in the new setup it takes 4 minutes.
I think it is safe to say that storage speed is doubled with this setup, which is a big deal for a virtual machine.
Not everything works: running a benchmark in Storage Manager takes forever and gives no result.
Thanks for the scripts, and for all your help.
@007revad commented on GitHub (Jul 2, 2023):
Nice. I wish my DS1821+ had a built-in 10GbE port so I could use the PCIe slot for an M2D20.