[GH-ISSUE #51] Other PCI Card Support #70
Originally created by @allebone on GitHub (Oct 2, 2024).
Original GitHub issue: https://github.com/007revad/Synology_enable_M2_card/issues/51
Howdy!
Is there any chance this script would work for enabling other third-party cards? For example, an LQD-3000 4xM.2 “Honeybadger”?
I tested one today in the SA6400; Info Center shows the PCI slot as occupied. If I SSH into it, I can see 4 new NVMe drives under /dev, but nothing in Storage Manager.
I tried running this script and the DB script but it said no M.2 detected.
Any other tricks or ideas?
Note: I’m not looking to create a volume; I'm hoping to use it as SSD cache.
@007revad commented on GitHub (Oct 2, 2024):
Do they show up in /sys/block ?
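(For reference, a quick way to check that over SSH looks like the following; the device names are just examples:)
# List block devices and keep only the NVMe entries
ls /sys/block | grep nvme
# The entries are symlinks, so a long listing also reveals each drive's PCIe path
ls -l /sys/block/nvme*n1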
The SA6400 uses a device tree blob, so in theory it should be possible.
Do you know what PLX chip the LQD-3000 uses? I believe that DSM would need to have a driver that supports the PLX chip (but I could be wrong).
What does
lspci -s 0
return?

Synology's M2D20 and E10M20-T1 use the ASMedia ASM2824 PCIe Gen3 switch chip.
Synology's older M2D18 uses the old Microsemi/PMC/IDT PES24T6G2 PCIe Gen2 switch chip.
The next problem would be working out how DSM identifies the PCIe M.2 cards.
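(For reference, one way to look for the card's switch chip over SSH is the sketch below; the bus address is a placeholder to replace with the bridge's real address:)
# PCIe switches normally appear as several "PCI bridge" entries
lspci | grep -i bridge
# Full details (vendor/device IDs, link width) for one address
sudo lspci -vv -s 0000:00:00.0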
@allebone commented on GitHub (Oct 2, 2024):
Sorry, I'm working across a few time zones, and I appreciate the response!
It does show up in the /sys/block directory:

I'll have to ask our Liqid guys; it's not apparent from looking at it, and it's covered by the heatsink.
PCI output saved to a text file:
PCI_Output.txt
Liqid/Synology Photos:
@007revad commented on GitHub (Oct 2, 2024):
Your photo shows the LQD-3000 is in PCIe slot 2. But
ls -l
shows it as PCIe slot 1 (0000:40:03.01), though that's probably due to how XPE handles things.

I assume the LQD-3000 has a compatible PLX chip because DSM is detecting the 4 NVMe drives.
From your screenshots I know what would need to be added to the "model.dtb" file, except for the first 3 lines:
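(As a rough illustration of how such model.dtb edits are usually applied, not the specific entries referred to above; the dtc binary and the /etc.defaults/model.dtb path are assumptions to verify on the SA6400:)
# Keep a backup of the current device tree blob
cp /etc.defaults/model.dtb /tmp/model.dtb.bak
# Decompile the blob into editable device tree source
dtc -I dtb -O dts -o /tmp/model.dts /etc.defaults/model.dtb
# ...add the card's entries to /tmp/model.dts here...
# Recompile the edited source back into a blob
dtc -I dts -O dtb -o /etc.defaults/model.dtb /tmp/model.dts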
@allebone commented on GitHub (Oct 2, 2024):
Yeah, I noticed the PCI slot numbering, but I guess they didn't expect anyone to use slots 1 and 4, since that's where the InfiniBand cards go for the SA6400 expansion bays. It's a shame they made those slots a different shape too, so you can't just use all 4 PCI slots.
Thank you for your quick reply. I hate to take up more of your time and I feel dumb asking... what do I do with the code above related to model.dtb? I've only used the script in its simplest form, so I'm not sure where to edit the model file directly.
Also, am I at risk of that file changing during updates etc?
@007revad commented on GitHub (Oct 2, 2024):
Oh, it's a real Synology SA6400. I thought it was some other server and you had installed xpenology on it. I googled for "SA6400 motherboard" and what google found did not look like your photo. I just googled again and realised I was previously looking at a photo of an FS6400 motherboard instead of an SA6400.
That code was mostly for my own reference, so that when I get to editing the script to support the LQD-3000 I can use it. The bits I'm still unsure about are:
which would need to be changed to whatever DSM sees the LQD-3000 as (once I figure out how to get that information from DSM).
Probably something like
Once we have a script working to automatically edit the model.dtb file you would need to schedule the script to run at boot so it runs after any DSM updates.
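(A minimal sketch of that boot-time step: a boot-triggered task in DSM's Task Scheduler, run as root, whose user-defined script is along these lines; the paths and filename are illustrative:)
#!/usr/bin/env bash
# Re-apply the model.dtb edits after every boot / DSM update and keep a log
/volume1/scripts/sa6400_lqd3000.sh >> /volume1/scripts/sa6400_lqd3000.log 2>&1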
@007revad commented on GitHub (Oct 3, 2024):
Can you run it as sudo, as lspci then provides a lot more detailed information:
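(For example, the usual verbose variants would be something like the following; the exact flags are a guess based on the attachments in the next reply:)
sudo lspci -v    # verbose listing with vendor names and capabilities
sudo lspci -t    # tree view of the bus and bridge topology
sudo lspci -mm   # machine-readable listing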
@allebone commented on GitHub (Oct 3, 2024):
Oh yeah! Makes more sense now. Sorry if I was being dense in the original reply. I should have taken a larger photo.
I'll run that command in the AM; I've already retreated for the day. Also, tbh, I have to turn it on, run commands and turn it off... it's LOUD for a small office. Once I'm certain I can get it working, I'll put it into the datacenter for a while.
@allebone commented on GitHub (Oct 3, 2024):
lspci-v.txt
lspci-mm.txt
lspci-4.txt
lspci-3.txt
lspci-2.txt
lspci-1.txt
lspci-0.txt
lspci-t.txt
@007revad commented on GitHub (Oct 3, 2024):
Ok. lspci on an Intel NAS provides a lot less information than it does for AMD.
And some good news, I've discovered that it's possible to make the 4 NVMe drives in the PCIe card appear as internal NVMe drives. I can hard-code the values to suit your LQD-3000. I'll have a script for you to test today.
Ideally I want the script to get the pci paths automatically, but the only way I can currently do that requires there be 4 NVMe drives installed in the LQD-3000.
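(One possible way to pull those paths once drives are installed, as a sketch rather than the script's actual method:)
# Resolve each NVMe namespace back to its full sysfs path;
# the chain of 0000:xx:yy.z segments is the PCIe path the device tree needs
for d in /sys/block/nvme*n1; do
    echo "$(basename "$d") -> $(readlink -f "$d/device")"
done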
@007revad commented on GitHub (Oct 3, 2024):
Try this script. sa6400_lqd3000.zip
It will only work with the LQD-3000 in the PCIe slot labelled slot-2.
@allebone commented on GitHub (Oct 4, 2024):
You bet! I'll give it a shot in the AM. Thank you!
Since you mentioned it: the LQD-3000 always ships with 4 NVMe drives from the factory, and if you order a smaller size, it's still 4 smaller drives.
The LQD4500 has 8 drives, but FHFL cards can't fit in the Synology!
@allebone commented on GitHub (Oct 4, 2024):
The script ran successfully and I rebooted, but it does not show in the storage pane currently.
It shows PCI Slot 1 (within the GUI) as occupied, but the card is in Slot 2 physically, like you said.
The 4 drives show up in /sys/block though.
If I misunderstood you, I could move it to Slot 3, which is Slot 2 in the GUI. Or I can run additional commands, etc.
@007revad commented on GitHub (Oct 4, 2024):
Moving it to Slot 3 won't help because the script is hard-coded to use the Slot 2 PCIe slot path from your /sys/block screenshot:

I must be missing something. I'll keep investigating.
@007revad commented on GitHub (Oct 4, 2024):
What does
udevadm info /dev/nvme0n1
output?
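(For reference, the lines of interest in that output are the device path entries; a quick way to show just those:)
udevadm info /dev/nvme0n1 | grep -E '^P:|DEVPATH'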
@007revad commented on GitHub (Oct 5, 2024):
And
grep 'nvme' /var/log/synoscgi.log

@allebone commented on GitHub (Oct 5, 2024):
synoscgi.txt
udevadm.txt
You bet! Here ya go.
@allebone commented on GitHub (Oct 10, 2024):
Any other commands I missed?
@007revad commented on GitHub (Oct 10, 2024):
What do these commands return?
@allebone commented on GitHub (Oct 10, 2024):
You got it!
admin@server:/$ head -5 /usr/syno/etc/adapter_cards.conf
[FX2422N_sup_nic]
[FX2422N_sup_nvme]
FS6600N=yes
SA6400=yes
[FX2422N_sup_sata]
admin@server:/$ head -5 /usr/syno/etc.defaults/adapter_cards.conf
[FX2422N_sup_nic]
[FX2422N_sup_nvme]
FS6600N=yes
SA6400=yes
[FX2422N_sup_sata]
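(For context, those files list which NAS models each Synology adapter card is enabled for; a quick way to print a whole section, using the section name shown above:)
# Print the FX2422N_sup_nvme section to confirm SA6400=yes is present
sed -n '/\[FX2422N_sup_nvme\]/,/^\[/p' /usr/syno/etc/adapter_cards.conf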
@007revad commented on GitHub (Oct 10, 2024):
Unfortunately they're both okay.
I've posted a question about this on the xpenology forum (https://xpenology.com/forum/topic/70791-getting-liqid-lqd3000-4xm2-working-in-sa6400/) to see if any of the xpenology developers can offer guidance.
@allebone commented on GitHub (Oct 10, 2024):
Bummer! But I can't thank you enough for the referral and assist. If there is anything I can do...let me know. I feel like a freeloader/tourist just hanging out, watching.