[GH-ISSUE #51] Other PCI Card Support #129

Open
opened 2026-03-12 15:31:21 +03:00 by kerem · 21 comments

Originally created by @allebone on GitHub (Oct 2, 2024).
Original GitHub issue: https://github.com/007revad/Synology_enable_M2_card/issues/51

Howdy!

Is there any chance this script would work for enabling other third-party cards? For example, an LQD-3000 4xM.2 “Honeybadger”?

I tested one today in the SA6400; Info Center shows the PCI slot as occupied. If I SSH into it, I can see 4 new NVMe drives under /dev, but nothing in Storage Manager.

I tried running this script and the DB script, but it said no M.2 was detected.

Any other tricks or ideas?

Note: I’m not looking to create a volume, hoping to use as SSD cache.

@007revad commented on GitHub (Oct 2, 2024):

Do they show up in /sys/block?

 /sys/block/nvme0n1
 /sys/block/nvme1n1
 /sys/block/nvme2n1
 /sys/block/nvme3n1
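
For anyone checking the same thing: each entry in /sys/block is a symlink whose target encodes the PCIe path the drive hangs off, so something like this shows both at once:

```
# List NVMe block devices; the symlink targets show the PCIe path
ls -l /sys/block/ | grep nvme
```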

The SA6400 uses a device tree blob, so in theory it should be possible.

Do you know what PLX chip the LQD-3000 uses? I believe that DSM would need to have a driver that supports the PLX chip (but I could be wrong).

What does lspci -s 0 return?

Synology's M2D20 and E10M20-T1 use the ASMedia ASM2824 PCIe Gen3 switch chip.

Synology's older M2D18 uses the old Microsemi/PMC/IDT PES24T6G2 PCIe Gen2 switch chip.
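
For reference, a PCIe switch like these shows up as a set of PCI bridges in lspci; a quick, non-authoritative way to spot which switch chip a card uses:

```
# PCIe switch ports appear as PCI bridges; -nn adds numeric vendor:device IDs
lspci -nn | grep -i 'PCI bridge'
```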

The next problem would be working out how DSM identifies the PCIe M.2 cards:

  1. Whether DSM checks for a Synology vendor ID and product ID for PCIe cards (see the sketch below this list).
  2. Where in DSM, via SSH, to find the vendor name and model number of the PCIe card.
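
For point 1, the raw IDs are readable straight from sysfs; a minimal sketch (plain shell, no extra tools assumed):

```
# Print vendor/device and subsystem IDs for every PCIe device.
# A Synology-branded card would presumably carry Synology's IDs
# in the subsystem fields.
for dev in /sys/bus/pci/devices/*; do
    printf '%s  %s:%s  subsys %s:%s\n' "${dev##*/}" \
        "$(cat "$dev/vendor")" "$(cat "$dev/device")" \
        "$(cat "$dev/subsystem_vendor")" "$(cat "$dev/subsystem_device")"
done
```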

@allebone commented on GitHub (Oct 2, 2024):

Sorry, I'm working across a few time zones, and I appreciate the response!

It does show up in the /sys/block directory:
Screenshot 2024-10-02 at 10 51 28 AM

  2. I'll have to ask our Liqid guys; it's not apparent from looking at it, and it's covered by the heatsink.

  3. Output to Text File of PCI Output.
    PCI_Output.txt

Liqid/Synology Photos:

IMG_7905
IMG_7907
IMG_7906

@007revad commented on GitHub (Oct 2, 2024):

> It does show up in the /sys/block directory:
> Screenshot 2024-10-02 at 10 51 28 AM

Your photo shows the LQD-3000 in PCIe slot 2, but ls -l shows it as PCIe slot 1 (0000:40:03.01); that's probably due to how XPE handles things.

I assume the LQD-3000 has a compatible PLX chip because DSM is detecting the 4 NVMe drives.

From your screenshots I know what would need to be added to the "model.dtb" file, except for the first 3 lines:

E10M20-T1 {
	compatible = "Synology";
	model = "synology_e10m20-t1";
	power_limit = "100,100,100,100";

	m2_card@1 {

		nvme {
			pcie_postfix = "00.0,01.0,00.0";
			port_type = "ssdcache";
		};
	};

	m2_card@2 {

		nvme {
			pcie_postfix = "00.0,02.0,00.0";
			port_type = "ssdcache";
		};
	};

	m2_card@3 {

		nvme {
			pcie_postfix = "00.0,09.0,00.0";
			port_type = "ssdcache";
		};
	};

	m2_card@4 {

		nvme {
			pcie_postfix = "00.0,0a.0,00.0";
			port_type = "ssdcache";
		};
	};
};
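
For anyone following along: a device tree blob like model.dtb can be converted to editable source and back with the dtc device tree compiler, assuming a dtc binary is available on (or copied to) the NAS. A rough sketch:

```
# Decompile the binary blob into readable device tree source
dtc -I dtb -O dts -o model.dts model.dtb
# ...add the m2_card@N nodes shown above to model.dts...
# Recompile the edited source back into a blob
dtc -I dts -O dtb -o model.dtb model.dts
```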

@allebone commented on GitHub (Oct 2, 2024):

Yeah, I noticed the PCI slot numbering, but I guess they didn't expect anyone to use Slots 1 and 4, since that's where the InfiniBand cards go for SA6400 expansion bays. It's sad they made the slots a different shape too, so you can't just use all 4 PCI slots.

Thank you for your quick reply. I hate to take up more of your time and I feel dumb asking... what do I do with the code above related to model.dtb? I've only used the script in its simplest form, so I'm not sure where to edit the model file directly.

Also, am I at risk of that file changing during updates, etc.?

@007revad commented on GitHub (Oct 2, 2024):

Oh, it's a real Synology SA6400. I thought it was some other server and you had installed xpenology on it. I googled for "SA6400 motherboard" and what Google found did not look like your photo. I just googled again and realised I was previously looking at a photo of an FS6400 motherboard instead of an SA6400.

That code was more for my reference so when I get to editing the script to support the LQD-3000 I can use that code. The bits I'm still unsure about are:

E10M20-T1 {
	compatible = "Synology";
	model = "synology_e10m20-t1";

which would need to be changed to whatever DSM sees the LQD-3000 as (once I figure out how to get that information from DSM).

Probably something like

LQD3000 {
	compatible = "Liqid";
	model = "liqid_lqd3000";

Once we have a script working to automatically edit the model.dtb file, you would need to schedule the script to run at boot so it runs again after any DSM update.
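
A hypothetical sketch of such a boot task (both paths are assumptions for illustration; the live location in particular may differ):

```
#!/bin/bash
# Re-apply the patched device tree if a DSM update restored the stock one.
# /root/model.dtb.patched is an invented location for this example, and
# /etc.defaults/model.dtb is assumed to be where DSM keeps the live blob.
PATCHED=/root/model.dtb.patched
LIVE=/etc.defaults/model.dtb

if ! cmp -s "$PATCHED" "$LIVE"; then
    cp -p "$PATCHED" "$LIVE"
fi
```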

@007revad commented on GitHub (Oct 3, 2024):

> 3. Output to Text File of PCI Output.
> PCI_Output.txt

Can you run it as sudo, as lspci then provides a lot more detailed information:

sudo lspci -s 0

@allebone commented on GitHub (Oct 3, 2024):

Oh yeah! Makes more sense now. Sorry if I was being dense in the original reply. I should have taken a larger photo.

I'll run that command in the AM. I've already retreated for the day. Also... tbh, I have to turn it on, run commands, and turn it off... it's LOUD for a small office. Once I'm certain I can get it working, I'll put it into the datacenter for a while.

@allebone commented on GitHub (Oct 3, 2024):

lspci-v.txt
lspci-mm.txt
lspci-4.txt
lspci-3.txt
lspci-2.txt
lspci-1.txt
lspci-0.txt
lspci-t.txt

@007revad commented on GitHub (Oct 3, 2024):

Ok. lspci on an Intel NAS provides a lot less information than it does for AMD.

And some good news, I've discovered that it's possible to make the 4 NVMe drives in the PCIe card appear as internal NVMe drives. I can hard-code the values to suit your LQD-3000. I'll have a script for you to test today.

Ideally I want the script to get the pci paths automatically, but the only way I can currently do that requires there be 4 NVMe drives installed in the LQD-3000.
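
For reference, that detection boils down to resolving each NVMe block device's symlink, which encodes the full PCIe path; it just can't see a slot with no drive in it:

```
# Resolve each NVMe namespace to its full PCIe device path
for d in /sys/block/nvme*n1; do
    readlink -f "$d"
done
```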

@007revad commented on GitHub (Oct 3, 2024):

Try this script: sa6400_lqd3000.zip

It will only work with the LQD-3000 in the PCIe slot labelled slot-2.

@allebone commented on GitHub (Oct 4, 2024):

You bet! I'll give it a shot in the AM. Thank you!

Since you mentioned it: the LQD-3000 always has 4 NVMe drives from the factory, and if you order a smaller size, it's still 4x smaller-capacity drives.

The LQD4500 has 8 drives, but an FHFL card can't fit in the Synology!

@allebone commented on GitHub (Oct 4, 2024):

Script ran successfully. Rebooted. It does not show in the Storage pane currently, though.

It shows PCI Slot 1 (within the GUI) as occupied, but the card is physically in Slot 2, like you said.

The 4 drives show up in /sys/block, though.

Screenshot 2024-10-04 at 7 21 52 AM
Screenshot 2024-10-04 at 7 21 32 AM

If I misunderstood you, I could move it to Slot 3, which is Slot 2 in the GUI. Or I can run additional commands, etc.

@007revad commented on GitHub (Oct 4, 2024):

Moving it to Slot 3 won't help, because the script is hard-coded to use the Slot 2 PCIe slot path from your /sys/block screenshot:
image

I must be missing something. I'll keep investigating.

@007revad commented on GitHub (Oct 4, 2024):

What does udevadm info /dev/nvme0n1 output?
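
For anyone curious what to look for: the P: line of udevadm's output carries the full kernel device path, including the PCIe portion (the example path in the comment below is illustrative, not real output from this unit):

```
# The P: line carries the PCIe path, e.g.:
# P: /devices/pci0000:40/0000:40:03.1/0000:41:00.0/nvme/nvme0/nvme0n1
udevadm info /dev/nvme0n1 | grep '^P:'
```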

@007revad commented on GitHub (Oct 5, 2024):

And grep 'nvme' /var/log/synoscgi.log

@allebone commented on GitHub (Oct 5, 2024):

synoscgi.txt
udevadm.txt

You bet! Here ya go.

@allebone commented on GitHub (Oct 10, 2024):

Any other commands I missed?

@007revad commented on GitHub (Oct 10, 2024):

What do these commands return?

head -5 /usr/syno/etc/adapter_cards.conf
head -5 /usr/syno/etc.defaults/adapter_cards.conf

@allebone commented on GitHub (Oct 10, 2024):

You got it!

admin@server:/$ head -5 /usr/syno/etc/adapter_cards.conf
[FX2422N_sup_nic]
[FX2422N_sup_nvme]
FS6600N=yes
SA6400=yes
[FX2422N_sup_sata]

admin@server:/$ head -5 /usr/syno/etc.defaults/adapter_cards.conf
[FX2422N_sup_nic]
[FX2422N_sup_nvme]
FS6600N=yes
SA6400=yes
[FX2422N_sup_sata]

@007revad commented on GitHub (Oct 10, 2024):

Unfortunately they're both okay, so that isn't the problem.

I've posted a question about this on the xpenology forum to see if any of the xpenology developers can offer guidance: https://xpenology.com/forum/topic/70791-getting-liqid-lqd3000-4xm2-working-in-sa6400/

@allebone commented on GitHub (Oct 10, 2024):

Bummer! But I can't thank you enough for the referral and the assist. If there's anything I can do... let me know. I feel like a freeloader/tourist just hanging out, watching.
