[GH-ISSUE #491] RS1221+, Startech x4 PCI Express - M.2 PCIe SSD and WD Red SN700 500GB SSD M.2 #167

Closed
opened 2026-03-07 19:16:32 +03:00 by kerem · 10 comments

Originally created by @kterkkila on GitHub (Jul 8, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/491

Hello,

Before I found the great Synology_HDD_db project, I was wrestling with the system. Linux did recognize the NVMe device, so I did:
sudo mkfs.ext4 /dev/nvme0n1
sudo mkdir -p /volumeSSD
sudo mount /dev/nvme0n1 /volumeSSD

I also set support_disk_compatibility="no" and support_m2_pool="yes" in synoinfo.conf.

So I got a volume that I can read and write over SSH, but it does not exist in the Synology UI. Then I found this project and tried a few options, most recently:
syno_hdd_db.sh -fpin

Currently:
sudo syno_hdd_util --ssd_detect --> not in the list
sudo synonvme --get-location /dev/nvme0 --> Can't get the location of /dev/nvme0
sudo synonvme --is-nvme-ssd /dev/nvme0 --> It is a NVMe SSD
sudo synonvme --m2-card-model-get /dev/nvme0 --> Not M.2 adapter card
sudo synonvme --model-get /dev/nvme0 --> Model name: WD Red SN700 500GB

sudo nvme list
Node             SN                    Model                 Namespace  Usage                       Format        FW Rev
---------------- --------------------  --------------------  ---------  --------------------------  ------------  --------
/dev/nvme0n1     25035L800770          WD Red SN700 500GB    1          500.11 GB / 500.11 GB       512 B + 0 B   111150WD

udevadm info /dev/nvme0
P: /devices/pci0000:00/0000:00:01.1/0000:01:00.0/nvme/nvme0
N: nvme0
E: DEVNAME=/dev/nvme0
E: DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:01:00.0/nvme/nvme0
E: MAJOR=250
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:01.1/0000:01:00.0
E: SUBSYSTEM=nvme
E: SYNO_INFO_PLATFORM_NAME=v1000
E: SYNO_KERNEL_VERSION=4.4
E: SYNO_SUPPORT_USB_PRINTER=yes
E: SYNO_SUPPORT_XA=no
E: USEC_INITIALIZED=995487

cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:01.1"
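For what it's worth, the PCIe root from the udevadm output can be checked against extensionPorts directly. A minimal sketch, with the paths hard-coded from the outputs in this issue (on the NAS you would read them from udevadm and /etc.defaults/extensionPorts):

```shell
# PHYSDEVPATH from "udevadm info /dev/nvme0" above
physdevpath="/devices/pci0000:00/0000:00:01.1/0000:01:00.0"
# pci1= value from /etc.defaults/extensionPorts above
conf_root="0000:00:01.1"

# The root port is the path segment right after pci0000:00/
root=$(echo "$physdevpath" | cut -d/ -f4)

if [ "$root" = "$conf_root" ]; then
    echo "PCIe root $root is already listed in extensionPorts"
else
    echo "PCIe root $root is NOT listed in extensionPorts"
fi
```

This agrees with the pci1 entry above, so extensionPorts already covers this slot.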

sudo smartctl -a /dev/nvme0 --> Read NVMe Identify Controller failed: NVMe Status 0x400b

The fan is running at full speed, probably because there is no temperature information from the NVMe controller.

I have rebooted, yes. The next thing to try is a full cold shutdown with the power cord unplugged, but not until an ongoing file copy finishes. I'm not feeling so lucky, so I'm asking already; I expect my own knowledge of these systems is lacking. The need to upgrade arose when Btrfs started causing problems with just under 30 million files.

I'm not really sure whether the system should even work with the StarTech x4 PCI Express adapter. It seems to work at least partially at the moment.

Cheers, Kimmo

kerem closed this issue 2026-03-07 19:16:32 +03:00

@007revad commented on GitHub (Jul 8, 2025):

Since DSM 7.2.1, DSM runs the fans at full speed when the PCIe M.2 slots don't have a power limit set in the kernel device tree blob (/etc/model.dtb).

But your "udevadm info /dev/nvme0" output looks more like what an internal M.2 drive produces:
P: /devices/pci0000:00/0000:00:01.1/0000:01:00.0/nvme/nvme0

An M.2 drive in a PCIe card, like the E10M20-T1, looks like this:
P: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1/nvme1n1

Which DSM version do you have?

I'm sure I can create a short script for you that adds an M.2 slot to model.dtb (with a power limit) so it appears as an internal M.2 slot. I'd make it add the M.2 slot only if it's missing from model.dtb, so you could schedule it to run as root at shutdown and boot, and it would update model.dtb again after a DSM update.
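The idempotent check described above could be sketched like this. Everything here is illustrative: the stand-in file and the "m2_card" marker are assumptions, and a real version would first decompile /etc.defaults/model.dtb with dtc (-I dtb -O dts) and recompile after editing:

```shell
dts=$(mktemp)
# Stand-in for the decompiled model.dts (real source: dtc -I dtb -O dts /etc.defaults/model.dtb)
cat > "$dts" <<'EOF'
/ {
        model = "synology_v1000_rs1221+";
};
EOF

# Only patch if the M.2 node is absent, so the script is safe to run at every boot
if grep -q "m2_card" "$dts"; then
    status="present"
else
    status="missing"
fi
echo "M.2 slot node: $status"
rm -f "$dts"
```

When the node is missing, the script would insert it and recompile with dtc -I dts -O dtb; when present, it exits without touching anything, which is what makes a boot/shutdown schedule safe.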


@kterkkila commented on GitHub (Jul 9, 2025):

The version is DSM 7.2.2-72806 Update 3. The explanation for the full-speed fan sounds logical.

Such a script would be really nice! Would you need any other information from the system, either for the script or for developing the project further?


@kterkkila commented on GitHub (Jul 11, 2025):

Here's how the dts file looks at the moment. The next steps, the actual modifications, are more critical, so some advice would be valuable.

cat model.dts
/dts-v1/;

/ {
        compatible = "Synology";
        model = "synology_v1000_rs1221+";
        version = <0x01>;
        syno_spinup_group = <0x08>;
        syno_spinup_group_delay = <0x00>;
        syno_hdd_powerup_seq = "true";
        syno_smbus_hdd_type = "cpld";
        syno_smbus_hdd_adapter = <0x01>;
        syno_smbus_hdd_address = <0x42>;
        syno_cmos_reg_secure_flash = <0xe0>;
        syno_cmos_reg_secure_boot = <0xe2>;
        syno_uart_logout_gpio = <0x07 0x01>;

        RX418 {
                compatible = "Synology";
                model = "synology_rx418";

                pmp_slot@1 {

                        libata {
                                EMID = <0x00>;
                                pmp_link = <0x00>;
                        };
                };

                pmp_slot@2 {

                        libata {
                                EMID = <0x00>;
                                pmp_link = <0x01>;
                        };
                };

                pmp_slot@3 {

                        libata {
                                EMID = <0x00>;
                                pmp_link = <0x02>;
                        };
                };

                pmp_slot@4 {

                        libata {
                                EMID = <0x00>;
                                pmp_link = <0x03>;
                        };
                };
        };

        E10M20-T1 {
                compatible = "Synology";
                model = "synology_e10m20-t1";
                power_limit = "14.85,14.85";

                m2_card@1 {

                        nvme {
                                pcie_postfix = "00.0,08.0,00.0";
                                port_type = "ssdcache";
                        };
                };

                m2_card@2 {

                        nvme {
                                pcie_postfix = "00.0,04.0,00.0";
                                port_type = "ssdcache";
                        };
                };
        };

        M2D18 {
                compatible = "Synology";
                model = "synology_m2d18";
                power_limit = "9.9,9.9";

                m2_card@1 {

                        ahci {
                                pcie_postfix = "00.0,03.0,00.0";
                                ata_port = <0x00>;
                        };

                        nvme {
                                pcie_postfix = "00.0,04.0,00.0";
                                port_type = "ssdcache";
                        };
                };

                m2_card@2 {

                        ahci {
                                pcie_postfix = "00.0,03.0,00.0";
                                ata_port = <0x01>;
                        };

                        nvme {
                                pcie_postfix = "00.0,05.0,00.0";
                                port_type = "ssdcache";
                        };
                };
        };

        M2D20 {
                compatible = "Synology";
                model = "synology_m2d20";
                power_limit = "14.85,14.85";

                m2_card@1 {

                        nvme {
                                pcie_postfix = "00.0,08.0,00.0";
                                port_type = "ssdcache";
                        };
                };

                m2_card@2 {

                        nvme {
                                pcie_postfix = "00.0,04.0,00.0";
                                port_type = "ssdcache";
                        };
                };
        };

        internal_slot@1 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.6,00.0";
                        ata_port = <0x00>;
                };

                led_green {
                        led_name = "syno_led0";
                };

                led_orange {
                        led_name = "syno_led1";
                };
        };

        internal_slot@2 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.6,00.0";
                        ata_port = <0x02>;
                };

                led_green {
                        led_name = "syno_led2";
                };

                led_orange {
                        led_name = "syno_led3";
                };
        };

        internal_slot@3 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.7,00.0";
                        ata_port = <0x01>;
                };

                led_green {
                        led_name = "syno_led4";
                };

                led_orange {
                        led_name = "syno_led5";
                };
        };

        internal_slot@4 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.7,00.0";
                        ata_port = <0x03>;
                };

                led_green {
                        led_name = "syno_led6";
                };

                led_orange {
                        led_name = "syno_led7";
                };
        };

        internal_slot@5 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.6,00.0";
                        ata_port = <0x01>;
                };

                led_green {
                        led_name = "syno_led8";
                };

                led_orange {
                        led_name = "syno_led9";
                };
        };

        internal_slot@6 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.6,00.0";
                        ata_port = <0x03>;
                };

                led_green {
                        led_name = "syno_led10";
                };

                led_orange {
                        led_name = "syno_led11";
                };
        };

        internal_slot@7 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.7,00.0";
                        ata_port = <0x00>;
                };

                led_green {
                        led_name = "syno_led12";
                };

                led_orange {
                        led_name = "syno_led13";
                };
        };

        internal_slot@8 {
                protocol_type = "sata";
                led_type = "lp3943";

                ahci {
                        pcie_root = "00:01.7,00.0";
                        ata_port = <0x02>;
                };

                led_green {
                        led_name = "syno_led14";
                };

                led_orange {
                        led_name = "syno_led15";
                };
        };

        esata_port@1 {

                ahci {
                        pcie_root = "00:01.6,00.0";
                        ata_port = <0x04>;
                };
        };

        pcie_slot@1 {
                pcie_root = "00:01.1";
        };

        usb_slot@1 {

                vbus {
                        syno_gpio = <0x2a 0x01>;
                };

                usb2 {
                        usb_port = "1-2";
                };

                usb3 {
                        usb_port = "2-2";
                };
        };

        usb_slot@2 {

                vbus {
                        syno_gpio = <0x0b 0x01>;
                };

                usb2 {
                        usb_port = "3-1";
                };

                usb3 {
                        usb_port = "4-1";
                };
        };
};

@kterkkila commented on GitHub (Jul 11, 2025):

Maybe something like the following?

StarTechAdapter {
    compatible = "Synology";
    model = "startech_nvme";
    power_limit = "9.9,9.9";

    m2_card@1 {
        nvme {
            pcie_postfix = "00.0,01.0,00.0";  // adjust based on your NVMe lspci
            port_type = "ssdcache";
        };
    };
};

@007revad commented on GitHub (Jul 13, 2025):

To stop the fans running full speed try:
sudo synosetkeyvalue /etc.defaults/synoinfo.conf support_fan_adjust_by_ext_nic low
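synosetkeyvalue just rewrites (or appends) a key="value" line in the conf file. Since synosetkeyvalue only exists on DSM, here is a portable simulation of its effect on a temp file, with the key and value from the command above:

```shell
# Stand-in for /etc.defaults/synoinfo.conf with the key at its assumed default
conf=$(mktemp)
echo 'support_fan_adjust_by_ext_nic="yes"' > "$conf"

key="support_fan_adjust_by_ext_nic"
val="low"
# Replace the existing line, or append if the key is absent
if grep -q "^${key}=" "$conf"; then
    sed -i "s|^${key}=.*|${key}=\"${val}\"|" "$conf"
else
    printf '%s="%s"\n' "$key" "$val" >> "$conf"
fi

result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```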


@007revad commented on GitHub (Jul 13, 2025):

I'm not sure if "StarTechAdapter" and "startech_nvme" will work.

StarTechAdapter {
    compatible = "Synology";
    model = "startech_nvme";
    power_limit = "9.9,9.9";

    m2_card@1 {
        nvme {
            pcie_postfix = "00.0";  // this is just an educated guess
            port_type = "ssdcache";
        };
    };
};

@kterkkila commented on GitHub (Jul 13, 2025):

The part number for the StarTech adapter is PEX4M2E1, so maybe it's better to use that.


@kterkkila commented on GitHub (Jul 14, 2025):

I modified model.dts, adapter_cards.conf and extensionPorts. No success. I also tested both pcie_postfix = "00.0" and pcie_postfix = "00.0,01.0,00.0".

Some related discussion can be found on the Xpenology forum, where StarTech adapters are mentioned:
https://xpenology.com/forum/topic/58072-how-to-have-ds3622xs-recognize-nvme-ssd-cache-drive-maybe-works-on-other-models/page/3/

I guess I need to find another adapter card?


@007revad commented on GitHub (Jul 26, 2025):

Sorry, I've been busy and forgot about this. I was going to buy a Startech x4 PCI Express M.2 PCIe SSD card to test with...

I think you'd need to add PEX4M2E1 to /usr/syno/etc.defaults/adapter_cards.conf like:

[PEX4M2E1_sup_nvme]
PEX4M2E1=yes

/etc.defaults/extensionPorts doesn't need editing because it already has the RS1221+'s PCIe port in it.

Then in model.dtb something like:

PEX4M2E1 {
    compatible = "Synology";
    model = "synology_pex4m2e1";
    power_limit = "9.9,9.9";

    m2_card@1 {
        nvme {
            pcie_postfix = "00.0";  // this is just an educated guess
            port_type = "ssdcache";
        };
    };
};

or maybe

PEX4M2E1 {
    compatible = "Startech";
    model = "startech_pex4m2e1";
    power_limit = "9.9,9.9";

    m2_card@1 {
        nvme {
            pcie_postfix = "00.0";  // this is just an educated guess
            port_type = "ssdcache";
        };
    };
};

Though I'm not 100% sure on what the pcie_postfix should be.
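One way to make an educated guess at pcie_postfix: in the E10M20-T1 example the postfix segments appear to mirror the device.function hops after the root port in the device path. That mapping is an assumption, but applying it to the path from the udevadm output earlier in this issue:

```shell
# PHYSDEVPATH for the drive, from "udevadm info /dev/nvme0" above
physdevpath="/devices/pci0000:00/0000:00:01.1/0000:01:00.0"

# Keep the segments after the root port, strip the domain:bus prefixes,
# and join the rest with commas the way model.dts writes pcie_postfix
postfix=$(echo "$physdevpath" | cut -d/ -f5- \
          | sed 's|0000:[0-9a-f]*:||g' | tr '/' ',')
echo "pcie_postfix candidate: $postfix"
```

This yields "00.0", matching the educated guess above: the StarTech card has no PCIe switch, so there is only one hop between the root port and the drive.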

After editing /etc.defaults/model.dtb you need to copy it to /etc/model.dtb and make sure they both have the correct permissions.
[Screenshot showing the model.dtb file permissions]
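The copy-and-permissions step can be sketched as below, simulated in a temp directory. The real targets are /etc.defaults/model.dtb and /etc/model.dtb, and 0644 is an assumption about the correct mode (along with root:root ownership, which needs chown when running as root):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/etc.defaults" "$tmp/etc"
printf 'dtb-bytes' > "$tmp/etc.defaults/model.dtb"

# Copy the edited default into place and normalize permissions on both
cp "$tmp/etc.defaults/model.dtb" "$tmp/etc/model.dtb"
chmod 644 "$tmp/etc.defaults/model.dtb" "$tmp/etc/model.dtb"

perms=$(stat -c '%a' "$tmp/etc/model.dtb")
echo "perms: $perms"
rm -rf "$tmp"
```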

If none of that works, and you want to buy a card that does work, you'd need a Synology M2D20 or E10M20-T1. There are 3rd-party cards that use the same PLX chip as the M2D20 and E10M20-T1, but they cost almost as much as the M2D20, and that's for a 3rd-party card, even with the same ASMedia ASM2812 PLX chip.

See my thoughts on this from 2 years ago where I did find some cheap Chinese cards that use the ASMedia ASM2812 PLX chip: https://www.reddit.com/r/synology/comments/18tdli8/using_a_cheap_pcie_m2_card_in_a_synology_nas_part/


@kterkkila commented on GitHub (Feb 9, 2026):

I also had busy times and forgot to answer. I bought an M2D18 and it seems to work. The other modification needed to handle that huge number of small files was moving to the ext4 file system. Btrfs was constantly in trouble, and once free disk space dropped below 20% the system became useless. I set up two RAID 10 volumes at the same time, and since then I've had nothing to complain about.
