mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #524] DSM 7.2.2, Double 980 pro M2, raid0, but only 800MB/s #183
Originally created by @Musicminion on GitHub (Nov 6, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/524
The speed of a single M.2 interface on a Synology is about 700 MB/s; I have confirmed this with Synology engineers. So I expected two M.2 980 Pro drives in RAID 0 to reach about 1.4 GB/s.
However, hands-on testing showed that the dual M.2 drives in RAID 0 only achieve around 800 MB/s. After reviewing the relevant documentation, I concluded that the M.2 storage pool had previously been created with the syno_hdd_db script. The SSDs are two SN550 models, both operating at normal speed with PCIe Gen 3 x2 bandwidth (nearly 2 GB/s over 10GbE multi-channel). The system is running DSM 7.2.1 Update 4.
But now the speed seems to be restricted, and I can't even tell where the read/write limit is being imposed.
Related topic (Chinese): https://www.chiphell.com/thread-2659251-1-1.html
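To narrow down where a bottleneck like this sits, it helps to first measure raw sequential throughput on the NAS itself, separate from SMB. A minimal sketch using dd (the test path is an example, not taken from the report; on a Synology volume it would be somewhere under /volume1):

```shell
# Rough sequential-throughput check. TESTFILE is an example path, not
# from the original issue; point it at the volume you want to test.
TESTFILE=/tmp/seq_test.bin

# Write a 256 MiB file, forcing it to disk with fsync.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>/dev/null
sync

# Sequential read; dd's summary line reports the throughput.
# Caveat: a freshly written file may still be in the page cache, so on
# real hardware drop caches first (or use iflag=direct) for honest numbers.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

If this local number is well above what SMB delivers, the limit is in the network path (protocol overhead, signing, NIC), not the drives.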
@007revad commented on GitHub (Nov 6, 2025):
I've tried NVMe drives in the internal M.2 slots and in an E10M20-T1, and while the NVMe drives in the E10M20-T1 are able to read at 2400 MB/s via
sudo hdparm -tT --direct /dev/nvme0n1
I've never seen speeds faster than 980 MB/s via SMB. I've even tried RAID 0 with 4 NVMe drives.
https://github.com/007revad/Synology_Information_Wiki/blob/main/pages/NVMe-speed.md
It's interesting that the first post in the chiphell thread said the speeds were faster with 7.2.1 update 4.
There is a sequential I/O setting that Synology removed. Enabling this setting on SSD or NVMe caches makes them as fast as they used to be in DSM 6. I wonder if something similar can be done for NVMe drives.
@Musicminion commented on GitHub (Nov 7, 2025):
Hi, I have solved this problem. The cause was SMB SecuritySignature.
Here are some commands for Windows:
Before removing the signature requirement:
After:
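The poster's actual commands and before/after output were not preserved in this mirror. As a hedged reconstruction (these are standard Windows PowerShell SMB cmdlets, not necessarily the poster's exact commands), client-side SMB signing can be inspected and the requirement turned off like this:

```powershell
# Check the current SMB client signing settings.
Get-SmbClientConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature

# Stop requiring SMB signing (run as Administrator).
# Note: disabling signing trades integrity protection for throughput.
Set-SmbClientConfiguration -RequireSecuritySignature $false -Force
```

Reconnecting the mapped share (or rebooting) makes the change take effect. SMB signing adds per-packet HMAC work on both ends, which is a plausible mechanism for the throughput cap described above.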
@007revad commented on GitHub (Nov 7, 2025):
Nice. I'm now seeing 1.10 GB/s writes and 1.16 GB/s reads to and from a single WD Black SN770 500GB in the E10M20-T1.
Strangely, I get better speeds to and from an old SATA Samsung SSD 850 EVO 500GB in my PC than I do to a Samsung NVMe 970 EVO Plus 500GB.
Sadly my speeds drop off a lot after a short while.