mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #470] NVME Storage Pool degraded after copying lots of data (40GB+) #158
Originally created by @alinmiron on GitHub (May 15, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/470
Thanks for this script and for the others that help us truly own our Synology NAS.
I have a DS720+ NAS running DSM Version: 7.2.2-72806 Update 3. I have two Seagate IronWolf NAS 12 TB 3.5" SATA III drives [ST12000VN0008-2YS101] in Storage Pool 1, which runs fine.
I've added two NVMe drives: Kingston FURY Renegade PCIe 4.0 NVMe M.2 SSD [KINGSTON SFYRD2000G] and used your script, run as a scheduled task:
/usr/share/syno_hdd_db/syno_hdd_db.sh -n -r -i
I was able to create a Storage Pool 2 with these NVMes:
RAID Type: Synology Hybrid RAID (SHR) (With data protection for 1-drive fault tolerance)
Total capacity: 1.8TB
Multiple volume support: Yes
Volume encryption: Yes
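For anyone setting up the same scheduled task, a minimal wrapper sketch is below. The script path and the `-n -r -i` flags are the ones used above; the log file location and the existence check are my own additions for illustration.

```shell
#!/bin/sh
# Hypothetical wrapper for running syno_hdd_db.sh from a DSM scheduled task.
# The script path and the -n -r -i flags are taken from the report above;
# the log file location is an assumption, not part of the original setup.
SCRIPT=/usr/share/syno_hdd_db/syno_hdd_db.sh
LOG=/tmp/syno_hdd_db_task.log

if [ -x "$SCRIPT" ]; then
    # Append output so each scheduled run is recorded in one place.
    "$SCRIPT" -n -r -i >> "$LOG" 2>&1
else
    echo "$(date): syno_hdd_db.sh not found at $SCRIPT" >> "$LOG"
fi
```

See the script's README for what each flag does; the wrapper only adds logging so failures of a silent scheduled task are visible afterwards.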
Almost every time I copy a large amount of data from Storage Pool 1 (RAID 1 Seagate HDDs) to Storage Pool 2 (NVMes), the System Health widget shows a Warning, and in Storage Manager the Drive Status is Critical for one or both NVMes.
The overall transfer speed is slow, oscillating between 14 MB/s and 60 MB/s.
Deactivating the Critical drive and running Repair takes a long time (more than 15 hours), and it also reformats the NVMe. This shortens the NVMe's lifespan, so I want to avoid it as much as possible.
Digging further on the interwebs, I found another of your scripts, Synology_Clear_Drive_Error-1.0.3. I ran "sudo -s ./syno_clear_drive_error.sh" and got "0 status_critical entries found" every time.
Now, the weird part: even with the output above, when the System Health status is "Warning" the script seems to work. I just reboot the NAS and it goes back to "Healthy".
Proceeding with a copy of another large set of files, the System Health becomes "Critical" again. The copy itself works if I ignore the beeping and the "Critical" status.
Running "sudo -s ./syno_clear_drive_error.sh" again returns the same "0 status_critical entries found", but after a reboot the System Health remains "Critical".
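To illustrate what that "0 status_critical entries found" message is counting, here is a self-contained sketch of the same kind of tally. The log file and its line format below are invented for illustration; syno_clear_drive_error.sh reads DSM's own records, not this file.

```shell
#!/bin/sh
# Illustration only: count "status_critical" markers in a sample log.
# The file path and line format here are hypothetical; they are NOT
# the actual data source that syno_clear_drive_error.sh inspects.
log=$(mktemp)
cat > "$log" <<'EOF'
2025-05-15T10:00:01 nvme0n1 status_normal
2025-05-15T10:05:42 nvme0n1 status_critical
2025-05-15T10:05:43 nvme1n1 status_critical
EOF

# grep -c prints the number of matching lines.
count=$(grep -c 'status_critical' "$log")
echo "$count status_critical entries found"

rm -f "$log"
```

Run against this sample, the sketch prints "2 status_critical entries found"; a count of 0, as in the report, means no such markers were present where the script looked, even though the GUI still showed Critical.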
Then I have to deactivate the "Critical" NVMe, shut down the NAS, physically remove and re-seat the NVMe, reboot the NAS, and then run Repair on the storage pool.
It drives me crazy! Please help!
P.S.:
I have a total of 10 GB of RAM: 2 GB onboard plus an 8 GB 2666 MHz CL19 Samsung DDR4 module.
I ran memtest using Synology Assistant and there are no memory issues.