mirror of
https://github.com/007revad/Synology_M2_volume.git
synced 2026-04-25 15:56:06 +03:00
[GH-ISSUE #181] Volume gets degraded after 24 hours #43
Originally created by @lucidlemon on GitHub (Dec 6, 2024).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/181
Hi there, I created a volume as described in the documentation, using SHR-1 as the RAID mode.
My volume became degraded after just 4 hours, reporting that one of the SSDs had failed. I ordered a new drive, installed it, ran the HDD script, and repaired the volume. The repair completed, but after one night it reported the volume as degraded again.
Reason: one of the drives reports "Drive Status: Critical" in Storage Manager.
Is there something wrong with my Synology, or is that normal behaviour?
Synology DS920+
DSM 7.2.2-72806 Update 2
20 GB RAM
SMART data tells me both SSDs are working fine.
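For anyone hitting the same symptom, a degraded volume can be confirmed from an SSH session independently of Storage Manager by inspecting the kernel's md status. This is a minimal sketch, not part of the original report: the `check_mdstat` helper name is made up here, and actual NVMe device names (e.g. `/dev/nvme0n1`) vary per unit. It flags any md array whose member bitmap shows a missing drive, e.g. `[U_]` instead of `[UU]`.

```shell
#!/bin/sh
# Hypothetical helper: scan an mdstat-style file for arrays with a
# missing member. In a degraded mirror the status bitmap contains an
# underscore where a healthy member would show "U".
check_mdstat() {
  if grep -qE '\[[U_]*_[U_]*\]' "$1"; then
    echo "DEGRADED"
  else
    echo "OK"
  fi
}

# On the NAS itself you would run it against the live file:
#   check_mdstat /proc/mdstat
# and pull per-drive SMART health with smartmontools (device path assumed):
#   sudo smartctl -a -d nvme /dev/nvme0
```

Note that SMART can report "fine" while DSM still marks the drive Critical, since DSM also reacts to I/O errors and its own compatibility checks, which is why checking `/proc/mdstat` directly is a useful second opinion.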
@007revad commented on GitHub (Dec 6, 2024):
What brand of NVMe drives are they?
A few people have had similar issues with certain brands, usually Chinese brands and Crucial, if I remember correctly (which is strange, because Crucial memory is good).
@007revad commented on GitHub (Dec 6, 2024):
FYI: with a DS920+ running DSM 7.2 or later you don't need this script.
Use https://github.com/007revad/Synology_HDD_db instead.