mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #430] SSD Not detected anymore #149
Originally created by @thibaudbrg on GitHub (Feb 20, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/430
Hi, not sure if this is the right place to post such an issue, but I'm pretty concerned about what happened yesterday to my NAS.
A month ago I successfully set up a PNY CS2230 500GB SSD (M.2 slot) as a storage pool in SHR thanks to syno_create_m2_volume.sh. I then set up a scheduled task to run syno_hdd_db.sh -n at each boot of my NAS (DS920+). Everything worked fine for more than a month. I'm running DSM 7.2.1-69057 Update 5; no updates have been installed. Then suddenly at 1:39 am I received this mail:
Since then the SSD has been missing: not detected by Synology, and not detected when running ls /dev/nvme*, lspci | grep -i nvme, or even sudo nvme list. Running syno_hdd_db.sh -n manually results in:
I tried removing the SSD and putting it into the other M.2 slot of the NAS; it is still not detected.
What would you guys try? I have all my Docker container configs on it, so everything is currently broken right now...
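The detection commands mentioned above probe different layers of the NVMe stack, and it can help to run them together. Below is a sketch of my own (not from the thread): the function name and status messages are invented, the paths are standard Linux, and nvme-cli may not be installed on DSM by default.

```shell
#!/bin/sh
# Hypothetical diagnostic sketch: check each layer at which an NVMe SSD
# should be visible. If the drive is absent at the PCI level, the problem
# is below the OS (hardware, slot, or firmware); if it appears in lspci
# but not as /dev/nvme*, the kernel driver failed to bind it.

check_nvme_layers() {
    # PCI bus: is the NVMe controller enumerated at all?
    if lspci 2>/dev/null | grep -qi nvme; then
        echo "PCI: NVMe controller visible"
    else
        echo "PCI: no NVMe controller found"
    fi

    # Block layer: did the nvme driver create device nodes?
    if ls /dev/nvme* >/dev/null 2>&1; then
        echo "dev: /dev/nvme* nodes present"
    else
        echo "dev: no /dev/nvme* nodes"
    fi

    # nvme-cli: can the tool list namespaces?
    if command -v nvme >/dev/null 2>&1; then
        nvme list 2>/dev/null
    else
        echo "nvme-cli: not installed"
    fi
}

check_nvme_layers
```

On a healthy system all three layers report the drive; a drive missing at every layer (as described in this issue) points at the hardware or the M.2 slot rather than at anything syno_hdd_db.sh changed.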
@nagyrobi commented on GitHub (Feb 20, 2025):
Try it in a laptop; does it get detected there?
Otherwise, replace it under warranty...
@thibaudbrg commented on GitHub (Feb 20, 2025):
Thanks for your quick answer. Sadly, I don't have a USB adapter to test it on a PC right now, and I'm not physically at home next to the NAS, which is a bummer. But I ordered an M.2-to-USB adapter on Amazon that will hopefully be delivered quickly.
In the meantime, I managed to print more logs:
which likely come from the SSD, as my HDDs do not show any sign of failure. Furthermore:
and
which surely shows a hardware failure, as the SSD is indeed detected at the PCI level but unmounted...
I will contact PNY support immediately. But why?? How unlucky am I?? After one month??
I'm wondering if some automatic Synology task, like data scrubbing during the night or some other automatic test, might have been running at the same time, and the failure occurred while those tests were running. Might that be a possibility?
Sadly, even after having looked at all the logs, I can't go back to that exact time. The logs have already been overwritten.
@thibaudbrg commented on GitHub (Feb 20, 2025):
UPDATE:
Becoming desperate, I was ready to give up when I decided to take advantage of this moment to update the NAS, which is always complicated due to potential failures with my Docker configs.
Then suddenly,
?????????????
And the logs show:
I am speechless... I close this issue with this comment, and I hope nobody else encounters similar stress... I might as well think of buying Synology-compatible SSD drives next time, even though Synology_HDD_db supposedly does the trick.
@nagyrobi commented on GitHub (Feb 21, 2025):
And never rely on a single drive for critical data... Always put at least two in a RAID.
That said, the same thing happened to me this week: I bought 4 brand-new 1TB WD Red SATA SSDs, and one of them died within 4 hours. Apparently DOA, brand new.