mirror of
https://github.com/007revad/Synology_M2_volume.git
synced 2026-04-26 00:06:14 +03:00
[GH-ISSUE #51] Fail to repair RAID1 volume #206
Originally created by @goobags on GitHub (May 17, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/51
Hi,
I have used this script to set up a RAID1 array (mirrored). I used some old small M.2 drives as testers and have since ordered two bigger drives. I replaced one today, similar to how I have rebuilt RAID1 arrays in the past: just replace a drive, then rebuild through the UI. The problem is I cannot get it to work.
Trying to repair the Storage Pool results in an error saying there are no drives that meet the requirements. Clicking the new drive under HDD/SSD doesn't let me do anything other than create an SSD Cache (for obvious reasons; I'm on a DS918+).
Re-running the script only lets me select one drive (the new one) and the script fails to finish.
@goobags commented on GitHub (May 17, 2023):
And I just put the original drive back in; now it's crashed (the one drive, not the entire RAID array) and I cannot repair it because it's seen as SSD-cache-only in the DSM UI.
@007revad commented on GitHub (May 17, 2023):
I never considered that someone might try to replace an M.2 drive to rebuild or expand the RAID. What you wanted to do might have been available from Storage Manager if you were using DSM 7.2 and had run https://github.com/007revad/Synology_HDD_db.
Is one of the original small M.2 drives still showing as degraded, or has the whole array crashed?
@goobags commented on GitHub (May 18, 2023):
Just one drive is crashed; the storage pool is still functional. I'm just trying to avoid having to reinstall a few apps and Docker on an entirely new storage pool/volume.
@007revad commented on GitHub (May 18, 2023):
I think backing up, then reinstalling on a new storage pool/volume may be the quickest solution.
It's going to take a while for me to work out how DSM does a RAID repair.
@dantrauner commented on GitHub (Oct 21, 2023):
@007revad Thanks for all of the work you've done on your scripts – they're really useful!
I wanted to check in on this issue since I recently had a RAID1 M.2 volume using the official 10G NIC + M.2 adapter card lose a drive. Have you looked at all into allowing a blank disk to be added to an existing volume? If not, I'd be interested in helping get this working if you have some idea of where to start.
@007revad commented on GitHub (Oct 21, 2023):
@dantrauner
First, make sure your data from the NVMe volume is backed up.
What does the following command return:
sudo synostgpool --auto-repair -h
And this one:
sudo synostgpool --misc --get-pool-info | jq
I only need the nvme section like this:
@007revad commented on GitHub (Oct 22, 2023):
@dantrauner
Just now I was able to repair an NVMe RAID 1 storage pool from Storage Manager. For the steps I used to work, I need to know a few things about your setup.
@dantrauner commented on GitHub (Oct 22, 2023):
Probably 60 seconds before your last reply, I decided to just use this opportunity to practice my DR procedure 😄 I'm bookmarking this and will try to repair next time, but:
@007revad commented on GitHub (Oct 22, 2023):
For future reference I've created a few wiki pages documenting how I repaired my NVMe RAID 1 after replacing a drive.
Repair M.2 RAID 1 in internal M.2 slots
Repair M.2 RAID 1 in adaptor card - Requires that the NAS has internal M.2 slots.
Repair RAID via SSH - I have not tested this method yet...
@kidhasmoxy commented on GitHub (Nov 1, 2023):
The following comes from a blog post on how to create the volume manually (which even cites your script, @007revad). It's a snippet to get the new NVMe drive to show up as an option to repair the failed array. In this case, md3 is the md device for your existing storage pool, and the NVMe drive is referenced by its /dev path.
https://academy.pointtosource.com/synology/synology-ds920-nvme-m2-ssd-volume/
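For readers following this thread later, the approach the blog describes can be sketched roughly as below. This is a hedged sketch, not the exact snippet from the post: the device names (`/dev/md3`, `/dev/nvme1n1`), the `12` layout index passed to `synopartition`, and the `p3` data-partition suffix are assumptions taken from similar DSM setups and must be verified against your own system (via `cat /proc/mdstat` and `sudo fdisk -l`) before running anything.

```shell
# Hypothetical device names; verify BOTH against your own NAS first.
MD=/dev/md3          # md device of the degraded storage pool (see /proc/mdstat)
NEW=/dev/nvme1n1     # the replacement NVMe drive

# 1. Inspect the current array state and confirm which member is missing.
cat /proc/mdstat
sudo mdadm --detail "$MD"

# 2. Partition the new drive with Synology's partitioner so its layout
#    matches the surviving member (the "12" layout index is an assumption
#    drawn from guides like the one linked above).
sudo synopartition --part "$NEW" 12

# 3. Add the new drive's data partition to the degraded array;
#    mdadm starts the rebuild automatically.
sudo mdadm --manage "$MD" --add "${NEW}p3"

# 4. Watch the rebuild progress until it completes.
cat /proc/mdstat
```

After the mdadm rebuild finishes, the drive should also appear as a valid repair target in Storage Manager, since DSM reads the same md state.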