mirror of
https://github.com/007revad/Synology_M2_volume.git
synced 2026-04-25 15:56:06 +03:00
[GH-ISSUE #66] NOT working on Synology 920+ after update to DSM 7.2-64570 Update 1 #14
Originally created by @Jensilein on GitHub (Jun 28, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/66
Hi, after updating my Synology 920+ to DSM 7.2-64570 Update 1, my SSD cache was gone and the SSDs were shown as "not supported by this DSM version". I ran the Synology_HDD_db script, rebooted, and then ran the syno_create_m2_volume script. After another reboot the SSDs are shown as "detected", but Online Assemble fails. Both scripts ran without any error messages. Thanks for your support.
@007revad commented on GitHub (Jun 29, 2023):
I didn't reply to this at first because I saw you were getting help (or attempts at helping) on reddit. I've just replied there to your last comment in that thread.
@Jensilein commented on GitHub (Jun 29, 2023):
No, that's not me. I didn't ask for help on reddit; I only asked here. Your help is very much appreciated. Thanks a lot.
@007revad commented on GitHub (Jun 29, 2023):
Your description of your issue was exactly like the post on reddit.
Try the https://github.com/007revad/Synology_enable_M2_volume script, then reboot. After that you should be able to go to Storage Manager, delete the storage pool, and do an Online Assemble. Once the Online Assemble has finished, you can create a storage pool and volume entirely from within Storage Manager.
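The recovery sequence described above can be sketched as a shell session. This is only a hedged outline, not the script author's documented procedure: the download URL and script filename are assumptions based on the repository name (check the repo's README and releases page for the real instructions), and the Storage Manager steps are GUI actions, shown here as comments.

```shell
# Sketch of the recovery steps, run over SSH as an admin user.
# Assumption: the archive path and script name below are illustrative;
# verify them against the Synology_enable_M2_volume repository.

# 1. Download and unpack the script (URL is an assumption).
wget https://github.com/007revad/Synology_enable_M2_volume/archive/refs/heads/main.zip
unzip main.zip && cd Synology_enable_M2_volume-main

# 2. Run the script as root, then reboot the NAS.
sudo -s
bash syno_enable_m2_volume.sh
reboot

# 3. After the reboot, in Storage Manager (GUI, not shell):
#    - delete the old storage pool
#    - choose "Online Assemble" for the detected M.2 drives
#    - once assembly completes, create the storage pool and volume
#      from within Storage Manager
```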
@Jensilein commented on GitHub (Jun 30, 2023):
That solved the problem. Thank you very much again for your support!