mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #334] DS1821+ E10M20-T1 PCIe card not recognised #617
Originally created by @AndrewTapp on GitHub (Jul 26, 2024).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/334
I've had to migrate from a DS1823xs+ to a DS1821+ due to overheating issues, which is another story!
Anyway, I'm using the same setup as on the DS1823xs+, i.e. the E10M20-T1 PCIe card. It shows up in the Info Center; however, neither the network port nor the M.2 drives appear in network settings or Storage Manager.
I'm guessing it's either incorrectly using some settings left over from the migration, or the card isn't defined for the DS1821+.
I've attached the output from your script for your information.
Your help would be appreciated.
@007revad commented on GitHub (Jul 26, 2024):
You need https://github.com/007revad/Synology_enable_M2_card to make the DS1821+ support the E10M20-T1
You'll then need to run syno_hdd_db again (if it's not already scheduled to run at boot) so it can find and add the NVMe drives in the E10M20-T1 to the E10M20-T1's compatible drives database.
FYI NVMe drives in the E10M20-T1 run twice as fast as the same drives in the internal M.2 slots.
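The "run at boot" suggestion above can be sketched as a small wrapper script added as a root Triggered Task (event: Boot-up) in DSM's Task Scheduler. The paths, log location, and the idea of a wrapper are illustrative assumptions, not part of the project's documented setup:

```shell
#!/bin/sh
# Hypothetical boot-time wrapper for syno_hdd_db.sh, intended to be run
# as root from a DSM Task Scheduler "Triggered Task" (event: Boot-up).
# SCRIPT_DIR and LOG are assumptions; adjust to your install location.
SCRIPT_DIR="${SCRIPT_DIR:-/volume1/scripts/Synology_HDD_db}"
LOG="${LOG:-/tmp/syno_hdd_db_boot.log}"

if [ -x "$SCRIPT_DIR/syno_hdd_db.sh" ]; then
    # Run the real script and keep its output for troubleshooting.
    "$SCRIPT_DIR/syno_hdd_db.sh" >>"$LOG" 2>&1
    echo "syno_hdd_db.sh exit status: $?" >>"$LOG"
else
    echo "syno_hdd_db.sh not found in $SCRIPT_DIR" >>"$LOG"
fi
```

Logging to a file matters here because a scheduled boot task has no terminal; the log is the only place the script's output (including any dtc prompt, discussed below in the thread) would be visible.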
@AndrewTapp commented on GitHub (Jul 26, 2024):
Many thanks for that, all working as expected now.
Interesting about the speed of the M.2 slots; as a result I've switched the caching around to make use of the faster PCIe card.
For information, the first time I ran the script it was from a scheduled task within the NAS, and the email confirmation mentioned downloading dtc. I subsequently ran it from SSH and manually confirmed the download of dtc.
I take it dtc is only needed on first run of the script, are there any updates that I need to be looking out for?
Just wondered your views about automatically downloading dtc (via an option) within the script?
@007revad commented on GitHub (Jul 26, 2024):
I did automate downloading dtc in syno_hdd_db.
I'll have a look at what's different between syno_hdd_db and syno_enable_m2_card. They both should behave the same way as far as installing dtc is concerned.
If dtc is needed and not already installed, the script should:
@AndrewTapp commented on GitHub (Jul 27, 2024):
Sorry for the delay in getting back to you.
Your script does indeed do what you've indicated; however, the first time I ran it was from the NAS, unattended. I didn't appreciate that it required user input, asking whether dtc needed to be downloaded. I subsequently ran it from an SSH session and answered the question, which downloaded dtc, and the script ran successfully.
What I was alluding to: in order to run fully unattended from the NAS, there should perhaps be an option to automatically download dtc?
@007revad commented on GitHub (Jul 27, 2024):
The script only needs to download dtc if neither the included dtc nor bin/dtc is in the same directory as syno_hdd_db.sh.
From the readme.md
Required files
The following files from the downloaded zip file must be in the same folder:
Previously the script automatically downloaded dtc if needed, but someone had security concerns about the script downloading a binary file without asking. Making the script ask for permission before downloading, though, prevented it from being scheduled to run unattended.
To satisfy both the security concerns and scheduled, unattended runs, I changed the script so the --autoupdate option behaves as if the user had answered yes. Unfortunately I was unable to detect whether the script was running from a scheduled task, so using the --autoupdate option seemed like the best solution.
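The check-then-prompt-or-auto-download behaviour described in the last two comments can be sketched as follows. The function name, variable names, and output strings are illustrative, not the real script's code; a real implementation would also verify the downloaded binary:

```shell
#!/bin/sh
# Illustrative sketch: use a bundled dtc if present; otherwise either
# prompt the user, or (with --autoupdate) proceed as if they answered yes.
# ensure_dtc and DTC_DIR are hypothetical names for this example.
ensure_dtc() {
    autoupdate=no
    [ "$1" = "--autoupdate" ] && autoupdate=yes
    dir="${DTC_DIR:-.}"

    if [ -x "$dir/dtc" ] || [ -x "$dir/bin/dtc" ]; then
        echo "dtc found, nothing to download"
    elif [ "$autoupdate" = "yes" ]; then
        # Unattended/scheduled run: behave as if the user answered yes.
        echo "downloading dtc (unattended)"
        # e.g. curl -sLo "$dir/dtc" "$dtc_url" && chmod +x "$dir/dtc"
    else
        # Interactive run: ask first, to satisfy the security concern.
        printf 'dtc is missing. Download it now? [y/N] '
        read -r answer
        case "$answer" in
            y|Y) echo "downloading dtc" ;;
            *)   echo "skipping device-tree steps" ;;
        esac
    fi
}

# Demonstration: a scheduled-task style invocation with no dtc available.
DTC_DIR=/nonexistent
ensure_dtc --autoupdate
```

The key design point from the thread: the prompt only appears in the interactive branch, so a scheduled task that passes --autoupdate never blocks waiting for input.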
@jyubonk commented on GitHub (Jan 19, 2025):
Hi, may I know what issue you faced when using the DS1823xs+ with the E10M20-T1?
I just got the NAS 2 days ago and am planning to get an E10M20-T1 for SSD cache, while using the internal M.2 slots for SSD storage.
Can you explain more about the heating issue?
@007revad commented on GitHub (Jan 20, 2025):
@jyubonk
@AndrewTapp didn't say his overheating issue was related to the E10M20-T1 or NVMe drives. He only said "had to migrate from a DS1823xs+ to a DS1821+ due to overheating issues" which could have been due to the much hotter running CPU in the DS1823xs+.
If you are going to use a read/write cache, I would NOT use the NVMe drives in an E10M20-T1 as a read/write cache.
Point 2 will cause point 1 (data loss).
A read cache is okay.
@AndrewTapp commented on GitHub (Jan 20, 2025):
Yes, the issue was to do with the CPU not the E10M20-T1 card, which has been working throughout with no issues.
@jyubonk commented on GitHub (Jan 20, 2025):
@007revad I see. So my current DS1823xs+ uses the internal NVMe slots as SSD storage, and I plan to add an E10M20-T1 card for more NVMe slots for read/write cache. Should I instead use the internal NVMe slots as the read/write cache and the card's slots as SSD storage?
@AndrewTapp So what kind of overheating issue made you change from the DS1823xs+ back to a DS1821+?
@007revad commented on GitHub (Jan 20, 2025):
Yes, that is what I'd do.
Though I wouldn't use a write cache (they are too dangerous). Just install 32 or 64GB of memory in the DS1823xs+ instead. Memory is faster than an NVMe cache, and DSM only caches small files (less than 1MB).