mirror of
https://github.com/hirschmann/nbfc.git
synced 2026-04-25 16:45:53 +03:00
[GH-ISSUE #27] Ability to choose among temperature sensors #27
Originally created by @jesse-git on GitHub (Oct 1, 2015).
Original GitHub issue: https://github.com/hirschmann/nbfc/issues/27
The GPU in my Asus G1Sn is always running hotter than my CPU, and my HDD temperature is always nearing (or over) its max. operating temperature. It would be absolutely awesome if one of these temperature sensors could be selected in the program to tie the fan speeds to. (And thanks for your good program so far!)
@hirschmann commented on GitHub (Oct 1, 2015):
I'll add a way to configure the temperature plugin.
@snow3461 commented on GitHub (Oct 21, 2015):
Hi there,
I second @jesse-git.
On my laptop (and I guess on the majority of laptops with a discrete GPU and Optimus switching technology), one of the fans is dedicated to cooling the CPU, and one, obviously, to the GPU.
It would make sense to be able to choose, for each fan, which sensors it should be monitoring.
I don't know if this is exactly what @jesse-git meant, or if my comment is really helpful.
Thanks a lot!
@jesse-git commented on GitHub (Oct 21, 2015):
In my case, I have a dedicated Nvidia 9500M GPU but no dedicated fan for it. The heatsink/heatpipe of the GPU is very long and goes to the same fan as the CPU; there is only one fan in the G1Sn. More often than anything else, the HDD will overheat, because it's placed fairly close to the GPU and has no heat dissipation ability of any kind (Asus engineers really screwed up on that one). The maximum operating temperature of the HDD is only 55C, whereas the GPU's is more like 100C, so in my case I'd probably be tying the fan speed to the HDD.
In my case, I have a dedicated Nvidia 9500M GPU but no dedicated fan for it. The heatsink/heatpipe of the GPU is very long and goes to the same fan as the CPU. Only one fan in the G1Sn. More often than anything else, the HDD will overheat because it's placed fairly close to the GPU and has no heat dissipation ability of any kind (Asus engineers really screwed up on that one). The maximum operating temperature of the HDD is only 55C, whereas GPU is more like 100C, so in my case, I'd probably be tying the fan speed to the HDD.
@hirschmann I was thinking about possibly requesting the ability to combine/average different temperature sensors together. This could be done either by directly hard coding some algorithms (max temperature of all sensors, average temperature of all sensors with weights, etc.) or by letting the user draw curves or step-wise graphs. Either way, once the equation has been defined, it can be thought of as a new "virtual" temperature sensor which the fan speed can be tied to. But, I didn't request it because I know it's probably a lot of work ;)
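The combining schemes described above (max of all sensors, weighted average of all sensors) can be sketched as a "virtual" sensor. This is a hypothetical illustration, not nbfc's actual API; all names, readings, and weights are made up:

```python
# Hypothetical "virtual" temperature sensor that combines several real
# sensors into one value the fan curve can be tied to.

def max_sensor(readings):
    """Report the hottest of all enabled sensors."""
    return max(readings.values())

def weighted_average_sensor(readings, weights):
    """Report a weighted average of the sensors; weights need not sum to 1."""
    total_weight = sum(weights[name] for name in readings)
    return sum(readings[name] * weights[name] for name in readings) / total_weight

readings = {"cpu": 62.0, "gpu": 78.0, "hdd": 49.0}   # degrees Celsius
weights = {"cpu": 1.0, "gpu": 1.0, "hdd": 3.0}       # emphasize the fragile HDD

print(max_sensor(readings))                                   # 78.0
print(round(weighted_average_sensor(readings, weights), 1))   # 57.4
```

With the HDD weighted heavily, its modest 49C contributes almost as much to the result as the much hotter GPU, which matches the use case of protecting a low-rated drive.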
@snow3461 commented on GitHub (Oct 22, 2015):
@jesse-git What you are describing would be the perfect solution for fine-tuning the cooling on laptops like mine, where the CPU and GPU have their own "dedicated" fans but also share part of one or two heatpipes.
@hirschmann commented on GitHub (Nov 26, 2015):
Your ideas sound pretty cool, but there are some problems.
What I'm planning to do:
First I'll let you configure the existing temperature plugin for Windows via a config file. It will be possible to enable multiple temperature sensors; the max. value will be reported to the fan control module.
This feature will be available in the upcoming release.
In the future I will change the plugin system and config layout, so that you can assign multiple temperature sources to a fan via the config editor.
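The planned per-fan mapping could look roughly like this. The structure below is a guess for illustration only (fan and sensor names are invented), not nbfc's real config layout:

```python
# Hypothetical per-fan sensor assignment: each fan tracks the max of its
# assigned temperature sources, as in the plan above.

FAN_SOURCES = {
    "cpu_fan": ["cpu_core", "chipset"],
    "gpu_fan": ["gpu_core"],
}

def fan_target_temps(readings, fan_sources):
    """For each fan, report the hottest of its assigned sensors."""
    return {fan: max(readings[s] for s in sources)
            for fan, sources in fan_sources.items()}

readings = {"cpu_core": 61.0, "chipset": 55.0, "gpu_core": 74.0}
print(fan_target_temps(readings, FAN_SOURCES))
# {'cpu_fan': 61.0, 'gpu_fan': 74.0}
```

This covers @snow3461's case directly: the GPU fan reacts to the GPU sensor even when the CPU is cool, and vice versa.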
@jesse-git commented on GitHub (Nov 26, 2015):
Looking forward to it!
Not being able to access the GPU from a service is interesting.
@nhantrn commented on GitHub (Mar 23, 2016):
Is there any progress on this? I ran this program for a week, but had to stop when I noticed that during gaming the fan doesn't go full throttle, because it doesn't read the temperature from the GPU.
@hirschmann commented on GitHub (Mar 20, 2017):
Unfortunately there is no progress on this. See #203