mirror of
https://github.com/amidaware/tacticalrmm.git
synced 2026-04-26 06:55:52 +03:00
[PR #2155] [CLOSED] Dynamic uWSGI Configuration Optimization Based on System Resources #3855
📋 Pull Request Information
Original PR: https://github.com/amidaware/tacticalrmm/pull/2155
Author: @Lordkaly
Created: 2/25/2025
Status: ❌ Closed
Base: develop ← Head: Lordkaly-Dynamic-uWSGI-1

📝 Commits (1)
2b7eb7f Dynamic uWSGI Configuration Optimization Based on System Resources

📊 Changes
1 file changed (+55 additions, -34 deletions)
📝 api/tacticalrmm/core/management/commands/create_uwsgi_conf.py (+55 -34)

📄 Description
Overview
This pull request enhances the uWSGI configuration generated by `tacticalrmm/management/commands/create_uwsgi_conf.py` by dynamically adjusting key parameters (`workers`, `threads`, `cheaper`, `socket-timeout`, and `harakiri`) based on the system's CPU count and available RAM. The changes improve scalability and performance, particularly under high agent loads, while maintaining compatibility with smaller systems.

Motivation
In a production environment with 10 CPUs and 16 GB of RAM, running Tactical RMM with many active agents (around 2,000), I encountered frequent `connect() failed (11: Resource temporarily unavailable)` errors from NGINX due to uWSGI socket saturation. The original static configuration struggled to handle the load, leading to dropped connections and degraded performance. Through extensive testing, I identified optimized settings that resolved these issues.

Changes
Resource Detection:
- CPU count via `multiprocessing.cpu_count()` (unchanged from original).
- Total RAM read from `/proc/meminfo` (`MemTotal`), converted to GB using `math.ceil`.

Dynamic Parameter Calculation:
- `max_workers`: Scales from 4 to 8 workers per CPU, capped by RAM (100 MB per worker).
- `threads`: Set to 4 for ≤ 8 CPUs, reduced to 3 for > 8 CPUs to balance CPU usage and concurrency.
- `cheaper` and `cheaper-initial`: Set to ~33% of `max_workers` (minimum 2) for efficient initial scaling.
- `max_requests`: Adjusted by RAM (500 for ≤ 4 GB, 2000 for 4-8 GB, 5000 for > 8 GB) to optimize worker recycling.
- `socket-timeout`: Increased to 600 seconds to handle high concurrency.
- `harakiri`: Raised to 900 seconds to prevent premature worker termination under load.
- `cheaper-step = 2`, `cheaper-overload = 1`, `busyness-min = 20`, `busyness-max = 50` for faster scaling and stability.

Testing
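The detection and scaling rules described above can be sketched roughly as follows. This is a minimal reconstruction for illustration only, not the PR's actual diff: the function names (`get_total_ram_gb`, `calc_uwsgi_params`) are hypothetical, and the exact arithmetic (e.g. how the 4-8 workers-per-CPU range interacts with the 100 MB-per-worker RAM cap) is an assumption based on the description.

```python
import math
import multiprocessing


def get_total_ram_gb(meminfo_path="/proc/meminfo"):
    """Read MemTotal from /proc/meminfo and round up to whole GB."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])  # MemTotal is reported in kB
                return math.ceil(kib / (1024 * 1024))
    raise RuntimeError("MemTotal not found in meminfo")


def calc_uwsgi_params(cpu_count=None, ram_gb=None):
    """Derive uWSGI settings from CPU count and RAM (assumed scaling rules)."""
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    if ram_gb is None:
        ram_gb = get_total_ram_gb()

    # Scale 4-8 workers per CPU, budgeting ~100 MB of RAM per worker.
    ram_cap = (ram_gb * 1024) // 100
    max_workers = max(cpu_count * 4, min(cpu_count * 8, ram_cap))

    threads = 4 if cpu_count <= 8 else 3
    cheaper = max(2, max_workers // 3)  # ~33% of max_workers, minimum 2

    # Worker recycling threshold scales with available RAM.
    if ram_gb <= 4:
        max_requests = 500
    elif ram_gb <= 8:
        max_requests = 2000
    else:
        max_requests = 5000

    return {
        "workers": max_workers,
        "threads": threads,
        "cheaper": cheaper,
        "cheaper-initial": cheaper,
        "max-requests": max_requests,
        "socket-timeout": 600,
        "harakiri": 900,
        "cheaper-step": 2,
        "cheaper-overload": 1,
        "cheaper-busyness-min": 20,
        "cheaper-busyness-max": 50,
    }
```

For the author's stated hardware (10 CPUs, 16 GB RAM), this sketch yields 80 workers with 3 threads each, a `cheaper` floor of 26, and `max-requests` of 5000.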
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.