mirror of
https://github.com/misiektoja/instagram_monitor.git
synced 2026-04-25 22:35:49 +03:00
[GH-ISSUE #25] Multiple targets? #19
Originally created by @YouveGotMeowxy on GitHub (Dec 26, 2025).
Original GitHub issue: https://github.com/misiektoja/instagram_monitor/issues/25
Hi
I know that I can spawn many processes to monitor multiple targets, but wouldn't it be better and more elegant to be able to do it all with 1 process?
For example, just enter an array of targets, and the script will loop through that array?
@misiektoja commented on GitHub (Dec 27, 2025):
Hey, yes, this is a very good idea and you are not the first one asking for it. It has been on my mind for some time. Hopefully, I'll find time to implement it in the near future.
@misiektoja commented on GitHub (Dec 28, 2025):
Hey, I've found some time today and implemented the multi-user monitoring feature you requested. Here's what's been added:
- Per-target CSV files (e.g. `instagram_data_user1.csv`, `instagram_data_user2.csv`) using your configured CSV filename as a prefix
- A combined log file (e.g. `instagram_monitor_user1_user2_user3.log`) to prevent filename collisions

How to Use
Simply pass multiple usernames as arguments:
```
instagram_monitor target_user_1 target_user_2 target_user_3
```

Or use the `--targets` flag with comma-separated values:

```
instagram_monitor --targets "target_user_1,target_user_2,target_user_3"
```

The tool will automatically stagger the start of each target's monitoring loop (spread across your `INSTA_CHECK_INTERVAL` by default). You can also control the staggering manually:

```
instagram_monitor target_user_1 target_user_2 --targets-stagger 300
```

This sets a 5-minute delay between each target's first poll.
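Conceptually, the stagger scheduling described above can be sketched like this (a minimal illustration only; the function name `start_offsets` is hypothetical and this is not the tool's actual code):

```python
# Hypothetical sketch of per-target start delays: manual fixed gap when a
# stagger value is given, otherwise auto-spread across the check interval.
import random

def start_offsets(targets, check_interval, stagger=0, jitter=0):
    """Return a delay in seconds before each target's first poll."""
    n = len(targets)
    offsets = []
    for i, _ in enumerate(targets):
        if stagger > 0:
            base = i * stagger              # manual: fixed gap between targets
        else:
            base = i * check_interval / n   # auto: spread evenly over interval
        offsets.append(base + random.uniform(0, jitter))
    return offsets

# Example: 3 targets, 1-hour interval, auto-spread, no jitter
print(start_offsets(["a", "b", "c"], 3600))  # → [0.0, 1200.0, 2400.0]
```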
When monitoring multiple users in a single process, the effective request rate is multiplied by the number of targets. For example, monitoring 5 users with a 1-hour interval means 5 requests per hour. To maintain the same per-account request rate, increase the check interval proportionally. If you normally use 1 hour for a single user, consider using 5 hours (or more) when monitoring 5 users. The tool automatically staggers requests between targets, but the overall request frequency should still be adjusted based on the total number of monitored users.
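The arithmetic behind that advice, as plain Python (not part of instagram_monitor; helper names are made up for illustration):

```python
# 5 users polled every hour = 5 requests/hour total; to keep the per-account
# rate at the single-user level, scale the interval by the number of targets.
def requests_per_hour(num_targets, check_interval_s):
    return num_targets * 3600 / check_interval_s

def interval_for_same_rate(num_targets, single_user_interval_s):
    """Interval that keeps the overall request rate at the single-user level."""
    return num_targets * single_user_interval_s

print(requests_per_hour(5, 3600))        # 5 users, 1 h interval → 5.0 req/h
print(interval_for_same_rate(5, 3600))   # → 18000 s, i.e. 5 h
```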
Configuration Options
You can also configure multi-target behavior in your config file:
- `TARGET_USERNAMES = ["user1", "user2"]` - Define targets in config (CLI arguments take precedence)
- `MULTI_TARGET_STAGGER = 0` - Set to 0 for auto-spread, or specify seconds between targets
- `MULTI_TARGET_STAGGER_JITTER = 5` - Add random jitter (in seconds) to staggering
- `MULTI_TARGET_SERIALIZE_HTTP = True` - Serialize HTTP requests across targets (enabled by default for safety)

Please test it out and let me know if it works as expected for your use case, and report any issues or edge cases you encounter.
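One way to picture what serializing HTTP requests across targets could mean, sketched with a shared lock (this is an assumption about the mechanism for illustration, not the tool's actual implementation; `fetch` is a hypothetical name):

```python
# Assumed mechanism: a single shared lock so only one target's HTTP request
# is in flight at a time, mirroring MULTI_TARGET_SERIALIZE_HTTP = True.
import threading

_http_lock = threading.Lock()
SERIALIZE_HTTP = True  # stand-in for MULTI_TARGET_SERIALIZE_HTTP

def fetch(target, do_request):
    """Run do_request(target), optionally serialized across all targets."""
    if SERIALIZE_HTTP:
        with _http_lock:
            return do_request(target)
    return do_request(target)

# Usage: each target's monitoring thread would call
# fetch("user1", some_http_call) instead of calling the request directly.
```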
The feature is available in the latest beta v2.0 version on the dev branch: instagram_monitor.py
@tomballgithub - maybe you could also check it? I think you also had a use case for monitoring multiple users...
@tomballgithub commented on GitHub (Dec 28, 2025):
I'll add a 2nd account tonight to help test this. I need to see how this works anyway for a few PRs I am working on.
@YouveGotMeowxy commented on GitHub (Jan 6, 2026):
I ran it for a few days on multiple targets and it seems to be working great so far. :)
Is there a way to tell the script where to save the stuff? Right now it seems to just dump everything into my User folder. I'm a bit of an organization freak and would love to be able to have it auto save to organized dirs. Like this (per target):
etc.
@misiektoja commented on GitHub (Jan 6, 2026):
I'm glad to hear that! :-)
Let's continue in #35.