[GH-ISSUE #167] Parallel renames #51
Originally created by @jehna on GitHub (Oct 18, 2024).
Original GitHub issue: https://github.com/jehna/humanify/issues/167
My thought is that doing the renames in parallel should speed up the process a lot. Especially if the user has enough OpenAI quota, large files could be processed much faster by parallelising the work.
Local inference should also be able to run in parallel, if the user has a good enough GPU at hand.
One big problem is that I've gotten the best results when applying renames from the bottom up – so say we have:
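(The original code block was not preserved in this mirror; a minimal sketch of the kind of nesting being described, with hypothetical identifiers:)

```typescript
// Hypothetical illustration: `a` encloses `b`, which encloses `c`.
function a() {
  function b() {
    function c() {}
  }
}
```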
It seems that running the rename in order `a -> b -> c` yields much better results than running `c -> b -> a`. But if we'd have multiple same-level identifiers like:
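(Again a hypothetical sketch of the shape being described, since the original snippet was lost:)

```typescript
// Hypothetical illustration: `b`, `c` and `d` sit at the same level
// inside `a`, so they have no ordering dependency on each other.
function a() {
  function b() {}
  function c() {}
  function d() {}
}
```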
At least in theory it would be possible to run `a` first and `[b, c, d]` in parallel to get feasible results. In the best case scenario there would be a second LLM step to check that all variables still make sense after the parallel run has finished.
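A minimal sketch of that scheduling idea, assuming a hypothetical `renameIdentifier` helper that asks the LLM for one name at a time (not humanify's actual API):

```typescript
// Hypothetical helper, not humanify's real API: asks the LLM to
// propose a better name for one identifier in the given code.
declare function renameIdentifier(name: string, code: string): Promise<string>;

async function renameScope(code: string): Promise<void> {
  // Rename the enclosing identifier first, so the siblings below see
  // its improved name in their context...
  await renameIdentifier("a", code);

  // ...then rename the same-level identifiers concurrently, since
  // they have no ordering dependency between each other.
  await Promise.all(["b", "c", "d"].map((name) => renameIdentifier(name, code)));

  // A second LLM pass could go here to check that the names still
  // make sense together after the parallel run.
}
```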
@jehna commented on GitHub (Oct 18, 2024):
Need to implement proper request throttling and retry logic when doing this
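A rough sketch of what that could look like: a small concurrency gate plus retry with exponential backoff around the LLM calls (names here are illustrative, not existing humanify code):

```typescript
// Illustrative only: retry an arbitrary async call with exponential
// backoff and jitter.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      // Backoff with jitter: ~1s, ~2s, ~4s, ...
      const delay = 1000 * 2 ** attempt * (0.5 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Illustrative only: cap how many calls run at once; each completed
// call wakes exactly one waiter from the queue.
function createLimiter(maxConcurrent: number) {
  let active = 0;
  const queue: (() => void)[] = [];
  return async function limit<T>(fn: () => Promise<T>): Promise<T> {
    if (active >= maxConcurrent) {
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await fn();
    } finally {
      active--;
      queue.shift()?.();
    }
  };
}
```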
@0xdevalias commented on GitHub (Oct 20, 2024):
Related:
This seems to be the section of code for implementing better throttling/retry logic (at least for the openai plugin):
@brianjenkins94 commented on GitHub (Oct 21, 2024):
Resume-ability would also be a good thing to consider.
@0xdevalias commented on GitHub (Oct 21, 2024):
Some of the discussion in the following issue could tangentially relate to resumability (specifically if a consistent 'map' of renames was created, perhaps that could also show which sections of the code hadn't yet been processed):
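One hedged sketch of that "rename map as resume point" idea; the file format and helper names are invented for illustration:

```typescript
import * as fs from "node:fs";

// Invented format: a JSON map from original identifier to its new
// name, flushed after every successful rename so a crashed run can
// pick up where it left off.
type RenameMap = Record<string, string>;

function loadRenameMap(path: string): RenameMap {
  return fs.existsSync(path) ? JSON.parse(fs.readFileSync(path, "utf8")) : {};
}

function saveRename(path: string, map: RenameMap, from: string, to: string): void {
  map[from] = to;
  fs.writeFileSync(path, JSON.stringify(map, null, 2));
}

// On resume, anything already in the map is skipped; anything missing
// is exactly the "not yet processed" portion mentioned above.
```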
@brianjenkins94 commented on GitHub (Oct 23, 2024):
I'm trying to process a pretty huge file and just ran into this:
I'm going to see about improving the rate limiting here:
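For the OpenAI case specifically, one hedged approach is to honor the standard `Retry-After` header on HTTP 429 responses before retrying (a sketch, not the actual fix; it assumes the thrown error exposes `status` and response `headers`, as the openai Node SDK's APIError does):

```typescript
// Sketch: on a 429, wait for the server-suggested delay, then retry.
async function callWithRateLimitRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      if (err?.status !== 429 || attempt + 1 >= maxAttempts) throw err;
      const retryAfterSeconds = Number(err.headers?.["retry-after"] ?? 10);
      await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
    }
  }
}
```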
@0xdevalias commented on GitHub (Oct 23, 2024):
Context from other thread:
@neoOpus commented on GitHub (Oct 24, 2024):
Could we have a PR with the majority of the fixes, even if it's not production ready? I paused my work as I lost track of the tasks and became discouraged by the errors, compounded by a sluggish machine. I still want to deobfuscate some Chrome extensions to modify them or understand their functions better.
@neoOpus commented on GitHub (Mar 12, 2025):
Maybe allowing the use of multiple API keys from different accounts could achieve a better result too
@0xdevalias commented on GitHub (Mar 12, 2025):
@neoOpus Curious, would you see this as:
@neoOpus commented on GitHub (Mar 12, 2025):
I would say both: to bypass rate limitations, speed up the processing, and make it more resilient as well, so it can have some switching algorithm to hop between keys if we decide to, or have them work in tandem or in sequence... Of course, having this implemented for all the models would be best as well.
So yeah, it can be all keys from OpenAI using different accounts, or any other provider... and mixing should be allowed too.
Having some proxying implemented would make things even better, so each request using an API key can be routed via a configured proxy. (I still need to find out if there are any that work like that, available for a small fee or for free 😜)
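A hedged sketch of what multi-key rotation could look like, round-robin across keys while skipping one that has just been rate limited (all names invented for illustration):

```typescript
// Invented illustration: rotate through several API keys, putting a
// key into a cooldown period when it hits a rate limit.
class KeyPool {
  private cooldownUntil = new Map<string, number>();
  private index = 0;

  constructor(private keys: string[]) {}

  next(): string {
    const now = Date.now();
    for (let i = 0; i < this.keys.length; i++) {
      const key = this.keys[(this.index + i) % this.keys.length];
      if ((this.cooldownUntil.get(key) ?? 0) <= now) {
        this.index = (this.index + i + 1) % this.keys.length;
        return key;
      }
    }
    throw new Error("All API keys are currently rate limited");
  }

  markRateLimited(key: string, cooldownMs = 60_000): void {
    this.cooldownUntil.set(key, Date.now() + cooldownMs);
  }
}
```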
@0xdevalias commented on GitHub (Apr 17, 2025):
See also:
@0xdevalias commented on GitHub (Apr 24, 2025):
For one potential solution to 'load balancing' / using different models/providers when hitting an error like a rate limit/etc:
Specifically parts like:
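In the abstract, that kind of load balancing could amount to trying a list of provider clients in order until one succeeds (a sketch; `LlmClient` and the function names are hypothetical, not from the linked solution):

```typescript
// Hypothetical common interface over any provider (OpenAI, local, etc.).
interface LlmClient {
  complete(prompt: string): Promise<string>;
}

// Try each configured provider in order; fall through to the next one
// on rate limits or other transient errors.
async function completeWithFallback(clients: LlmClient[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const client of clients) {
    try {
      return await client.complete(prompt);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```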
@0xdevalias commented on GitHub (May 30, 2025):
@neoOpus For proxy support, the following issues may be of interest:
But more specifically, at least for `openai`, this most recent update I posted:
Which is mirroring what I originally shared here:
And comes from this version bump PR (or any that replace it):
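As a sketch of the proxy angle, assuming the openai v4 Node SDK's `httpAgent` client option together with the `https-proxy-agent` package (newer SDK majors may expose this differently, so check the pinned version):

```typescript
import OpenAI from "openai";
import { HttpsProxyAgent } from "https-proxy-agent";

// Route every request from this client through an HTTP(S) proxy.
// The proxy URL here is a placeholder.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  httpAgent: new HttpsProxyAgent(process.env.HTTPS_PROXY ?? "http://127.0.0.1:8080"),
});
```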
@neoOpus commented on GitHub (Jun 9, 2025):
@0xdevalias I stumbled upon this on HN:
https://llmgateway.io/