mirror of
https://github.com/jehna/humanify.git
synced 2026-04-27 09:35:58 +03:00
[GH-ISSUE #84] Suggestions : Alternative Models, Batch and Auto-renaming #38
Originally created by @neoOpus on GitHub (Sep 6, 2024).
Original GitHub issue: https://github.com/jehna/humanify/issues/84
Hi Jesse!
I greatly appreciate your ongoing work on this deobfuscation tool. I'm currently trying to reverse a minified, obfuscated extension that has been abandoned by its authors. Unfortunately, the JavaScript files are quite large: with Gemini I hit a snag at about 7%, and with local models on my old machine there was no visible progress for several hours, even with `--verbose`. So I'm not sure what the problem is, other than a warning about the "punycode" module being deprecated.
I know the potential of your tool is significant, and with a few enhancements it could serve even more users looking to modify and maintain older extensions. Integrating additional models such as the Perplexity API ($5 free Pro tier every month), Meta Llama 3.1 (free), the Groq API (very fast, supports several models), or Claude 3.5 Sonnet (very good at code) would really open up possibilities for those of us dealing with complex files.
This is important because the operation takes a long time. While I could create a batch file, it would be challenging to maintain for ongoing projects that require regular deobfuscation of updated JS files from other Chrome extensions I'm trying to patch with fixes or features. Just reviewing the output in a comparison tool and dismissing differences that are only due to the model's variable-naming variations can take hours, let alone finding the modifications actually needed for patching. I hope this makes sense.
Suggested to-do list for improving the deobfuscation tool:
- Explore new model integrations
- Implement multiple file input
- Auto-rename deobfuscated files
- Enhance performance
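As an illustration of the multiple-file-input and auto-rename items, here is a minimal sketch of a batch wrapper. The `deobfuscate` function is a placeholder stub standing in for one humanify run, and the naming scheme is hypothetical, not humanify's actual interface:

```python
from pathlib import Path

def output_name(src: Path) -> Path:
    """Derive an auto-renamed output path, e.g. app.min.js -> app.min.deobfuscated.js."""
    return src.with_suffix(".deobfuscated.js")

def deobfuscate(source: str) -> str:
    """Placeholder for a single humanify run; here it just returns the input unchanged."""
    return source

def batch_deobfuscate(directory: str) -> list[Path]:
    """Process every .js file in a directory, skipping files we produced earlier."""
    produced = []
    for src in sorted(Path(directory).glob("*.js")):
        if src.name.endswith(".deobfuscated.js"):
            continue  # don't re-process our own outputs on later runs
        dst = output_name(src)
        dst.write_text(deobfuscate(src.read_text()))
        produced.append(dst)
    return produced
```

Because outputs get a distinct suffix and are skipped on later runs, the same wrapper can be re-run whenever the extension updates its JS files.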
Implementing the suggestions above could make a substantial difference in usability and performance. Thank you for considering these enhancements and keep up the great work—your efforts are genuinely invaluable to the community!
@jehna commented on GitHub (Sep 6, 2024):
Hey! Very good suggestions, thank you for those.
A quick reply about the models: this library relies heavily on a specific API feature, forcing the output format, which does not seem to be available from most cloud providers I've looked at. I'd guess this will change in the near future, as OpenAI just recently introduced JSON output mode and I'm sure many will follow.
At the moment Groq and Llama 3.1 (through Azure, at least) are not compatible with humanify's requirements. Claude doesn't seem to have it either (not sure whether choosing the tool in advance would work). I'll add them as soon as they start supporting grammars or forced JSON output.
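For context, the "forced output" feature being discussed works roughly like this: the request asks the provider to constrain generation to valid JSON, and the client then treats any non-conforming reply as a hard error rather than trying to repair free-form text. A minimal sketch of the client-side half, using a hypothetical `validate_rename_reply` helper (not humanify's actual code):

```python
import json

def validate_rename_reply(raw: str) -> dict[str, str]:
    """Parse a model reply that is supposed to be a JSON object mapping
    old identifier names to suggested new names; raise if it isn't."""
    data = json.loads(raw)  # raises ValueError on non-JSON text
    if not isinstance(data, dict):
        raise ValueError(f"expected a JSON object, got {type(data).__name__}")
    for old, new in data.items():
        if not (isinstance(old, str) and isinstance(new, str)):
            raise ValueError("expected a string-to-string mapping")
    return data
```

Without provider-side forced JSON output, a chatty reply like "Sure! Here is the JSON you asked for: ..." fails this validation, and the caller is stuck with retry loops; that is why provider support matters.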
Meta.ai is not yet available for my country, is there other good places to find a hosted version?
@neoOpus commented on GitHub (Sep 10, 2024):
Ah! I understand about the models, thanks for the clarification. I'll keep an eye on that feature (JSON output), and I'll also send the providers requests through every communication channel they have to let them know it's a popular demand (using their own LLMs to formulate the messages, of course, hehe).
> Meta.ai is not yet available for my country, is there other good places to find a hosted version?

Please allow me some time and I will get back to you with some options. Meanwhile, you can try using a VPN (ProtonVPN Free recommended). I'll first verify that any API I suggest supports the requirement your script needs; right now I think Perplexity could be a good option, but there are many out there, and I'll find one that works best for everyone.
@neoOpus commented on GitHub (Sep 10, 2024):
I appreciate your willingness to strengthen this project. I’m currently focused on understanding its internal workings to develop an effective workflow, as I see great potential here. I have some suggestions, but I want to ensure they’re feasible. One idea is to leverage multiple LLMs by chaining their outputs, where the result of one serves as the input for another to perform transformations or adapt to the required format. This approach could reduce API usage for a single service and maximize the benefits of each one’s free tier, which is important since not everyone can afford commercial use.
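The chaining idea above can be sketched as a simple pipeline where each stage's output feeds the next; the stage functions here are stand-in stubs for illustration, not real API calls:

```python
from typing import Callable

# Each stage is any function from text to text; real stages would call
# different hosted LLMs (e.g. one to rename identifiers, one to reformat).
Stage = Callable[[str], str]

def chain(stages: list[Stage], source: str) -> str:
    """Feed the output of each stage into the next one."""
    for stage in stages:
        source = stage(source)
    return source

# Stand-in stubs for illustration only:
def rename_identifiers(code: str) -> str:
    return code.replace("a0_0x1f", "config")

def add_spacing(code: str) -> str:
    return code.replace(";", ";\n")
```

Routing cheap transformations to free-tier providers and reserving the expensive renaming pass for a stronger model is exactly the kind of split this structure allows.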
@neoOpus commented on GitHub (Sep 12, 2024):
I am still looking for alternatives that are free, and I think Mixtral could be the one (https://docs.mistral.ai/capabilities/json_mode/)... Codestral specifically could be it (but I may be mistaken).
https://mistral.ai/news/codestral/
@0xdevalias commented on GitHub (Mar 12, 2025):
Just wanted to link some (tangentially) related issues for this part for easier continuity:

Also, these upstream webcrack issues/PRs may end up being relevant too:

See also:
@0xdevalias commented on GitHub (Apr 10, 2025):
See also: