[GH-ISSUE #287] [FEATURE] Ollama local AI model support. #202
Originally created by @AJaySi on GitHub (Oct 10, 2025).
Original GitHub issue: https://github.com/AJaySi/ALwrity/issues/287
Originally assigned to: @AJaySi on GitHub.
@Ratna-Babu
1). Ollama will let ALwrity run open-source AI models on end users' laptops.
This provides free access to AI models like Llama, Gemma, DeepSeek, etc.
It also provides data privacy and cost savings.
2). Since ALwrity is analytics-data driven, a lot of external API and AI calls happen. Commercial AI models will prove very expensive, very soon.
3). One AI model should not be used for all tasks: AI data-driven analysis can happen on smaller AI models, and complex tasks on reasoning models (see the sketch after this list).
4). As our target audience is content creators and digital marketing professionals, we will need to abstract all these details away. The end user will only tick an option 'Run Free AI Models Locally', and then we will set up everything for them in the backend.
5). We also need to explore Ollama cloud and LiteLLM; if they provide a generous number of free API calls per month, using them will be easier for setup and maintenance.
6). Also, well-crafted AI prompts with the right context can generate comparable, if not better, results compared to the best commercial AI models.
7). Later on, Ollama + Unsloth will also become our gateway to fine-tuning open-source AI models for our end users. This would mean providing new AI models trained on end users' data, social media, and blogs, plus SME knowledge in digital marketing. So we will end up making our own ALwrity AI model for digital marketing.
8). LLMs are overkill for most of these tasks, as they are generic and cater to all domains of knowledge; ALwrity only needs an AI model trained as an SME on the end user's digital presence plus digital marketing knowledge.
@AJaySi commented on GitHub (Oct 10, 2025):
@Ratna-Babu
1). Refer to this folder: https://github.com/AJaySi/ALwrity/tree/main/backend/services/llm_providers
This contains the logic and code for integrating different AI models.
Note: presently, it wrongly supports only Gemini.
2). You need to modify and use this module: https://github.com/AJaySi/ALwrity/blob/main/backend/services/llm_providers/main_text_generation.py
3). The logic is basically based on a gpt_provider flag. When the end user provides Gemini API keys, we set GPT_PROVIDER=Gemini in the .env, and that decides which AI models to call.
This way, the same codebase will work for all AI models without changing any code, simply by providing the right AI API keys.
Let me know if you have any doubts; I can help. Thanks.
@AJaySi commented on GitHub (Oct 13, 2025):
@Om-Singh1808 will review this and get back to you; we discussed some work around it yesterday, and he will update you.
@AJaySi commented on GitHub (Oct 14, 2025):
@Om-Singh1808
Please update this?
@Om-Singh1808 commented on GitHub (Oct 14, 2025):
I am updating a PR for the local installation in about 15 mins.
@AJaySi commented on GitHub (Oct 14, 2025):
@Om-Singh1808 @Ratna-Babu
Hmmm, have a discussion between the two of you and reconcile the Ollama work into one PR.
https://github.com/AJaySi/ALwrity/pull/299
@Ratna-Babu commented on GitHub (Oct 14, 2025):
Should I merge this PR into #299?
@AJaySi commented on GitHub (Oct 15, 2025):
@Ratna-Babu
Thank you.
@AJaySi commented on GitHub (Oct 15, 2025):
@Ratna-Babu
1: Let's not think about a local install of Ollama; instead, only work with Ollama cloud API keys?
2: Ollama cloud APIs also ensure data privacy, so there is no feature loss and we don't need #299?
This will expedite the merging, letting us quickly support Ollama and then move to OpenRouter?
@Om-Singh1808 commented on GitHub (Oct 15, 2025):
Are we not doing a local installation of Ollama now?
Maybe it's better to work with only the cloud API.