[GH-ISSUE #287] [FEATURE] Ollama local AI model support. #202

Open
opened 2026-03-02 23:34:26 +03:00 by kerem · 9 comments
Owner

Originally created by @AJaySi on GitHub (Oct 10, 2025).
Original GitHub issue: https://github.com/AJaySi/ALwrity/issues/287

Originally assigned to: @AJaySi on GitHub.

@Ratna-Babu

1). Ollama will let ALwrity run open-source AI models on end users' laptops.
This provides free access to models such as Llama, Gemma, and DeepSeek.
It also provides data privacy and cost savings.

2). Since ALwrity is analytics-data driven, it makes many external API and AI calls. Commercial AI models will become very expensive, very soon.

3). One AI model should not be used for all tasks. Data-driven analysis can run on smaller AI models, while complex tasks go to reasoning models.

4). Since our target audience is content creators and digital marketing professionals, we need to abstract away all these details. The end user will simply tick an option, 'Run Free AI Models Locally', and we will set up everything for them in the backend.

5). We also need to explore Ollama Cloud and LiteLLM; if either offers a generous number of free API calls per month, using it would make setup and maintenance easier.

6). Also, well-crafted AI prompts with the right context can generate results comparable to, if not better than, the best commercial AI models.

7). Later on, Ollama + Unsloth will also become our gateway to fine-tuning open-source AI models for our end users. That would mean providing new models trained on each end user's data, social media, and blogs, combined with SME knowledge in digital marketing. So we will end up building our own ALwrity AI model for digital marketing.

8). General-purpose LLMs are overkill for most of these tasks, since they cater to all domains of knowledge; ALwrity only needs a model trained on the end user's digital presence plus digital-marketing expertise.
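The task-based routing in point 3 could be sketched as below. This is a minimal illustration, not code from the ALwrity repository: the task labels and Ollama model names (`gemma2:2b`, `deepseek-r1:7b`) are assumptions chosen for the example.

```python
# Hypothetical per-task model routing: small local model for routine
# analytics work, larger reasoning model for complex tasks.
SIMPLE_TASKS = {"keyword_extraction", "summarize_analytics", "tag_generation"}

def pick_model(task: str) -> str:
    """Return a small model for routine analysis, a reasoning model otherwise."""
    if task in SIMPLE_TASKS:
        return "gemma2:2b"       # small, fast model for data-driven analysis
    return "deepseek-r1:7b"      # heavier reasoning model for complex tasks

print(pick_model("summarize_analytics"))  # gemma2:2b
print(pick_model("content_strategy"))     # deepseek-r1:7b
```

In practice the routing table would live in config so that non-technical end users never see model names, per point 4.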


@AJaySi commented on GitHub (Oct 10, 2025):

@Ratna-Babu

1). Refer to this folder: https://github.com/AJaySi/ALwrity/tree/main/backend/services/llm_providers
This contains the logic and code for integrating different AI models.

Note: presently, it incorrectly supports only Gemini.

2). You need to modify and use this module: https://github.com/AJaySi/ALwrity/blob/main/backend/services/llm_providers/main_text_generation.py

3). The logic is based on a gpt_provider flag. When the end user provides Gemini API keys, we set GPT_PROVIDER=Gemini in the .env, and that flag decides which AI models to call.

This way, the same codebase works for all AI models without any code changes, simply by providing the right API keys.

Let me know if you have any doubts; I can help. Thanks.
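The flag-based dispatch described above could look roughly like this. A minimal sketch, assuming hypothetical `call_gemini` / `call_ollama` stubs in place of the real clients; the actual function names in main_text_generation.py may differ.

```python
import os

def call_gemini(prompt: str) -> str:
    """Placeholder for the existing Gemini client call."""
    return f"[gemini] {prompt}"

def call_ollama(prompt: str) -> str:
    """Placeholder for a new Ollama (local or cloud) call."""
    return f"[ollama] {prompt}"

def generate_text(prompt: str) -> str:
    """Dispatch on the GPT_PROVIDER flag loaded from .env."""
    provider = os.getenv("GPT_PROVIDER", "gemini").lower()
    if provider == "gemini":
        return call_gemini(prompt)
    if provider == "ollama":
        return call_ollama(prompt)
    raise ValueError(f"Unsupported GPT_PROVIDER: {provider}")

os.environ["GPT_PROVIDER"] = "ollama"
print(generate_text("Write a headline"))  # [ollama] Write a headline
```

Adding Ollama support then reduces to implementing one new branch (and its client), leaving the Gemini path untouched.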


@AJaySi commented on GitHub (Oct 13, 2025):

@Om-Singh1808 will review this and get back to you; we discussed some work around it yesterday and he will update you.


@AJaySi commented on GitHub (Oct 14, 2025):

@Om-Singh1808
Could you update this?


@Om-Singh1808 commented on GitHub (Oct 14, 2025):

I am updating a PR for the local installation in about 15 mins.


@AJaySi commented on GitHub (Oct 14, 2025):

@Om-Singh1808 @Ratna-Babu

Hmmm, have a discussion between you two and reconcile the Ollama work into one PR.
https://github.com/AJaySi/ALwrity/pull/299


@Ratna-Babu commented on GitHub (Oct 14, 2025):

Should I merge this PR into #299?


@AJaySi commented on GitHub (Oct 15, 2025):

@Ratna-Babu

  • If you have reviewed #299 and feel it is disjoint, then merge it.
  • Please ensure there is no duplication or redundancy, and that getting going with Ollama only requires a single click in the UI. This should be present in #299.
  • Please try completing the onboarding with a local Ollama model and check whether the results are comparable with Gemini.
  • There should also be an option to provide Ollama Cloud API keys, in which case we skip the local setup.

Thank you.


@AJaySi commented on GitHub (Oct 15, 2025):

@Ratna-Babu

1: Let's not think about a local install of Ollama; instead, only work with Ollama Cloud API keys?
2: Ollama Cloud APIs also ensure data privacy, so there is no feature loss and we don't need #299?

This would expedite merging, quickly add Ollama support, and let us move on to OpenRouter.

  • Supporting installs on local environments is always irritating, given the variety of systems and their local problems.
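Supporting both paths (local default, cloud when a key is present) could be isolated behind one small helper, so the rest of the codebase never cares which is in use. A sketch under stated assumptions: the local default port 11434 and the `/api/chat` path match the standard Ollama server, but the cloud URL and Bearer-auth header shape are assumptions to verify against Ollama's current cloud documentation.

```python
import os

def ollama_endpoint() -> tuple[str, dict]:
    """Pick the Ollama cloud endpoint if an API key is configured, else local.

    OLLAMA_API_KEY is a hypothetical env var name for this sketch; the cloud
    URL and auth header must be checked against Ollama's cloud docs.
    """
    api_key = os.getenv("OLLAMA_API_KEY")
    if api_key:
        return "https://ollama.com/api/chat", {"Authorization": f"Bearer {api_key}"}
    return "http://localhost:11434/api/chat", {}
```

With this shape, "skip local setup" is just a matter of the user pasting a cloud key during onboarding.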

@Om-Singh1808 commented on GitHub (Oct 15, 2025):

Are we not doing a local installation of Ollama now?
Maybe it's better to work with only the cloud API.
