[GH-ISSUE #214] Integrate google caching for large context and prompt chaining #151

Open
opened 2026-03-02 23:34:02 +03:00 by kerem · 0 comments
Owner

Originally created by @AJaySi on GitHub (Aug 21, 2025).
Original GitHub issue: https://github.com/AJaySi/ALwrity/issues/214

Originally assigned to: @AJaySi on GitHub.

Content strategy and content calendar generation use more than 270 datapoints plus a large amount of context data. The problem is simple: resending the same data or large prompts on every call increases cost. The number of prompts required to generate the strategy, the calendar, and insights from the database is also considerable.

Implementing Google context caching to store this shared context could reduce AI API costs by roughly 60%.
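A minimal sketch of how this could look, assuming the `google-generativeai` Python SDK's explicit context caching (`CachedContent.create` + `GenerativeModel.from_cached_content`). The helper names (`context_cache_key`, `should_cache`, `generate_with_cached_context`), the model name, the TTL, and the ~4-chars-per-token estimate are all illustrative assumptions, not existing ALwrity code:

```python
import datetime
import hashlib
import json

# Assumption: Gemini explicit caching has a minimum cacheable size
# (~32k tokens for 1.5-era models); below that, caching adds overhead
# instead of saving cost, so gate on an estimated token count first.
MIN_CACHEABLE_TOKENS = 32_768

def estimate_tokens(text: str) -> int:
    # Crude sizing heuristic: roughly 4 characters per token.
    return len(text) // 4

def context_cache_key(datapoints: dict) -> str:
    # Stable key so the same ~270-datapoint context maps to one cache
    # entry instead of being re-uploaded for every prompt in the chain.
    blob = json.dumps(datapoints, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def should_cache(context_text: str) -> bool:
    return estimate_tokens(context_text) >= MIN_CACHEABLE_TOKENS

def generate_with_cached_context(context_text: str, prompts: list[str]) -> list[str]:
    # Requires `pip install google-generativeai` and a configured API key.
    import google.generativeai as genai
    from google.generativeai import caching

    # Upload the large context once; Gemini bills cached tokens at a
    # reduced rate for the lifetime of the cache entry.
    cache = caching.CachedContent.create(
        model="models/gemini-1.5-flash-001",  # assumed model
        display_name="alwrity-strategy-context",
        contents=[context_text],
        ttl=datetime.timedelta(hours=1),
    )
    model = genai.GenerativeModel.from_cached_content(cached_content=cache)
    # Prompt chaining: each follow-up prompt (strategy, calendar,
    # insights) reuses the cached context instead of resending it.
    return [model.generate_content(p).text for p in prompts]
```

The cost saving comes from the chain: with N prompts over the same context, the full context is transmitted and billed at the standard input rate once, and at the cheaper cached rate for the remaining N-1 calls. Actual pricing and the minimum token threshold should be checked against current Gemini API documentation.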
