[GH-ISSUE #659] Cache full content that is behind a paywall #423

Closed
opened 2026-03-02 11:49:42 +03:00 by kerem · 1 comment

Originally created by @mrhalliwell on GitHub (Nov 15, 2024).
Original GitHub issue: https://github.com/karakeep-app/karakeep/issues/659

Describe the feature you'd like

Taking the Financial Times, to which I subscribe, as an example: with Omnivore, if I log in to the FT website on my laptop and click the Omnivore extension, it caches ALL of the article's content into the database. Hoarder, as far as I can tell, only collects the address and then crawls the page from its own browser.

It would be great if Hoarder could employ a similar mechanism to collect the full content I want into the backend.

P.S. When I use Omnivore on my phone, the app only stores links, so in that respect Omnivore and Hoarder work the same way. What I usually do is clean up the subscription-only items once a week: I log into the web version, cache all the articles, and delete the ones that are links only.
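To illustrate the requested mechanism, here is a minimal sketch of what a browser-extension capture could look like: the extension reads the fully rendered DOM from the already-logged-in tab and posts it to the backend, so the server never has to fetch the paywalled URL itself. The server URL, API token, endpoint path, and payload fields below are hypothetical placeholders for illustration, not Karakeep's actual API.

```typescript
// Content-script sketch: capture the page as rendered in the logged-in
// browser session and hand the full HTML to the backend, instead of
// asking a server-side crawler to re-fetch the URL (where it would hit
// the paywall). Endpoint, payload shape, and auth header are ASSUMPTIONS.

const SERVER = "https://hoarder.example.com"; // hypothetical instance URL
const API_KEY = "…"; // hypothetical API token

async function capturePage(): Promise<void> {
  // The user is already authenticated with the publisher in this tab,
  // so document.documentElement contains the full article, not the teaser.
  const payload = {
    url: window.location.href,
    title: document.title,
    // Full rendered DOM, including content only visible after login.
    htmlContent: document.documentElement.outerHTML,
    capturedAt: new Date().toISOString(),
  };

  const res = await fetch(`${SERVER}/api/v1/bookmarks`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify(payload),
  });

  if (!res.ok) {
    console.error(`Capture failed: ${res.status} ${res.statusText}`);
  }
}

capturePage();
```

The essential difference is which party fetches the HTML: a client-side capture reuses the browser's session cookies and sees the subscriber version of the page, while a server-side crawl arrives unauthenticated and sees only the teaser.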

Describe the benefits this would bring to existing Hoarder users

If I've paid for a subscription, caching that content in one place seems reasonable. Content behind a paywall is usually very informative.

Can the goal of this request already be achieved via other means?

So far I don't see a simple way to do that.

Have you searched for an existing open/closed issue?

  • I have searched for existing issues and none cover my fundamental request

Additional context

n/a

kerem closed this issue 2026-03-02 11:49:42 +03:00

@MohamedBassem commented on GitHub (Nov 15, 2024):

This is on our roadmap and is tracked as part of #172
