mirror of
https://github.com/ArchiveBox/ArchiveBox.git
synced 2026-04-25 09:06:02 +03:00
[GH-ISSUE #1000] Bug: Parsing Wallabag Atom feed tries to open nonexisting files #2137
Labels
No labels
Originally created by @peterrus on GitHub (Jul 18, 2022).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/1000
Describe the bug
After commit github.com/ArchiveBox/ArchiveBox@a6767671fb it seems that `add()` (github.com/ArchiveBox/ArchiveBox@a6767671fb/archivebox/main.py#L555) sometimes gets called with a `url` parameter containing the entire Atom feed instead of a list of actual URLs. I am not sure if this happens only with the Wallabag parser, but this input is not expected at github.com/ArchiveBox/ArchiveBox@a6767671fb/archivebox/parsers/__init__.py#L158, and a `No such file or directory` error is raised.

Steps to reproduce
`dev` branch after github.com/ArchiveBox/ArchiveBox@a6767671fb

```
curl https://app.wallabag.it/feed/dokafad/TDzxV9ejsZiWMq/archive | archivebox add --parser=wallabag_atom
```

Screenshots or log output
See: https://github.com/ArchiveBox/ArchiveBox/issues/971#issuecomment-1122499507
ArchiveBox version

But actually running github.com/ArchiveBox/ArchiveBox@a6767671fb

@peterrus commented on GitHub (Oct 7, 2024):
I have taken a look at the current code that parses the Wallabag feed, and it seems it relies heavily on string parsing instead of something that 'understands' the XML document (so one could use XPath, for example). After some experimenting with Python's `lxml`, I ran into issues where the document returned by Wallabag's RSS feed was simply too large for `lxml` to handle (at least on my machine/setup). I suspect this is because the Wallabag feed includes each saved page's entire content, and I have configured Wallabag to return a feed containing 2000 documents. My collection already contains 1600+ items, so I expect this to become a problem sooner or later.

I opted for an (imho) cleaner approach where the Wallabag API is called, using pagination and Wallabag's metadata-only option (to prevent huge blobs of data from being processed). Because I had trouble setting up a Docker dev environment on the `dev` branch (something related to `nodejs`), I opted to just put everything in a separate Python script that I pipe into the `url_list` parser, and use that as a workaround for now. Maybe someone else feels up to integrating this into ArchiveBox.

Edit: I noticed a more intelligent RSS feed parser was already used in https://github.com/ArchiveBox/ArchiveBox/issues/1000#issuecomment-2396652382. I still believe that, because Wallabag's RSS feed can get huge, an API-based solution is more elegant; the only 'downside' is that you would have to configure some credentials for the API somewhere.
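For the feed-size problem the comment describes, a streaming parser avoids loading the whole document at once. This is a minimal sketch using the stdlib's `xml.etree.ElementTree.iterparse` (`lxml` exposes the same interface), and it assumes a standard Atom feed where each `entry` carries a `link` element with an `href` attribute; Wallabag's exact markup may differ:

```python
# Sketch: stream-parse a large Atom feed entry by entry instead of
# building the whole tree in memory.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

def iter_feed_urls(source):
    """Yield each entry's link URL from an Atom feed file object."""
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == ATOM + "entry":
            link = elem.find(ATOM + "link")
            if link is not None and link.get("href"):
                yield link.get("href")
            elem.clear()  # drop the entry's (possibly huge) content
```

Because processed entries are cleared as soon as they are yielded, memory use stays roughly flat even when the feed embeds each saved page's full content.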
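The paginated, metadata-only API workaround could look roughly like this (a sketch, not the author's actual script): it assumes the Wallabag REST API's `GET /api/entries.json` endpoint with `page`, `perPage`, and `detail=metadata` parameters and a bearer token obtained from `/oauth/v2/token`; the instance URL and token below are placeholders.

```python
import json
import urllib.parse
import urllib.request

WALLABAG_URL = "https://app.wallabag.it"  # placeholder: your Wallabag instance

def fetch_entries_page(token, page, per_page=100):
    """Fetch one page of saved entries, metadata only (no article content)."""
    query = urllib.parse.urlencode(
        {"page": page, "perPage": per_page, "detail": "metadata"}
    )
    req = urllib.request.Request(
        f"{WALLABAG_URL}/api/entries.json?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [item["url"] for item in data["_embedded"]["items"]]

def iter_all_urls(fetch_page, per_page=100):
    """Walk pages until a short page signals the end, yielding article URLs."""
    page = 1
    while True:
        urls = fetch_page(page)
        yield from urls
        if len(urls) < per_page:
            break
        page += 1

if __name__ == "__main__":
    token = "..."  # placeholder: obtain via POST /oauth/v2/token
    for url in iter_all_urls(lambda p: fetch_entries_page(token, p)):
        print(url)  # pipe this output into: archivebox add --parser=url_list
```

Printing one URL per line is exactly the shape the `url_list` parser expects, which is what makes this usable as a stopgap without touching ArchiveBox's own parsers.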