Mirror of https://github.com/ArchiveBox/ArchiveBox.git, synced 2026-04-25 09:06:02 +03:00
[GH-ISSUE #1066] Bug: archivebox update and archivebox list are slow to start due to unnecessary calls to merge_links() when reading from disk #3687
Originally created by @ntevenhere on GitHub (Dec 22, 2022).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/1066
Describe the bug

On a large archive, `archivebox update` or `archivebox list` immediately kicks off a CPU-intensive process that takes a long time to complete.

Steps to reproduce

Run `archivebox update` or `archivebox list` on a large archive.

Screenshots or log output
The user is simply left waiting, and with no explanation, they have to take it on faith that anything is happening. The fans are screeching; for all they know, it could be an infinite loop. If they wait, it takes about 3 minutes to finish. If they kill the program instead, their suspicion of an infinite loop turns out to be wrong, but who could blame them?
Long story short, I snooped around after setting up the dev environment. Before telling the user anything, ArchiveBox iterates over every matching link, all 1847 of them. But the iteration itself isn't what's slow! It's one particular function, `merge_links()`, that, run 1847 times, adds up to a lot of waiting.

`merge_links()` is called by `load_link_details()`, seemingly to combine the on-disk information about the link currently being processed, and to prettify it too. So far so good, but why are we doing this in bulk? For example, `archivebox list` is going to iterate over each link anyway in order to print it, so why not "merge" as you go? Do we really need a complete list of merged links before doing anything? Perhaps I'm not seeing the entire picture, though...

@pirate commented on GitHub (Jan 19, 2024):
The merge is done before the list in order to dedupe the snapshots, as duplicates sometimes exist (between the SQLite DB and a disk folder, or between one disk folder and another) after an older install or another archive gets merged in. https://github.com/ArchiveBox/ArchiveBox/wiki/Upgrading-or-Merging-Archives#merge-two-or-more-existing-archives
I'll likely improve this in the future, but it might require splitting out the import/dedupe step into an explicit user-run command in order to unblock the performance changes.
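To illustrate how the two concerns might be reconciled, here is a minimal sketch. This is not ArchiveBox's actual implementation: `merge_link_details`, `iter_merged_links`, and the dict-based link shape are all illustrative assumptions. The idea is the reporter's "merge as you go" combined with the dedupe requirement above: if duplicates share a stable key (the URL), they can be collapsed incrementally with a seen-set, so `list` can start printing after the first merge instead of after all ~1847.

```python
# Hedged sketch, NOT ArchiveBox's real code. Assumes links are dicts keyed
# by "url" and that duplicates (DB vs. disk, or disk vs. disk) share a URL.

def merge_link_details(db_link, disk_link):
    """Prefer non-empty values loaded from disk, fall back to the DB record."""
    merged = dict(db_link)
    if disk_link:
        merged.update({k: v for k, v in disk_link.items() if v is not None})
    return merged

def iter_merged_links(db_links, disk_links):
    """Yield each link exactly once, merged lazily, deduped by URL."""
    on_disk = {link["url"]: link for link in disk_links}
    seen = set()
    for link in db_links:
        if link["url"] in seen:
            continue  # duplicate snapshot from an older install / merged archive
        seen.add(link["url"])
        yield merge_link_details(link, on_disk.get(link["url"]))
    # Emit disk-only links that never made it into the DB.
    for url, link in on_disk.items():
        if url not in seen:
            yield link

# Usage: the consumer prints as it iterates; no up-front bulk merge needed.
db = [{"url": "https://a.example", "title": None},
      {"url": "https://a.example", "title": None}]   # duplicate DB row
disk = [{"url": "https://a.example", "title": "A"},
        {"url": "https://b.example", "title": "B"}]
for merged in iter_merged_links(db, disk):
    print(merged["url"], merged["title"])
```

Because the generator yields as it goes, the first line of output appears after a single merge. Whether this fits ArchiveBox's actual schema is exactly the open question in this issue; as noted above, the real fix may require splitting the import/dedupe step into its own command.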