mirror of
https://github.com/ArchiveBox/ArchiveBox.git
synced 2026-04-25 17:16:00 +03:00
[GH-ISSUE #712] Feature Request: prioritize types of pages #445
Originally created by @dominictarr on GitHub (Apr 19, 2021).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/712
What is the problem that your feature request solves?
Often static sites have thumbnail images in the article and a link to the large image.
It would be great to fetch those images as soon as possible, before other HTML pages on the site.
Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes
An option to prioritize file types that are known to be leaves: they won't add more links to the database.
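The "leaf" idea can be sketched as a simple extension check. This is a hypothetical illustration, not ArchiveBox code; the extension list and the `is_leaf` helper are assumptions made for the example:

```python
# Hypothetical sketch of the requested behavior: classify a URL as a
# "leaf" (an asset that cannot contain further links) by its file
# extension. The extension set is an assumption, not ArchiveBox code.
from urllib.parse import urlparse

LEAF_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp", ".pdf", ".zip"}

def is_leaf(url: str) -> bool:
    """Return True if the URL points at a file type that won't add new links."""
    path = urlparse(url).path.lower()  # ignore query strings and fragments
    return any(path.endswith(ext) for ext in LEAF_EXTENSIONS)
```

A scheduler with a check like this could archive `is_leaf()` URLs first and fall back to normal ordering for HTML pages.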
What hacks or alternative solutions have you tried to solve the problem?
How badly do you want this new feature?
@pirate commented on GitHub (Apr 19, 2021):
I think you're trying to use archivebox for something it's not primarily designed for. It's not built to archive entire domains recursively; there are better tools for that. You can use a scraper or spider to find the URLs you want to archive, in whatever order you like, then pipe them into archivebox once you have them in the order you want them archived.
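The scrape-then-pipe workflow described above could look something like the following filter script. Assumptions: a spider has already produced a URL list on stdin, the leaf-extension tuple is illustrative, and ArchiveBox accepts URLs piped to `archivebox add` on stdin:

```python
# prioritize_urls.py: a sketch of the scrape-then-pipe workflow.
# Reads URLs (one per line) on stdin and writes them back out with
# leaf assets (images, PDFs) first, preserving relative order within
# each group. The extension list is an assumption for illustration.
import sys
from urllib.parse import urlparse

LEAF_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".webp", ".pdf")

def prioritize(urls):
    """Stable-sort URLs so leaf file types come before everything else."""
    return sorted(
        urls,
        key=lambda u: not urlparse(u).path.lower().endswith(LEAF_EXTENSIONS),
    )

if __name__ == "__main__" and not sys.stdin.isatty():
    # Only run the pipeline when input is actually piped in.
    for url in prioritize(line.strip() for line in sys.stdin if line.strip()):
        print(url)
```

Usage would be something like `your_spider | python prioritize_urls.py | archivebox add`, so the images land in the archive before the HTML pages that link to them.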
@dominictarr commented on GitHub (Apr 19, 2021):
thanks. I was recommended archivebox when I asked about a better wget -kr 1. do you know something that might suit me better?
@pirate commented on GitHub (Apr 21, 2021):
Maybe Photon? https://github.com/s0md3v/Photon
@pirate commented on GitHub (May 7, 2021):
You can find many more alternatives here too: https://github.com/ArchiveBox/ArchiveBox/wiki/Web-Archiving-Community#other-archivebox-alternatives