Mirror of https://github.com/ArchiveBox/ArchiveBox.git, synced 2026-04-25 17:16:00 +03:00
[GH-ISSUE #546] Feature Request: ability to include arbitrary files into an archive that are linked on a page - like zip files #3367
Originally created by @ykuksenko on GitHub (Nov 22, 2020).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/546
What is the problem that your feature request solves?
I would like to archive non-media files that are linked on a given web page, for example zip files and possibly files with other extensions.

Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes
Probably an extractor dedicated to grabbing the various files that are directly linked on a page, with a configurable regex argument (or an array of extensions, if regex is not available).

Alternatively, allow invoking the same extractor multiple times with custom arguments, placing the results in a different subdirectory each time. For example, keep the default wget extractor, but also allow calling it again with a new set of arguments (ones that would fetch only the zip files on the page, say) and have it place its results in a subdirectory. Also allow overriding timeouts per extractor.
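The requested filtering step could look roughly like this. This is a hypothetical sketch, not part of ArchiveBox: the `matching_links` helper and its default pattern are invented here purely to illustrate the "configurable regex over linked files" idea.

```python
# Hypothetical sketch of the requested behavior: scan a page's HTML for
# links whose URL matches a configurable regex (or, equivalently, an
# extension list compiled into a regex) and return the ones a dedicated
# extractor would then download. None of these names exist in ArchiveBox.
import re
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def matching_links(html: str, pattern: str = r"\.zip$") -> list[str]:
    """Return all hrefs on the page that match the configured regex."""
    collector = LinkCollector()
    collector.feed(html)
    return [url for url in collector.links if re.search(pattern, url)]

page = '<a href="/files/a.zip">a</a> <a href="/about.html">about</a>'
print(matching_links(page))  # only the .zip link survives the filter
```

An extension list such as `['zip', 'tar.gz']` could be turned into an equivalent pattern like `r"\.(zip|tar\.gz)$"`, which is why a single regex option would cover both variants of the request.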
What hacks or alternative solutions have you tried to solve the problem?
I tried to modify the default wget arguments to achieve a similar outcome, but you cannot turn off robots.txt handling due to a parsing error (see #545).

Edit: I was able to modify the config file directly, but the wget timeout seems to only be tunable via the general timeout setting.
How badly do you want this new feature?
Would be nice to have.
@pirate commented on GitHub (Nov 22, 2020):
ArchiveBox will happily download zip files (and other staticfiles) as-is; you just have to feed it the links to those zip files directly, or use
--depth=1. There is no built-in config to crawl a page for only .zips or any other staticfile extension. I don't think we'll add a flag specifically to crawl for zip files or other staticfiles on a page (that's up to the user to manage), but I believe it's possible to achieve that end goal as-is right now.
Here are the two approaches for you to try; please report back whether or not they work for you:
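One way to do the first approach (feeding the zip links to ArchiveBox directly) is sketched below. This is an illustration, not an official recipe: the page contents and URLs are placeholders, and in practice you would fetch the real page with curl or wget before filtering it.

```shell
# Hypothetical sketch (approach 1): collect the direct .zip links yourself
# and feed them to `archivebox add`. The sample page below stands in for a
# real download page you would fetch first.
cat > page.html <<'EOF'
<a href="https://example.com/files/data.zip">data</a>
<a href="https://example.com/about.html">about</a>
EOF

# Keep only hrefs ending in .zip and strip the href="..." wrapper.
grep -oE 'href="[^"]+\.zip"' page.html | sed 's/^href="//; s/"$//' > zip_urls.txt
cat zip_urls.txt

# Then pipe the list into ArchiveBox, which saves each zip as a staticfile:
# archivebox add < zip_urls.txt
```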
or:
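A minimal sketch of the second approach, using the `--depth=1` flag mentioned above. The URL is a placeholder, and the command is printed rather than executed here, since actually running it requires an initialized ArchiveBox collection:

```shell
# Hypothetical sketch (approach 2): add the page itself with --depth=1 so
# ArchiveBox also saves every URL the page links out to, zip files included.
# The URL is a placeholder for your real download page.
CMD="archivebox add --depth=1 'https://example.com/downloads/'"
echo "$CMD"   # printed instead of executed: running it needs a configured collection
```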