[GH-ISSUE #546] Feature Request: ability to include arbitrary files into an archive that are linked on a page - like zip files #1857

Closed
opened 2026-03-01 17:54:21 +03:00 by kerem · 1 comment
Owner

Originally created by @ykuksenko on GitHub (Nov 22, 2020).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/546

What is the problem that your feature request solves

I would like to archive non-media files that are linked on a given web page, for example `zip` files and possibly other extensions.

Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes

Probably a dedicated extractor that grabs files directly linked on a page, configurable via a regex argument (or an array of extensions, if regex is not available).

Alternatively, allow invoking the same extractor multiple times with custom arguments, placing the results in a different subdirectory each time.

For example, keep the default wget extractor, but also allow calling it again with a new set of arguments (ones that would fetch only zip files from the page, say) and have it place its results in a subdirectory. Also allow overriding timeouts per extractor.
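To make the idea concrete, here is a rough sketch of what such a zip-only second pass could look like today if run as plain wget outside ArchiveBox (standard GNU wget flags; the `zips/` subdirectory name and the URL are placeholders):

```bash
# hypothetical second extractor pass, approximated with plain wget:
# recurse one level from the page, keep only .zip files, use its own
# timeout, and place the results in a dedicated subdirectory
wget --recursive --level=1 --no-parent \
     --accept=zip \
     --timeout=60 \
     --directory-prefix=zips/ \
     'https://example.com/some/page.html'
```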

What hacks or alternative solutions have you tried to solve the problem?

I tried to modify the default arguments passed to wget to achieve a similar outcome, but you cannot turn off robots due to a parsing error (see #545).

Edit: I was able to modify the config file directly, but the timeout for wget seems to be tunable only via the general timeout setting.
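For reference, the workaround that exists today: since there is no wget-specific timeout key, the general `TIMEOUT` setting has to be raised for everything, e.g. via the environment (the value below is arbitrary):

```bash
# assumption: TIMEOUT is ArchiveBox's general per-extractor timeout in seconds;
# raising it affects all extractors, not just wget
env TIMEOUT=3600 archivebox add 'https://example.com/page/with/large/zips.html'
```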

How badly do you want this new feature?

Would be nice to have.

kerem 2026-03-01 17:54:21 +03:00
Author
Owner

@pirate commented on GitHub (Nov 22, 2020):

ArchiveBox will happily download zip files (and other staticfiles) as-is; you just have to feed it the links to those zip files directly or use `--depth=1`. There is no built-in config to crawl a page for only `.zip`s or any other staticfile extension.

I don't think we'll add a flag specifically to crawl a page for zip files or other staticfiles; that's up to the user to manage. But I believe ArchiveBox can already be used to achieve that end goal as-is.
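(For completeness, the simplest variant of that is piping known zip links straight in over stdin; the URLs below are placeholders:)

```bash
# if you already know the zip URLs, feed them in directly;
# each one becomes its own staticfile-only snapshot
printf '%s\n' \
  'https://example.com/files/a.zip' \
  'https://example.com/files/b.zip' \
  | archivebox add
```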

Here are two approaches for you to try; please report back whether they work for you:

```bash
archivebox add --depth=1 https://example.com/some/page/with/zip/files/linked/in/it.html
# this should add a snapshot for the main page, and then a snapshot for each of the linked staticfiles within (including zips)
# the snapshots for the linked zip files should only contain the staticfile downloaded by wget
# (i.e. it won't run the screenshot/DOM dump/any of the other extractors, because it'll detect that it's a plain staticfile that just needs to be downloaded normally)
```

or:

```bash
curl https://example.com/some/page/with/zip/files/linked/in/it.html | env URL_BLACKLIST='^(?!.*\.zip$).+$' archivebox add
# this archives all URLs found on the page but blacklists any URL that doesn't end with .zip, using a negative lookahead regex
```
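A quick way to sanity-check which URLs the blacklist pattern matches (this assumes a grep with PCRE support, i.e. GNU `grep -P`; the URLs are made up):

```bash
# prints only the URLs the pattern matches, i.e. the ones that would be blacklisted
printf '%s\n' \
  'https://example.com/page.html' \
  'https://example.com/docs/manual.pdf' \
  'https://example.com/files/archive.zip' \
  | grep -P '^(?!.*\.zip$).+$'
# expected output: the .html and .pdf URLs, but not the .zip one
```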