Mirror of https://github.com/ArchiveBox/ArchiveBox.git, synced 2026-04-25 17:16:00 +03:00
[GH-ISSUE #133] Docker build fails on Synology NAS #1600
Originally created by @mawmawmawm on GitHub (Jan 22, 2019).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/133
Hi there,
I was playing around with ArchiveBox yesterday and tried to install it on my Synology NAS (DSM 6.2.1-23824 Update 4) through `docker-compose up -d`. Unfortunately, this didn't work out and the build process stopped with this error message:
This seems to be related to the Chromium download / install.
Is this something that can be fixed easily by myself or would this require a change to the docker file?
By the way, thanks for this great project, I really like it!
@pirate commented on GitHub (Jan 23, 2019):
Ah shoot, it looks like I may have to build a new image that supports Chrome better:
I think I'll base it off of https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md#running-puppeteer-in-docker
since that seems to be the official way provided by Google Chrome.
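The linked troubleshooting guide's approach boils down to installing the shared libraries headless Chromium links against inside the image. A rough sketch of the kind of build step involved, on a Debian base (package list abridged from the puppeteer docs, not taken from ArchiveBox's actual Dockerfile):

```shell
# Inside an image build (e.g. a Dockerfile RUN step on a Debian base):
# install the shared-library dependencies headless Chromium needs.
apt-get update && apt-get install -y --no-install-recommends \
    fonts-liberation libasound2 libatk1.0-0 libgtk-3-0 \
    libnss3 libx11-xcb1 libxss1 libxtst6 \
    && rm -rf /var/lib/apt/lists/*
```

The full dependency list in the puppeteer troubleshooting page is considerably longer; the point is that the browser fails at launch (not at install) when these libraries are missing from the image.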
Were you able to build it successfully if you comment out/remove the browser install process and just run
`./archive` with `FETCH_PDF`, `FETCH_SCREENSHOT`, `FETCH_DOM=False`?

@mawmawmawm commented on GitHub (Jan 23, 2019):
Thanks for the quick response.
I need to test this, but it's gonna take a couple days until I can give it a shot. I believe it should work, I'll let you know here.
@pirate commented on GitHub (Jan 23, 2019):
Actually nvm, I think I fixed it, give `master` a try now. See e1be96e.

@mawmawmawm commented on GitHub (Jan 23, 2019):
Awesome, thanks for the quick fix. I’ll give it a try as soon as I get a chance.
@mawmawmawm commented on GitHub (Jan 24, 2019):
Hey there,
we're a step closer I think, but now there's an issue with
`puppeteer`:

Any idea what this could be? The log file mentioned here didn't exist in this folder.
@pirate commented on GitHub (Jan 24, 2019):
Looks like it could be a temporary download issue. Can you try rebuilding one more time?
@mawmawmawm commented on GitHub (Jan 25, 2019):
I did a rebuild from scratch, and this time step 7/15 worked fine.
I received some warnings for
`puppeteer` though, just FYI:

Also, at the end things did go wrong:
@mawmawmawm commented on GitHub (Jan 25, 2019):
Actually... creating that
`data` directory with `chmod 777` solved the issue, and both containers are booting up now.

I see the web server running on port 8098 with a blank index file (no content, just an empty dir listing):
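The workaround described here, as commands run next to the `docker-compose.yml` before `docker-compose up` (mode 777 is simply what worked in this thread; giving the directory the container user's UID with tighter permissions would be cleaner):

```shell
# Pre-create the bind-mounted output directory so the archiver process
# inside the container (running as a different UID) can write to it.
mkdir -p data
chmod 777 data   # world-writable: blunt but effective as a quick fix
```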
However, when I try to add an example URL (as root) through
`echo "https://example.com" | docker-compose exec -T archivebox /bin/archive`, nothing happens. No input file is added, no URL is crawled. Any idea what's (not) going on there?
`docker-compose exec archivebox /bin/archive https://example.com/some/feed.rss` works, however, and the script errors out because of the 404 (expected):
Adding my pinboard feed seems to work and the corresponding txt / json files are being created:
Links are being crawled, files pulled, screenshots and PDFs generated... It works! (Just the simple URL add through the command line apparently doesn't; no big deal.) Thanks again!
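For reference, the two invocation styles from this comment side by side. The `-T` flag tells `docker-compose exec` not to allocate a pseudo-TTY, which is what lets the pipe reach the process's stdin (a sketch; requires the running stack from this thread):

```shell
# Pipe a single URL in over stdin (-T disables pseudo-TTY allocation
# so the pipe is connected to the archiver's stdin):
echo "https://example.com" | docker-compose exec -T archivebox /bin/archive

# Pass a feed/page URL as an argument to be fetched and parsed instead:
docker-compose exec archivebox /bin/archive https://example.com/some/feed.rss
```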
@pirate commented on GitHub (Jan 25, 2019):
Glad to hear you got it working!
I'll add a note to the docs about creating that
`data/` folder beforehand, and I'll see if I can fix the ability to add a single URL via stdin.