mirror of
https://github.com/RD17/ambar.git
synced 2026-04-25 15:35:49 +03:00
[GH-ISSUE #35] Feature request: S3 crawler
Originally created by @jamesinc on GitHub (May 16, 2017).
Original GitHub issue: https://github.com/RD17/ambar/issues/35
I'd really like to be able to crawl an S3 bucket. I feel sketchy serving web traffic out of my Dropbox!
@sochix commented on GitHub (May 16, 2017):
Hi! Would the FTP or SMB crawler work for you?
@jamesinc commented on GitHub (May 17, 2017):
It would be possible to work around it using s3fs. I was just hoping for a native solution that didn't require mounting a bucket into a directory. It would work, but it's not ideal: s3fs wants to cache the bucket's contents on local disk, and I'm trying to avoid needing lots of storage on the instance running Ambar.
@jmgilman commented on GitHub (May 18, 2017):
+1
@sochix commented on GitHub (Apr 19, 2018):
In the latest release you can mount an S3 folder and point the local crawler at it.
@AyKarsi commented on GitHub (Sep 10, 2018):
How can I mount an S3 folder for the local crawler?
@sochix commented on GitHub (Sep 10, 2018):
@AyKarsi just search for how to map an S3 bucket to a local folder, then map that local folder to the crawler.
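For reference, the workaround discussed above can be sketched with s3fs-fuse. This is a hedged example, not an Ambar-documented procedure: the bucket name, mount point, and compose-file layout are placeholders you would adapt to your own setup.

```shell
#!/bin/sh
# Sketch: expose an S3 bucket as a local directory, then hand that
# directory to Ambar's local crawler. Assumes s3fs-fuse is installed
# and "my-bucket" / "/mnt/ambar-data" are placeholders for your values.

# 1. Store credentials in the format s3fs expects (ACCESS_KEY:SECRET_KEY).
echo "AKIAEXAMPLE:secretexample" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# 2. Mount the bucket as a local directory.
mkdir -p /mnt/ambar-data
s3fs my-bucket /mnt/ambar-data -o passwd_file=~/.passwd-s3fs

# 3. Map the mounted directory into the crawler container, e.g. as a
#    docker-compose volume (exact service name depends on your deployment):
#
#    services:
#      local-crawler:
#        volumes:
#          - /mnt/ambar-data:/usr/data
```

Note the caveat @jamesinc raised: s3fs may cache bucket contents on local disk, so this approach still needs local storage headroom on the Ambar host.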