[GH-ISSUE #111] Can't Download File (File Content Not Found) #109

Closed
opened 2026-02-27 15:55:04 +03:00 by kerem · 7 comments
Owner

Originally created by @dkhd on GitHub (Jan 10, 2018).
Original GitHub issue: https://github.com/RD17/ambar/issues/111

Hi,

I managed to successfully run Ambar on my system. It works like a charm. Only, I cannot download the files using the API request (I use Google Chrome).

Here's the reference:

```
GET http://ambar/api/files/:download_uri
```

I tried it on https://app.ambar.cloud/ and the method works fine there, but it just doesn't work on my instance. Instead, it shows this error message:

```json
{
    "message": "File content not found"
}
```

How can this happen, and how can I solve it?
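For anyone reproducing this, a minimal sketch of the request is below. The host and `download_uri` values are placeholders only; substitute your own Ambar host and a real `download_uri` taken from your search results.

```shell
# Placeholder values -- replace with your Ambar host and a real download_uri.
AMBAR_HOST="http://ambar"
DOWNLOAD_URI="example-download-uri"

# Build the download URL for the files endpoint.
URL="${AMBAR_HOST}/api/files/${DOWNLOAD_URI}"
echo "$URL"

# The actual request (uncomment to run against a live instance;
# -O saves the body to a file, -J honors the server's filename header):
# curl -OJ "$URL"
```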

kerem closed this issue 2026-02-27 15:55:04 +03:00
Author
Owner

@noviceiii commented on GitHub (Jan 10, 2018):

I had the same issue when:

  1. I ran the indexing service with preserveOriginals=false (for testing),
    and then
  2. enabled preserveOriginals=true.

As a result, all documents that already existed in Ambar had their original files missing.
Since I reinstalled the server anyway, I didn't have to deal with it.

However, if the above steps could be the cause, I would try the following ideas:
a. change the file names of the original files so they get loaded again; re-index
b. touch the files; re-index
c. recreate the crawler (Settings in the web interface -> Crawler)
...it is probably sufficient to just delete the UID
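Options (a) and (b) above can be sketched like this, using a throwaway directory in place of the real crawled folder (all paths and file names here are illustrative, not part of Ambar itself):

```shell
# Throwaway directory standing in for the real crawled folder.
dir=$(mktemp -d)
printf 'sample content' > "$dir/report.pdf"

# (b) touch the file: the newer mtime should make the crawler
#     re-process it on the next pass.
touch "$dir/report.pdf"

# (a) rename the file so the next crawl treats it as brand new.
mv "$dir/report.pdf" "$dir/report-reindexed.pdf"

ls "$dir"   # -> report-reindexed.pdf
```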

Author
Owner

@sochix commented on GitHub (Jan 11, 2018):

@noviceiii you should always use `preserveOriginals=true` to download files via the API
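For anyone searching for the setting: the exact layout of `config.json` differs between Ambar versions, so treat this as an illustrative fragment only. The relevant key is `preserveOriginals`:

```json
{
    "preserveOriginals": true
}
```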

Author
Owner

@dkhd commented on GitHub (Jan 12, 2018):

Hi all,

It works now. I can download the files by setting `preserveOriginals=true` in the `config.json`.

Thanks for the help!

Author
Owner

@ljgstudy commented on GitHub (Jun 12, 2018):

@dkhd Could you tell me which config.json you modified? Thanks in advance!

Author
Owner

@itnoreplymax commented on GitHub (Oct 20, 2018):

@dkhd Could you tell me which config.json you modified? Thanks in advance..

Author
Owner

@evandroabreu commented on GitHub (Aug 24, 2020):

> Hi all,
>
> It works now. I can download the files by setting `preserveOriginals=true` in the `config.json`.
>
> Thanks for the help!

Hello, please, where is config.json?

Author
Owner

@iliapir2 commented on GitHub (Aug 25, 2020):

@evandroabreu this setting was removed in Ambar v2. All files uploaded via a crawler can be downloaded from their source (the crawler acts as a proxy); files uploaded via the UI can't be downloaded.
