[GH-ISSUE #147] Cannot view/download files #147

Closed
opened 2026-02-27 15:55:17 +03:00 by kerem · 15 comments
Owner

Originally created by @dandantheflyingman on GitHub (Apr 19, 2018).
Original GitHub issue: https://github.com/RD17/ambar/issues/147

Hi,

I am struggling to understand how to access my files from the Web interface?

Is there meant to be a download button? I can find the image preview, but that is it..

![image](https://user-images.githubusercontent.com/19147435/38975217-60c81274-43f0-11e8-9df5-005a03849f4d.png)

kerem 2026-02-27 15:55:17 +03:00
  • closed this issue
  • added the "bug" label

@dandantheflyingman commented on GitHub (Apr 19, 2018):

Guess this is what I am looking for:
![image](https://user-images.githubusercontent.com/19147435/38975640-f4feb55a-43f1-11e8-888a-c10e591a4024.png)
This button --^

Have I got a configuration error?

My install is a clean one on Ubuntu 16.04, using the new 2.0.0rc with the 2.0.0rc docker-compose.yml example.

I have tried ingesting the documents via upload and through a crawler. Both times the documents seem to be successfully analysed and added, but I cannot find a way to view/download them?


@sochix commented on GitHub (Apr 19, 2018):

Hi @dandantheflyingman, it's a bug in the RC; we'll fix it.


@dandantheflyingman commented on GitHub (Apr 19, 2018):

Awesome, thanks! I was just concerned that I had something misconfigured.


@andrea-ligios commented on GitHub (May 2, 2018):

I guess turning [`preserveOriginals` to `true` by default](https://github.com/RD17/ambar/blob/master/FrontEnd/src/routes/CoreLayout/modules/CoreLayout.js#L145) would solve the problem.
I'm quite curious about the way this will get fixed.
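For context, the linked CoreLayout.js holds a client-side default config. A minimal sketch of that pattern (only `preserveOriginals` is named in this thread; every other field and function name here is hypothetical, not taken from the Ambar repo):

```javascript
// Sketch of a frontend default-config pattern (illustrative; only
// `preserveOriginals` is mentioned in the thread, the rest is made up).
const defaultConfig = {
  preserveOriginals: false, // 2.0.0rc effectively behaves as if this were false
  crawlerEnabled: true
};

// Merge server-provided overrides over the defaults (hypothetical helper).
function resolveConfig(overrides = {}) {
  return { ...defaultConfig, ...overrides };
}

// With no override, originals are not preserved and no download is offered.
const cfg = resolveConfig();
console.log(cfg.preserveOriginals); // false
```

Under this pattern, flipping the default alone would only change what the UI *asks* for; the backend still has to have stored the original somewhere, which is why the maintainer says below that it would not solve the problem.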


@sochix commented on GitHub (May 3, 2018):

@andrea-ligios nope, it won't solve the problem


@andrea-ligios commented on GitHub (May 3, 2018):

@sochix, first of all, thank you for answering.
Just to be sure to understand what's happening here (I'm new to Ambar):

If I'm not mistaken:

  • Up to version 1.3.0, there was a script called ambar.py and a file called config.json which have now been deprecated in favor of a full docker-compose installation.
  • preserveOriginals was a parameter in the config.json file used to specify whether or not to save a binary copy of the file in MongoDB (in addition to the data extracted from it and stored in Elasticsearch).

My questions:

  1. Where is config.json gone in 2.0.0rc?
    If it's not available anymore, where can we specify whether to preserve or not the originals?
  2. If we don't preserve the originals, why do you consider a bug not having the download button?
    I mean, what should we be able to download, if the original file is gone? Are you referring to the other missing button, "TEXT"?

P.S: your product is really cool, thank you very much for releasing it!


@sochix commented on GitHub (May 3, 2018):

  1. Now all config comes from the docker-compose file through env vars. In 2.0.0rc we accidentally removed the `preserveOriginals` option.

  2. In the next release we will fix the issue with storing original files. We plan to serve downloads via the local crawler (not MongoDB). The TEXT button has been removed completely.
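A minimal sketch of what config-via-env-vars looks like in a docker-compose file (the service name, image tag, and variable name below are assumptions for illustration; the thread confirms only that `preserveOriginals` was accidentally dropped from 2.0.0rc, not what the variable is called):

```
# Hypothetical excerpt of docker-compose.yml: configuration now flows
# through environment variables instead of a config.json file.
services:
  webapi:
    image: ambar/webapi:2.0.0rc   # illustrative tag
    environment:
      # Variable name is an assumption; 2.0.0rc shipped without this option.
      - PRESERVE_ORIGINALS=true
```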


@andrea-ligios commented on GitHub (May 10, 2018):

@sochix : sorry to bother you, but is there any public roadmap we can look at?
Otherwise, do you have some (coarse-grained) time window for the next release?
Thanks in advance,
Cheers


@sochix commented on GitHub (May 10, 2018):

We are open source and Ambar is not our full-time job.

But we received sponsorship from IFIC.co.uk last week, so the next release will be at the end of next week.


@andrea-ligios commented on GitHub (May 10, 2018):

Great news! Thanks for your answer


@denis1482 commented on GitHub (May 16, 2018):

Fixed in 2.1.8


@andrea-ligios commented on GitHub (May 16, 2018):

Thank you @sochix , I've tested it this morning and it works like this:

  1. Uploading through a local crawler lets you download it from the UI.
  2. Uploading through the UI will NOT let you download anything.
  3. On MongoDB, only the extracted TEXT part is stored (which is stripped from Elasticsearch after the indexing but before storing it), NOT the actual file.

Have I got everything correct?
Again, great job 👍

@sochix commented on GitHub (May 17, 2018):

@andrea-ligios

  1. yes
  2. yes
  3. yes

Maybe you can help us with documenting Ambar?


@andrea-ligios commented on GitHub (May 21, 2018):

@sochix I need to learn it myself first;
as soon as I dig deeper, though, and assuming I find some free time, I could help.
Just tell me what you had in mind.
Cheers


@sochix commented on GitHub (May 22, 2018):

@andrea-ligios for example, you can help us fill in the FAQ on the front page.
