[GH-ISSUE #6] "Oops.... Something went wrong" during loading #5

Closed
opened 2026-02-27 15:54:24 +03:00 by kerem · 18 comments
Owner

Originally created by @agreenfield1 on GitHub (Apr 7, 2017).
Original GitHub issue: https://github.com/RD17/ambar/issues/6

It seems the API is not accessible, even though the installation completed without any apparent issue. While the page loads, I get the error "Oops.... Something went wrong" at the bottom. It looks like the ambar-webapi container is restarting every 5 minutes because it cannot connect to the ambar-es container?

```
andrew@onlyoffice:~$ sudo ./ambar.py start


______           ____     ______  ____
/\  _  \  /'\_/`\/\  _`\  /\  _  \/\  _`\
\ \ \L\ \/\      \ \ \L\ \ \ \L\ \ \ \L\ \
 \ \  __ \ \ \__\ \ \  _ <'\ \  __ \ \ ,  /
  \ \ \/\ \ \ \_/\ \ \ \L\ \ \ \/\ \ \ \ \
   \ \_\ \_\ \_\ \_\ \____/ \ \_\ \_\ \_\ \_\
    \/_/\/_/\/_/ \/_/\/___/   \/_/\/_/\/_/\/ /



Docker version 17.03.1-ce, build c6d412e
docker-compose version 1.11.2, build dfed245
vm.max_map_count = 262144
net.ipv4.ip_local_port_range = 15000 61000
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_max_syn_backlog = 2048
ambar_db_1 is up-to-date
ambar_es_1 is up-to-date
ambar_rabbit_1 is up-to-date
ambar_frontend_1 is up-to-date
ambar_webapi_1 is up-to-date
ambar_webapi-cache_1 is up-to-date
Waiting for Ambar to start...
Ambar is running on http://10.20.30.13:80
```

ambar-webapi container log output:

```
2017/04/07 05:08:51 Timeout after 5m0s waiting on dependencies to become available: [unix:///var/run/docker.sock http://es:9200]
2017/04/07 05:08:52 Waiting for host:
2017/04/07 05:08:52 Waiting for host: es:9200
2017/04/07 05:08:52 Connected to unix:///var/run/docker.sock
2017/04/07 05:13:52 Timeout after 5m0s waiting on dependencies to become available: [unix:///var/run/docker.sock http://es:9200]
2017/04/07 05:13:52 Waiting for host:
2017/04/07 05:13:52 Waiting for host: es:9200
2017/04/07 05:13:52 Connected to unix:///var/run/docker.sock
2017/04/07 05:18:52 Timeout after 5m0s waiting on dependencies to become available: [unix:///var/run/docker.sock http://es:9200]
2017/04/07 05:18:52 Waiting for host:
2017/04/07 05:18:52 Waiting for host: es:9200
2017/04/07 05:18:52 Connected to unix:///var/run/docker.sock
```

ambar-es container logs:

```
[2017-04-07T05:22:01,567][INFO ][o.e.n.Node               ] [BtkYnk-] stopping ...
[2017-04-07T05:22:01,633][INFO ][o.e.n.Node               ] [BtkYnk-] stopped
[2017-04-07T05:22:01,633][INFO ][o.e.n.Node               ] [BtkYnk-] closing ...
[2017-04-07T05:22:01,646][INFO ][o.e.n.Node               ] [BtkYnk-] closed
[2017-04-07T05:22:03,494][INFO ][o.e.n.Node               ] [] initializing ...
[2017-04-07T05:22:03,612][INFO ][o.e.e.NodeEnvironment    ] [BtkYnk-] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/onlyoffice--vg-root)]], net usable_space [34.7gb], net total_space [46.6gb], spins? [possibly], types [ext4]
[2017-04-07T05:22:03,612][INFO ][o.e.e.NodeEnvironment    ] [BtkYnk-] heap size [1007.3mb], compressed ordinary object pointers [true]
[2017-04-07T05:22:03,660][INFO ][o.e.n.Node               ] node name [BtkYnk-] derived from node ID [BtkYnk-rRXGLNCk4JZeisA]; set [node.name] to override
[2017-04-07T05:22:03,665][INFO ][o.e.n.Node               ] version[5.2.2], pid[1], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/4.4.0-72-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-04-07T05:22:05,239][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [aggs-matrix-stats]
[2017-04-07T05:22:05,239][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [ingest-common]
[2017-04-07T05:22:05,239][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [lang-expression]
[2017-04-07T05:22:05,239][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [lang-groovy]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [lang-mustache]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [lang-painless]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [percolator]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [reindex]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [transport-netty3]
[2017-04-07T05:22:05,240][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded module [transport-netty4]
[2017-04-07T05:22:05,242][INFO ][o.e.p.PluginsService     ] [BtkYnk-] loaded plugin [analysis-morphology]
[2017-04-07T05:22:05,395][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2017-04-07T05:22:08,149][INFO ][o.e.n.Node               ] initialized
[2017-04-07T05:22:08,150][INFO ][o.e.n.Node               ] [BtkYnk-] starting ...
[2017-04-07T05:22:08,258][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: f5:84:67:88:74:e6:c5:b2
[2017-04-07T05:22:08,326][INFO ][o.e.t.TransportService   ] [BtkYnk-] publish_address {172.19.0.3:9300}, bound_addresses {[::]:9300}
[2017-04-07T05:22:08,335][INFO ][o.e.b.BootstrapChecks    ] [BtkYnk-] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-07T05:22:11,400][INFO ][o.e.c.s.ClusterService   ] [BtkYnk-] new_master {BtkYnk-}{BtkYnk-rRXGLNCk4JZeisA}{bcr5fJbTS6WeNLWTn3-wbg}{172.19.0.3}{172.19.0.3:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-07T05:22:11,419][INFO ][o.e.h.HttpServer         ] [BtkYnk-] publish_address {172.19.0.3:9200}, bound_addresses {[::]:9200}
[2017-04-07T05:22:11,419][INFO ][o.e.n.Node               ] [BtkYnk-] started
[2017-04-07T05:22:11,669][INFO ][o.e.g.GatewayService     ] [BtkYnk-] recovered [2] indices into cluster_state
[2017-04-07T05:22:12,231][INFO ][o.e.c.r.a.AllocationService] [BtkYnk-] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[ambar_log_record_data][7]] ...]).
```
kerem 2026-02-27 15:54:24 +03:00
  • closed this issue
  • added the bug label

@sochix commented on GitHub (Apr 7, 2017):

Hello @agreenfield1 ! Can you please show your config.json and docker-compose.yml?

<!-- gh-comment-id:292462369 -->

@agreenfield1 commented on GitHub (Apr 8, 2017):

config.json:

```
{
    "dockerRepo": "ambar",
    "dockerComposeTemplate": "https://static.ambar.cloud/docker-compose.template.yml",
    "ocr": {
        "pdfSymbolsPerPageThreshold": 100,
        "pdfMaxPageCount": 5000
    },
    "es": {
        "containerSize": "2g",
        "heapSize": "1g"
    },
    "fe": {
        "local": {
            "host": "10.20.30.13",
            "port": "80",
            "protocol": "http"
        },
        "external": {
            "host": "10.20.30.13",
            "port": "80",
            "protocol": "http"
        }
    },
    "dataPath": "/opt/ambar",
    "db": {
        "cacheSizeGb": 2
    },
    "api": {
        "pipelineCount": 1,
        "local": {
            "host": "10.20.30.13",
            "port": "8004",
            "protocol": "http"
        },
        "analyticsToken": "cda4b0bb11a1f32aed7564b08c455992",
        "crawlerCount": 1,
        "defaultLangAnalyzer": "ambar_en",
        "auth": "none",
        "external": {
            "host": "10.20.30.13",
            "port": "8004",
            "protocol": "http"
        },
        "cacheSize": "1g"
    },
    "dropbox": {
        "redirectUri": "",
        "clientId": ""
    }
}
```

docker-compose.yml:

```
version: "2"
services:
  webapi:
    restart: always
    image: ambar/ambar-webapi:latest
    expose:
      - "8080"
    ports:
      - "8004:8080"
    environment:
      - db=mongodb://db:27017/ambar_data
      - fe=http://10.20.30.13:80
      - api=http://10.20.30.13:8004
      - es=http://es:9200
      - redis=webapi-cache
      - rabbit=amqp://10.20.30.13
      - mode=prod
      - pipelineCount=1
      - crawlerCount=1
      - dropboxClientId=
      - dropboxRedirectUri=
      - defaultLangAnalyzer=ambar_en
      - analyticsToken=cda4b0bb11a1f32aed7564b08c455992
      - auth=none
      - ocrPdfMaxPageCount=5000
      - ocrPdfSymbolsPerPageThreshold=100
    depends_on:
      - db
      - rabbit
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  webapi-cache:
    restart: always
    image: redis:alpine
    expose:
      - "6379"
    ports:
      - "6379:6379"
    depends_on:
      - webapi
    mem_limit: 1g
  frontend:
    image: ambar/ambar-frontend:latest
    ports:
      - "80:80"
    restart: always
    environment:
      - api=http://10.20.30.13:8004
  db:
    restart: always
    image: ambar/ambar-mongodb:latest
    environment:
      - cacheSizeGB=2
    volumes:
      - /opt/ambar/db:/data/db
    ports:
      - "27017:27017"
    expose:
      - "27017"
  es:
    image: ambar/ambar-es:latest
    restart: always
    expose:
      - "9200"
    ports:
      - "9200:9200"
    environment:
      - cluster.name=ambar-es
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - security.manager.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
    volumes:
      - /opt/ambar/es:/usr/share/elasticsearch/data
  rabbit:
    image: ambar/ambar-rabbit:latest
    hostname: ambar-rabbit
    ports:
      - 15672:15672
      - 5672:5672
    volumes:
      - /opt/ambar/rabbit:/var/lib/rabbitmq
```


<!-- gh-comment-id:292680063 -->

@isido993 commented on GitHub (Apr 11, 2017):

Here is the issue:
`2017/04/07 05:18:52 Timeout after 5m0s waiting on dependencies to become available: [unix:///var/run/docker.sock http://es:9200]`
Since it connected to the Docker socket (`2017/04/07 05:18:52 Connected to unix:///var/run/docker.sock`), the problem is that WebApi can't reach ES at es:9200 (the internal Docker network). And that's quite strange...
Please check whether ES is accessible at http://10.20.30.13:9200/_cat/indices and post the result here.
Thanks
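The check suggested above can be scripted. A minimal sketch, assuming the host IP from this thread (10.20.30.13) and that curl is installed; adjust the URL to your setup:

```shell
# Reachability check for Elasticsearch from the Docker host.
# The IP and port are the example values from this thread.
ES_URL="http://10.20.30.13:9200/_cat/indices"

# '--max-time 5' keeps the check short; '|| echo' prints a note instead of
# aborting the script when ES is down or unreachable.
curl -s --max-time 5 "$ES_URL" || echo "ES not reachable at $ES_URL"
```

If ES is up, this prints one line per index (as in the reply below from the reporter's machine); otherwise it prints the "not reachable" note.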

<!-- gh-comment-id:293244706 -->

@agreenfield1 commented on GitHub (Apr 11, 2017):

http://10.20.30.13:9200/_cat/indices gives me this:

```
yellow open ambar_log_record_data                                    Amli_EH2R-C3PCSIBC9Veg 10 1 10 0  46kb  46kb
yellow open ambar_file_data_d033e22ae348aeb5660fc2140aec35850c4da997 NxgsuuY2Thi48Mc9g-stWw  8 1  0 0 1.2kb 1.2kb
```
<!-- gh-comment-id:293416436 -->

@isido993 commented on GitHub (Apr 13, 2017):

So, ES is up and WebApi has access to it (since it successfully created two indexes).
To be honest, I'm quite out of guesses now... Did you try updating Docker and docker-compose?

<!-- gh-comment-id:293864590 -->

@agreenfield1 commented on GitHub (Apr 14, 2017):

Well, I got it working:

Updating to the following didn't help:

```
Docker version 17.04.0-ce, build 4845c56
docker-compose version 1.12.0, build b31ff33
```

I deleted the /opt/ambar directory and reinstalled, but it still didn't work. However, I noticed that afterwards http://10.20.30.13:9200/_cat/indices showed an empty page (no indices), again pointing to a communication problem between the webapi and es containers. In docker-compose.template.yml I replaced the following line in the webapi > environment section:
`- es=http://es:9200`
with
`- es=http://10.20.30.13:9200`

After stopping and starting, it worked.
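The same edit can be applied with sed. A minimal sketch on a stand-in file (the path /tmp/compose-snippet.yml and the IP are illustrative; point the command at your real docker-compose.yml or docker-compose.template.yml and your own host IP):

```shell
# Workaround from this thread: replace the internal Docker DNS name 'es'
# with the host IP in the webapi 'es' environment variable.
HOST_IP="10.20.30.13"   # example host IP from this thread

# Stand-in file so the sketch is self-contained.
cat > /tmp/compose-snippet.yml <<'EOF'
      - es=http://es:9200
EOF

# Swap the service name for the host IP, keeping a .bak backup of the original.
sed -i.bak "s|es=http://es:9200|es=http://${HOST_IP}:9200|" /tmp/compose-snippet.yml
cat /tmp/compose-snippet.yml
```

Remember to stop and start Ambar afterwards so the containers pick up the new value.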

<!-- gh-comment-id:294068638 -->

@isido993 commented on GitHub (Apr 14, 2017):

You're quite an enthusiast :)
Referring to ES by IP is not the best option: in that case all interaction with ES goes through the physical network adapter, which is not the most efficient path. That's why we have WebApi access ES via the internal Docker network (by referring to it as http://es:9200).
In your case, for some unknown reason, WebApi could not reach ES via the internal network; I honestly have no clue why this could happen. It looks like a bug in Docker or docker-compose. If you eventually figure out why this happened, please let us know.
Thanks!

<!-- gh-comment-id:294106999 -->

@agreenfield1 commented on GitHub (Apr 14, 2017):

Stubborn, more like it! One last update:
I changed docker-compose.template.yml back to
`- es=http://es:9200`
The problem returned after stopping/starting, as expected. I went into the ambar_webapi_1 container:
`sudo docker exec -it ambar_webapi_1 bash`
and installed httpie. I was able to connect to http://es:9200 with no issue from within the webapi container, even though the main problem was still occurring. Not sure what to make of that.
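The in-container test described above can be repeated without installing anything, since many minimal images ship wget. A hedged sketch (container name ambar_webapi_1 as in this thread; requires Docker on the machine running it, and guarded so it completes either way):

```shell
# Check whether the internal Docker DNS name 'es' is reachable from inside
# the webapi container. Guards keep the script from aborting on machines
# without Docker or without the container running.
if command -v docker >/dev/null 2>&1; then
  docker exec ambar_webapi_1 sh -c \
    'wget -qO- http://es:9200 || echo "es:9200 not reachable from inside the container"' \
    || echo "could not exec into ambar_webapi_1 (is it running?)"
else
  echo "docker is not available on this machine"
fi
```

If this prints the ES banner JSON while the webapi startup check still times out, the symptom matches what was observed here: the internal network works interactively even though the dependency wait fails.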

<!-- gh-comment-id:294174095 -->

@robert-mcdermott commented on GitHub (Apr 17, 2017):

I'm experiencing the same thing, here is how I got it working:

```
diff docker-compose.yml.broken docker-compose.yml.working
11c11
<       - db=mongodb://db:27017/ambar_data
---
>       - db=mongodb://172.17.64.108:27017/ambar_data
14c14
<       - es=http://es:9200
---
>       - es=http://172.17.64.108:9200
```
<!-- gh-comment-id:294527183 -->

@Panos512 commented on GitHub (Apr 24, 2017):

Hello,
I am having the same issue.
I tried replacing es and db with my localhost IP (127.0.0.1) in docker-compose.template.yml, and I also set the same IP in config.json as the local and external host for the api and fe.

http://127.0.0.1:9200/_cat/indices is giving me:

```
yellow open ambar_log_record_data zRU8iE2QTUGwkQKmF3q0dQ 10 1 0 0 1.5kb 1.5kb
```

and the logs seem fine:

```
panos@panos-MS-7823:~/ambar$ sudo docker logs ambar_webapi_1
2017/04/24 14:33:12 Waiting for host:
2017/04/24 14:33:12 Waiting for host: 127.0.0.1:9200
2017/04/24 14:33:12 Connected to unix:///var/run/docker.sock
```

```
panos@panos-MS-7823:~/ambar$ sudo docker logs ambar_es_1
[2017-04-24T14:33:10,124][INFO ][o.e.n.Node               ] [] initializing ...
[2017-04-24T14:33:10,225][INFO ][o.e.e.NodeEnvironment    ] [gWyPTfh] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb5)]], net usable_space [199.2gb], net total_space [219.1gb], spins? [possibly], types [ext4]
[2017-04-24T14:33:10,225][INFO ][o.e.e.NodeEnvironment    ] [gWyPTfh] heap size [990.7mb], compressed ordinary object pointers [true]
[2017-04-24T14:33:10,242][INFO ][o.e.n.Node               ] node name [gWyPTfh] derived from node ID [gWyPTfhfTV2O2kUDTqszuQ]; set [node.name] to override
[2017-04-24T14:33:10,244][INFO ][o.e.n.Node               ] version[5.2.2], pid[1], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/4.10.0-19-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-04-24T14:33:11,436][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [aggs-matrix-stats]
[2017-04-24T14:33:11,436][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [ingest-common]
[2017-04-24T14:33:11,436][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [lang-expression]
[2017-04-24T14:33:11,436][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [lang-groovy]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [lang-mustache]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [lang-painless]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [percolator]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [reindex]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [transport-netty3]
[2017-04-24T14:33:11,437][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded module [transport-netty4]
[2017-04-24T14:33:11,438][INFO ][o.e.p.PluginsService     ] [gWyPTfh] loaded plugin [analysis-morphology]
[2017-04-24T14:33:11,588][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead
[2017-04-24T14:33:13,893][INFO ][o.e.n.Node               ] initialized
[2017-04-24T14:33:13,893][INFO ][o.e.n.Node               ] [gWyPTfh] starting ...
[2017-04-24T14:33:13,953][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 2a:85:80:28:99:90:8a:b3
[2017-04-24T14:33:14,011][INFO ][o.e.t.TransportService   ] [gWyPTfh] publish_address {172.18.0.3:9300}, bound_addresses {0.0.0.0:9300}
[2017-04-24T14:33:14,015][INFO ][o.e.b.BootstrapChecks    ] [gWyPTfh] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-24T14:33:17,054][INFO ][o.e.c.s.ClusterService   ] [gWyPTfh] new_master {gWyPTfh}{gWyPTfhfTV2O2kUDTqszuQ}{44qKOUUCQz2hZBM01ZZS1w}{172.18.0.3}{172.18.0.3:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-24T14:33:17,084][INFO ][o.e.h.HttpServer         ] [gWyPTfh] publish_address {172.18.0.3:9200}, bound_addresses {0.0.0.0:9200}
[2017-04-24T14:33:17,085][INFO ][o.e.n.Node               ] [gWyPTfh] started
[2017-04-24T14:33:17,492][INFO ][o.e.g.GatewayService     ] [gWyPTfh] recovered [1] indices into cluster_state
[2017-04-24T14:33:18,941][INFO ][o.e.c.r.a.AllocationService] [gWyPTfh] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[ambar_log_record_data][8]] ...]).
```

Any ideas what could be wrong?

Thank you in advance.

<!-- gh-comment-id:296689351 -->
@sochix commented on GitHub (Apr 25, 2017):

Hello @Panos512! Can you please post your machine configuration? What OS is it running?
@Panos512 commented on GitHub (May 4, 2017):

@sochix I was trying to run it on `Ubuntu 17.04`. When I tried on another machine running `Ubuntu 16.04`, everything worked perfectly. `Docker` and `docker-compose` versions were the latest on both installations, and I can't see anything different apart from the OS version. Anyway, I managed to get it working perfectly :)
@sochix commented on GitHub (May 4, 2017):

OK, nice! Ubuntu versions newer than 16.04 have some issues with Docker; that's the problem.
@davemanster commented on GitHub (Jun 25, 2017):

I am on Ubuntu 16.04 and am having the same issue. I have tried replacing `http://es:9200` with `http://IP:9200` with no success.
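When the `es:9200` hostname fails but a raw IP works, name resolution on the Compose network is the usual suspect. A sketch of how to find the address to substitute (the container name `ambar_es_1` is taken from the startup output above and may differ on your install; requires a running Docker daemon):

```shell
# Print the Elasticsearch container's IP on its Docker network
# ("ambar_es_1" is the name from the startup output above; adjust if yours differs).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ambar_es_1

# Verify Elasticsearch answers on that address (a healthy node returns a JSON banner):
curl -s http://<that IP>:9200
```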
@davemanster commented on GitHub (Jun 25, 2017):

Please close. After three minutes of not working, it is now working with `http://IP:9200`.

Thanks
@sochix commented on GitHub (Jun 25, 2017):

@davemanster nice!
@andychoi commented on GitHub (Sep 23, 2017):

Same error here. I followed https://blog.ambar.cloud/ambar-installation-step-by-step-guide-2/
@sochix commented on GitHub (Sep 28, 2017):

@andychoi please try again. It seems this solved the problem for @davemanster.