[GH-ISSUE #214] ES stuck in restarting state #209

Closed
opened 2026-02-27 15:55:38 +03:00 by kerem · 3 comments

Originally created by @zx2slow on GitHub (Jan 18, 2019).
Original GitHub issue: https://github.com/RD17/ambar/issues/214

On a new install on CentOS 7, the ES container appears to be stuck in a restarting state - is there an issue with my configuration?

docker-compose ps

    Name                   Command                  State                                   Ports
-----------------------------------------------------------------------------------------------------------------------------
home_db_1       /entrypoint.sh                   Up (healthy)   27017/tcp
home_es_1       /docker-entrypoint.sh elas ...   Restarting
home_rabbit_1   docker-entrypoint.sh rabbi ...   Up (healthy)   15671/tcp, 15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 5672/tcp
home_redis_1    docker-entrypoint.sh redis ...   Up (healthy)   6379/tcp
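When a service loops in `Restarting` like this, the exit code and the last log lines of that one service usually identify the cause. A quick diagnostic sketch (the container name `home_es_1` is taken from the `docker-compose ps` output above; adjust to your project name):

```shell
# Show the exit code and error of the last failed run of the ES container
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' home_es_1

# Tail only the ES service's logs instead of the whole stack
docker-compose logs --tail=40 es
```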

docker-compose.yml

version: "2.1"
networks:
  internal_network:
services:
  db:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-mongodb:latest
    environment:
      - cacheSizeGB=2
    volumes:
      - /opt/ambar/db:/data/db
    expose:
      - "27017"
  es:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-es:latest
    expose:
      - "9200"
    environment:
      - cluster.name=ambar-es
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - /opt/ambar/es:/usr/share/elasticsearch/data
  rabbit:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-rabbit:latest
    hostname: rabbit
    expose:
      - "15672"
      - "5672"
    volumes:
      - /opt/ambar/rabbit:/var/lib/rabbitmq
  redis:
    restart: always
    sysctls:
      - net.core.somaxconn=1024
    networks:
      - internal_network
    image: ambar/ambar-redis:latest
    expose:
      - "6379"
  serviceapi:
    depends_on:
      redis:
        condition: service_healthy
      rabbit:
        condition: service_healthy
      es:
        condition: service_healthy
      db:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-serviceapi:latest
    expose:
      - "8081"
    environment:
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - rabbitHost=amqp://rabbit
      - langAnalyzer=ambar_en
  webapi:
    depends_on:
      serviceapi:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-webapi:latest
    expose:
      - "8080"
    ports:
      - "8080:8080"
    environment:
      - uiLang=en
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - serviceApiUrl=http://serviceapi:8081
      - rabbitHost=amqp://rabbit
  frontend:
    depends_on:
      webapi:
        condition: service_healthy
    image: ambar/ambar-frontend:latest
    restart: always
    networks:
      - internal_network
    ports:
      - "80:80"
    expose:
      - "80"
    environment:
      - api=http://192.168.1.136:8080
  pipeline0:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-pipeline:latest
    restart: always
    networks:
      - internal_network
    environment:
      - id=0
      - api_url=http://serviceapi:8081
      - rabbit_host=amqp://rabbit
  my-files:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-local-crawler
    restart: always
    networks:
      - internal_network
    expose:
      - "8082"
    environment:
      - name=/my-files
    volumes:
      - /mnt:/usr/data

docker-compose logs | tail -40

es_1          | [2019-01-15T19:22:07,765][INFO ][o.e.n.Node               ] [Fo-B3LF] starting ...
es_1          | [2019-01-15T19:22:07,908][INFO ][o.e.t.TransportService   ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1          | [2019-01-15T19:22:07,916][INFO ][o.e.b.BootstrapChecks    ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1          | ERROR: [1] bootstrap checks failed
es_1          | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1          | [2019-01-15T19:22:07,929][INFO ][o.e.n.Node               ] [Fo-B3LF] stopping ...
es_1          | [2019-01-15T19:22:07,942][INFO ][o.e.n.Node               ] [Fo-B3LF] stopped
es_1          | [2019-01-15T19:22:07,943][INFO ][o.e.n.Node               ] [Fo-B3LF] closing ...
es_1          | [2019-01-15T19:22:07,951][INFO ][o.e.n.Node               ] [Fo-B3LF] closed
es_1          | [2019-01-15T19:23:09,705][INFO ][o.e.n.Node               ] [] initializing ...
es_1          | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment    ] [Fo-B3LF] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [10.9gb], net total_space [16.9gb], spins? [possibly], types [xfs]
es_1          | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment    ] [Fo-B3LF] heap size [1.9gb], compressed ordinary object pointers [true]
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] node name [Fo-B3LF] derived from node ID [Fo-B3LF5T_e6PEDWj5BU2Q]; set [node.name] to override
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] version[5.6.3], pid[1], build[1a2f265/2017-10-06T20:33:39.012Z], OS[Linux/3.10.0-957.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms2g, -Xmx2g, -Des.path.home=/usr/share/elasticsearch]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [aggs-matrix-stats]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [ingest-common]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-expression]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-groovy]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-mustache]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-painless]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [parent-join]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [percolator]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [reindex]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [transport-netty3]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [transport-netty4]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-morphology]
es_1          | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-smartcn]
es_1          | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-stempel]
es_1          | [2019-01-15T19:23:12,252][INFO ][o.e.d.DiscoveryModule    ] [Fo-B3LF] using discovery type [zen]
es_1          | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node               ] initialized
es_1          | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node               ] [Fo-B3LF] starting ...
es_1          | [2019-01-15T19:23:13,010][INFO ][o.e.t.TransportService   ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1          | [2019-01-15T19:23:13,017][INFO ][o.e.b.BootstrapChecks    ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1          | ERROR: [1] bootstrap checks failed
es_1          | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1          | [2019-01-15T19:23:13,023][INFO ][o.e.n.Node               ] [Fo-B3LF] stopping ...
es_1          | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node               ] [Fo-B3LF] stopped
es_1          | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node               ] [Fo-B3LF] closing ...
es_1          | [2019-01-15T19:23:13,047][INFO ][o.e.n.Node               ] [Fo-B3LF] closed
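The log pinpoints the cause: the Elasticsearch bootstrap check fails because the host's `vm.max_map_count` is 65530, below the required 262144. This sysctl is not namespaced, so it cannot be raised from inside the container (the `sysctls:` key in docker-compose.yml won't help); it must be set on the CentOS 7 host. A typical fix, per the Elasticsearch virtual-memory documentation:

```shell
# Raise the limit for the running kernel (lost on reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist it across reboots via a sysctl.d drop-in
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
```

After that, restarting the stack (or letting the `restart: always` policy retry) should let `es` pass the bootstrap checks, which in turn unblocks the services waiting on `condition: service_healthy`.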
kerem closed this issue 2026-02-27 15:55:39 +03:00
@ian-emsens-sb commented on GitHub (Dec 2, 2019):

@zx2slow I'm experiencing the same thing, any resolution you remember?

@zx2slow commented on GitHub (Dec 2, 2019):

@ian-emsens-sb

See the section for setting up the environment:
https://ambar.cloud/docs/installation-docker/

Something was not set up initially on my system.

@ian-emsens-sb commented on GitHub (Dec 2, 2019):

Thanks, for me the issue was a combination of:

- memory shortage (resolved by upgrading to a `t2.small` instance and adjusting the docker-compose.yml file)
- and the `ERROR: [1] bootstrap checks failed` error you're also experiencing in the logs above (resolved by https://github.com/docker-library/elasticsearch/issues/111)
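For the memory-shortage side, one common adjustment is shrinking the ES heap in docker-compose.yml. A sketch only, with illustrative values (ES 5.x wants `-Xms` equal to `-Xmx`, and roughly half of RAM left free for the OS filesystem cache):

```yaml
# Sketch: smaller heap for low-memory hosts such as a 2 GB instance.
# The 512m figure is an assumption, not a recommendation from this issue.
es:
  environment:
    - cluster.name=ambar-es
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
```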