[GH-ISSUE #192] ES keep restarting, won't process any files #188

Closed
opened 2026-02-27 15:55:32 +03:00 by kerem · 4 comments

Originally created by @Triangulum9r on GitHub (Oct 11, 2018).
Original GitHub issue: https://github.com/RD17/ambar/issues/192

Elasticsearch keeps restarting. The webapi log shows:

```
webapi_1 | Catastrophic failure! { Error: read ECONNRESET
webapi_1 |     at _errnoException (util.js:1022:11)
webapi_1 |     at TCP.onread (net.js:628:25)
webapi_1 |   cause: { Error: read ECONNRESET
webapi_1 |     at _errnoException (util.js:1022:11)
webapi_1 |     at TCP.onread (net.js:628:25)
webapi_1 |     code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' },
webapi_1 |   isOperational: true,
webapi_1 |   code: 'ECONNRESET',
webapi_1 |   errno: 'ECONNRESET',
webapi_1 |   syscall: 'read' }
webapi_1 | Catastrophic failure! { Error: read ECONNRESET
webapi_1 |     at _errnoException (util.js:1022:11)
webapi_1 |     at TCP.onread (net.js:628:25)
webapi_1 |   cause: { Error: read ECONNRESET
webapi_1 |     at _errnoException (util.js:1022:11)
webapi_1 |     at TCP.onread (net.js:628:25)
webapi_1 |     code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' },
webapi_1 |   isOperational: true,
webapi_1 |   code: 'ECONNRESET',
webapi_1 |   errno: 'ECONNRESET',
webapi_1 |   syscall: 'read' }
webapi_1 | Started on :::8080
```
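(The webapi ECONNRESET is almost certainly a symptom rather than the cause: webapi's connection to Elasticsearch gets reset each time the es container dies. A quick way to see which container is actually failing; the service names are the ones from the compose file below:)

```shell
# Show container state -- a flapping service shows up as "Restarting"
docker-compose ps

# Tail the failing container's own log instead of webapi's
docker-compose logs --tail=50 es
docker-compose logs --tail=50 rabbit
```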

RabbitMQ is crashing too:

```
2018-10-11 19:35:52 Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces
2018-10-11 19:35:52 crash_report
  initial_call: {auth,init,['Argument__1']}
  pid: <0.46.0>
  registered_name: []
  error_info: {exit,{"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,352}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
  ancestors: [net_sup,kernel_sup,<0.34.0>]
  messages: []
  links: [<0.44.0>]
  dictionary: []
  trap_exit: true
  status: running
  heap_size: 610
  stack_size: 27
  reductions: 668
2018-10-11 19:35:52 supervisor_report
  supervisor: {local,net_sup}
  errorContext: start_error
  reason: {"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
  offender: [{pid,undefined},{id,auth},{mfargs,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]
2018-10-11 19:35:52 supervisor_report
  supervisor: {local,kernel_sup}
  errorContext: start_error
  reason: {shutdown,{failed_to_start_child,auth,{"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}}}
  offender: [{pid,undefined},{id,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]
2018-10-11 19:35:52 crash_report
  initial_call: {application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}
  pid: <0.33.0>
  registered_name: []
  error_info: {exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,134}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
  ancestors: [<0.32.0>]
  messages: [{'EXIT',<0.34.0>,normal}]
  links: [<0.32.0>,<0.31.0>]
  dictionary: []
  trap_exit: true
  status: running
  heap_size: 987
  stack_size: 27
  reductions: 175
2018-10-11 19:35:52 std_info
  application: kernel
  exited: {{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces",[{auth,init_cookie,0,[{file,"auth.erl"},{line,286}]},{auth,init,1,[{file,"auth.erl"},{line,140}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}}}}},{kernel,start,[normal,[]]}}
  type: permanent
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{\"Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces\",[{auth,init_cookie,0,[{file,\"auth.erl\"},{line,286}]},{auth,init,1,[{file,\"auth.erl\"},{line,140}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,247}]}]}}}}},{kernel,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,auth,{"Error when reading /var/lib/rabbitmq

Crash dump is being written to: erl_crash.dump...
/usr/lib/rabbitmq/bin/rabbitmq-server: 51: /usr/lib/rabbitmq/bin/rabbitmq-server: cannot create /var/lib/rabbitmq/mnesia/rabbit@rabbit.pid: Permission denied
Failed to write pid file: /var/lib/rabbitmq/mnesia/rabbit@rabbit.pid

=INFO REPORT==== 11-Oct-2018::19:36:04 ===
Starting RabbitMQ 3.6.16 on Erlang 19.2.1
Copyright (C) 2007-2018 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/

              RabbitMQ 3.6.16. Copyright (C) 2007-2018 Pivotal Software, Inc.
  ##  ##      Licensed under the MPL. See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: tty
  ######  ##        tty
  ##########
              Starting broker...

=INFO REPORT==== 11-Oct-2018::19:36:04 ===
node           : rabbit@rabbit
home dir       : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash    : s57KxGNCGYKGa241b3lTgg==
log            : tty
sasl log       : tty
database dir   : /var/lib/rabbitmq/mnesia/rabbit@rabbit

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
Memory high watermark set to 3193 MiB (3348381696 bytes) of 7983 MiB (8370954240 bytes) total

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
Enabling free disk space monitoring

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
Disk free limit set to 50MB

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
Limiting to approx 1048476 file handles (943626 sockets)

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
FHC read buffering:  OFF
FHC write buffering: ON

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
Waiting for Mnesia tables for 30000 ms, 9 retries left

=CRASH REPORT==== 11-Oct-2018::19:36:06 ===
crasher:
  initial call: application_master:init/4
  pid: <0.158.0>
  registered_name: []
  exception exit: {{could_not_write_file,
                       "/var/lib/rabbitmq/mnesia/rabbit@rabbit/cluster_nodes.config",
                       eacces},
                   {rabbit,start,[normal,[]]}}
    in function application_master:init/4 (application_master.erl, line 134)
  ancestors: [<0.156.0>]
  messages: [{'EXIT',<0.160.0>,normal}]
  links: [<0.156.0>,<0.31.0>]
  dictionary: []
  trap_exit: true
  status: running
  heap_size: 1598
  stack_size: 27
  reductions: 98
neighbours:

=INFO REPORT==== 11-Oct-2018::19:36:06 ===
application: rabbit
exited: {{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit@rabbit/cluster_nodes.config",
          eacces},
         {rabbit,start,[normal,[]]}}
type: transient
2018-10-11 19:36:07 Error in process ~p with exit value:~n~p~n
<0.3.0>
{badarg,[{ets,lookup,[ac_tab,{env,rabbit,error_logger}],[]},{application_controller,get_env,2,[{file,"application_controller.erl"},{line,332}]},{rabbit,log_location,1,[{file,"src/rabbit.erl"},{line,893}]},{rabbit,boot_error,2,[{file,"src/rabbit.erl"},{line,786}]},{rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,430}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{could_not_write_file,\"/var/lib/rabbitmq/mnesia/rabbit@rabbit/cluster_nodes.config\",eacces},{rabbit,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{could_not_write_file,"/var/lib/rabbitmq/mnesia/rabbit@rabbit/cluster_nodes.config",eacces},{rabbit,start,[normal,[]]

Crash dump is being written to: erl_crash.dump...
=ERROR REPORT==== 11-Oct-2018::19:36:18 ===
Mnesia(rabbit@rabbit): ** ERROR ** (could not write core file: eacces)
** FATAL ** mnesia_tm crashed: {"Cannot read schema",
                                "/var/lib/rabbitmq/mnesia/rabbit@rabbit/schema.DAT",
                                {error,
                                    {file_error,
                                        "/var/lib/rabbitmq/mnesia/rabbit@rabbit/schema.DAT",
                                        eacces}}} state: [<0.123.0>]
```
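(Every RabbitMQ failure above is the same `eacces` — permission denied — on the bind-mounted `/var/lib/rabbitmq`. A minimal sketch of the usual fix, assuming the host path from the compose file below, and assuming uid 999 is the in-container `rabbitmq` user — verify with `docker-compose exec rabbit id` before running:)

```shell
docker-compose stop rabbit

# Hand the broker's data directory to the in-container rabbitmq user
# (999:999 is an assumption -- check the uid inside your image first)
sudo chown -R 999:999 ~/midas/ambar
# Erlang refuses to start if the cookie is readable by other users
sudo chmod 400 ~/midas/ambar/.erlang.cookie

docker-compose up -d rabbit
```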

Docker Compose:
```
version: "2.1"
networks:
  internal_network:
services:
  db:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-mongodb:latest
    environment:
      - cacheSizeGB=2
    volumes:
      - ~/midas/ambar:/data/db
    expose:
      - "27017"
    ports:
      - "27017:27017"
  es:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-es:latest
    expose:
      - "9200"
    environment:
      - cluster.name=ambar-es
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - ~/midas/ambar:/usr/share/elasticsearch/data
  rabbit:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-rabbit:latest
    hostname: rabbit
    expose:
      - "15672"
      - "5672"
    volumes:
      - ~/midas/ambar:/var/lib/rabbitmq
  redis:
    restart: always
    sysctls:
      - net.core.somaxconn=1024
    networks:
      - internal_network
    image: ambar/ambar-redis:latest
    expose:
      - "6379"
  serviceapi:
    depends_on:
      redis:
        condition: service_healthy
      rabbit:
        condition: service_healthy
      es:
        condition: service_healthy
      db:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-serviceapi:latest
    expose:
      - "8081"
    environment:
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - rabbitHost=amqp://rabbit
      - langAnalyzer=ambar_en
  webapi:
    depends_on:
      serviceapi:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-webapi:latest
    expose:
      - "8080"
    ports:
      - "8080:8080"
    environment:
      - uiLang=en
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - serviceApiUrl=http://serviceapi:8081
      - rabbitHost=amqp://rabbit
  frontend:
    depends_on:
      webapi:
        condition: service_healthy
    # image: ambar/ambar-frontend:latest
    build:
      context: ./FrontEnd
      dockerfile: Dockerfile
    restart: always
    networks:
      - internal_network
    ports:
      - "80:80"
    expose:
      - "80"
    environment:
      - api=http://127.0.0.1:8080
  pipeline0:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-pipeline:latest
    restart: always
    networks:
      - internal_network
    environment:
      - id=0
      - api_url=http://serviceapi:8081
      - rabbit_host=amqp://rabbit
```
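(Note that `db`, `es`, and `rabbit` all bind-mount the *same* host directory `~/midas/ambar`, so MongoDB, Elasticsearch, and RabbitMQ write into each other's data and fight over file ownership — consistent with the `eacces` errors above. A sketch of one per-service layout; the subdirectory names are illustrative:)

```shell
# Create one host directory per stateful service instead of sharing one
mkdir -p ~/midas/ambar/db ~/midas/ambar/es ~/midas/ambar/rabbit

# Then point each volume at its own subdirectory in docker-compose.yml:
#   db:     - ~/midas/ambar/db:/data/db
#   es:     - ~/midas/ambar/es:/usr/share/elasticsearch/data
#   rabbit: - ~/midas/ambar/rabbit:/var/lib/rabbitmq
```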

kerem closed this issue 2026-02-27 15:55:33 +03:00

@Triangulum9r commented on GitHub (Oct 11, 2018):

Figured it out myself: I deleted and recreated the folder that the volumes were pointing to. This fixed the issues.

<!-- gh-comment-id:429097086 -->

@ajeebkp23 commented on GitHub (Oct 17, 2018):

@mas-dse-juremigi You may close this issue if your issue is resolved.

<!-- gh-comment-id:430594899 -->

@zx2slow commented on GitHub (Jan 15, 2019):

I am seeing a similar issue on a new install:

**docker-compose ps**

```
    Name                   Command                  State                                   Ports
-----------------------------------------------------------------------------------------------------------------------------
home_db_1       /entrypoint.sh                   Up (healthy)   27017/tcp
home_es_1       /docker-entrypoint.sh elas ...   Restarting
home_rabbit_1   docker-entrypoint.sh rabbi ...   Up (healthy)   15671/tcp, 15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 5672/tcp
home_redis_1    docker-entrypoint.sh redis ...   Up (healthy)   6379/tcp
```

**docker-compose.yml**

```
version: "2.1"
networks:
  internal_network:
services:
  db:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-mongodb:latest
    environment:
      - cacheSizeGB=2
    volumes:
      - /opt/ambar/db:/data/db
    expose:
      - "27017"
  es:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-es:latest
    expose:
      - "9200"
    environment:
      - cluster.name=ambar-es
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - /opt/ambar/es:/usr/share/elasticsearch/data
  rabbit:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-rabbit:latest
    hostname: rabbit
    expose:
      - "15672"
      - "5672"
    volumes:
      - /opt/ambar/rabbit:/var/lib/rabbitmq
  redis:
    restart: always
    sysctls:
      - net.core.somaxconn=1024
    networks:
      - internal_network
    image: ambar/ambar-redis:latest
    expose:
      - "6379"
  serviceapi:
    depends_on:
      redis:
        condition: service_healthy
      rabbit:
        condition: service_healthy
      es:
        condition: service_healthy
      db:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-serviceapi:latest
    expose:
      - "8081"
    environment:
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - rabbitHost=amqp://rabbit
      - langAnalyzer=ambar_en
  webapi:
    depends_on:
      serviceapi:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-webapi:latest
    expose:
      - "8080"
    ports:
      - "8080:8080"
    environment:
      - uiLang=en
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - serviceApiUrl=http://serviceapi:8081
      - rabbitHost=amqp://rabbit
  frontend:
    depends_on:
      webapi:
        condition: service_healthy
    image: ambar/ambar-frontend:latest
    restart: always
    networks:
      - internal_network
    ports:
      - "80:80"
    expose:
      - "80"
    environment:
      - api=http://192.168.1.136:8080
  pipeline0:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-pipeline:latest
    restart: always
    networks:
      - internal_network
    environment:
      - id=0
      - api_url=http://serviceapi:8081
      - rabbit_host=amqp://rabbit
  my-files:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-local-crawler
    restart: always
    networks:
      - internal_network
    expose:
      - "8082"
    environment:
      - name=/my-files
    volumes:
      - /mnt:/usr/data
```

**# docker-compose logs | tail -40**

```
es_1          | [2019-01-15T19:22:07,765][INFO ][o.e.n.Node               ] [Fo-B3LF] starting ...
es_1          | [2019-01-15T19:22:07,908][INFO ][o.e.t.TransportService   ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1          | [2019-01-15T19:22:07,916][INFO ][o.e.b.BootstrapChecks    ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1          | ERROR: [1] bootstrap checks failed
es_1          | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1          | [2019-01-15T19:22:07,929][INFO ][o.e.n.Node               ] [Fo-B3LF] stopping ...
es_1          | [2019-01-15T19:22:07,942][INFO ][o.e.n.Node               ] [Fo-B3LF] stopped
es_1          | [2019-01-15T19:22:07,943][INFO ][o.e.n.Node               ] [Fo-B3LF] closing ...
es_1          | [2019-01-15T19:22:07,951][INFO ][o.e.n.Node               ] [Fo-B3LF] closed
es_1          | [2019-01-15T19:23:09,705][INFO ][o.e.n.Node               ] [] initializing ...
es_1          | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment    ] [Fo-B3LF] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [10.9gb], net total_space [16.9gb], spins? [possibly], types [xfs]
es_1          | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment    ] [Fo-B3LF] heap size [1.9gb], compressed ordinary object pointers [true]
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] node name [Fo-B3LF] derived from node ID [Fo-B3LF5T_e6PEDWj5BU2Q]; set [node.name] to override
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] version[5.6.3], pid[1], build[1a2f265/2017-10-06T20:33:39.012Z], OS[Linux/3.10.0-957.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
es_1          | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms2g, -Xmx2g, -Des.path.home=/usr/share/elasticsearch]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [aggs-matrix-stats]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [ingest-common]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-expression]
es_1          | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-groovy]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-mustache]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [lang-painless]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [parent-join]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [percolator]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [reindex]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [transport-netty3]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded module [transport-netty4]
es_1          | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-morphology]
es_1          | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-smartcn]
es_1          | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService     ] [Fo-B3LF] loaded plugin [analysis-stempel]
es_1          | [2019-01-15T19:23:12,252][INFO ][o.e.d.DiscoveryModule    ] [Fo-B3LF] using discovery type [zen]
es_1          | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node               ] initialized
es_1          | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node               ] [Fo-B3LF] starting ...
es_1          | [2019-01-15T19:23:13,010][INFO ][o.e.t.TransportService   ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1          | [2019-01-15T19:23:13,017][INFO ][o.e.b.BootstrapChecks    ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1          | ERROR: [1] bootstrap checks failed
es_1          | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1          | [2019-01-15T19:23:13,023][INFO ][o.e.n.Node               ] [Fo-B3LF] stopping ...
es_1          | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node               ] [Fo-B3LF] stopped
es_1          | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node               ] [Fo-B3LF] closing ...
es_1          | [2019-01-15T19:23:13,047][INFO ][o.e.n.Node               ] [Fo-B3LF] closed
```
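(This failure differs from the original issue: the bootstrap check names the fix directly. `vm.max_map_count` is a kernel setting, so it has to be raised on the Docker *host*, not inside the container. A sketch; 262144 is the minimum Elasticsearch asks for:)

```shell
# Raise the limit immediately
sudo sysctl -w vm.max_map_count=262144

# Make it survive reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf

# Then restart the es container
docker-compose restart es
```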
<!-- gh-comment-id:454519812 --> @zx2slow commented on GitHub (Jan 15, 2019):

I am seeing a similar issue on a new install:

**docker-compose ps**

```
     Name                  Command                State                                   Ports
-----------------------------------------------------------------------------------------------------------------------------
home_db_1       /entrypoint.sh                   Up (healthy)   27017/tcp
home_es_1       /docker-entrypoint.sh elas ...   Restarting
home_rabbit_1   docker-entrypoint.sh rabbi ...   Up (healthy)   15671/tcp, 15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 5672/tcp
home_redis_1    docker-entrypoint.sh redis ...   Up (healthy)   6379/tcp
```

**docker-compose.yml**

```yaml
version: "2.1"

networks:
  internal_network:

services:
  db:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-mongodb:latest
    environment:
      - cacheSizeGB=2
    volumes:
      - /opt/ambar/db:/data/db
    expose:
      - "27017"
  es:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-es:latest
    expose:
      - "9200"
    environment:
      - cluster.name=ambar-es
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - /opt/ambar/es:/usr/share/elasticsearch/data
  rabbit:
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-rabbit:latest
    hostname: rabbit
    expose:
      - "15672"
      - "5672"
    volumes:
      - /opt/ambar/rabbit:/var/lib/rabbitmq
  redis:
    restart: always
    sysctls:
      - net.core.somaxconn=1024
    networks:
      - internal_network
    image: ambar/ambar-redis:latest
    expose:
      - "6379"
  serviceapi:
    depends_on:
      redis:
        condition: service_healthy
      rabbit:
        condition: service_healthy
      es:
        condition: service_healthy
      db:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-serviceapi:latest
    expose:
      - "8081"
    environment:
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - rabbitHost=amqp://rabbit
      - langAnalyzer=ambar_en
  webapi:
    depends_on:
      serviceapi:
        condition: service_healthy
    restart: always
    networks:
      - internal_network
    image: ambar/ambar-webapi:latest
    expose:
      - "8080"
    ports:
      - "8080:8080"
    environment:
      - uiLang=en
      - mongoDbUrl=mongodb://db:27017/ambar_data
      - elasticSearchUrl=http://es:9200
      - redisHost=redis
      - redisPort=6379
      - serviceApiUrl=http://serviceapi:8081
      - rabbitHost=amqp://rabbit
  frontend:
    depends_on:
      webapi:
        condition: service_healthy
    image: ambar/ambar-frontend:latest
    restart: always
    networks:
      - internal_network
    ports:
      - "80:80"
    expose:
      - "80"
    environment:
      - api=http://192.168.1.136:8080
  pipeline0:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-pipeline:latest
    restart: always
    networks:
      - internal_network
    environment:
      - id=0
      - api_url=http://serviceapi:8081
      - rabbit_host=amqp://rabbit
  my-files:
    depends_on:
      serviceapi:
        condition: service_healthy
    image: ambar/ambar-local-crawler
    restart: always
    networks:
      - internal_network
    expose:
      - "8082"
    environment:
      - name=/my-files
    volumes:
      - /mnt:/usr/data
```

**# docker-compose logs | tail -40**

```
es_1 | [2019-01-15T19:22:07,765][INFO ][o.e.n.Node ] [Fo-B3LF] starting ...
es_1 | [2019-01-15T19:22:07,908][INFO ][o.e.t.TransportService ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1 | [2019-01-15T19:22:07,916][INFO ][o.e.b.BootstrapChecks ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1 | ERROR: [1] bootstrap checks failed
es_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1 | [2019-01-15T19:22:07,929][INFO ][o.e.n.Node ] [Fo-B3LF] stopping ...
es_1 | [2019-01-15T19:22:07,942][INFO ][o.e.n.Node ] [Fo-B3LF] stopped
es_1 | [2019-01-15T19:22:07,943][INFO ][o.e.n.Node ] [Fo-B3LF] closing ...
es_1 | [2019-01-15T19:22:07,951][INFO ][o.e.n.Node ] [Fo-B3LF] closed
es_1 | [2019-01-15T19:23:09,705][INFO ][o.e.n.Node ] [] initializing ...
es_1 | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment ] [Fo-B3LF] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [10.9gb], net total_space [16.9gb], spins? [possibly], types [xfs]
es_1 | [2019-01-15T19:23:09,775][INFO ][o.e.e.NodeEnvironment ] [Fo-B3LF] heap size [1.9gb], compressed ordinary object pointers [true]
es_1 | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node ] node name [Fo-B3LF] derived from node ID [Fo-B3LF5T_e6PEDWj5BU2Q]; set [node.name] to override
es_1 | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node ] version[5.6.3], pid[1], build[1a2f265/2017-10-06T20:33:39.012Z], OS[Linux/3.10.0-957.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
es_1 | [2019-01-15T19:23:09,776][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms2g, -Xmx2g, -Des.path.home=/usr/share/elasticsearch]
es_1 | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [aggs-matrix-stats]
es_1 | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [ingest-common]
es_1 | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [lang-expression]
es_1 | [2019-01-15T19:23:11,119][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [lang-groovy]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [lang-mustache]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [lang-painless]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [parent-join]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [percolator]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [reindex]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [transport-netty3]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded module [transport-netty4]
es_1 | [2019-01-15T19:23:11,120][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded plugin [analysis-morphology]
es_1 | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded plugin [analysis-smartcn]
es_1 | [2019-01-15T19:23:11,121][INFO ][o.e.p.PluginsService ] [Fo-B3LF] loaded plugin [analysis-stempel]
es_1 | [2019-01-15T19:23:12,252][INFO ][o.e.d.DiscoveryModule ] [Fo-B3LF] using discovery type [zen]
es_1 | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node ] initialized
es_1 | [2019-01-15T19:23:12,901][INFO ][o.e.n.Node ] [Fo-B3LF] starting ...
es_1 | [2019-01-15T19:23:13,010][INFO ][o.e.t.TransportService ] [Fo-B3LF] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
es_1 | [2019-01-15T19:23:13,017][INFO ][o.e.b.BootstrapChecks ] [Fo-B3LF] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
es_1 | ERROR: [1] bootstrap checks failed
es_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
es_1 | [2019-01-15T19:23:13,023][INFO ][o.e.n.Node ] [Fo-B3LF] stopping ...
es_1 | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node ] [Fo-B3LF] stopped
es_1 | [2019-01-15T19:23:13,040][INFO ][o.e.n.Node ] [Fo-B3LF] closing ...
es_1 | [2019-01-15T19:23:13,047][INFO ][o.e.n.Node ] [Fo-B3LF] closed
```
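The bootstrap check failure in the log above (`vm.max_map_count [65530] is too low`) is a host kernel limit, not a container setting: it cannot be raised from inside the `es` container, so `restart: always` just loops forever. A minimal sketch of the fix, assuming a Linux Docker host with root access:

```shell
# Check the current value (Elasticsearch 5.x requires at least 262144)
sysctl vm.max_map_count

# Raise it for the running kernel
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```

After raising the limit, `docker-compose restart es` should let the node pass its bootstrap checks instead of restarting.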

<!-- gh-comment-id:491774616 --> @sanikolov commented on GitHub (May 13, 2019):

Yes, still happening with the latest images. It tends to occur when the file to be processed is large: in my case a 150 MB djvu (which cannot be parsed by Tika) and a 120 MB epub (which Apache Tika can parse).
Is there a timeout that can be defined or tuned in the yml file?
The other option, of course, is to reduce the maximum file size, but then some files are left out.
Ideally the container ought to detect the infinite loop itself and move on to other files.
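Until the pipeline handles oversized files gracefully, one host-side stopgap (not an Ambar feature, just a workaround sketch; the paths below are examples matching the `/mnt:/usr/data` volume mapping in the compose file above) is to move very large files out of the crawl root before indexing:

```shell
# Hypothetical workaround: relocate files larger than 100 MB out of the
# crawled directory so the pipeline never picks them up.
# CRAWL_ROOT and SKIP_DIR are example paths, not Ambar defaults.
CRAWL_ROOT=/mnt
SKIP_DIR=/opt/ambar/skipped

mkdir -p "$SKIP_DIR"
# -size +100M matches files strictly larger than 100 MB
find "$CRAWL_ROOT" -type f -size +100M -exec mv -t "$SKIP_DIR" {} +
```

The moved files can be indexed later (e.g. after converting or splitting them) by dropping them back into the crawl root.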