Docker Compose example goes to Silent Configuration Mode Failed. Is it a problem in Elasticsearch? #239

Closed
vladyslav2 opened this issue Jul 29, 2019 · 10 comments
Assignees: robotdan
Labels: bug, documentation, support

Comments

@vladyslav2

Docker Compose example goes to Silent Configuration Mode Failed during an attempt to start FusionAuth. Is it a problem in Elasticsearch?

Description

It seems the default installation instructions are broken, or some of the Docker images are not assembled correctly.

Steps to reproduce

  1. Start macOS 10.12.6
  2. Start Docker 2.0.0.3 + docker-compose 1.23.2
  3. Follow the steps from the installation guide:
    https://fusionauth.io/docs/v1/tech/installation-guide/docker
curl -o docker-compose.yml https://raw.githubusercontent.com/FusionAuth/fusionauth-containers/master/docker/fusionauth/docker-compose.yml && curl -o .env https://raw.githubusercontent.com/FusionAuth/fusionauth-containers/master/docker/fusionauth/.env && docker-compose up
  4. Open http://127.0.0.1:9011. After a few minutes you will be redirected to http://127.0.0.1:9011/maintenance-mode-silent-configuration-failed

[Screenshot: 2019-07-29 at 5:44:38 AM]

Logs:
db: db.txt
fusionauth: fusionauth_1.txt
elastic: fus_search_1.txt

It seems Elasticsearch is not able to start for some reason?

[2019-07-29T12:53:33,351][INFO ][o.e.b.BootstrapChecks    ] [NFcuRqf] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2019-07-29T12:53:33,372][INFO ][o.e.n.Node               ] [NFcuRqf] stopping ...
[2019-07-29T12:53:33,422][INFO ][o.e.n.Node               ] [NFcuRqf] stopped
[2019-07-29T12:53:33,423][INFO ][o.e.n.Node               ] [NFcuRqf] closing ...
[2019-07-29T12:53:33,439][INFO ][o.e.n.Node               ] [NFcuRqf] closed
[2019-07-29T12:53:33,445][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started
[2019-07-29T12:53:37,688][INFO ][o.e.n.Node               ] [] initializing ...
[2019-07-29T12:53:37,783][INFO ][o.e.e.NodeEnvironment    ] [NFcuRqf] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [42.5gb], net total_space [58.4gb], types [ext4]
[2019-07-29T12:53:37,785][INFO ][o.e.e.NodeEnvironment    ] [NFcuRqf] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-07-29T12:53:37,788][INFO ][o.e.n.Node               ] [NFcuRqf] node name derived from node ID [NFcuRqf-Tf68s-_FS2D9fw]; set [node.name] to override

Also, the FusionAuth logs do not look okay:

mkdir: cannot create directory '/usr/local/fusionauth/fusionauth-app/apache-tomcat/../../logs': Permission denied

12:53:22,198 |-ERROR in ch.qos.logback.core.FileAppender[FILE] - openFile(/usr/local/fusionauth/fusionauth-app/apache-tomcat/../../logs/importer.log,true) call failed. java.io.FileNotFoundException: /usr/local/fusionauth/fusionauth-app/apache-tomcat/../../logs/importer.log (No such file or directory)

Platform

  • Device: MacBook Pro
  • OS: macOS 10.12.6

Additional context

The product looks promising, I appreciate your work!
I just wish it were easier to run locally.

@robotdan
Member

Perhaps related to #191

robotdan added the support and triage labels Jul 29, 2019
robotdan self-assigned this Jul 29, 2019
@robotdan
Member

Because Elasticsearch starts up and binds to a non-loopback address, in this case 172.22.0.2, it goes into production mode and performs bootstrap checks.

[2019-07-29T14:12:44,779][INFO ][o.e.n.Node               ] [NFcuRqf] initialized
[2019-07-29T14:12:44,780][INFO ][o.e.n.Node               ] [NFcuRqf] starting ...
[2019-07-29T14:12:45,112][INFO ][o.e.t.TransportService   ] [NFcuRqf] publish_address {172.22.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-07-29T14:12:45,145][INFO ][o.e.b.BootstrapChecks    ] [NFcuRqf] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2019-07-29T14:12:45,163][INFO ][o.e.n.Node               ] [NFcuRqf] stopping ...
[2019-07-29T14:12:45,190][INFO ][o.e.n.Node               ] [NFcuRqf] stopped
[2019-07-29T14:12:45,191][INFO ][o.e.n.Node               ] [NFcuRqf] closing ...
[2019-07-29T14:12:45,249][INFO ][o.e.n.Node               ] [NFcuRqf] closed

One of the bootstrap checks fails and it shuts down.

[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap

The .env file you downloaded should contain ES_JAVA_OPTS=-Xms256m -Xmx256m, which sets the min and max heap size to 256m. That should allow this bootstrap check to pass.

Perhaps that file is not getting read in. Can you do a directory listing to show the file permissions on docker-compose.yml and the .env file? Also, in which directory did you run the curl and docker-compose commands?
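
For example, something along these lines would capture everything at once (just a sketch; it assumes the two downloaded files are in your current directory):

    # show the working directory and permissions of the downloaded files
    pwd
    ls -la docker-compose.yml .env

    # render the compose file with values interpolated from .env,
    # to confirm ES_JAVA_OPTS is actually being substituted
    docker-compose config | grep ES_JAVA_OPTS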

@vladyslav2
Author

@robotdan, I'm pretty sure I have the correct .env file, please take a look:

env:

homes-MacBook-Pro:fus home$ cat .env
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
DATABASE_USER=fusionauth
DATABASE_PASSWORD=hkaLBM3RVnyYeYeqE3WI1w2e4Avpy0Wd5O3s3

ES_JAVA_OPTS=-Xms256m -Xmx256m
FUSIONAUTH_MEMORY=256

docker-compose.yml

homes-MacBook-Pro:fus home$ cat docker-compose.yml
version: '3'

services:
  db:
    image: postgres:9.6
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - 5432:5432
    networks:
      - db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data

  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    environment:
      cluster.name: fusionauth
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "${ES_JAVA_OPTS}"
    ports:
    - 9200:9200
    - 9300:9300
    networks:
      - search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data

  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - db
      - search
    environment:
      DATABASE_URL: jdbc:postgresql://db:5432/fusionauth
      DATABASE_ROOT_USER: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USER: ${DATABASE_USER}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://search:9200
      FUSIONAUTH_URL: http://fusionauth:9011
    networks:
     - db
     - search
    restart: unless-stopped
    ports:
      - 9011:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config

networks:
  db:
    driver: bridge
  search:
    driver: bridge

volumes:
  db_data:
  es_data:
  fa_config:
homes-MacBook-Pro:fus home$

Starting docker compose:

homes-MacBook-Pro:fus home$ docker-compose up
Starting fus_db_1     ... done
Starting fus_search_1 ... done
Starting fus_fusionauth_1 ... done
Attaching to fus_db_1, fus_search_1, fus_fusionauth_1

Checking the logs:

$ docker logs fus_search_1
[2019-07-29T14:09:30,957][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [repository-url]
[2019-07-29T14:09:30,957][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [transport-netty4]
[2019-07-29T14:09:30,957][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [tribe]
[2019-07-29T14:09:30,957][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-core]
[2019-07-29T14:09:30,957][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-deprecation]
[2019-07-29T14:09:30,958][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-graph]
[2019-07-29T14:09:30,958][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-logstash]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-ml]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-monitoring]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-rollup]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-security]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-sql]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-upgrade]
[2019-07-29T14:09:30,959][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-watcher]
[2019-07-29T14:09:30,961][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded plugin [ingest-geoip]
[2019-07-29T14:09:30,961][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded plugin [ingest-user-agent]
[2019-07-29T14:09:34,604][INFO ][o.e.x.s.a.s.FileRolesStore] [NFcuRqf] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-07-29T14:09:35,494][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/77] [Main.cc@109] controller (64 bit): Version 6.3.1 (Build 4d0b8f0a0ef401) Copyright (c) 2018 Elasticsearch BV
[2019-07-29T14:09:36,733][INFO ][o.e.d.DiscoveryModule    ] [NFcuRqf] using discovery type [zen]
[2019-07-29T14:09:37,682][INFO ][o.e.n.Node               ] [NFcuRqf] initialized
[2019-07-29T14:09:37,683][INFO ][o.e.n.Node               ] [NFcuRqf] starting ...
[2019-07-29T14:09:37,933][INFO ][o.e.t.TransportService   ] [NFcuRqf] publish_address {172.22.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-07-29T14:09:37,954][INFO ][o.e.b.BootstrapChecks    ] [NFcuRqf] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2019-07-29T14:09:37,972][INFO ][o.e.n.Node               ] [NFcuRqf] stopping ...
[2019-07-29T14:09:38,009][INFO ][o.e.n.Node               ] [NFcuRqf] stopped
[2019-07-29T14:09:38,010][INFO ][o.e.n.Node               ] [NFcuRqf] closing ...
[2019-07-29T14:09:38,027][INFO ][o.e.n.Node               ] [NFcuRqf] closed
[2019-07-29T14:09:38,030][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-07-29T14:09:42,574][INFO ][o.e.n.Node               ] [] initializing ...
[2019-07-29T14:09:42,641][INFO ][o.e.e.NodeEnvironment    ] [NFcuRqf] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [42.5gb], net total_space [58.4gb], types [ext4]
[2019-07-29T14:09:42,641][INFO ][o.e.e.NodeEnvironment    ] [NFcuRqf] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-07-29T14:09:42,643][INFO ][o.e.n.Node               ] [NFcuRqf] node name derived from node ID [NFcuRqf-Tf68s-_FS2D9fw]; set [node.name] to override
[2019-07-29T14:09:42,644][INFO ][o.e.n.Node               ] [NFcuRqf] version[6.3.1], pid[1], build[default/tar/eb782d0/2018-06-29T21:59:26.107521Z], OS[Linux/4.9.125-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/10.0.1/10.0.1+10]
[2019-07-29T14:09:42,644][INFO ][o.e.n.Node               ] [NFcuRqf] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.5A1BPnXs, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Des.cgroups.hierarchy.override=/, -Xms256m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-07-29T14:09:45,361][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [aggs-matrix-stats]
[2019-07-29T14:09:45,362][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [analysis-common]
[2019-07-29T14:09:45,363][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [ingest-common]
[2019-07-29T14:09:45,364][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [lang-expression]
[2019-07-29T14:09:45,365][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [lang-mustache]
[2019-07-29T14:09:45,365][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [lang-painless]
[2019-07-29T14:09:45,365][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [mapper-extras]
[2019-07-29T14:09:45,366][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [parent-join]
[2019-07-29T14:09:45,366][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [percolator]
[2019-07-29T14:09:45,366][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [rank-eval]
[2019-07-29T14:09:45,367][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [reindex]
[2019-07-29T14:09:45,367][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [repository-url]
[2019-07-29T14:09:45,367][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [transport-netty4]
[2019-07-29T14:09:45,368][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [tribe]
[2019-07-29T14:09:45,368][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-core]
[2019-07-29T14:09:45,368][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-deprecation]
[2019-07-29T14:09:45,369][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-graph]
[2019-07-29T14:09:45,369][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-logstash]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-ml]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-monitoring]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-rollup]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-security]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-sql]
[2019-07-29T14:09:45,371][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-upgrade]
[2019-07-29T14:09:45,372][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded module [x-pack-watcher]
[2019-07-29T14:09:45,374][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded plugin [ingest-geoip]
[2019-07-29T14:09:45,375][INFO ][o.e.p.PluginsService     ] [NFcuRqf] loaded plugin [ingest-user-agent]
[2019-07-29T14:09:48,657][INFO ][o.e.x.s.a.s.FileRolesStore] [NFcuRqf] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-07-29T14:09:49,517][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/77] [Main.cc@109] controller (64 bit): Version 6.3.1 (Build 4d0b8f0a0ef401) Copyright (c) 2018 Elasticsearch BV
[2019-07-29T14:09:50,223][INFO ][o.e.d.DiscoveryModule    ] [NFcuRqf] using discovery type [zen]
[2019-07-29T14:09:51,143][INFO ][o.e.n.Node               ] [NFcuRqf] initialized
[2019-07-29T14:09:51,144][INFO ][o.e.n.Node               ] [NFcuRqf] starting ...
[2019-07-29T14:09:51,325][INFO ][o.e.t.TransportService   ] [NFcuRqf] publish_address {172.22.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-07-29T14:09:51,340][INFO ][o.e.b.BootstrapChecks    ] [NFcuRqf] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2019-07-29T14:09:51,353][INFO ][o.e.n.Node               ] [NFcuRqf] stopping ...
[2019-07-29T14:09:51,398][INFO ][o.e.n.Node               ] [NFcuRqf] stopped
[2019-07-29T14:09:51,399][INFO ][o.e.n.Node               ] [NFcuRqf] closing ...
[2019-07-29T14:09:51,416][INFO ][o.e.n.Node               ] [NFcuRqf] closed
[2019-07-29T14:09:51,418][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

@robotdan
Member

Strange, it seems to be picking up two sets of Java args.

The startup of ES shows

... [NFcuRqf] JVM arguments [-Xms1g, -Xmx1g, ... , -Xms256m, 

The -Xmx1g is, I think, coming from your jvm.options file; I'm not sure where that is, perhaps it ships with Elasticsearch. And -Xms256m is then set again from the .env file. I don't know why -Xmx isn't showing up twice as well.

So this explains the error from Elasticsearch: the 256m (268435456 bytes) and 1g (1073741824 bytes) show up in the error message as the initial and maximum heap sizes respectively.

ERROR: [1] bootstrap checks failed
[1]: initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap

So it looks like there are some collisions in the VM configuration options.
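
A quick way to see exactly which heap flags the JVM ended up with is to pull the "JVM arguments" line back out of the container log (a sketch, assuming the container is still named fus_search_1 as in your output):

    docker logs fus_search_1 2>&1 | grep "JVM arguments"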

To test a work-around, can you try the following?

Test 1:

Modify your .env file to double quote the value for ES_JAVA_OPTS.

Example: ES_JAVA_OPTS="-Xms256m -Xmx256m"

Expectation: unclear; it may work, overriding the values from jvm.options and setting the min and max heap to 256m.

Test 2:

Modify your .env file to remove the value and allow jvm.options to win.

Example: ES_JAVA_OPTS=

Expectation: startup is OK, and Elasticsearch will show JVM arguments [-Xms1g, -Xmx1g ... in the VM arguments.
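
In either case, remember to recreate the containers after editing the .env file so the new value is picked up (run from the directory containing docker-compose.yml):

    docker-compose down
    docker-compose up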

@robotdan
Member

It looks like Elasticsearch does pick up both jvm.options and ES_JAVA_OPTS and this is working as designed. See elastic/elasticsearch-docker#22 (comment)

But in theory the VM should only be accepting the right-most defined value, which is how Elasticsearch expects it to work.

So in theory we should be seeing -Xms and -Xmx twice, but I think I only see -Xms twice; maybe it is dropping the -Xmx we are setting in the ES_JAVA_OPTS variable.

We could try changing how we set this via the compose file; it looks like many of the examples I see for Elasticsearch use a YAML array instead of an object for environment.

https://discuss.elastic.co/t/setting-heap-size-at-docker-compose/156326

vladyslav2 added a commit to webdeveloppro/fusionauth-containers that referenced this issue Jul 29, 2019
Update Elasticsearch version
Update ES_JAVA_OPTS default to 512 (based on the Elasticsearch recommendations)
@vladyslav2
Author

@robotdan Daniel, honestly, I did not understand much of what you told me ...

but I played with Elasticsearch for a little bit and found a Docker image which works (at least for me).

Please take a look at my pull request above.

@robotdan
Member

@robotdan Daniel, honestly, I did not understand much of what you told me ...

To summarize: what I'm observing is the expected behavior, and Java / Elasticsearch should handle the duplicate Java properties on the command line. So if it isn't working, there may be an issue with how we are exporting the environment variable in the docker-compose.yml example.

I'll take a look at your PR.

@robotdan
Member

robotdan commented Jul 29, 2019

Re: webdeveloppro/fusionauth-containers@4aed574

I'll have to test to verify, but perhaps just changing the way the environment variables are defined in the search service will be adequate to fix this issue.

For example, the existing way they are defined in the docker-compose.yml is as follows:

    environment:
      cluster.name: fusionauth
      bootstrap.memory_lock: "true"
      ES_JAVA_OPTS: "${ES_JAVA_OPTS}"

And the way you have it in the PR, the values are defined in YAML as an array instead of an object as they are currently:

    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
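
If the array form turns out to be the fix, one way to confirm the value actually reaches the container would be something like this (a sketch; the container name fus_search_1 is taken from your output above):

    docker exec fus_search_1 env | grep ES_JAVA_OPTS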

@vladyslav2
Author

@robotdan
Yeap, you were right, we can just update the way we describe the environment in docker-compose.yml.

And my guess is that if you are using zsh you will not face that problem, so it's probably bash related.

@robotdan
Member

Yes, I am using zsh. OK, I'll test it out and then, if it looks good, make the update.

Thanks for the help debugging this issue!

robotdan added a commit to FusionAuth/fusionauth-containers that referenced this issue Jul 30, 2019
robotdan added the bug and documentation labels and removed the triage label Jul 30, 2019
tyduptyler13 added a commit to CleanSpeak/cleanspeak-containers that referenced this issue Aug 7, 2019
tyduptyler13 added a commit to CleanSpeak/cleanspeak-containers that referenced this issue Aug 7, 2019