
Add an official image for Apache Storm #1641

Merged (10 commits) on Oct 12, 2016
Conversation

@31z4 (Contributor) commented Apr 17, 2016

As version 1.0.0 is a huge milestone for Apache Storm, it would be great to finally have an official image for it. See the corresponding docs PR.

Checklist for Review

@31z4 (Contributor, Author) commented May 2, 2016

Hey guys @psftw @tianon @yosifkit I've recently made some improvements to the image and the documentation. Any chance someone could look at it and provide some feedback? Thanks!

@tianon (Member) commented May 5, 2016

Hey @31z4, sorry for the delay!

A few thoughts:

  1. GPG_KEY should be set to the full fingerprint (i.e., ACEFE18DD2322E1E84587A148DE03962E80B8FFD), so that gpg can verify the key after it's fetched
  2. gpg --verify should be gpg --batch --verify (see Fix suggested "gpg" usage to stop relying on deprecated and insecure behavior #1420)
  3. it's worth considering adding ../storm/bin to the path such that users can expect docker run storm storm --... to work appropriately
  4. is /data going to be used for mutable state of the application? would it be appropriate to put it into a VOLUME? (should users expect to need to back it up / will the application performance suffer if the data stored there is getting a performance hit for being on the CoW filesystem Docker provides?)
  5. regarding ENTRYPOINT, you probably want to give https://github.com/docker-library/official-images/blob/master/README.md#consistency a read-through 😄
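A minimal sketch of what points 1–3 could look like in Dockerfile terms (the tarball name and install path below are illustrative, not the final Dockerfile):

```dockerfile
# Full fingerprint, so gpg can verify the fetched key (point 1)
ENV GPG_KEY ACEFE18DD2322E1E84587A148DE03962E80B8FFD
RUN export GNUPGHOME="$(mktemp -d)" \
    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
    && gpg --batch --verify apache-storm-1.0.0.tar.gz.asc apache-storm-1.0.0.tar.gz
# Put Storm's bin dir on PATH so `docker run storm storm ...` works (point 3)
ENV PATH $PATH:/apache-storm-1.0.0/bin
```

Note `--batch` on the verify step, which avoids the deprecated interactive behavior referenced in #1420.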

@31z4 (Contributor, Author) commented May 5, 2016

@tianon many thanks for your feedback!

  1. Done. How did you get it? 😊
  2. Done.
  3. Done.
  4. /data can be used as the value for the storm.local.dir config param. This is where the Nimbus and Supervisor daemons save their small amounts of state (jars, confs, and the like). I don't think it makes much sense to put it into a VOLUME by default, nor to back up this dir. I have no idea how a CoW filesystem will affect Storm performance; I didn't find any benchmarks related to that.
  5. Done.

@HeartSaVioR

Great work!
Btw, maybe off-topic or not: you may also want to add drpc (for supervisor nodes or just the master node), ui (for the master node) and logviewer (for supervisor nodes) to docker-compose.
Please refer to https://github.com/wurstmeister/storm-docker.

@tianon (Member) commented May 11, 2016

Nice! (31z4/storm-docker@d050a2b...d71c8ad)

Regarding GPG, I did gpg --keyserver ha.pool.sks-keyservers.net --recv-key 8DE03962E80B8FFD, and then gpg --fingerprint, which shows the full fingerprint. 👍
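As an aside (an illustration, not from the thread): the short key id used with --recv-key is just the last 16 hex digits of the full fingerprint, which is why gpg --fingerprint can recover the latter from the former:

```shell
GPG_KEY=ACEFE18DD2322E1E84587A148DE03962E80B8FFD
printf '%s\n' "$GPG_KEY" | tail -c 17   # prints 8DE03962E80B8FFD
```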

Only comment I'd have left is that you probably want to add CMD ["storm"] so that users who do just docker run storm without a command get a running container (where CMD is just the default; docker run storm some-other-command will still work appropriately)! 👍

@31z4 (Contributor, Author) commented May 13, 2016

@tianon Unfortunately, we can't get a working Storm cluster with just a single command. At minimum we have to start the Nimbus and Supervisor daemons. Of course I could start those two using a process supervisor, but I'd like to keep the image as simple as possible and avoid additional dependencies. What do you think? Does it make sense to have CMD ["supervisor"], which starts both the Nimbus and Supervisor daemons inside a single container?

@tianon (Member) commented May 13, 2016

Ah, naw -- that makes sense! 👍 (https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run-only-one-process-per-container)

I'm looking over the docs now, and I'm a little worried about the Zookeeper dependency. We don't have an official zookeeper image yet (which you're aware of), but "No official images can be derived from, or depend on, non-official images." (https://github.com/docker-library/official-images/blob/master/README.md#repeatability) Is there a way to run Storm without Zookeeper? (or perhaps do you know any members of the Zookeeper project who might be interested in collaborating on/maintaining an image? 😈)

@31z4 (Contributor, Author) commented May 13, 2016

@tianon There is no way to run Storm in cluster mode without Zookeeper, though Zookeeper is not required if you run Storm in local mode. I was also concerned about not having an official Zookeeper image; I planned to tackle that after the Storm image. It seems that plan has changed 😃

@tianon (Member) commented May 13, 2016

Oh, I suppose before we get too far down that rabbit hole: has there been any contact with upstream to see if they're interested in collaborating on the image? We like to make sure they're at least aware of what we're doing. We've burned bridges in the past by being too hasty and not consulting upstream on whether they even wanted to be part of the official images, so we try to be a little more proactive than we've been before. 😅 😊

@31z4 (Contributor, Author) commented May 14, 2016

@tianon @ApacheStorm retweeted my tweet. I've also just created a JIRA issue. So I believe they're aware 😊

@31z4 (Contributor, Author) commented May 14, 2016

@HeartSaVioR Thank you for the feedback! There are several reasons why I didn't include drpc, ui and logviewer in docker-compose.yml:

  1. I'd like to keep it as simple as possible and include only necessary services. Everything besides Zookeeper, Nimbus and Supervisor is optional.
  2. Regarding ui and logviewer, there are known issues (one, two) which I'd like to address first to get a fully working dockerized ui. Have you ever encountered those issues?

@harshach

@31z4 ui and log viewer would be great to have in the docker image.

@31z4 (Contributor, Author) commented May 16, 2016

@harshach since you run the ui using the same storm binary, it's already in the image. Here is an example of how to run it, assuming Nimbus is serving on storm-nimbus:

  $ docker run -d -p 8080:8080 31z4/storm:1.0.1 storm ui -c nimbus.host=storm-nimbus

@31z4 (Contributor, Author) commented May 22, 2016

@tianon One step closer #1765 😊

@yosifkit (Member)

Now that we have the zookeeper image, we can hopefully get this in soon!

The only concern I have is with the docs, where all the supporting containers are run with --net container:other-container, which removes some of the isolation provided by containers and prevents multi-host usage by tying all the containers to one host. Could we swap the inline examples to be more like the compose example, using hostnames or IP addresses? Using hostnames would require a docker network to be created, but that is what kapacitor does in their docs.
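For instance, the inline examples could create a user-defined network and address the containers by name instead of sharing a network namespace (a sketch; the network name and container names here are hypothetical):

```console
$ docker network create storm
$ docker run -d --name zookeeper --network storm zookeeper
$ docker run -d --name nimbus --network storm 31z4/storm:1.0.1 storm nimbus
$ docker run -d --name supervisor --network storm 31z4/storm:1.0.1 storm supervisor
```

Each container keeps its own network stack, and the setup extends to multiple hosts with an overlay network.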

@31z4 (Contributor, Author) commented Sep 15, 2016

@yosifkit thanks for your feedback! I'll look into the kapacitor docs. I'd also like to make some improvements, including using a docker-entrypoint.sh.

@tianon (Member) commented Oct 7, 2016

Am I understanding correctly that Storm requires both "nimbus" and "zookeeper" to function? In the examples in the documentation, you seem to be using -c to supply both storm.zookeeper.servers and nimbus.host, presumably because those values default to localhost upstream? Would it make sense to change those defaults in the Dockerfile so that users can still overwrite them, but so that by default, the image is slightly easier to use if you name or --network-alias your containers as zookeeper and nimbus?
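Defaults along these lines in the generated storm.yaml would make that work out of the box (a sketch; nimbus.seeds is the Storm 1.x spelling of the Nimbus setting):

```yaml
storm.zookeeper.servers: [zookeeper]
nimbus.seeds: [nimbus]
```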

@tianon (Member) commented Oct 7, 2016

Update instruction format
@31z4 (Contributor, Author) commented Oct 8, 2016

@yosifkit @tianon thanks for the suggestions, guys! I think the image is ready for the final review now.

@abipit commented Oct 10, 2016

Hello @31z4, great job, thank you.

Just 2 points:

  1. do you have the docker-compose part to add a Storm UI?

  2. I have a topology that works in another environment (Storm with Ambari) but it doesn't work in your environment.
    Error: A JNI error has occurred, please check your installation and try again
    Exception in thread "main" java.lang.NoClassDefFoundError: backtype/storm/topology/IRichSpout

    I tested a lot of things but without success; do you have an idea?

Thanks

@HeartSaVioR

@abipit
Regarding 2, you need to check that your topology is ready for Storm 1.x, since the package base name changed from backtype.storm to org.apache.storm.
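To illustrate (a hypothetical one-liner, not an official migration tool): the NoClassDefFoundError above comes from an import against the old package root, which has to be rewritten for 1.x:

```shell
echo 'import backtype.storm.topology.IRichSpout;' \
    | sed 's/backtype\.storm/org.apache.storm/'
# prints: import org.apache.storm.topology.IRichSpout;
```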

@abipit commented Oct 10, 2016

Thank you @HeartSaVioR, great news. I will test with the storm container 0.10.2!

For my first point, I didn't see the answer above with the command line to launch the UI, my bad :)

@abipit commented Oct 10, 2016

For the storm ui, I launched this command:

docker run -d -p 8081:8080 --name=storm_ui 31z4/storm:0.10.2 storm ui -c nimbus.host=nimbus

But without success; in the UI, under the "Internal Server Error" section, I have:
java.lang.RuntimeException: org.apache.thrift7.transport.TTransportException: java.net.UnknownHostException: nimbus

I made several tests for nimbus.host but couldn't find the right value.
Do you have an idea?
Thanks a lot

@abipit commented Oct 10, 2016

The storm_UI now works via docker-compose, but not via the command line.

That's because from the command line, the network for storm_UI was 172.18.0.0, while for the 3 other containers created by docker-compose it was 172.17.0.0.

I put this code in the docker-compose :

    storm_ui:
        image: 31z4/storm:0.10.2
        container_name: storm_ui
        command: storm ui -c nimbus.host=nimbus
        depends_on:
            - nimbus
            - zookeeper
        links:
            - nimbus
            - zookeeper
        restart: always
        ports:
            - 8081:8080

and it's OK now because all containers are in the same network 172.17....

@abipit commented Oct 10, 2016

I found the solution to launch the storm_UI directly from the command line: specify that the container joins the storm network:

docker run -d -p 8090:8080 --name=storm_ui --net="storm_default" 31z4/storm:0.10.2 storm ui -c nimbus.host=nimbus

@31z4 (Contributor, Author) commented Oct 11, 2016

@HeartSaVioR @abipit thank you for your comments, guys. I've just added the Storm UI example to the docs.

@tianon (Member) commented Oct 11, 2016

31z4/storm-docker@5fb2661...f38117c seems pretty sane to me! 👍 ❤️

@yosifkit (Member)

Dockerization seems sane to me. I have just a few questions about logging. Does it log to STORM_LOG_DIR by default? The common case in most of the docker official images is to log to stdout so that logs can be viewed with docker logs; is this possible? I think even just an example in the docs would be enough.

@31z4 (Contributor, Author) commented Oct 12, 2016

@yosifkit Thanks for pointing this out. I found out that logs are actually not going to /logs 😊 I'll push a fix right away.

Regarding logging everything to stdout by default, I don't think it makes sense. Storm has a pretty complex and meaningful default logging configuration: the idea is to separate different log sources into different files to make browsing and searching the logs easier. Logging also depends on the service you're running. For example, nimbus has the following files in its /logs:

/logs/:
access-nimbus.log      access-web-nimbus.log  nimbus.log

While a working supervisor has a bit more:

/logs/:
access-supervisor.log      access-web-supervisor.log  supervisor.log             workers-artifacts

/logs/workers-artifacts:
topology-1-1476256953

/logs/workers-artifacts/topology-1-1476256953:
6700  6701  6702  6703

/logs/workers-artifacts/topology-1-1476256953/6700:
gc.log.0.current    worker.log          worker.log.err      worker.log.metrics  worker.log.out      worker.pid          worker.yaml

/logs/workers-artifacts/topology-1-1476256953/6701:
gc.log.0.current    worker.log          worker.log.err      worker.log.metrics  worker.log.out      worker.pid          worker.yaml

/logs/workers-artifacts/topology-1-1476256953/6702:
gc.log.0.current    worker.log          worker.log.err      worker.log.metrics  worker.log.out      worker.pid          worker.yaml

/logs/workers-artifacts/topology-1-1476256953/6703:
gc.log.0.current    worker.log          worker.log.err      worker.log.metrics  worker.log.out      worker.pid          worker.yaml

Hope this makes sense. Will add a short note about logging in the image documentation.
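Since the logs stay in files inside the container, they can still be followed without docker logs, e.g. (container name hypothetical):

```console
$ docker exec storm-nimbus tail -f /logs/nimbus.log
```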

Logs go to /logs
Data go to /data
@yosifkit (Member) left a comment

Build test of #1641; 0d52884 (storm):

$ bashbrew build storm:0.9.7
Building bashbrew/cache:a70c57a367c067be50d77fa33f71b4b6d94eebea2f24626dde27280d4fd0bf6c (storm:0.9.7)
Tagging storm:0.9.7
Tagging storm:0.9

$ test/run.sh storm:0.9.7
testing storm:0.9.7
    'utc' [1/4]...passed
    'cve-2014--shellshock' [2/4]...passed
    'no-hard-coded-passwords' [3/4]...passed
    'override-cmd' [4/4]...passed


$ bashbrew build storm:0.10.2
Building bashbrew/cache:b57cc8807a771f1302fc79cb14c7e33cacee6ac4584cf73e93a004b6e7458935 (storm:0.10.2)
Tagging storm:0.10.2
Tagging storm:0.10

$ test/run.sh storm:0.10.2
testing storm:0.10.2
    'utc' [1/4]...passed
    'cve-2014--shellshock' [2/4]...passed
    'no-hard-coded-passwords' [3/4]...passed
    'override-cmd' [4/4]...passed


$ bashbrew build storm:1.0.2
Building bashbrew/cache:a4c7cd706817216c8172ba54a9bcd6a4b39c4f948aa34eef78aef58f966eb906 (storm:1.0.2)
Tagging storm:1.0.2
Tagging storm:1.0
Tagging storm:latest

$ test/run.sh storm:1.0.2
testing storm:1.0.2
    'utc' [1/4]...passed
    'cve-2014--shellshock' [2/4]...passed
    'no-hard-coded-passwords' [3/4]...passed
    'override-cmd' [4/4]...passed

@yosifkit (Member)

diff --git a/storm_0.10/Dockerfile b/storm_0.10/Dockerfile
new file mode 100644
index 0000000..c851d57
--- /dev/null
+++ b/storm_0.10/Dockerfile
@@ -0,0 +1,43 @@
+FROM openjdk:8-jre-alpine
+MAINTAINER Elisey Zanko <[email protected]>
+
+# Install required packages
+RUN apk add --no-cache \
+    bash \
+    python \
+    su-exec
+
+ENV STORM_USER=storm
+ENV STORM_CONF_DIR=/conf
+ENV STORM_DATA_DIR=/data
+ENV STORM_LOG_DIR=/logs
+
+# Add a user and make dirs
+RUN set -x \
+    && adduser -D "$STORM_USER" \
+    && mkdir -p "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR" \
+    && chown -R "$STORM_USER:$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+
+ARG GPG_KEY=ACEFE18DD2322E1E84587A148DE03962E80B8FFD
+ARG DISTRO_NAME=apache-storm-0.10.2
+
+# Download Apache Storm, verify its PGP signature, untar and clean up
+RUN set -x \
+    && apk add --no-cache --virtual .build-deps \
+        gnupg \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz" \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz.asc" \
+    && export GNUPGHOME="$(mktemp -d)" \
+    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
+    && gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz" \
+    && tar -xzf "$DISTRO_NAME.tar.gz" \
+    && chown -R "$STORM_USER:$STORM_USER" "$DISTRO_NAME" \
+    && rm -r "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" \
+    && apk del .build-deps
+
+WORKDIR $DISTRO_NAME
+
+ENV PATH $PATH:/$DISTRO_NAME/bin
+
+COPY docker-entrypoint.sh /
+ENTRYPOINT ["/docker-entrypoint.sh"]
diff --git a/storm_0.10/docker-entrypoint.sh b/storm_0.10/docker-entrypoint.sh
new file mode 100755
index 0000000..a8aea91
--- /dev/null
+++ b/storm_0.10/docker-entrypoint.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+set -e
+
+# Allow the container to be started with `--user`
+if [ "$1" = 'storm' -a "$(id -u)" = '0' ]; then
+    chown -R "$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+    exec su-exec "$STORM_USER" "$0" "$@"
+fi
+
+# Generate the config only if it doesn't exist
+CONFIG="$STORM_CONF_DIR/storm.yaml"
+if [ ! -f "$CONFIG" ]; then
+    cat << EOF > "$CONFIG"
+storm.zookeeper.servers: [zookeeper]
+nimbus.seeds: [nimbus]
+storm.log.dir: "$STORM_LOG_DIR"
+storm.local.dir: "$STORM_DATA_DIR"
+EOF
+fi
+
+exec "$@"
diff --git a/storm_0.9/Dockerfile b/storm_0.9/Dockerfile
new file mode 100644
index 0000000..5830aee
--- /dev/null
+++ b/storm_0.9/Dockerfile
@@ -0,0 +1,43 @@
+FROM openjdk:8-jre-alpine
+MAINTAINER Elisey Zanko <[email protected]>
+
+# Install required packages
+RUN apk add --no-cache \
+    bash \
+    python \
+    su-exec
+
+ENV STORM_USER=storm
+ENV STORM_CONF_DIR=/conf
+ENV STORM_DATA_DIR=/data
+ENV STORM_LOG_DIR=/logs
+
+# Add a user and make dirs
+RUN set -x \
+    && adduser -D "$STORM_USER" \
+    && mkdir -p "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR" \
+    && chown -R "$STORM_USER:$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+
+ARG GPG_KEY=ACEFE18DD2322E1E84587A148DE03962E80B8FFD
+ARG DISTRO_NAME=apache-storm-0.9.7
+
+# Download Apache Storm, verify its PGP signature, untar and clean up
+RUN set -x \
+    && apk add --no-cache --virtual .build-deps \
+        gnupg \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz" \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz.asc" \
+    && export GNUPGHOME="$(mktemp -d)" \
+    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
+    && gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz" \
+    && tar -xzf "$DISTRO_NAME.tar.gz" \
+    && chown -R "$STORM_USER:$STORM_USER" "$DISTRO_NAME" \
+    && rm -r "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" \
+    && apk del .build-deps
+
+WORKDIR $DISTRO_NAME
+
+ENV PATH $PATH:/$DISTRO_NAME/bin
+
+COPY docker-entrypoint.sh /
+ENTRYPOINT ["/docker-entrypoint.sh"]
diff --git a/storm_0.9/docker-entrypoint.sh b/storm_0.9/docker-entrypoint.sh
new file mode 100755
index 0000000..a8aea91
--- /dev/null
+++ b/storm_0.9/docker-entrypoint.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+set -e
+
+# Allow the container to be started with `--user`
+if [ "$1" = 'storm' -a "$(id -u)" = '0' ]; then
+    chown -R "$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+    exec su-exec "$STORM_USER" "$0" "$@"
+fi
+
+# Generate the config only if it doesn't exist
+CONFIG="$STORM_CONF_DIR/storm.yaml"
+if [ ! -f "$CONFIG" ]; then
+    cat << EOF > "$CONFIG"
+storm.zookeeper.servers: [zookeeper]
+nimbus.seeds: [nimbus]
+storm.log.dir: "$STORM_LOG_DIR"
+storm.local.dir: "$STORM_DATA_DIR"
+EOF
+fi
+
+exec "$@"
diff --git a/storm_latest/Dockerfile b/storm_latest/Dockerfile
new file mode 100644
index 0000000..972e3b3
--- /dev/null
+++ b/storm_latest/Dockerfile
@@ -0,0 +1,43 @@
+FROM openjdk:8-jre-alpine
+MAINTAINER Elisey Zanko <[email protected]>
+
+# Install required packages
+RUN apk add --no-cache \
+    bash \
+    python \
+    su-exec
+
+ENV STORM_USER=storm
+ENV STORM_CONF_DIR=/conf
+ENV STORM_DATA_DIR=/data
+ENV STORM_LOG_DIR=/logs
+
+# Add a user and make dirs
+RUN set -x \
+    && adduser -D "$STORM_USER" \
+    && mkdir -p "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR" \
+    && chown -R "$STORM_USER:$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+
+ARG GPG_KEY=ACEFE18DD2322E1E84587A148DE03962E80B8FFD
+ARG DISTRO_NAME=apache-storm-1.0.2
+
+# Download Apache Storm, verify its PGP signature, untar and clean up
+RUN set -x \
+    && apk add --no-cache --virtual .build-deps \
+        gnupg \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz" \
+    && wget -q "http://www.apache.org/dist/storm/$DISTRO_NAME/$DISTRO_NAME.tar.gz.asc" \
+    && export GNUPGHOME="$(mktemp -d)" \
+    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
+    && gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz" \
+    && tar -xzf "$DISTRO_NAME.tar.gz" \
+    && chown -R "$STORM_USER:$STORM_USER" "$DISTRO_NAME" \
+    && rm -r "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" \
+    && apk del .build-deps
+
+WORKDIR $DISTRO_NAME
+
+ENV PATH $PATH:/$DISTRO_NAME/bin
+
+COPY docker-entrypoint.sh /
+ENTRYPOINT ["/docker-entrypoint.sh"]
diff --git a/storm_latest/docker-entrypoint.sh b/storm_latest/docker-entrypoint.sh
new file mode 100755
index 0000000..a8aea91
--- /dev/null
+++ b/storm_latest/docker-entrypoint.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+set -e
+
+# Allow the container to be started with `--user`
+if [ "$1" = 'storm' -a "$(id -u)" = '0' ]; then
+    chown -R "$STORM_USER" "$STORM_CONF_DIR" "$STORM_DATA_DIR" "$STORM_LOG_DIR"
+    exec su-exec "$STORM_USER" "$0" "$@"
+fi
+
+# Generate the config only if it doesn't exist
+CONFIG="$STORM_CONF_DIR/storm.yaml"
+if [ ! -f "$CONFIG" ]; then
+    cat << EOF > "$CONFIG"
+storm.zookeeper.servers: [zookeeper]
+nimbus.seeds: [nimbus]
+storm.log.dir: "$STORM_LOG_DIR"
+storm.local.dir: "$STORM_DATA_DIR"
+EOF
+fi
+
+exec "$@"

@tianon (Member) commented Oct 12, 2016

LGTM

@tianon merged commit e2d01f1 into docker-library:master on Oct 12, 2016
@abipit commented Oct 13, 2016

@31z4 Great job, your image is now "Official" on the docker hub ;-) 👍

About logs: is it possible to have the supervisor's logviewer on port 8000? 8000 is normally the default port; I tried to expose it but without success, no page. It's for my developer: he doesn't have access to the server but he needs to consult the topology's logs (in /logs/workers-artifacts/...).

Do you have an idea ? Thank you

@31z4 (Contributor, Author) commented Oct 14, 2016

@abipit There are some known issues with the logviewer in dockerized environments.

I see two major approaches here. The first is starting the logviewer in a separate container and using container linking and --volumes-from. And the second is starting both logviewer and supervisor inside a single container.

I'm thinking of creating an image variant with the logviewer included. RabbitMQ does something similar with their management variant.
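A rough sketch of the first approach (container names hypothetical; -v /logs makes the log dir an anonymous volume so --volumes-from can share it):

```console
$ docker run -d --name supervisor -v /logs 31z4/storm:1.0.2 storm supervisor
$ docker run -d -p 8000:8000 --volumes-from supervisor 31z4/storm:1.0.2 storm logviewer
```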

7 participants