
Suggestion - avoid pm2 in docker #2820

Closed
ksemaev opened this issue Jul 16, 2019 · 14 comments

Comments

@ksemaev

ksemaev commented Jul 16, 2019

Running pm2 inside Docker is an unstable and awkward setup (see, for example, https://stackoverflow.com/questions/51191378/what-is-the-point-of-using-pm2-and-docker-together).

Can you please provide instructions on how to run the process without pm2? With plain node, for example?

@ghost

ghost commented Jul 16, 2019

Thanks for opening this issue! A maintainer will review this in the next few days and explicitly select labels so you know what's going on.

If no reviewer appears after a week, a reminder will be sent out.

@ksemaev
Author

ksemaev commented Jul 16, 2019

From pm2 show I see that I can actually start the app with:

│ script path       │ /home/node/.config/yarn/global/node_modules/@arkecosystem/core/bin/run  │
│ script args       │ relay:run --suffix=relay --env=production --token=ark --network=mainnet │

@faustbrian
Contributor

The post you linked already answers your question: if you start a Node.js script directly and it crashes, the whole container goes down with it, and you need to set up restart policies to bring it back up in such cases.

pm2 is a no-brainer to run both inside and outside of Docker for our use case, so that is what we are going with; it doesn't require any special Docker configuration to keep the process running if it crashes.
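
For reference, if you did want to drop pm2, the plain-Docker equivalent is a restart policy on the service; a minimal docker-compose sketch (the service name and image tag here are placeholders, not our actual setup):

```yaml
version: "2"
services:
  core-relay:                         # placeholder service name
    image: arkecosystem/core:latest   # placeholder image tag
    restart: unless-stopped           # Docker restarts the container when the process crashes
```

With pm2 inside the container, none of this is needed to survive a process crash.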

@ghost

ghost commented Jul 17, 2019

This issue has been closed. If you wish to re-open it please provide additional information.

@ksemaev
Author

ksemaev commented Jul 17, 2019

@faustbrian the Docker convention is a single-process app per container: you let the container die when the process dies, and use docker-compose/Kubernetes/whatever to handle those cases. The current Dockerfile is not production-ready; it's a sandbox toy to play with and sadly can't be considered for any serious infrastructure. I would like to help with this one, but we need that desire from both of us :)

@ksemaev
Author

ksemaev commented Jul 17, 2019

I removed the pm2 part from the Dockerfile like this:

RUN apk add --no-cache --virtual .build-deps make gcc g++ python git && \
    apk add --no-cache bash sudo git openntpd openssl && \
    su node -c "yarn global add @arkecosystem/core@$VERSION" && \
    su node -c "yarn cache clean" && \
    apk del .build-deps && \
    rm -rf /home/node/.config/ark-core/* && \
    rm -rf /home/node/.local/state/ark-core/* && \
    ln -s /home/node/.yarn/bin/ark /usr/bin/ark && \
    chown node:node -R /home/node

And I replaced the start script with ark relay:run --suffix=relay --env=production --token=ark --network=mainnet, so this one is sorted out.

TY for providing the relay:run script
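
For completeness, the pm2-free start boils down to something like this in the Dockerfile (a sketch of my setup, not the official image; the base image line is illustrative):

```dockerfile
# Sketch: start the relay directly, without pm2 (base image is illustrative)
FROM node:12-alpine
USER node
# The container now lives and dies with the relay process itself;
# restarts are left to the orchestrator (compose/Kubernetes).
ENTRYPOINT ["ark", "relay:run", "--suffix=relay", "--env=production", "--token=ark", "--network=mainnet"]
```

Using the exec form of ENTRYPOINT means the relay runs as PID 1 and receives stop signals directly.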

@adrian69
Collaborator

adrian69 commented Jul 17, 2019

@ksemaev

In our implementation, those lines:

   rm -rf /home/node/.config/ark-core/* && \
   rm -rf /home/node/.local/state/ark-core/* && \
   ln -s /home/node/.yarn/bin/ark /usr/bin/ark && \
   chown node:node -R /home/node 

have to be run by entrypoint.sh as those paths are mounted as volumes on container start. We need to keep things compatible for everyone and not target only large scale container deployments.
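
Roughly, the idea is that these steps move out of the image build and into the container start, after the volumes are mounted (a sketch of the idea, not the actual entrypoint.sh):

```shell
#!/usr/bin/env sh
# Sketch of an entrypoint: runs on every container start, after volumes
# are mounted, so the mounted paths (not the image's baked-in copies)
# are the ones being prepared.
rm -rf /home/node/.config/ark-core/*
rm -rf /home/node/.local/state/ark-core/*
ln -sf /home/node/.yarn/bin/ark /usr/bin/ark
chown node:node -R /home/node

# Hand off to the main process so signals reach it
exec "$@"
```

If the same lines stayed in the Dockerfile's RUN, they would only affect the image layers, which the volume mounts then shadow at runtime.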

@ksemaev
Author

ksemaev commented Jul 17, 2019

@adrian69 TY for explaining, now I see the reason. Can you please tell me why you use volumes for this? What information should persist across container restarts?
I see those lines here: https://github.com/ArkEcosystem/core/blob/master/docker/production/mainnet/docker-compose.yml#L32-L34 but I can't find a doc explaining why you need them as volumes.

My current docker container has restarted ~10 times (as I updated the configs) and seems to work fine without any data stored on volumes.

@adrian69
Collaborator

adrian69 commented Jul 17, 2019

@ksemaev

Same reason @faustbrian mentioned in:
#2818

Our goal is for the user experience to be as close as possible to that of a native core node. That said, logs need to be accessible locally.

@adrian69
Collaborator

@ksemaev

You are free to build your own images if those provided by ARK do not fit your environment. Basically, this is why we provided:
https://github.com/ArkEcosystem/core/blob/master/docker/production/mainnet/docker-compose-build.yml

@ksemaev
Author

ksemaev commented Jul 17, 2019

@adrian69 sorry, I just don't get it. Do those locations store the logs?

    volumes:
     - ~/.config/ark-core:/home/node/.config/ark-core
     - ~/.local/share/ark-core:/home/node/.local/share/ark-core
     - ~/.local/state/ark-core:/home/node/.local/state/ark-core

Can you please point me to the doc where I can find what is supposed to be there?

@adrian69
Collaborator

@ksemaev

That is the global path and naming convention used by ARK. From the end user's point of view, we try to keep things as simple as possible so the code can run on any computer. In general, one runs the code with Docker and can find everything locally, as if running a native node. It doesn't mean those paths are required to be there; it means this is our implementation, and we have certain reasons to build it that way. Everyone is free to have their own implementation and not necessarily use the images provided by ARK. I'm sure a professional like you will find a way to make it run in a production containerized environment.

@ksemaev
Author

ksemaev commented Jul 17, 2019

@adrian69 ah, so you mean running with Docker and without it ON THE SAME INSTANCE? Now I think I get it, sorry that it took me so long :)

Would you consider PRs for Kubernetes-ready Dockerfiles, or do you not need that? I have already built my own images and use them; I was only wondering why you had that weird scheme. The only thing left is to understand why it doesn't start listening on the API port.

@adrian69
Collaborator

@ksemaev

ON THE SAME INSTANCE would generally not be applicable because of port conflicts when running multiple nodes behind a single public IP address (unless you use different floating IPs per container and map external ports to each container's internal address), if I got you right. As for Kubernetes-ready stuff, I'll have to discuss it with my colleagues before answering.
