docker manageiq does not start after adding/removing a network interface - memcached connectivity #17274
I have a similar issue: I am not even able to restart the container after stopping it.
I faced the same issue with the manageiq docker image.
The default /manageiq/docker-assets/appliance-initialize.sh only starts memcached and postgresql if the DB has not been initialized (see line 18 of that script). I created a local appliance-initialize.sh that includes start commands for those services even when the DB is already initialized. My local Dockerfile COPYs this new file over the base image's default.
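The shape of the fix can be sketched as follows. This is NOT the actual appliance-initialize.sh; it only mirrors the structure described above. The `PV_DATA_DIR` variable and the placeholder `start_services` body are assumptions standing in for the real persistent-volume path and the real memcached/postgresql start commands.

```shell
#!/bin/bash
# Sketch: the stock script keys off existing DB data to decide whether to
# initialize, and (in the broken version) only starts memcached/postgresql
# inside that first-boot branch.
PV_DATA_DIR="${PV_DATA_DIR:-/persistent/pgdata}"   # assumed path
STARTED=""

start_services() {
  # Placeholders for the real memcached and postgresql start commands.
  STARTED="memcached postgresql"
  echo "starting: $STARTED"
}

if [ -d "$PV_DATA_DIR" ]; then
  echo "** DB already initialized"
else
  echo "== Initializing MIQ database =="
  # one-time DB setup would go here
fi

# The fix: service startup lives OUTSIDE the first-boot branch, so a
# restarted container gets memcached (and postgresql) again.
start_services
```

The point of the restructure is that `start_services` runs unconditionally, whereas the original only started the services on the first-boot path.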
Allowing to restart MIQ Container after stopping it (cf. ManageIQ#17274 (comment))
I just pulled today and got the same issue on restart.
You need to change the file in two locations in the docker container: /usr/bin and /var/www/miq/vmdb/docker-assets. First spin up manageiq with docker, then use docker cp to copy the original file to the host, modify it as per Guilrom's comment, and copy it back into the container, again using docker cp. Once it is copied back, restart the container. It should work; I tested this approach and it works fine.
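The workflow above looks roughly like this. It requires a running Docker daemon; the container name `manageiq` and the image tag are examples, while the in-container paths come from the comment above (the exact filename under /usr/bin may differ).

```shell
# Start the container (image tag is an assumption)
docker run -d --name manageiq -p 8443:443 manageiq/manageiq:hammer-10

# Copy the script out to the host
docker cp manageiq:/var/www/miq/vmdb/docker-assets/appliance-initialize.sh .

# ... edit appliance-initialize.sh as per Guilrom's comment ...

# Copy the modified script back to BOTH locations
docker cp appliance-initialize.sh manageiq:/var/www/miq/vmdb/docker-assets/appliance-initialize.sh
docker cp appliance-initialize.sh manageiq:/usr/bin/appliance-initialize.sh

# Restart so the modified script takes effect
docker restart manageiq
```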
After making the above changes, httpd does not start; we have to start it manually to access the ManageIQ web page: /usr/sbin/httpd -DFOREGROUND & I think we are missing some config here; not sure how to find it, still checking. This only happens with the docker images. The VMware image (OVA or OVF) works fine.
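As a workaround for the httpd problem, the manual start mentioned above can be done from the host without attaching to the container. The container name `manageiq` is an assumption; the httpd path comes from the comment above.

```shell
# Launch httpd in the background inside the already-running container.
# "-d" keeps docker exec detached; httpd itself stays in the foreground
# inside the container, which is what -DFOREGROUND requests.
docker exec -d manageiq /usr/sbin/httpd -DFOREGROUND
```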
This issue has been automatically marked as stale because it has not been updated for at least 6 months. If you can still reproduce this issue on the current release or on [...] Thank you for all your contributions!
Hello, I have the same issue ... is there any progress? Thanks,
I could not fix it. I tried a lot of options but no luck.
> On Tuesday, August 20, 2019, 06:54:53 PM GMT+5:30, Michal Arbet <[email protected]> wrote: Hello, I have the same issue ... is there any progress? Thanks, Michal
@eselvam none. This docker image is a one-shot launch... ;-) After you stop it, you cannot start it again. And they don't care. Tested with hammer-6 and hammer-10.
@carbonin can you take a look at this?
To do this I overwrote the entrypoint from the base image with what is mostly the previous appliance-initialize script. The main changes I made were to add the server start at the end, and to remove the references to the old container scripts by pasting the v2 key writing function in where it was previously called. Additionally, I removed starting memcached from the block that only gets called if the database doesn't exist; we should start memcached regardless. This should allow the container to be started after a clean stop. Fixes ManageIQ#17274
After #19463 is merged this should work for the most part. The only issue I was still having: I think that even if you wait for the server to stop cleanly, the other processes in the container are killed uncleanly, which I feel is what's causing the issue I'm seeing with [...]
I launched the latest docker manageiq (20180404) container.
It was working fine.
I had the "strange" idea to change the network configuration, adding a second network interface and then removing it (docker network connect / docker network disconnect).
Now the container does not start, and I don't know why, since the network conf looks the same as before.
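For reference, the steps that led to the broken state look roughly like this. It requires a running Docker daemon; the container name `manageiq` and the network name `extra-net` are examples, not names from the original report.

```shell
# Attach a second network interface to the running container, then remove it
docker network create extra-net
docker network connect extra-net manageiq
docker network disconnect extra-net manageiq

# After a stop, the container no longer comes back up cleanly
docker stop manageiq
docker start manageiq
docker logs manageiq   # shows the memcached connection errors below
```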
The log error is:
Error something seems wrong, we need at least two parameters to check service status
== Checking MIQ database status ==
** DB already initialized
{"@timestamp":"2018-04-05T15:47:32.690234 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for evm.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.690894 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for vim.log has been changed to [WARN]"}
{"@timestamp":"2018-04-05T15:47:32.691549 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for rhevm.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.692014 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for aws.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.692380 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for kubernetes.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.692814 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for datawarehouse.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.693234 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for container_monitoring.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.693607 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for scvmm.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.693993 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for api.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.694350 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for fog.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.694738 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for azure.log has been changed to [WARN]"}
{"@timestamp":"2018-04-05T15:47:32.695077 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for lenovo.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.695500 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for websocket.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.695935 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for vcloud.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:32.696293 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(Vmdb::Loggers.apply_config) Log level for nuage.log has been changed to [INFO]"}
{"@timestamp":"2018-04-05T15:47:33.068484 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"info","message":"MIQ(SessionStore) Using session_store: ActionDispatch::Session::MemCacheStore"}
{"@timestamp":"2018-04-05T15:47:33.451305 ","hostname":"manageiq","pid":7,"tid":"3a7140","level":"warning","message":"127.0.0.1:11211 failed (count: 0) Errno::ECONNREFUSED: Connection refused - connect(2) for "127.0.0.1" port 11211"}
/usr/local/lib/ruby/gems/2.3.0/gems/dalli-2.7.6/lib/dalli/ring.rb:45:in `server_for_key': No server available (Dalli::RingError)
	from /usr/local/lib/ruby/gems/2.3.0/gems/dalli-2.7.6/lib/dalli/client.rb:236:in `alive!'
	from /usr/local/lib/ruby/gems/2.3.0/gems/dalli-2.7.6/lib/rack/session/dalli.rb:19:in `initialize'
	from /usr/local/lib/ruby/gems/2.3.0/gems/actionpack-5.0.6/lib/action_dispatch/middleware/session/abstract_store.rb:32:in `initialize'
It looks like a memcached network connectivity problem.
What should I modify to make it start again?
Regards.