
Timed out while waiting for the machine to boot #388

Closed · gotlium opened this issue Sep 14, 2015 · 18 comments

@gotlium commented Sep 14, 2015

root@gotlium-kvm# vagrant plugin install vagrant-lxc
Installing the 'vagrant-lxc' plugin. This can take a few minutes...
Installed the plugin 'vagrant-lxc (1.1.0)'!

root@gotlium-kvm# vagrant up --provider=lxc
Bringing machine 'default' up with 'lxc' provider...
==> default: Box 'fgrehm/trusty64-lxc' could not be found. Attempting to find and install...
    default: Box Provider: lxc
    default: Box Version: >= 0
==> default: Loading metadata for box 'fgrehm/trusty64-lxc'
    default: URL: https://atlas.hashicorp.com/fgrehm/trusty64-lxc
==> default: Adding box 'fgrehm/trusty64-lxc' (v1.2.0) for provider: lxc
    default: Downloading: https://atlas.hashicorp.com/fgrehm/boxes/trusty64-lxc/versions/1.2.0/providers/lxc.box
==> default: Successfully added box 'fgrehm/trusty64-lxc' (v1.2.0) for 'lxc'!
==> default: Importing base box 'fgrehm/trusty64-lxc'...
==> default: Setting up mount entries for shared folders...
    default: /vagrant => /usr/src/vm
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.

Tested on a fresh Ubuntu Server 14.04 under KVM.

# lxc-ls -f
NAME          STATE    IPV4  IPV6  AUTOSTART  
--------------------------------------------
lp-translate  STOPPED  -     -     NO   

It happens when I use pre-defined variables for memory. Example:

APP_MEMORY = "#{ENV['VM_MEMORY'] || '2048'}"
config.vm.provider :lxc do |lxc, override|
    lxc.customize 'cgroup.memory.limit_in_bytes', APP_MEMORY
end
@ccope (Contributor) commented Sep 15, 2015

You'll need to run

sudo lxc-start -n lp-translate --log-level=debug --logpriority=debug --logfile start.log

and then post the contents of start.log (you have to log to the file; --log-level won't change the output to the console). Your container could have failed to start for many reasons, depending on the contents of your Vagrantfile/container config.

@gotlium (Author) commented Sep 15, 2015

The problem occurs when I use pre-defined variables; see the example in my earlier comment.

APP_MEMORY = "#{ENV['VM_MEMORY'] || '2048'}"
config.vm.provider :lxc do |lxc, override|
    lxc.customize 'cgroup.memory.limit_in_bytes', APP_MEMORY
end

Configure your Vagrantfile as in my example and you can reproduce the problem.

@gotlium (Author) commented Sep 15, 2015

I enabled debug logging:

export VAGRANT_LOG=DEBUG

The output is here: https://gist.github.com/gotlium/9e1babce8b09afd9c420

@globin (Contributor) commented Sep 15, 2015

Please follow @ccope's suggestion so we can see the lxc error message.

@ccope (Contributor) commented Sep 15, 2015

A value of 2048 will limit the container to 2KB of memory, which is too small to start properly. Add a suffix, like '2048M' or '2G'.
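
For example, the snippet from the original report would become (an untested sketch, reusing the APP_MEMORY variable from above):

APP_MEMORY = "#{ENV['VM_MEMORY'] || '2048'}"
config.vm.provider :lxc do |lxc, override|
    # the 'M' suffix makes the cgroup treat the value as megabytes, not bytes
    lxc.customize 'cgroup.memory.limit_in_bytes', "#{APP_MEMORY}M"
end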

@gotlium (Author) commented Sep 16, 2015

I think it's a bug, because it works properly with other providers (VirtualBox, Parallels). But as I see it, you use lxc.customize 'cgroup.memory.limit_in_bytes' instead of something like lxc.memory.

@fgrehm (Owner) commented Sep 16, 2015

Well, that config block is provider specific, and each provider will have its own way of doing things. Users should not expect them to have the same behavior (even though vbox and parallels might happen to work the same way).

If we had a top level memory config, like:

Vagrant.configure("2") do |config|
  config.vm.memory = '2048'

  # instead of
  # config.vm.provider :lxc do |lxc, override|
  #    lxc.customize 'cgroup.memory.limit_in_bytes', APP_MEMORY
  # end
end

Then yes, I'd consider it a bug.

HTH 🍻

@gotlium (Author) commented Sep 16, 2015

I use 4 providers in my Vagrantfile: VirtualBox for Windows, Parallels for OS X, and LXC/KVM for Linux.
What I mean is: why use bytes? Why not default to megabytes and the standard directives? I understand customizing cgroups directly when I need to, but by default a simple variable like lxc.vm.memory would do:

config.vm.provider :lxc do |lxc, override|
   lxc.vm.memory = APP_MEMORY
end

That would be easy and match the standard behavior of many providers.

"Explicit is better than implicit." (The Zen of Python)

I found another way to do it (it works):

lxc.customize 'cgroup.memory.limit_in_bytes', APP_MEMORY.to_i*1024*1024

Logs

lxc-start 1442419137.444 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1245 - cgroup 'memory.limit_in_bytes' set to '2147483648'
lxc-start 1442419137.444 INFO     lxc_cgmanager - cgmanager.c:cgm_setup_limits:1249 - cgroup limits have been setup

@ccope (Contributor) commented Sep 16, 2015

APP_MEMORY + 'M' would also work.

@gotlium (Author) commented Sep 16, 2015

I don't understand why you won't add standard directives for CPU and memory.

PS: I realize CPU is not as easy, but if you handle it in the plugin config it will help all users and match the standard behavior.

# for cpu:
lxc.customize 'cgroup.cpuset.cpus', sprintf("0-%d", CT_CPUS.to_i-1)
# and for memory:
lxc.customize 'cgroup.memory.limit_in_bytes', CT_MEMORY.to_i*1024*1024
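
Put together, a provider block using these might look like this (a sketch; the CT_CPUS and CT_MEMORY defaults are my assumptions):

CT_CPUS   = ENV['VM_CPUS']   || '2'    # assumed env var, like VM_MEMORY above
CT_MEMORY = ENV['VM_MEMORY'] || '2048' # megabytes

config.vm.provider :lxc do |lxc, override|
  # pin the container to cores 0..N-1
  lxc.customize 'cgroup.cpuset.cpus', sprintf("0-%d", CT_CPUS.to_i - 1)
  # convert megabytes to bytes for the cgroup limit
  lxc.customize 'cgroup.memory.limit_in_bytes', CT_MEMORY.to_i * 1024 * 1024
end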

If you really can't or won't, feel free to close this issue.

@globin (Contributor) commented Sep 16, 2015

We'd appreciate a PR implementing this in a way that is compatible with other providers. 🍻

@ccope (Contributor) commented Sep 16, 2015

For CPU in particular, it is hard to match other providers because container scheduling works differently from VMs. You can either pin a container to particular cores or set a relative priority. You can't create 3 machines, each with 1 CPU core, without explicitly assigning a specific core to each container.
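
To illustrate the two options with the existing customize hook (an untested sketch; the core range and share value are arbitrary):

config.vm.provider :lxc do |lxc, override|
  # option 1: pin the container to specific cores (here, cores 0 and 1)
  lxc.customize 'cgroup.cpuset.cpus', '0-1'
  # option 2: a relative scheduling weight instead
  # (the cgroup v1 default is 1024, so 512 is half priority under contention)
  lxc.customize 'cgroup.cpu.shares', '512'
end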

@gotlium (Author) commented Sep 16, 2015

I understand. You could document this option in the wiki. :)

@kadiiskiFFW

I had the same problem (Ubuntu 14.04, using Cibox). In my case the NFS server wasn't started. Try sudo service nfs-kernel-server start and then vagrant up; that worked for me.

@ameyaagashe

Any idea when this is going to be fixed?

@richard-scott

I get this far every now and then:

==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

However, after this error the "config.vm.provision" sections in my Vagrantfile fire up and run just fine. I use Ansible to provision the server, and it can SSH in without any problem.

@richard-scott commented Feb 6, 2017

Ah, apparently adding this to your Vagrantfile may help for VirtualBox instances:

vb.customize ["modifyvm", :id, "--cableconnected1", "on"]

Change the number (1) to the number of the interface you want to apply this to.

ref: chef/bento#682

@ccope (Contributor) commented Feb 6, 2017

@richard-scott I don't think your comments are related to the original issue here; VirtualBox and LXC are unrelated.

Closing this issue because I don't think there's anything actionable left.
