This repository has been archived by the owner on Feb 11, 2022. It is now read-only.

Instance creation hangs on "Waiting for SSH to become available..." #3

Closed
patrickdlee opened this issue Mar 15, 2013 · 28 comments

Comments

@patrickdlee

I'm using Vagrant 1.1.0 and v0.1.0 of the vagrant-aws plugin. When I try to create an EC2 instance, the creation process gets hung up on "Waiting for SSH to become available...". However, the instance is created and I can see it in my EC2 Dashboard.

Here is my (redacted) Vagrantfile...

Vagrant.configure("2") do |config|
  config.vm.define :testbox do |testbox|
    testbox.vm.box = 'dummy'
    testbox.ssh.username = 'ubuntu'

    testbox.vm.provider :aws do |aws|
      aws.access_key_id = 'KEY'
      aws.secret_access_key = 'ACCESS_KEY'
      aws.keypair_name = 'vagrant-east1'

      aws.ssh_private_key_path = '/home/patrick/.ssh/vagrant-east1.pem'
      aws.ssh_username = 'ubuntu'
      aws.region = 'us-east-1'
      aws.ami = 'ami-de0d9eb7'
      aws.instance_type = 't1.micro'

      aws.tags = {
        Name: 'Vagrant AWS Precise'
      }
    end
  end
end

And here is the output I'm seeing...

patrick@patrick-pangolin:/opt/vagrant-aws$ vagrant up --provider=aws
Bringing machine 'testbox' up with 'aws' provider...
[testbox] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[testbox] Launching an instance with the following settings...
[testbox]  -- Type: t1.micro
[testbox]  -- AMI: ami-de0d9eb7
[testbox]  -- Region: us-east-1
[testbox]  -- Keypair: vagrant-east1
[testbox] Waiting for instance to become "ready"...
[testbox] Waiting for SSH to become available...

I've tried this in the "us-west-2" and "us-east-1" regions with the same result. I must be doing something wrong, but I have no idea what it is.

@mitchellh
Owner

It usually means that it can just never connect with SSH. Can you run it again with VAGRANT_LOG=debug and attempt to get maybe... 5 minutes of output? Or so.

@patrickdlee
Author

Sure, no problem. There's a lot of debugging output, but the following set of lines eventually repeats indefinitely...

DEBUG ssh: == Net-SSH connection debug-level log END ==
 INFO retryable: Retryable exception raised: #<Timeout::Error: execution expired>
 INFO ssh: Attempting to connect to SSH: ec2-54-242-252-225.compute-1.amazonaws.com:22
DEBUG ssh: == Net-SSH connection debug-level log START ==
DEBUG ssh: D, [2013-03-15T16:35:34.736090 #27321] DEBUG -- net.ssh.transport.session[2b120f4]: establishing connection to ec2-54-242-252-225.compute-1.amazonaws.com:22

Looks like your diagnosis is correct. Any advice for how to resolve the problem? And did you want the full debug output? There's a lot, so I decided to copy the relevant bits from the end.

@mitchellh
Owner

You got the meat I would've found. This is likely bad security groups blocking SSH.

@patrickdlee
Author

Right again. I was using the "default" security group, but I created another one called "vagrant" and opened up port 22 to the universe. Then I added this line in the AWS provider section of my Vagrantfile...

aws.security_groups = [ 'vagrant' ]

The instance was created without any issue and I was able to SSH into it immediately. Thanks so much for your help! I'm actually doing a talk on Vagrant and Puppet tomorrow afternoon at Boise Code Camp, so I'm really glad this example is working now. :)
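
In case it helps anyone landing here later, this is roughly where that line goes. A minimal sketch based on the Vagrantfile above, assuming the 'vagrant' group already exists in the EC2 console with inbound TCP 22 allowed:

    testbox.vm.provider :aws do |aws|
      # ...access keys, keypair, region, AMI and instance type as above...

      # The 'vagrant' group must allow inbound SSH (TCP 22) from your IP;
      # for non-VPC (EC2-Classic) launches this is the group name, not an sg-... ID.
      aws.security_groups = ['vagrant']
    end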

@tralamazza
Collaborator

I have a similar problem, but my setup involves a VPC. Vagrant seems unable to connect via SSH. I have the correct security group in my Vagrantfile, and I can manually SSH into the instance from another terminal.

DEBUG ssh: == Net-SSH connection debug-level log END ==
 INFO ssh: SSH not up: #<Vagrant::Errors::SSHConnectionRefused: SSH connection was refused! This usually happens if the VM failed to
boot properly. Some steps to try to fix this: First, try reloading your
VM with `vagrant reload`, since a simple restart sometimes fixes things.
If that doesn't work, destroy your VM and recreate it with a `vagrant destroy`
followed by a `vagrant up`. If that doesn't work, contact a Vagrant
maintainer (support channels listed on the website) for more assistance.>
DEBUG ssh: Checking whether SSH is ready...
 INFO machine: Calling action: read_ssh_info on provider AWS (i-589bcf12)
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 1 hooks defined.
 INFO runner: Running action: #<Vagrant::Action::Builder:0x00000003bab7f8>
 INFO warden: Calling action: #<Vagrant::Action::Builtin::ConfigValidate:0x00000003be5188>
 INFO warden: Calling action: #<VagrantPlugins::AWS::Action::ConnectAWS:0x00000003be5160>
 INFO connect_aws: Connecting to AWS...
 INFO warden: Calling action: #<VagrantPlugins::AWS::Action::ReadSSHInfo:0x00000003d24030>
DEBUG ssh: Checking key permissions: /home/tralamaz/vms/release/vagrant.pem
 INFO ssh: Attempting SSH. Retries: 100. Timeout: 30
 INFO ssh: Attempting to connect to SSH: :22
DEBUG ssh: == Net-SSH connection debug-level log START ==
DEBUG ssh: D, [2013-03-27T19:39:13.346218 #9378] DEBUG -- net.ssh.transport.session[205a83c]: establishing connection to :22

DEBUG ssh: == Net-SSH connection debug-level log END ==
 INFO retryable: Retryable exception raised: #<Errno::ECONNREFUSED: Connection refused - connect(2)>
 INFO ssh: Attempting to connect to SSH: :22
DEBUG ssh: == Net-SSH connection debug-level log START ==
DEBUG ssh: D, [2013-03-27T19:39:13.349545 #9378] DEBUG -- net.ssh.transport.session[20f1854]: establishing connection to :22

ssh -i vagrant.pem ubuntu@... works

@LeoNogueira

I had a similar issue, but solved it by changing the ownership of the key pair. It just worked. Great job with Vagrant!

@tralamazza
Collaborator

I fixed my problem with this PR #30
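
For later readers hitting the same symptom in a VPC (note the debug log above is connecting to ":22", i.e. an empty host, presumably because the instance has no public DNS name): newer releases of the plugin expose an ssh_host_attribute option that selects which instance attribute is used as the SSH host. A sketch, assuming your machine can reach the instance's private IP; the subnet ID is a placeholder:

    config.vm.provider :aws do |aws, override|
      aws.subnet_id = 'subnet-xxxxxxxx'   # placeholder VPC subnet
      # Use the instance's private IP as the SSH host instead of the
      # (possibly empty) public DNS name; check the plugin README for
      # the releases that support this option.
      aws.ssh_host_attribute = :private_ip_address
    end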

@myttux10

Mine was an iptables issue, solved by allowing incoming connections on the loopback interface.

@javier-lopez

Hi, I'm new to Vagrant, and so far the experience in general has been great, but I think the AWS plugin can improve. I'm running Ubuntu and the default instructions don't work: the plugin depends on fog, which requires installing some extra dependencies (issue #163). Once I got it installed correctly it still didn't work; I was hit by this bug (not sure why it was closed without a fix). In my opinion, the plugin should work out of the box: it should create and use a default security group (if none is provided) that allows controlling the virtual machine Vagrant has just created. It's kind of funny that it can create and launch the EC2 instance but can't connect to it =)

Later I was able to connect to the VM by adding another security group in the EC2 dashboard, but I think it would be cooler if vagrant-aws handled the whole enchilada.

Note: it seems such a report already exists as #95 (Feature request: create a "vagrant" security group if none is specified).

@zelig
Contributor

zelig commented Feb 12, 2014

I created a 'vagrant' security group with SSH inbound enabled for all IPs, then I put

aws.security_groups = [ 'vagrant' ]

in the Vagrantfile provider section.
It still hangs on vagrant up:

[default] Waiting for SSH to become available...

I checked the EC2 dashboard and the created instance IS running with the security group vagrant.

Can anyone tell me what I am missing?

@mauriciopiber

same issue

@mauriciopiber

Hi guys, what I got from this issue is that the private key (*.pem) must be owned by the same user that is running Vagrant. For example, if you run Vagrant as root, the owner of the *.pem file must be root too; if you are running it as your own user (e.g. "piber"), then "piber" must be the owner, and so on.

@dman-coders

I'm getting the same. I did figure out that a security group was needed, and used one of my own that I'd prepared long ago through the EC2 console. I eventually worked out that an 'array' meant [brackets] and that it wanted the security group description rather than the ID:

  aws.security_groups = ["basic ports open"]

HOWEVER, it's still hanging.
I can connect directly to the running instance using the instructions the EC2 console gives when I press 'connect', e.g.

ssh -i dmanAWS.pem [email protected]

does log me in fine, so the ports and the instance are fine.
I can't see that it's a file-ownership thing; I know SSH can be picky about that, so I did check, but the .pem file is mine, readable only by me, and the parent directories are secure as well.
...
.. and ah, found it: the path to the .pem file was incorrect. I'd actually pointed it at my normal private RSA key, not the .pem file I needed.

Righty. I wish it had told me the key was rejected rather than just hanging there forever.
Got it now.

The sanitized examples in the quickstart had me guessing a lot - could you provide slightly more illustrative example strings?

override.ssh.private_key_path = "PATH TO YOUR PRIVATE KEY" # eg "~/yourkey.pem"
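
Something along these lines in the quickstart would remove the guesswork; the keypair name and path below are illustrative placeholders, not values from this thread:

    config.vm.provider :aws do |aws, override|
      # The keypair name exactly as it appears in the EC2 console...
      aws.keypair_name = "my-keypair"
      # ...and the matching AWS-generated .pem file on disk (not your
      # everyday ~/.ssh/id_rsa).
      override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
    end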

@mmell

mmell commented Mar 24, 2014

Same problem here when using my personal (RSA) key pair. I resolved it by using the AWS-generated .pem file in my override.ssh.private_key_path. (Thanks, @dman-coders.)

@flare-ws

I've seen similar behavior when preseed.cfg is unreachable from the VM. Maybe you should ensure the template directory contains this file.

@tnj

tnj commented Jan 14, 2015

If you are using a Mac, your private key has a passphrase, and you have stored that passphrase in the keychain (with ssh-add -K), you have to place the public key (.pub file) in the same directory as the private key.

I couldn't figure out why this happens, but according to the debug log, ssh couldn't read the private key. With some googling I eventually found this Ask Different answer, so I put the corresponding public key there and the problem disappeared.

@riebling

riebling commented Jul 7, 2015

I think if you follow the instructions AWS gives at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/building-shared-amis.html on securing your AMI, where they tell you to disable root login with

sudo passwd -l root

and

sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub

this breaks Vagrant's ability to log in as root in order to establish the first SSH connection. (But I may not completely understand what is happening 'under the hood'.) If anyone can confirm this it would help, because I just tried those suggestions and am in the same boat, waiting forever for SSH, only this time I know what I did to break it. :)

@jeffisenhart

I had the same issue (Waiting for SSH to become available...), but my problem was that the admin username in config.yaml was
username: admin
instead of
username: ubuntu
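
In plain Vagrantfile terms this is the SSH username setting, which has to match the default user baked into the AMI or the plugin will retry SSH forever. A sketch using the common defaults (ubuntu for Ubuntu AMIs, ec2-user for Amazon Linux):

    config.vm.provider :aws do |aws, override|
      # Must match the AMI's default user, otherwise authentication never
      # succeeds and vagrant sits at "Waiting for SSH to become available...".
      override.ssh.username = "ubuntu"      # Ubuntu AMIs
      # override.ssh.username = "ec2-user"  # Amazon Linux AMIs
    end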

@mattplindsay

I'm still having issues with this. Can anyone confirm whether it's the security group ID I need or the description, and whether I need single or double quotes in the array (I've seen examples of both)? I'm using Windows.

@giappv

giappv commented Apr 13, 2016

Please check your BIOS settings to see whether your computer allows 64-bit virtualization.

@stunney

stunney commented Jun 13, 2016

I too am hitting this. The host OS is Windows 10. I'm trying to start up a Windows Server 2012 R2 box based entirely on the Packer scripts, with only changes for my license key and the ISO location on disk.

@r-2st

r-2st commented Sep 29, 2016

Wanted to throw out my thanks for this thread. Ran into the same problem of not being able to connect to the instance and changing the default security group as mentioned above fixed my issue.

@adnelson

adnelson commented Jan 6, 2017

I was having this issue as well. My company's security policies prevent me from creating an instance that has port 22 open to the world, but I can tunnel through another AWS instance to reach the instance that Vagrant brings up. I have an SSH config file (~/.ssh/config) that sets this tunnel up automatically for any EC2 connection, but Vagrant doesn't seem to be reading it. Is there a way to make vagrant-aws read the SSH config file, or some other way to set extra options on its SSH command? I didn't see anything like this in the README.
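
One possible workaround, since Vagrant exposes a proxy_command setting of its own, is to put the tunnel into the Vagrantfile rather than relying on ~/.ssh/config being picked up. A sketch, assuming a reachable bastion host; bastion.example.com, the key path, and the ec2-user login are placeholders:

    config.vm.provider :aws do |aws, override|
      # Route the connection through a bastion; ssh expands %h and %p to
      # the target instance's host and port.
      override.ssh.proxy_command =
        "ssh -W %h:%p -i ~/.ssh/bastion.pem ec2-user@bastion.example.com"
    end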

@milind2

milind2 commented Sep 7, 2017

Hello guys,

I am facing a similar problem. Please help me resolve this error.

[root@ip-172-31-19-169 vagrant_test]# vagrant up --provider=aws
Bringing machine 'default' up with 'aws' provider...
==> default: Warning! The AWS provider doesn't support any of the Vagrant
==> default: high-level network configurations (config.vm.network). They
==> default: will be silently ignored.
==> default: Warning! You're launching this instance into a VPC without an
==> default: elastic IP. Please verify you're properly connected to a VPN so
==> default: you can access this machine, otherwise Vagrant will not be able
==> default: to SSH into it.
==> default: Launching an instance with the following settings...
==> default: -- Type: m3.medium
==> default: -- AMI: ami-c998b6b2
==> default: -- Region: us-east-1
==> default: -- Keypair: guruom_northv
==> default: -- Subnet ID: subnet-70d7a338
==> default: -- Security Groups: ["sg-955540e5"]
==> default: -- Block Device Mapping: []
==> default: -- Terminate On Shutdown: false
==> default: -- Monitoring: false
==> default: -- EBS optimized: false
==> default: -- Source Destination check:
==> default: -- Assigning a public IP address in a VPC: true
==> default: -- VPC tenancy specification: default
==> default: Waiting for instance to become "ready"...
==> default: Waiting for SSH to become available...

Execution gets stuck here and does not move forward.

My Vagrantfile is below:

Vagrant.configure("2") do |config|
config.vm.box = "dummy"

config.vm.provider :aws do |aws, override|
aws.access_key_id = "AKIAIETI6RM6RKS6XMEQ"
aws.security_groups = "sg-955540e5"
aws.subnet_id = "subnet-70d7a338"
aws.associate_public_ip = true
aws.secret_access_key = "4C0TIFMBIJs0/DR00aLiJysnUBUeW4pqiYBx2eX/"
aws.keypair_name = "guruom_northv"
aws.region = "us-east-1"
aws.ami = "ami-c998b6b2"

override.ssh.username = "ec2-user"
override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY']

end
end

@milind2

milind2 commented Sep 7, 2017

None of the above solutions works for me.

@softwareplumber

Me neither; I've tried three different AMI images.

If I launch image XXX from the Amazon console and set security group YYY, I can connect with SSH user WWW and key pair ZZZ; setting XXX, YYY, WWW, and ZZZ in the Vagrantfile creates an instance to which I cannot connect.

@jdelafon

For me it was changing override.ssh.username = "ubuntu" to another username that made this error happen. Either keep "ubuntu" or change the ownership of the key pair.

@umbe1987

> Right again. I was using the "default" security group, but I created another one called "vagrant" and opened up port 22 to the universe. Then I added this line in the AWS provider section of my Vagrantfile...
>
> aws.security_groups = [ 'vagrant' ]
>
> The instance was created without any issue and I was able to SSH into it immediately. Thanks so much for your help! I'm actually doing a talk on Vagrant and Puppet tomorrow afternoon at Boise Code Camp, so I'm really glad this example is working now. :)

Yeah, that did the trick for me too!

loveshack pushed a commit to loveshack/vagrant-aws that referenced this issue Jul 8, 2019
Get region value from provider.region, fix for a12ccea