
Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered #7561

Closed
ghost opened this issue Feb 14, 2019 · 7 comments · Fixed by #8483
Labels
bug — Addresses a defect in current functionality.
enhancement — Requests to existing resources that expand the functionality or scope.
service/elbv2 — Issues and PRs that pertain to the elbv2 service.

Comments

@ghost

ghost commented Feb 14, 2019

This issue was originally opened by @edeas123 as hashicorp/terraform#20345. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.11
+ provider.aws v1.54.0

Terraform Configuration Files

resource "aws_alb" "rancher-ctl-host-alb" {
  name               = "rancher-ctl-host-alb"
  internal           = false
  load_balancer_type = "application"
  ip_address_type    = "ipv4"

  subnets = ["${data.aws_subnet_ids.vpc_subnets.ids}"]

  security_groups = [
    "${data.terraform_remote_state.core.rancher-ctl-host-alb-sg-id}",
  ]
}

# create an application load balancer listener
resource "aws_alb_listener" "rancher-ctl-host-alb-listener" {
  load_balancer_arn = "${aws_alb.rancher-ctl-host-alb.arn}"
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_alb_target_group.rancher-ctl-host-target-group.arn}"
  }
}

# create the application load balancer target group
resource "aws_alb_target_group" "rancher-ctl-host-target-group" {
  name     = "rancher-ctl-host-target-group"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "${data.terraform_remote_state.core.default-vpc-id}"
}

# attach the three hosts to the target group
resource "aws_alb_target_group_attachment" "rancher-ctl-host-target-group-instances" {
  count            = 3
  target_group_arn = "${aws_alb_target_group.rancher-ctl-host-target-group.arn}"
  target_id        = "${aws_spot_instance_request.rancher-ctl-host.*.spot_instance_id[count.index]}"
  port             = 8080
}

Expected Behavior

It should not have given errors applying the plan

Actual Behavior

Error: Error applying plan:

3 error(s) occurred:

* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances[2]: 1 error(s) occurred:

* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances.2: Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered: 'i-0791f6bee8a082a10'
        status code: 400, request id: 990ee63b-3053-11e9-92bd-4d5e8013e613
* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances[0]: 1 error(s) occurred:

* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances.0: Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered: 'i-0b8c1f6d35f57c5cb'
        status code: 400, request id: 990f0dbc-3053-11e9-bbd2-3b6b83537945
* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances[1]: 1 error(s) occurred:

* aws_alb_target_group_attachment.rancher-ctl-host-target-group-instances.1: Error registering targets with target group: InvalidTarget: The following targets are not in a running state and cannot be registered: 'i-033d01746af85be03'
        status code: 400, request id: 990f3424-3053-11e9-aa19-bf12dffa0d2b

Additional Context

The problem seems to be that the aws_alb_target_group_attachment resources are created before the spot instances are running (although they have been created), and the resource does not support interpolation, which I would need in my case in order to use depends_on.
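As an interim workaround under these constraints, one option is to gate the attachments on the instances actually reaching the running state. The sketch below (an assumption, not part of the original report) uses a hypothetical null_resource that shells out to the AWS CLI's `aws ec2 wait instance-running` command, and requires the CLI to be available and credentialed where Terraform runs:

```hcl
# Hypothetical workaround sketch: block until each spot instance is
# in the "running" state before registering it with the target group.
resource "null_resource" "wait-for-running" {
  count = 3

  provisioner "local-exec" {
    command = "aws ec2 wait instance-running --instance-ids ${aws_spot_instance_request.rancher-ctl-host.*.spot_instance_id[count.index]}"
  }
}

# Replacement for the attachment resource above. depends_on accepts
# only static references (no interpolation), but a whole-resource
# reference is enough to order the attachments after the wait.
resource "aws_alb_target_group_attachment" "rancher-ctl-host-target-group-instances" {
  count            = 3
  target_group_arn = "${aws_alb_target_group.rancher-ctl-host-target-group.arn}"
  target_id        = "${aws_spot_instance_request.rancher-ctl-host.*.spot_instance_id[count.index]}"
  port             = 8080

  depends_on = ["null_resource.wait-for-running"]
}
```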

References

@nywilken
Contributor

Hi @edeas123 sorry to hear you are running into issues applying this plan. Have you tried reaching out to the community forums for assistance?

We use GitHub issues for tracking bugs and enhancements rather than for questions. While we may be able to help with certain simple problems here, it's generally better to use one of the community forums, where far more people are ready to help; the GitHub issues here are generally monitored only by the small set of code maintainers of each repository.

@edeas123

@nywilken are you saying this does not qualify as a bug?

@bflad
Contributor

bflad commented Feb 15, 2019

@edeas123 just to confirm your configuration, do you have wait_for_fulfillment enabled for the aws_spot_instance_request resources? Does a second terraform apply work? If so, this may be a case where we need to tell the aws_lb_target_group_attachment to retry on that specific message for a few minutes to allow the spot instances to go from pending to running.
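For context on the question above, a minimal sketch of a spot request with `wait_for_fulfillment` enabled (the AMI ID, instance type, and bid price here are hypothetical placeholders, not taken from the reporter's configuration):

```hcl
resource "aws_spot_instance_request" "rancher-ctl-host" {
  count         = 3
  ami           = "ami-12345678"  # hypothetical AMI ID
  instance_type = "t2.medium"     # hypothetical instance type
  spot_price    = "0.03"          # hypothetical bid

  # Makes `terraform apply` block until the spot request is fulfilled,
  # i.e. an instance has been launched. The instance may still be in
  # the "pending" state at that point, which is what can trigger the
  # InvalidTarget error when the target group attachment runs next.
  wait_for_fulfillment = true
}
```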

@edeas123

@bflad yes to both questions

@nywilken
Contributor

Hi @edeas123, my apologies for prematurely closing the issue. Reading the provided information, I thought this was more of a question about timing issues with the use of spot instances, since the expected behavior was "It should not have given errors applying the plan". I shouldn't have made that assumption. I'll update the labeling on this and reopen.

@nywilken nywilken reopened this Feb 15, 2019
@nywilken nywilken added enhancement Requests to existing resources that expand the functionality or scope. service/ses Issues and PRs that pertain to the ses service. bug Addresses a defect in current functionality. labels Feb 15, 2019
@bflad bflad added service/elbv2 Issues and PRs that pertain to the elbv2 service. and removed service/ses Issues and PRs that pertain to the ses service. labels Feb 16, 2019
jbarrick-mesosphere added a commit to jbarrick-mesosphere/terraform-provider-aws that referenced this issue Apr 29, 2019
jbarrick-mesosphere added a commit to jbarrick-mesosphere/terraform-provider-aws that referenced this issue Apr 29, 2019
@jbarrick-mesosphere
Contributor

Opened a PR: #8483

nywilken added a commit that referenced this issue May 2, 2019

#7561 - Retry ELB attachment on InvalidTarget error (#8483)
@ghost
Author

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 30, 2020