Certain CloudStack resources not added to tfstate #7583

Closed
keith471 opened this issue Jul 11, 2016 · 15 comments · Fixed by #7828

@keith471

Terraform Version

0.6.16

Affected Resource(s)

Of the many resources in my configuration, the only two I've encountered with this issue are:

  • cloudstack_static_nat
  • cloudstack_loadbalancer_rule

Terraform Configuration Files

provider "cloudstack" {
  api_url           = "${var.api_url}"
  api_key           = "${var.api_key}"
  secret_key        = "${var.secret_key}"
}

resource "cloudstack_template" "template" {
  name              = "tf-template"
  format            = "VHD"
  hypervisor        = "XenServer"
  os_type           = "${var.os_type_id}"
  url               = "http://url/of/template"
  zone              = "-1"
  project           = "${var.project}"
}

resource "cloudstack_vpc" "main_vpc" {
  name              = "tf-default"
  cidr              = "${var.vpc_cidr}"
  vpc_offering      = "${var.vpc_offering}"
  zone              = "${var.zone}"
  project           = "${var.project}"
}

resource "cloudstack_network" "tier" {
  name              = "tf-tier"
  cidr              = "${cidrsubnet("${var.vpc_cidr}", 2, 1)}"
  network_offering  = "${var.network_offering}"
  vpc_id            = "${cloudstack_vpc.main_vpc.id}"
  zone              = "${var.zone}"
  acl_id            = "9ba3ec65-2e1d-11e4-8e05-42a29a39fc92"
  project           = "${var.project}"
}

resource "cloudstack_instance" "instance" {
  name              = "tf-instance"
  zone              = "${var.zone}"
  template          = "${cloudstack_template.template.id}"
  service_offering  = "${var.service_offering}"
  network_id        = "${cloudstack_network.tier.id}"
  project           = "${var.project}"
  keypair           = "${cloudstack_ssh_keypair.sshkey.id}"
  expunge           = false
}

resource "cloudstack_ipaddress" "static_nat_ip" {
  vpc_id            = "${cloudstack_vpc.main_vpc.id}"
  project           = "${var.project}"
}

resource "cloudstack_static_nat" "static_nat" {
  ip_address_id         = "${cloudstack_ipaddress.static_nat_ip.id}"
  virtual_machine_id    = "${cloudstack_instance.instance.id}"
  network_id            = "${cloudstack_network.tier.id}"
}

resource "cloudstack_ipaddress" "lb_ip" {
  vpc_id            = "${cloudstack_vpc.main_vpc.id}"
  project           = "${var.project}"
}

resource "cloudstack_loadbalancer_rule" "lbr" {
  name              = "tf-lbr"
  description       = "App load balancer rule"
  ip_address_id     = "${cloudstack_ipaddress.lb_ip.id}"
  algorithm         = "roundrobin"
  network_id        = "${cloudstack_network.tier.id}"
  private_port      = "33"
  public_port       = "33"
  member_ids        = ["${cloudstack_instance.instance.id}"]
}

resource "cloudstack_ssh_keypair" "sshkey" {
  name              = "tf-ssh-key"
  public_key        = "${file("~/.ssh/id_rsa.pub")}"
  project           = "${var.project}"
}

Debug Output

  1. Run terraform apply to spin up all the infrastructure.
  2. Run terraform apply again.

The debug output here was produced by the second step.

Expected Behavior

Terraform should not have attempted to create the load balancer rule or the static NAT, as they already exist.

Actual Behavior

Terraform attempts to create them, producing the following error output:

2 error(s) occurred:

* cloudstack_static_nat.static_nat: Error enabling static NAT: CloudStack API error 431 (CSExceptionErrorCode: 4350): Failed to enable static nat on the  ip 192.155.69.46 with Id 4102679d-f75d-4e9d-a885-d26f7912f7fd as the vm i-2649-12835-VM with Id cc5fb8b6-3d9c-4917-91b6-9b25733b7520 is already associated with another public ip 192.155.69.46 with id 4102679d-f75d-4e9d-a885-d26f7912f7fd
* cloudstack_loadbalancer_rule.lbr: CloudStack API error 537 (CSExceptionErrorCode: 9999): The range specified, 33-33, conflicts with rule 12294 which has 33-33

Steps to Reproduce

  1. terraform apply
  2. terraform plan - you will see that the load balancer rule and static NAT are staged for creation
  3. terraform apply

References

It could be completely unrelated, but a similar-sounding past problem is #2584.

@svanharmelen
Contributor

@keith471 thanks for the report... Could you run your test again, but then do a terraform plan after the first run? I would be very curious what it shows there (as it should show you exactly what is going to change and why)...

@keith471
Author

keith471 commented Jul 13, 2016

@svanharmelen thanks for your reply! Here is the output of terraform plan after the first run:

Refreshing Terraform state prior to plan...

cloudstack_ssh_keypair.sshkey: Refreshing state... (ID: tf-ssh-key)
cloudstack_vpc.main_vpc: Refreshing state... (ID: 2ed1bd40-cfa2-426c-a597-7673fa0e231e)
cloudstack_template.template: Refreshing state... (ID: b1be8abf-9763-4f03-8aa7-86687c4d8a34)
cloudstack_ipaddress.static_nat_ip: Refreshing state... (ID: 415397df-8f1a-4e42-bac0-77a378b761a3)
cloudstack_ipaddress.lb_ip: Refreshing state... (ID: 054b9f66-aad0-4eb9-b736-f84a8a3ac58e)
cloudstack_network.tier: Refreshing state... (ID: 04f9ee7e-e10e-498d-b9eb-91d51371ec16)
cloudstack_instance.instance: Refreshing state... (ID: 438985f5-722c-4650-b617-b5f2fa4b5a95)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ cloudstack_loadbalancer_rule.lbr
    algorithm:     "" => "roundrobin"
    description:   "" => "App load balancer rule"
    ip_address_id: "" => "054b9f66-aad0-4eb9-b736-f84a8a3ac58e"
    member_ids.#:  "" => "1"
    member_ids.0:  "" => "438985f5-722c-4650-b617-b5f2fa4b5a95"
    name:          "" => "tf-lbr"
    network_id:    "" => "04f9ee7e-e10e-498d-b9eb-91d51371ec16"
    private_port:  "" => "33"
    public_port:   "" => "33"

+ cloudstack_static_nat.static_nat
    ip_address_id:      "" => "415397df-8f1a-4e42-bac0-77a378b761a3"
    network_id:         "" => "04f9ee7e-e10e-498d-b9eb-91d51371ec16"
    virtual_machine_id: "" => "438985f5-722c-4650-b617-b5f2fa4b5a95"
    vm_guest_ip:        "" => "<computed>"


Plan: 2 to add, 0 to change, 0 to destroy.

And here is the debug output.

I hope this helps! Let me know if there's anything else that might be helpful.

NOTE: This was after a fresh run of terraform apply so resource IDs will be different from the first post.

@svanharmelen
Contributor

@keith471 is there any way you can test this with 0.7rc3? I cannot reproduce this issue with the config you supplied in combination with 0.7rc3, so I'm guessing your problem has already been solved in master.

NOTE: Do NOT use any existing config/state that you want to keep managing after your test! When using 0.7 your state will be updated and can no longer be used with older versions of Terraform!

@keith471
Author

@svanharmelen just tested with 0.7rc3 with no luck unfortunately - I'm still seeing the same behavior. Any ideas? I'll do a bit of digging next Monday.

@svanharmelen
Contributor

@keith471 that is really weird... So if you look in CloudStack, I assume the resources are actually created, right? And if you use the same keys as you are using with Terraform, can you query the resources using cloudmonkey? Do they show up, and if so, what is the exact cloudmonkey command you used to list them?

@keith471
Author

keith471 commented Jul 25, 2016

@svanharmelen this is weird indeed. Yes, the resources exist in CloudStack and I am able to query for them with cloudmonkey. Here are the commands I used:

Load balancer rule:
listLoadBalancerRules projectid=<id_of_the_project> publicipid=<load_balancer_ip_address_id>

Static NAT:
listPublicIpAddresses projectid=<id_of_the_project> isstaticnat=true

In both cases, the projectid is the same as the one used in the Terraform config (project = "${var.project}"), and the load_balancer_ip_address_id is the ID of the IP address created for the "lb_ip" resource in the config file.

I also verified that there is no difference between the fields returned by cloudmonkey and the output of terraform plan, i.e. the fields in

+ cloudstack_loadbalancer_rule.lbr
    algorithm:     "roundrobin"
    description:   "App load balancer rule"
    ip_address_id: "054b9f66-aad0-4eb9-b736-f84a8a3ac58e"
    member_ids.#:  "1"
    member_ids.0:  "3c8e5561-62b5-4357-a17d-2fd35b32fb27"
    name:          "tf-lbr"
    network_id:    "cbe2ef15-5b15-4cf2-a132-f3cc6fc74a37"
    private_port:  "33"
    public_port:   "33"

+ cloudstack_static_nat.static_nat
    ip_address_id:      "415397df-8f1a-4e42-bac0-77a378b761a3"
    network_id:         "cbe2ef15-5b15-4cf2-a132-f3cc6fc74a37"
    virtual_machine_id: "3c8e5561-62b5-4357-a17d-2fd35b32fb27"
    vm_guest_ip:        "<computed>"

match the fields in the output from cloudmonkey.

Any ideas? What more can I do to help? I'm happy to do anything you need.

@svanharmelen
Contributor

@keith471 okay... could you also check whether you can query the resources with cloudmonkey if you do not specify the projectid?

And could it be that you are using a user that only has rights within the project in question? I have a feeling it is somehow related to that. If we can prove that, it would be a relatively easy fix.

@keith471
Author

@svanharmelen thanks! I cannot query the resources without specifying the projectid or at least projectid=-1. Any queries without this parameter return nothing. Does this help?

The user's rights are not limited to the given project - they have rights to access multiple projects...

@pdube

pdube commented Jul 27, 2016

@keith471 @svanharmelen Hey, I checked out the code, and it looks like the same issue we faced with the network ACLs. It is necessary to include the project ID when reading resources that belong to a project (even though the lookup is by ID; it is a limitation of CloudStack). This means the resource can never be found after it is created, so it never populates the tfstate file.
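
To illustrate, here is a minimal sketch of the failure mode, written against the xanzy/go-cloudstack client the provider is built on (the endpoint, keys, and rule ID below are placeholders, and exact field names may differ slightly):

package main

import (
    "fmt"
    "log"

    "github.com/xanzy/go-cloudstack/cloudstack"
)

func main() {
    // Placeholder endpoint and credentials.
    cs := cloudstack.NewAsyncClient("https://cloud.example.com/client/api", "apiKey", "secretKey", false)

    // Look up a load balancer rule by ID *without* setting projectid,
    // which is effectively what the provider's read function does today.
    p := cs.LoadBalancer.NewListLoadBalancerRulesParams()
    p.SetId("<load_balancer_rule_id>")

    l, err := cs.LoadBalancer.ListLoadBalancerRules(p)
    if err != nil {
        log.Fatal(err)
    }

    // For a rule that belongs to a project this prints 0: CloudStack
    // hides project-owned resources from unscoped list calls, so
    // Terraform concludes the rule does not exist, never records it
    // in the tfstate, and tries to create it again on the next apply.
    fmt.Println("rules found:", l.Count)
}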

@svanharmelen
Contributor

Yeah, that was what I was thinking... And @keith471's answer confirms it 😉

Will put in a fix in an hour or so (it's a simple fix). @keith471 can you build master and check/verify after I've put in the fix? That would be great...

@keith471
Author

keith471 commented Jul 27, 2016

@svanharmelen absolutely! Thank you! Just give me a shout when you've pushed the fix and I'd be more than happy to test it out.

@svanharmelen
Contributor

@keith471 all yours... The fix is in master so go ahead and give it a spin...
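
For anyone curious about the shape of the change, it boils down to passing the projectid along on the read. A hypothetical sketch (illustrative names, not the actual diff from #7828; it assumes the same xanzy/go-cloudstack client as above):

// readLoadBalancerRule is an illustrative stand-in for the provider's
// read function; projectID would come from the resource configuration.
func readLoadBalancerRule(cs *cloudstack.CloudStackClient, id, projectID string) (*cloudstack.LoadBalancerRule, error) {
    p := cs.LoadBalancer.NewListLoadBalancerRulesParams()
    p.SetId(id)

    // The essential fix: scope the lookup to the project, otherwise
    // CloudStack returns zero results for project-owned resources.
    if projectID != "" {
        p.SetProjectid(projectID)
    }

    l, err := cs.LoadBalancer.ListLoadBalancerRules(p)
    if err != nil {
        return nil, err
    }
    if l.Count == 0 {
        // Genuinely gone; the caller should remove it from state.
        return nil, nil
    }
    return l.LoadBalancerRules[0], nil
}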

@keith471
Author

@svanharmelen my apologies for the delay. That said, I've tested out master and it works like a charm! Can we look forward to these changes in Terraform 0.7.0?

Thanks again for your patience in working through this and for all your help!

@svanharmelen
Contributor

@keith471 no worries 😉 And yes, it's already in master so it will be part of 0.7.0
