
Looking for help rehydrating OpenStack terraform.tfstate file data (bug?, misunderstanding?) #3405

Closed
hartzell opened this issue Oct 4, 2015 · 9 comments

hartzell commented Oct 4, 2015

I'm trying to work with data from a terraform.tfstate file using terraform routines instead of guessing at what things mean and parsing it by hand. I have a piece of code like this:

func doit(src io.Reader) (exitStatus int, errorMessage string) {
    state, err := terraform.ReadState(src)
    if err != nil {
        return 1, "Unable to read state file"
    }

    for _, m := range state.Modules {
        for _, rs := range m.Resources {
            switch rs.Type {
            case "openstack_compute_instance_v2":
                s := rs.Primary
                instanceName := s.Attributes["name"]
                                ...

and I'm able to get at the resource's info.

I'd like to work with the metadata from an OpenStack compute instance. Here's a portion of a state file for a compute instance.

                "openstack_compute_instance_v2.mj_other.1": {
                    "type": "openstack_compute_instance_v2",
                    "depends_on": [
                        "openstack_blockstorage_volume_v1.mj_other_volume",
                        "openstack_compute_floatingip_v2.mj_other_fip",
                        "openstack_compute_secgroup_v2.mj_master_secgroup",
                        "openstack_compute_secgroup_v2.mj_master_secgroup"
                    ],
                    "primary": {
                        "id": "17146af5-4ec6-4447-92f6-5542b1bf0742",
                        "attributes": {
                            "access_ip_v4": "10.29.92.120",
                            "access_ip_v6": "",
                            "flavor_id": "n1.small",
                            "flavor_name": "n1.small",
                            "floating_ip": "10.29.92.120",
                            "id": "17146af5-4ec6-4447-92f6-5542b1bf0742",
                            "image_id": "62896feb-0c93-49bd-93a3-23f597d3f9ec",
                            "image_name": "CentOS 6.6",
                            "key_pair": "alanturing-nebula-keypair",
                            "metadata.#": "4",
                            "metadata.foo": "alpha,omega",
                            "metadata.bar": "poodle=bingo, two=b",
                            "metadata.baz": "one=z, two=y",
                            "metadata.this": "that",
                            "name": "mj-other-1",
                            "network.#": "1",
                            "network.0.fixed_ip_v4": "10.0.0.101",
                            "network.0.fixed_ip_v6": "",
                            "network.0.mac": "",
                            "network.0.name": "nebula",
                            "network.0.port": "",
                            "network.0.uuid": "",
                            "security_groups.#": "1",
                            "security_groups.0": "mj-master-security-group",
                            "volume.#": "1",
                            "volume.2756317279.device": "/dev/vdb",
                            "volume.2756317279.id": "2ccf935e-2c07-4ff0-b766-8e503fe115d4",
                            "volume.2756317279.volume_id": "2ccf935e-2c07-4ff0-b766-8e503fe115d4"
                        }
                    }
                },

The attributes have been flattened; for example, I can use flatmap.Expand to expand the network section (which is an array).
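
For reference, here's a minimal sketch of that call, assuming rs.Primary.Attributes is the flattened map[string]string shown in the snippet above (the exact shape of the result is my guess):

networks := flatmap.Expand(rs.Primary.Attributes, "network")
// networks should come back as an interface{} wrapping a []interface{} of
// map[string]interface{} values, e.g. the first element containing
// "fixed_ip_v4": "10.0.0.101", "name": "nebula", and so on.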

But flatmap.Expand fails on the metadata section. Looking at the code, I see around line 24 of expand.go that it checks for a '.#' suffix and, if one is found, assumes there is a flattened array.

Even though metadata is mappish, the OpenStack metadata section has "metadata.#": "4", so expandArray() gets called, which panics.
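
Roughly, the dispatch I'm describing looks something like this (my paraphrase, not the exact expand.go source):

// Paraphrased dispatch, not the real code:
func Expand(m map[string]string, key string) interface{} {
    // A "<key>.#" entry is taken to mean a flattened array...
    if _, ok := m[key+".#"]; ok {
        return expandArray(m, key) // ...which is the wrong guess for a map like "metadata"
    }
    // ...otherwise the key is treated as a flattened map.
    return expandMap(m, key)
}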

I discovered that if I edit the terraform.tfstate file and remove the metadata.# line, then flatmap.Expand calls expandMap and things work out perfectly.
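
The same workaround can be done in code rather than by editing the file; a sketch (the copy is just so I don't mutate the attributes map I read from the state):

// Drop the spurious count entry, then expand as a map.
attrs := make(map[string]string, len(rs.Primary.Attributes))
for k, v := range rs.Primary.Attributes {
    attrs[k] = v
}
delete(attrs, "metadata.#")
metadata := flatmap.Expand(attrs, "metadata") // now reaches expandMap instead of expandArray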

So, several questions:

  • is flatmap.Expand the right tool to be using to work with this data? If not, then what is?
  • is there a bug somewhere in the OpenStack resource code that's causing the "metadata.#" to be inserted?
  • has the code that's generating the terraform.tfstate file gotten out of sync with flatmap?

Thanks for any clarifications!


hartzell commented Oct 6, 2015

Digging more, I've come to suspect that the info in the state file has been mangled by some part of the terraform/helper/schema world and not terraform/flatmap. I'd love any hints or mentoring I could get on supported ways of working with it.

Thanks,

apparentlymart commented

@hartzell the encoding and decoding of complex data structures into the flat map is handled at a higher level of abstraction, inside helper/schema.

Unfortunately this thing doesn't directly expose its decoding logic, since its interface is oriented around diffing/applying changes to resources, with the data handling as an implementation detail. ResourceData is the key type, but its important fields are private, and the function that instantiates it wants the schema map, which you can't easily get from outside.

It looks like one possible entry point is to instantiate a MapFieldReader directly:

stateReader := &schema.MapFieldReader{
    Schema: ???,
    Map:    schema.BasicMapReader(stateAttributes),
}

But as you can see above, you still need to get hold of the resource schema, which requires interacting with the provider directly. Again the provider interface is at the wrong level of abstraction for what you want to do, so it'd be necessary to depend on some implementation details:

provider := openstack.Provider().(*schema.Provider)
instanceSchema := provider.ResourcesMap["openstack_compute_instance_v2"].Schema
stateReader := &schema.MapFieldReader{
    Schema: instanceSchema,
    Map:    schema.BasicMapReader(stateAttributes),
}
metadataResult := stateReader.ReadField([]string{"metadata"})

metadataResult is a FieldReadResult.
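
Consuming it would look roughly like this; I'm assuming here that the exported Exists, Computed and Value fields behave the way their names suggest, so treat it as a sketch:

if metadataResult.Exists && !metadataResult.Computed {
    // For a map-typed attribute the decoded value should be a map[string]interface{}.
    metadata := metadataResult.Value.(map[string]interface{})
    fmt.Println(metadata["this"]) // "that", per the state snippet above
}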

...but by this point we've depended on a whole bunch of things that the interface doesn't guarantee, so I wouldn't recommend doing anything like the above in any production code as it's likely to break in future versions of Terraform.


phinze commented Oct 12, 2015

Good question @hartzell and great breakdown @apparentlymart.

This will ultimately be solved by a first-class feature once #581 is done. Closing this thread for now - feel free to follow up if you'd like to keep it open.

hartzell commented

@apparentlymart -- Thanks for that breakdown. I'd gotten to the point where I decided that there wasn't a proper way to do what I was aiming at. It's great to have someone who knows the codebase confirm my reverse-engineered understanding. I appreciate the time you invested.

hartzell commented

@phinze -- Thanks for the follow up! I'll watch #581. If I understand you correctly, then as a side effect of the changes to support #581 there will be a clean way to access the state info. Is that correct?

hartzell commented

@apparentlymart -- one more bit of follow-up. The above suggestion works, and I'm not sure it's any more of a violation of any promises the system has made than it would be for me to chop up the state file and parse it by "hand".

One small correction: ReadField seems to return two values, a result and an error, so:

metadataResult, err := stateReader.ReadField([]string{"metadata"})

and your snippet works like a charm.

Thanks!

hartzell pushed a commit to hartzell/roster that referenced this issue Oct 14, 2015
Use real, if not-really-public, terraform code for handling the state file.

See hashicorp/terraform#3405

phinze commented Oct 21, 2015

If I understand you correctly, then as a side effect of the changes to support #581 there will be a clean way to access the state info. Is that correct?

Ah, rereading the original post - I think I may have read too much into your title.

I thought you started down this path because you were looking for the ability to populate a statefile with details gleaned from the environment - which is what terraform import will do. If you're simply looking for general "programmatic access to statefile details" then perhaps this is an independently valid discussion.

So maybe you can help clarify and we can figure out the best path - what's the target use case?

hartzell commented

@phinze -- The use case is roster, a dynamic inventory provider for Ansible that reads Terraform state.

In the current version I parse the state using the approach that @apparentlymart suggested above. So far it's working well.

This is the very first release of my very first Go application, so I'd love any feedback you can offer. I'll mention it on the terraform and ansible lists (and perhaps go-nuts).


ghost commented Apr 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 30, 2020