Feedback: disk schema #986
Firstly, thank you, @Tinyblargon and the rest of the team for seeing and listening to a community that uses this provider quite often as part of universal modules. I will try to build this code and test it next week.

disk {
  slot    = virtio0
  size    = 10
  storage = local-zfs
  type    = ????
}

Can you provide an example?

P/S

disk {
  datastore_id = local.datastore_id
  file_id      = proxmox_virtual_environment_download_file.latest_debian_12_bookworm_qcow2_img.id
  interface    = "scsi0"
  discard      = "on"
  cache        = "writeback"
  ssd          = true
}
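For illustration, the flat schema proposed on the new-disk branch appears to take one disk block per slot; a minimal sketch (the type attribute and its accepted values are my assumptions, not confirmed by the maintainers):

disk {
  slot    = "virtio0"
  type    = "disk"      # assumed values: disk, cdrom, cloudinit
  size    = 10
  storage = "local-zfs"
}
|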
@myplixa in your case |
@Tinyblargon Thank you very much for the answer to my question. Alas, I am currently very busy with work tasks and do not have time to build and test the implementation you proposed. I will try to start this work in the next couple of weeks, give feedback on the current schema, and provide examples of working with dynamic disks in modules. |
Hi @Tinyblargon, I believe this new schema would be much easier to work with for a terraform/opentofu user writing modules.
We can then choose the type of disk (virtio, scsi, ide, ...), and I believe we could write modules almost like in the old version:
module/variables.tf

variable "disks" {
  type = list(object({
    slot       = string
    size       = string
    storage    = string
    emulatessd = bool
    iothread   = bool
    discard    = bool
  }))
  default = [{
    slot       = "scsi0"
    size       = "10G"
    storage    = "local-lvm"
    emulatessd = true
    iothread   = true
    discard    = true
  }]
}
...
vm1/main.tf

disks = [
  {
    slot     = "scsi0"
    iothread = false
    size     = "20G"
    storage  = "ssd1"
  },
  {
    slot     = "scsi1"
    iothread = false
    size     = "180G"
    storage  = "ssd2"
  }
]

I'm not sure how I could really test the version from the new-disk branch, as using
returns warnings.
However, to sum up, I really like the schema proposed for the disks.
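A module could then translate such a variable into the provider's disk blocks with a dynamic block; a rough sketch, assuming the flat disk schema from the new-disk branch and reusing the attribute names from the variable above (not confirmed against the actual branch):

resource "proxmox_vm_qemu" "vm" {
  # ... other VM arguments ...

  # one disk block per entry in var.disks
  dynamic "disk" {
    for_each = var.disks
    content {
      slot       = disk.value.slot
      size       = disk.value.size
      storage    = disk.value.storage
      emulatessd = disk.value.emulatessd
      iothread   = disk.value.iothread
      discard    = disk.value.discard
    }
  }
}
|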
@Tinyblargon Thank you for putting forward a new design; and thank you to @mathieuHa for decomposing what has been proposed. I had a look at the branch and reverse-engineered what has been proposed/implemented. I didn't find any documentation/information that describes the design/strategy. My best concise guess is: the new design continues to provide a nested, typed data structure for direct use of the provider, while at the same time providing a duck-typed disk array structure that allows disks to be defined using a common type that is translated at runtime to the original nested, typed data structure. The compatibility disk array can be used by module authors. I haven't compiled the branch and tested it, sorry.

IMHO, the compatibility layer would make the provider functionally usable (by me). I could and would live with it, but it seems to be a very small evolution. The more I reflect on this provider, the more I think it makes sense to break the various resources up into child Terraform resources. Those child resources can be highly typed, specific to their purpose, easy to document, and easy to understand due to those constraints (like the new nested disks structure). This takes the provider from being largely a single resource (e.g. the QEMU VM) to a small collection of resources that achieve an outcome. This might mean a little more boilerplate coding. Each disk could be a first-class Terraform resource, and the disks in the main VM resource could then reference those disks by name. The main QEMU VM resource is IMO getting too big/complex.

I have quite a constrained use case deploying 'cattle' VMs (typically Flatcar). I typically delete VMs before reprovisioning them from scratch. Sometimes I will hack a bit of storage (i.e. a disk) into the VM that is outside of the Proxmox lifecycle (I have a single host, and understand this stops the machine being migrated to another host). If we could get this provider and Proxmox to the point that disks could have a lifetime longer than the VM, that would be great (but outside the scope of this feedback).
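To illustrate the child-resource idea (purely hypothetical; neither the resource type nor the attributes below exist in the provider today):

# Hypothetical first-class disk resource (does not exist in the provider)
resource "proxmox_disk" "data" {
  storage = "ssd2"
  size    = "180G"
}

resource "proxmox_vm_qemu" "vm" {
  # ... other VM arguments ...

  # Hypothetical reference to the disk resource by id
  disk {
    slot   = "scsi1"
    source = proxmox_disk.data.id
  }
}
|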
Another item I forgot to mention was pulling the |
@gbrackley What you're referring to is what Terraform calls Local-only Data Sources. This can be implemented in a non-breaking way with the current schema. We can look into this in the future, since it would require a lot of extra work. |
Looks like this could help: #907 (comment), #907 (comment) |
@Tinyblargon thanks for the work on the provider! It would be great if those changes make it into the 3.0.1 release. I'm aware that there is another proposal for solving this, mentioned by @maksimsamt, but in my opinion the disk schema here would allow cleaner module definitions without defining everything.

One topic missing in the new schema definitions is the cloud-init disk. It would be great if the cloud-init disk were included in the new disk schema so that it is defined in a similar way.

The most important part for me is that there is a final disk schema which then hopefully does not change again in the near future. At the moment we are waiting for the final decision regarding the disk schema before we start refactoring all of our Terraform still running on the old 2.9.14 version. If the decision is to go with the solution by @maksimsamt, that would be fine as well, but it would mean a lot of refactoring of our Terraform modules for us. Looking forward to the final decision.
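To make the cloud-init point concrete, under a flat disk schema a cloud-init disk might sit alongside regular disks roughly like this (a sketch only; the type values and slot placement are my assumptions, not the final schema):

disk {
  slot    = "scsi0"
  type    = "disk"       # assumed value
  size    = "20G"
  storage = "local-lvm"
}

disk {
  slot    = "ide2"
  type    = "cloudinit"  # assumed value
  storage = "local-lvm"
}
|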
@h2odriving currently I'm working on Telmate/proxmox-api-go#339; once that has concluded, I'm gonna start working on the disk implementation. |
Totally agree with @h2odriving, this is a very important point in deciding whether to use the current version or not. @Tinyblargon, |
@maksimsamt after the disk schema has been implemented, I'm not planning on changing it anymore, as Terraform, the Proxmox SDK, and Proxmox Virtual Environment will then treat disks in the same way. The implementation will slightly differ from what was proposed here due to the |
Also, as we do not want a situation like this ever again, most new features will get their own release candidates before being officially released. |
This is a little unclear:
|
@maksimsamt the |
@Tinyblargon I appreciate you taking feedback from the community before making an official release and taking care to make sure we don't change schemas multiple times in official releases.

Back when 3.0.1-rc1 was released, I updated all my modules to use the disks schema and they've been working fine. Normally I wouldn't update to a release candidate, but there are critical fixes that are not in any release, so I had to switch to either a release candidate or the HEAD of the default branch, which would be even more of a moving target.

I think the comments from people saying that they can't update their modules to use the disks schema are now outdated, since maksimsamt has demonstrated that it is possible, even with complex, dynamic configurations, and has provided multiple implementations. It looks like these examples accommodate the comments from people who were having trouble with the new schema. We had one person confirm it works for them, and nobody has said the example modules do not work for them in Terraform. So I hope that the provider doesn't revert back to the old, reportedly unreliable

People have already made strong arguments for the |
@hestiahacker The new
I want to deliver a stable Terraform provider, but so many things need to be reworked. |
Thanks for the clarification, and I'm glad to hear
Yeah, those options are tough. 🙁 Unless option 2 is a lot less work than I'd imagine it'd be, I think option 3 looks like the least bad option. |
vote for:
|
It's all of LXC. The difficult part is going to be keeping everything backwards compatible with Terraform as things change in the upstream project we use to connect with Proxmox. Current behaviors don't have tests, so this project is going to feel some of the repercussions. It's probably easiest to give every update after 3.0 a release candidate, to let the community help with ensuring compatibility before we break things. It's probably going to be option 3. Sorry, this post is a bit of a rant. |
Guys, how are you testing the new-disk branch? Can you please provide a terraform provider configuration?
I tried this but it does not work |
@arsensimonyanpicsart, you will need to have go installed. Note: I decided to call this version

# clone the repo
git clone https://github.com/Telmate/terraform-provider-proxmox
cd terraform-provider-proxmox
# change the branch
git checkout new-disk
# build the binary
make

Move the binary into Terraform's plugins directory; the following directions are for Unix. If you are using Windows the

# move the binary into the terraform plugins directory
VERSION='3.0.1-newdisk'
mkdir -p ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/"${VERSION}"/linux_amd64/
cp bin/terraform-provider-proxmox ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/"${VERSION}"/linux_amd64/terraform-provider-proxmox_v"${VERSION}"

Change your terraform file:

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-newdisk"
    }
  }
}

This will create a symlink in

➜ tree ~/.terraform.d/plugin-cache/registry.terraform.io/telmate/proxmox/
/home/$USER/.terraform.d/plugin-cache/registry.terraform.io/telmate/proxmox/
...
├── 3.0.1-newdisk
│   └── linux_amd64 -> /home/$USER/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/3.0.1-newdisk/linux_amd64
... |
Thanks @trfore, I was able to test it. |
I would like to test the RC4 but struggle a bit to understand the new schema. Could you please provide a simple example with cloud-init? That would be very helpful for me. |
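In case it helps while waiting for an official example, a minimal cloud-init configuration under the release-candidate disks schema might look roughly like this (a sketch; the nesting and value formats are my assumptions, not an authoritative answer):

disks {
  ide {
    ide2 {
      cloudinit {
        storage = "local-lvm"
      }
    }
  }
  scsi {
    scsi0 {
      disk {
        # size format may differ between release candidates (e.g. 20 vs "20G")
        size    = 20
        storage = "local-lvm"
      }
    }
  }
}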
|
Over the past month we have received quite a few comments that the new disks schema doesn't work when creating a module. The reason the disks schema was created is that the old disk schema didn't reflect how Proxmox actually handles disks and would show weird Terraform diffs. I have been thinking about bringing back the disk schema to have better support for modules. To better tend to the community's needs, I'm requesting feedback on this new disk schema. The new disk schema can be found in: https://github.com/Telmate/terraform-provider-proxmox/tree/new-disk
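For context, the difference between the two styles can be sketched as follows (illustrative only; exact attribute names on the new-disk branch may differ):

# Current nested disks schema (release candidates)
disks {
  scsi {
    scsi0 {
      disk {
        size    = 20
        storage = "local-lvm"
      }
    }
  }
}

# Proposed flat disk schema (new-disk branch)
disk {
  slot    = "scsi0"
  size    = 20
  storage = "local-lvm"
}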