The first step works and floating IPs are assigned, and the final task of waiting for SSH completes.
Then I move on to the deploy part.
kubespray deploy --coreos --user core --sshkey /home/xxxx/.ssh/mykey --network-plugin flannel
k8s-isomorph-zx2yqj | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
k8s-isomorph-ixklv6 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
k8s-isomorph-xzljat | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
I think this is due to how CoreOS performs cloud-init. I think they have changed to something else, and getting the SSH key in place for the core user requires some extra steps.
The CoreOS documentation covers this: "Our Container Linux Config will also contain SSH keys that will be used to connect to the instance. In order for this to work your OpenStack cloud provider must support config drive or the OpenStack metadata service."
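For reference, a minimal Container Linux Config that puts an SSH key on the core user looks roughly like this (the key value is just a placeholder; this is only an illustration of the format the docs describe):

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-rsa AAAA... user@host"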
So I guess what needs to happen is for the user to be able to pass an argument with the path to the public key, and cloud.py needs to construct the userdata parameter for the Ansible os_server module (or, maybe better, use the config drive) to get the public key in place so that the login at the beginning of the deploy phase works.
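As a rough sketch (not the actual cloud.py code), the generated os_server task would need to pass the rendered config as userdata and enable the config drive, something like the following. The coreos_ignition variable is made up for illustration and would have to hold the Ignition JSON transpiled from a Container Linux Config containing the user's public key:

- name: Create CoreOS node
  os_server:
    name: "k8s-node"
    image: "core-os-latest"
    flavor: "m1.small"
    key_name: "mykey"
    network: "k8s-network"
    config_drive: yes
    userdata: "{{ coreos_ignition }}"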
Continued testing by switching to other OpenStack images like Ubuntu and Fedora Atomic.
Ubuntu does not use systemd by default but does take care of the SSH key via cloud-init and the keypair.
So does Fedora Atomic, and it deployed without errors.
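For comparison, on those images cloud-init picks the public key up from the Nova keypair via the metadata service, which is effectively the same as passing an explicit cloud-config like this (illustrative only, placeholder key):

#cloud-config
ssh_authorized_keys:
  - "ssh-rsa AAAA... user@host"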
$ kubespray --version
kubespray 0.5.2
$ kubespray openstack --floating_ip --nodes 3
$ openstack image list | grep core-os
| 5bb1bd3e-2ba8-4787-bae6-80ba8c7b7cff | core-os-latest | active |
OpenStack options
#---
os_auth_url: "http://192.168.0.5:5000/v2.0"
os_username: ""
os_region_name: "RegionOne"
os_password: ""
os_project_name: "*****"
masters_flavor: "m1.small"
nodes_flavor: "m1.small"
etcds_flavor: "m1.small"
image: "core-os-latest"
network: "k8s-network"
sshkey: "mykey"
Cloud-init is deprecated in CoreOS.