Use Fleet machine metadata for "environments" #41
Comments
@lukebond e.g. use of different CoreOS release channels.
@rimusz +1 for separating production environment - total isolation, nothing shared if possible
@sublimino @lukebond Although I agree with this, there shouldn't be anything in Paz that cares whether you separate them or not. Paz just needs to be aware of environments (i.e. a parameter to most REST calls) and translate them down to Fleet machine metadata at deployment time. Although Paz avoids doing infra stuff, I'm beginning to think it would be good to have a cluster provisioning tool (separate from Paz) that allows you to choose etcd cluster topology, group machines, add metadata, etc.
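As a rough illustration of that translation (not an existing Paz feature; the `environment` key is only an assumed convention), a deployment request for a staging environment might end up emitting a Fleet unit containing something like:

```ini
# Hypothetical fragment of a generated unit file. MachineMetadata is a real
# Fleet [X-Fleet] option; the environment=staging key/value is illustrative.
[X-Fleet]
MachineMetadata=environment=staging
```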
@lukebond I think the separate cluster provisioning tool makes sense, which, as you said, allows you to choose etcd cluster topology, group machines and add metadata, etc.
That cluster provisioning tool (/GUI?) sounds like it could be a cloud-config generator via https://terraform.io/ - in support of immutable infrastructure we should deploy a new host with the new config, health-check it, rebalance containers, and decommission the old host? Servers should automatically be distributed between AZs where applicable. On etcd topology: for large deployments CoreOS recommends running a separate 5-node etcd cluster; otherwise etcd should run on each host.
@sublimino https://terraform.io/ is a good choice for cloud setups. What about bare-metal? Regarding etcd: the etcd machines do not have to be very powerful as they only run the etcd cluster; e.g. at GCE, g1-small instances work just fine.
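For reference, a minimal cloud-config sketch of that split, assuming a dedicated 3-node etcd cluster at made-up addresses and worker hosts that only proxy to it (nothing here is taken from an actual Paz setup):

```yaml
# Hypothetical cloud-config for a worker host: it runs no etcd member itself,
# only an etcd2 proxy pointed at the dedicated (e.g. g1-small) etcd machines.
coreos:
  etcd2:
    proxy: on
    listen-client-urls: http://localhost:2379
    # Illustrative peer addresses of the dedicated etcd cluster.
    initial-cluster: etcd0=http://10.0.0.10:2380,etcd1=http://10.0.0.11:2380,etcd2=http://10.0.0.12:2380
  fleet:
    etcd_servers: http://localhost:2379
```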
Agree with all of this and I'm aware of the etcd-on-every-machine anti-pattern from previous experience (and was also at that meet-up). But since Paz doesn't do infra, that's down to whoever sets up the cluster.
@lukebond yep, it is more for the cluster provisioning tool, which definitely makes sense to have, to prepare a cluster for Paz.
@rimusz if those bare-metal machines are already accessible via ssh we could conceivably rewrite the cloud-config file and reboot the server? Would have to ensure they're all on the same release channel. Also been bitten by etcd 0.4 - hopefully that's fixed in v2, although I haven't stress-tested it myself yet. Read "on each host" above as "on clusters of three or fewer nodes" - my concern with running fewer than three nodes is loss of resilience and the smallest machine breaking the cluster (AWS micro/small is not sufficient for etcd nodes). How much hand-holding should a provisioning tool do, @lukebond? And possibly it's another issue as I've hijacked this one! :) As a footnote, the upper bound of etcd nodes required for stability at any cluster size is 5, according to a chat with Alex Polvi via some Chubby engineers; further nodes add no meaningful resilience.
@sublimino We can offer a choice, e.g. if somebody wants a very small cluster of 3-5 nodes they can run just one etcd node if they want, then 3 or 5 etcd nodes depending on cluster size :-) @lukebond regarding this cluster provisioning tool, we need a separate repository for it under paz-sh.
@rimusz good idea. I took the liberty of choosing a name: https://github.com/paz-sh/clusterform
👍
Splendid!
Will Paz support already provisioned clusters?
@rimusz currently that's all it supports. There are some helper scripts for bringing up a cluster (really only for testing/playing) but the idea is that you've already got your cluster and then you put Paz on it.
If Paz is going to use all that metadata stuff, some instructions need to be provided about what metadata settings need to be set on the current cluster to make Paz function properly.
Yes, when we start using it. Currently there are no such requirements but there soon will be, e.g. for tying the scheduler and service directory to a particular host (they're the ones that have a DB and therefore need a volume mount and must not move hosts). I've been doing that manually so far. There will also be some metadata for environments and, as you say, that needs to be defined and documented.
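A minimal sketch of that pinning, assuming a made-up `paz-stateful` metadata key (the real key, once documented, may differ):

```ini
# Hypothetical [X-Fleet] section for a stateful unit such as the service
# directory: it only gets scheduled onto hosts whose Fleet metadata includes
# paz-stateful=true, so its DB volume stays on a known machine.
[X-Fleet]
MachineMetadata=paz-stateful=true
```

Fleet's MachineID= option would be the stricter alternative, pinning the unit to one specific host.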
Cool
Let's say you want dev, QA, staging and production environments. Rather than running multiple Paz clusters, they could all be the same cluster, using Fleet machine metadata to schedule units only onto hosts belonging to their environment.
e.g. with 4 environments, each a 3-node cluster, you might stamp each host's Fleet metadata with the name of its environment (a hypothetical layout is sketched below).
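One minimal sketch, assuming an illustrative `environment` metadata key (not a documented Paz convention), for one of the three staging hosts:

```yaml
# Illustrative cloud-config fragment for a staging host; dev, qa and
# production hosts would carry their own value for the same key.
coreos:
  fleet:
    metadata: environment=staging
```

Units belonging to an environment would then carry a matching `MachineMetadata=environment=...` requirement in their `[X-Fleet]` section, as described in the Fleet docs linked below.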
More can be read about Fleet scheduling with metadata here: https://coreos.com/docs/launching-containers/launching/launching-containers-fleet/#schedule-based-on-machine-metadata
Credit to @rimusz for the idea.