Secure kube-scheduler #41285
Conversation
Branch force-pushed from 07c2baf to 7b86472 (compare).
Audit and role changes lgtm, not qualified for salt
rbac.NewRule("update").Groups(legacyGroup).Resources("pods/status").RuleOrDie(), | ||
// things that select pods | ||
rbac.NewRule(Read...).Groups(legacyGroup).Resources("services", "replicationcontrollers").RuleOrDie(), | ||
rbac.NewRule(Read...).Groups(extensionsGroup).Resources("replicasets").RuleOrDie(), |
This isn't a pattern that's going to scale well as people build other resources that result in pods. Is this something @kubernetes/sig-scheduling-misc is looking at? We can't bake them all in.
Can we just say that "system:kube-scheduler" is for the built-in scheduler, and custom schedulers may need to come packaged w/ ClusterRoles & ClusterRoleBindings?
yeah, we're saying this won't scale well for the built-in scheduler... it's currently trying to spread selected pods by watching all the resources that use label selectors over pods
😟 yeah that is concerning.
Maybe a subresource for selectors?
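To make the "ship your own RBAC" suggestion above concrete, here is a rough sketch (not part of this PR) of what a custom scheduler could bundle alongside its binary; the `example:my-scheduler` name and the exact rule set are made up for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ClusterRole limited to what this hypothetical scheduler needs; it can
	// grow with the scheduler itself instead of growing the built-in
	// system:kube-scheduler role for every new pod-producing resource.
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example:my-scheduler"},
		Rules: []rbacv1.PolicyRule{
			{APIGroups: []string{""}, Resources: []string{"pods"}, Verbs: []string{"get", "list", "watch"}},
			{APIGroups: []string{""}, Resources: []string{"pods/binding"}, Verbs: []string{"create"}},
			{APIGroups: []string{""}, Resources: []string{"nodes"}, Verbs: []string{"get", "list", "watch"}},
		},
	}

	// Binding that grants the role to whatever identity the custom
	// scheduler authenticates as via its kubeconfig.
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example:my-scheduler"},
		Subjects: []rbacv1.Subject{
			{Kind: rbacv1.UserKind, APIGroup: rbacv1.GroupName, Name: "example:my-scheduler"},
		},
		RoleRef: rbacv1.RoleRef{APIGroup: rbacv1.GroupName, Kind: "ClusterRole", Name: role.Name},
	}

	// Print the objects so they can be inspected or templated into manifests.
	for _, obj := range []interface{}{role, binding} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```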
Permissions look fine. @cjcullen @mikedanese The script changes seem to be working enough for e2e. Is there something else you'd like to check?
@@ -360,6 +363,30 @@ current-context: service-account-context
EOF
}

function create-kubescheduler-kubeconfig {
no. presumably most of those are still running the scheduler against the unsecured port as well, but that doesn't need to block this.
/cc @justinsb @kubernetes/kubeadm-maintainers
working on a guide for deployers setting up control plane components to work out of the box with RBAC at https://docs.google.com/document/d/1PqI--ql3LQsA69fEvRq1nQWgiIoE5Dyftja5Um9ML7Q/edit
that will be pulled into the doc repo as soon as we have a way to open PRs for 1.6 doc content
requestAttributes.GetResource(), requestAttributes.GetAPIGroup(), requestAttributes.GetSubresource(),
)
b := &bytes.Buffer{}
fmt.Fprint(b, `"`)
Replacing these Fprints with b.WriteString() would eliminate some unnecessary string scanning.
done
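For reference, the change being suggested looks roughly like this as a simplified standalone sketch (not the actual authorizer code):

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	resource, subresource := "pods", "status"

	// Original style: fmt.Fprint routes every argument through fmt's
	// generic formatting machinery, even for constant strings.
	var a bytes.Buffer
	fmt.Fprint(&a, `"`)
	fmt.Fprint(&a, resource)
	if subresource != "" {
		fmt.Fprint(&a, "/", subresource)
	}
	fmt.Fprint(&a, `"`)

	// Suggested style: WriteString appends the bytes directly to the
	// buffer, skipping fmt entirely for values that are already strings.
	var b bytes.Buffer
	b.WriteString(`"`)
	b.WriteString(resource)
	if subresource != "" {
		b.WriteString("/")
		b.WriteString(subresource)
	}
	b.WriteString(`"`)

	fmt.Println(a.String() == b.String()) // true: same output, less overhead
}
```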
Changes LGTM. I don't think there is anything on the GKE-side that will need to change for this (the scripts should be self-contained for this token).
@@ -769,7 +769,7 @@ start_kube_scheduler() {
if [ -n "${SCHEDULER_TEST_LOG_LEVEL:-}" ]; then
log_level="${SCHEDULER_TEST_LOG_LEVEL}"
fi
params="${log_level} ${SCHEDULER_TEST_ARGS:-}"
params="--master=127.0.0.1:8080 ${log_level} ${SCHEDULER_TEST_ARGS:-}"
Why does this one not use the secure port?
Do we even test or use this kube-up?
Not that I know of, so I was just maintaining status quo since the --master arg moved into the params
Branch force-pushed from 7b86472 to 1ce1430 (compare).
@wojtek-t for kubemark review and approval
changes to kubemark lgtm
@mikedanese can you approve the salt changes?
/approve
/approve
…On Tue, Feb 14, 2017 at 5:19 PM, Kubernetes Submit Queue < ***@***.***> wrote:
[APPROVALNOTIFIER] This PR is *NOT APPROVED*
The following people have approved this PR: *liggitt, mikedanese*
Needs approval from an approver in each of these OWNERS Files:
- cluster/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/cluster/OWNERS>
[mikedanese]
- cluster/gce/container-linux/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/container-linux/OWNERS>
[mikedanese]
- *hack/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/hack/OWNERS>*
- plugin/pkg/auth/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/OWNERS>
[liggitt]
- staging/src/k8s.io/apiserver/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/OWNERS>
[liggitt]
- test/kubemark/OWNERS
<https://github.com/kubernetes/kubernetes/blob/master/test/kubemark/OWNERS>
[liggitt,mikedanese]
We suggest the following people:
cc @ethernetdan <https://github.com/ethernetdan> @jbeda
<https://github.com/jbeda> @wojtek-t <https://github.com/wojtek-t>
@deads2k <https://github.com/deads2k>
You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment
Branch force-pushed from 1ce1430 to 34782b2 (compare).
[APPROVALNOTIFIER] This PR is APPROVED. The following people have approved this PR: liggitt, mikedanese, smarterclayton. Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing /approve in a comment.
rebased, changed apiVersion to apiGroup in bootstrap role testdata
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)
This unfortunately broke kubemark - I'm going to revert it (to unblock submit queue).
fixed in #41480
This PR:
- system:kube-scheduler clusterrole
- system:kube-scheduler user
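As a rough illustration of what running under that dedicated identity means for the scheduler process (a sketch, not this PR's code; the kubeconfig path is an assumed example), the scheduler builds its API client from its own kubeconfig instead of the unsecured localhost port, so RBAC authorizes it as the system:kube-scheduler user bound to the system:kube-scheduler clusterrole:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; the real location is whatever the deployment
	// scripts (e.g. create-kubescheduler-kubeconfig) write out.
	const kubeconfigPath = "/var/lib/kube-scheduler/kubeconfig"

	// Load the scheduler's own credentials rather than pointing
	// --master at the unsecured 127.0.0.1:8080 port.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building client: %v", err)
	}

	fmt.Printf("scheduler client ready: %T\n", client)
}
```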