feat: scheduler (13/): add more scheduler logic #422
Conversation
A few minor comments; the rest LGTM :)
// ExtractNumOfClustersFromPolicySnapshot extracts the numOfClusters value from the annotations
// on a policy snapshot.
func ExtractNumOfClustersFromPolicySnapshot(policy *fleetv1beta1.ClusterSchedulingPolicySnapshot) (int, error) {
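For readers following along, here is a minimal sketch of what the body of such a helper might look like. The annotation key constant and the import path are assumptions for illustration; the actual constants and error handling live in the Fleet codebase and may differ.

```go
import (
	"fmt"
	"strconv"

	fleetv1beta1 "go.goms.io/fleet/apis/placement/v1beta1" // import path assumed
)

// numOfClustersAnnotation is an assumed annotation key, used here for
// illustration only; the real constant is defined in the Fleet API package.
const numOfClustersAnnotation = "kubernetes-fleet.io/number-of-clusters"

func extractNumOfClusters(policy *fleetv1beta1.ClusterSchedulingPolicySnapshot) (int, error) {
	raw, ok := policy.Annotations[numOfClustersAnnotation]
	if !ok {
		return 0, fmt.Errorf("policy snapshot %s has no numOfClusters annotation", policy.Name)
	}
	// The annotation carries the value as a string; parse and validate it.
	num, err := strconv.Atoi(raw)
	if err != nil {
		return 0, fmt.Errorf("numOfClusters annotation has a non-integer value %q: %w", raw, err)
	}
	if num < 0 {
		return 0, fmt.Errorf("numOfClusters annotation has a negative value %d", num)
	}
	return num, nil
}
```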
Not directly related to this PR: I realized that when we put values that should be in the spec into an annotation (as in this case), it causes problems for the condition's observed generation. We have no clear idea which value of numOfClusters the condition actually reflects. We might just bite the bullet and compute the hash value of the spec minus numOfClusters, so we can actually consume the condition.
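For illustration, one way to realize that suggestion is to hash the placement policy with the NumberOfClusters field cleared, so that scaling numOfClusters alone does not change the hash. This is a sketch under assumed type and field names, not the actual Fleet implementation:

```go
import (
	"crypto/sha256"
	"encoding/json"
	"fmt"

	fleetv1beta1 "go.goms.io/fleet/apis/placement/v1beta1" // import path assumed
)

// hashPolicyWithoutNumOfClusters hashes the placement policy with the
// NumberOfClusters field cleared, so a condition keyed on this hash reflects
// every policy field except numOfClusters. Type and field names are assumptions.
func hashPolicyWithoutNumOfClusters(policy *fleetv1beta1.PlacementPolicy) (string, error) {
	p := policy.DeepCopy()
	p.NumberOfClusters = nil // exclude numOfClusters from the hash
	raw, err := json.Marshal(p)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(raw)), nil
}
```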
Yeah, putting numOfClusters in the annotation is a compromise on many fronts. I remember the concerns back then were that a) for obvious reasons we cannot include this field as part of the snapshot (this can be avoided, as you have pointed out, Ryan, by excluding this field when computing the hash), and b) it is a little bit weird to update a snapshot, which is supposed to be immutable.
Kubernetes does not have this concern, as replicas is not part of the pod template; maybe we should move the placement type and numOfClusters out of the policy? But that might be a little too much at this point 😣
Hi Ryan! I will merge this PR first so as to unblock the next PR. If you would like to modify the numOfClusters-related behavior, I will send a separate PR to address it.
Description of your changes
This PR is part of a series of PRs that implement the Fleet workload scheduling.
It features more scheduling logic for PickN type CRPs.
I have:
- Run make reviewable to ensure this PR is ready for review.

How has this code been tested
Special notes for your reviewer
To control the size of the PR, certain unit tests are not checked in; they will be sent in a separate PR.