
feat: scheduler (13/): add more scheduler logic #422

Merged
merged 3 commits into Azure:main on Jul 11, 2023

Conversation

michaelawyu (Contributor)

Description of your changes

This PR is part of the PRs that implement the Fleet workload scheduling.

It features more scheduling logic for PickN type CRPs.

I have:

  • Run make reviewable to ensure this PR is ready for review.

How has this code been tested?

  • Unit tests

Special notes for your reviewer

To control the size of the PR, certain unit tests are not checked in; they will be sent in a separate PR.

@zhiying-lin left a comment (Contributor)
A few minor comments; the rest LGTM :)

Review threads (resolved):

  • pkg/scheduler/framework/framework.go (outdated)
  • pkg/scheduler/framework/framework.go
  • pkg/scheduler/framework/framework_test.go (outdated)

// ExtractNumOfClustersFromPolicySnapshot extracts the numOfClusters value from the annotations
// on a policy snapshot.
func ExtractNumOfClustersFromPolicySnapshot(policy *fleetv1beta1.ClusterSchedulingPolicySnapshot) (int, error) {
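The excerpt above shows only the signature. A minimal self-contained sketch of what such an annotation lookup might do; the annotation key, the simplified parameter type, and the error messages here are illustrative assumptions, not the actual Fleet implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// numOfClustersAnnotation is an assumed annotation key for illustration.
const numOfClustersAnnotation = "kubernetes-fleet.io/number-of-clusters"

// extractNumOfClusters reads the numeric annotation value and converts it
// to an int, returning an error for a missing or malformed value.
func extractNumOfClusters(annotations map[string]string) (int, error) {
	raw, ok := annotations[numOfClustersAnnotation]
	if !ok {
		return 0, fmt.Errorf("annotation %s is not present", numOfClustersAnnotation)
	}
	n, err := strconv.Atoi(raw)
	if err != nil || n < 0 {
		return 0, fmt.Errorf("annotation %s has an invalid value %q", numOfClustersAnnotation, raw)
	}
	return n, nil
}

func main() {
	n, err := extractNumOfClusters(map[string]string{numOfClustersAnnotation: "3"})
	fmt.Println(n, err) // 3 <nil>
}
```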
Contributor

Not directly related to this PR: I realized that when we put values that should be in the spec into an annotation (as in this case), it causes a problem with the condition's observed generation; we have no clear idea which "numberOfClusters" value the condition is actually reflecting. We might just bite the bullet and compute the hash of the spec minus "numberOfClusters" so we can actually consume the condition.
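The suggestion above can be sketched as follows. The spec type and field names are assumptions for illustration, not Fleet's actual API; the point is that dropping numberOfClusters before hashing makes the hash (and hence the snapshot identity) stable across numberOfClusters-only changes:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// policySpec is a stand-in for the real scheduling policy spec.
type policySpec struct {
	PlacementType string   `json:"placementType"`
	NumOfClusters int      `json:"numberOfClusters"`
	ClusterNames  []string `json:"clusterNames,omitempty"`
}

// hashSpecWithoutNumOfClusters serializes the spec, drops the
// numberOfClusters field, and hashes the remainder. Go's json.Marshal
// emits map keys in sorted order, so the output is deterministic.
func hashSpecWithoutNumOfClusters(spec policySpec) (string, error) {
	raw, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return "", err
	}
	delete(m, "numberOfClusters")
	canonical, err := json.Marshal(m)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(canonical)), nil
}

func main() {
	a, _ := hashSpecWithoutNumOfClusters(policySpec{PlacementType: "PickN", NumOfClusters: 3})
	b, _ := hashSpecWithoutNumOfClusters(policySpec{PlacementType: "PickN", NumOfClusters: 5})
	fmt.Println(a == b) // true: only numberOfClusters differs
}
```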

michaelawyu (Contributor, Author)

Yeah, putting numOfClusters in the annotation is a compromise on many fronts. I remember the concerns back then were that a) for obvious reasons we cannot include this field as part of the snapshot (this can be avoided, as you have pointed out, Ryan, by excluding this field when computing the hash), and b) it is a little bit weird to update a snapshot, which is supposed to be immutable.

Kubernetes does not have this concern, as replicas is not part of the pod template; maybe we should move the placement type and numOfClusters out of the policy? But that might be a little bit too much at this point 😣

michaelawyu (Contributor, Author)

Hi Ryan! I will merge this PR first so as to unblock the next PR. If you would like to modify the numOfClusters-related behavior, I will send a separate PR to address it.

@michaelawyu michaelawyu merged commit 28dc945 into Azure:main Jul 11, 2023