
perf: clear instance type cache after ICE #7517

Open
wants to merge 2 commits into main
Conversation

jesseanttila-cai

Fixes #7443

Description

The instanceTypesCache in the InstanceType provider uses a composite cache key that includes the SeqNum of the unavailableOfferings cache. Because every entry of instanceTypesCache becomes unreachable whenever unavailableOfferings is modified, the cache can safely be flushed at that point to reduce memory usage.

Changes to other parts of the instanceTypesCache key are not considered in this patch. A similar issue could likely be triggered, for example, by dynamically updating the blockDeviceMappings of a nodeclass based on pod requirements. Since our setup most commonly modifies nodepool/nodeclass configurations in response to ICEs, this patch was sufficient to resolve our memory usage issues.

How was this change tested?

A patched version of the Karpenter controller was deployed in a development environment, and memory usage during cluster scale-up was tracked for two weeks. Peak memory usage was reduced by as much as 80% in situations where several ICEs occurred in quick succession, completely eliminating the OOM issues seen previously.

Does this change impact docs?

  • Yes, PR includes docs updates
  • Yes, issue opened: #
  • No

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@jesseanttila-cai jesseanttila-cai requested a review from a team as a code owner December 11, 2024 16:57

netlify bot commented Dec 11, 2024

Deploy Preview for karpenter-docs-prod ready!

  • 🔨 Latest commit: d1ca237
  • 🔍 Latest deploy log: https://app.netlify.com/sites/karpenter-docs-prod/deploys/67936159a150ac000886c8c1
  • 😎 Deploy Preview: https://deploy-preview-7517--karpenter-docs-prod.netlify.app


This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.

@coveralls

Pull Request Test Coverage Report for Build 12281157741

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 16 of 16 (100.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.04%) to 65.067%

Totals
  • Change from base Build 12265951884: +0.04%
  • Covered Lines: 5748
  • Relevant Lines: 8834

💛 - Coveralls

@engedaam
Contributor

@jesseanttila-cai Can you help me understand why the automatic cache clean-up does not cover the case you are describing? Defined Cache TTL:

@jesseanttila-cai
Author

@engedaam

By default, the instance type provider cache should clean up the old keys after 6 minutes

The automatic cache clean-up does indeed limit the impact of the issue; in practice, however, the number of ICE occurrences within the TTL period can be large enough to have a very significant effect on memory usage. The graph shared in #7443 displays this behavior, including the effect of the cache TTL. The heap profile for the flamegraph in the comment is available here.


@engedaam engedaam left a comment


Had a few comments

/karpenter snapshot

return append([]*cloudprovider.InstanceType{}, item.([]*cloudprovider.InstanceType)...), nil

if p.instanceTypesResolver.UnavailableOfferingsChanged() {
p.instanceTypesCache.Flush()
Contributor

What do you think about only deleting the old cache key? We can save some work if the customer has multiple nodeclasses.

Author

The unavailableOfferings SeqNum is included in all cache keys, and it is the same across all nodeclasses:

return fmt.Sprintf("%016x-%016x-%s-%s-%d",
kcHash,
blockDeviceMappingsHash,
lo.FromPtr((*string)(nodeClass.Spec.InstanceStorePolicy)),
nodeClass.AMIFamily(),
d.unavailableOfferings.SeqNum,
)
All cached values become inaccessible when the SeqNum is incremented, so there should not be any accessible values left in the cache when it is flushed.

pkg/providers/instancetype/types.go (review thread outdated, resolved)

@engedaam engedaam left a comment


/karpenter snapshot


Snapshot successfully published to oci://021119463062.dkr.ecr.us-east-1.amazonaws.com/karpenter/snapshot/karpenter:0-aa24e89e4d83300d86b07f41d2421dc000faa849.
To install you must login to the ECR repo with an AWS account:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 021119463062.dkr.ecr.us-east-1.amazonaws.com

helm upgrade --install karpenter oci://021119463062.dkr.ecr.us-east-1.amazonaws.com/karpenter/snapshot/karpenter --version "0-aa24e89e4d83300d86b07f41d2421dc000faa849" --namespace "kube-system" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

Comment on lines +66 to +67
// GetUnavailableOfferingsSeqNum returns the current seq num of the unavailable offerings cache
GetUnavailableOfferingsSeqNum() uint64
Author

The instancetype provider could alternatively directly access the unavailable offerings cache for this, allowing this interface to remain unmodified but requiring a new field to be added to the instancetype provider constructor.

@jesseanttila-cai
Author

@engedaam I ended up rewriting this to properly define the desired behavior for multiple concurrent callers, synchronizing flushes and insertions to keep outdated entries out of the cache. There is one more change I could make to further clean up the implementation, described in #7517 (comment). Other than that everything should be good to go, as I don't believe there is a reliable way to automatically test for the kind of concurrency issues that these latest changes are supposed to prevent.

Successfully merging this pull request may close these issues.

InstanceType cache is not cleared after ICE