perf: clear instance type cache after ICE #7517
base: main
Conversation
✅ Deploy Preview for karpenter-docs-prod ready!
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.
Pull Request Test Coverage Report for Build 12281157741
Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
💛 - Coveralls
@jesseanttila-cai Can you help me understand why the automatic cache clean-up does not cover the case you are describing? Defined Cache TTL:
The automatic cache clean-up does indeed limit the effect of the issue; however, in practice the number of ICE occurrences within the TTL period can be large enough to have a very significant effect on memory usage. The graph shared in #7443 displays this behavior, including the effect of the cache TTL. The heap profile for the flamegraph in the comment is available here.
Had a few comments
/karpenter snapshot
return append([]*cloudprovider.InstanceType{}, item.([]*cloudprovider.InstanceType)...), nil

if p.instanceTypesResolver.UnavailableOfferingsChanged() {
	p.instanceTypesCache.Flush()
}
What do you think about only deleting the old cache key? We could save some work if the customer has multiple NodeClasses.
The unavailableOfferings SeqNum is included in all cache keys, and it is the same across all NodeClasses:
karpenter-provider-aws/pkg/providers/instancetype/types.go
Lines 92 to 98 in aa24e89
	return fmt.Sprintf("%016x-%016x-%s-%s-%d",
		kcHash,
		blockDeviceMappingsHash,
		lo.FromPtr((*string)(nodeClass.Spec.InstanceStorePolicy)),
		nodeClass.AMIFamily(),
		d.unavailableOfferings.SeqNum,
	)
/karpenter snapshot
Snapshot successfully published to
// GetUnavailableOfferingsSeqNum returns the current seq num of the unavailable offerings cache
GetUnavailableOfferingsSeqNum() uint64
The instancetype provider could alternatively directly access the unavailable offerings cache for this, allowing this interface to remain unmodified but requiring a new field to be added to the instancetype provider constructor.
@engedaam I ended up rewriting this to properly define the desired behavior for multiple concurrent callers, synchronizing flushes and insertions to keep outdated entries out of the cache. There is one more change I could make to further clean up the implementation, described in #7517 (comment). Other than that, everything should be good to go; I don't believe there is a reliable way to automatically test for the kind of concurrency issues that these latest changes are meant to prevent.
Fixes #7443
Description
The instanceTypesCache in the InstanceType provider uses a complex cache key that includes the SeqNum of the unavailableOfferings cache. Since all entries of instanceTypesCache become invalid whenever unavailableOfferings is modified, the cache can be flushed in this case to reduce memory usage.
Changes to other parts of the instanceTypesCache key are not considered in this patch. It is likely that a similar issue could be triggered, for example, by dynamically updating the blockDeviceMappings of a NodeClass based on pod requirements. Since our setup most commonly modifies nodepool/nodeclass configurations in response to ICEs, this patch was sufficient to solve our memory usage issues.
How was this change tested?
A patched version of the Karpenter controller was deployed in a development environment, and memory usage during cluster scale-up has been tracked for a period of two weeks. Peak memory usage appears to be reduced by as much as 80% in situations where several ICEs occur in quick succession, completely eliminating previously seen OOM issues.
Does this change impact docs?
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.