Implement collection GETs for node pools #883
Conversation
Works for any document type now, and just stores a slice rather than the entire cache for that document type.
A new function NewQueryItemsSinglePageIterator alters the behavior of QueryItemsIterator to stop after the first "page" of items and record the Cosmos DB continuation token. The continuation token can be retrieved from the iterator with GetContinuationToken. A QueryItemsIterator created with NewQueryItemsIterator will never have a continuation token because it iterates until the last item. The in-memory cache also adds a GetContinuationToken method to its iterator implementation to fulfill the interface contract, but it always returns an empty string.
For CosmosDBClient, the maxItems argument controls the type of iterator returned. A positive maxItems returns a single-page iterator with a possible continuation token; otherwise the iterator continues until the last item. Since the in-memory cache does not have continuation tokens, the maxItems argument is ignored there. This also drops the resourceType argument: callers now need to parse the iterator items into resource documents before checking the resource type.
Similar to QueryItemsIterator for Cosmos DB pagination.
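Roughly, the iterator contract described above can be pictured like the sketch below. Only GetContinuationToken is taken from the description; the other identifiers are paraphrased assumptions rather than the PR's actual names.

```go
package database

import (
	"context"
	"iter"
)

// DBClientIterator is a sketch of the iterator contract described above,
// assuming Go 1.23+ range-over-func iterators.
type DBClientIterator[T any] interface {
	// Items yields documents until the first page is exhausted (single-page
	// iterator) or until the last item (full iterator).
	Items(ctx context.Context) iter.Seq[*T]

	// GetContinuationToken returns the Cosmos DB continuation token recorded
	// by a single-page iterator. A full iterator, and the in-memory cache
	// iterator, always return an empty string.
	GetContinuationToken() string

	// GetError reports any error encountered while iterating.
	GetError() error
}
```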
Force-pushed from 10ed16e to 807f48a
	return
}

if strings.HasSuffix(strings.ToLower(doc.Key.ResourceType.Type), resourceTypeName) {
Can this be done with a WHERE clause at the DB layer?
Technically yes, but see my in-memory cache answer below.
I'm going to leave this as-is for this PR, but I added a "FIXME" comment here to circle back to it once I've gotten rid of the generic database interface I described below.
Sounds good. IMO, not being able to delegate this kind of thing to the DB because of the vestigial in-memory implementation makes getting rid of it a worthwhile use of time before long.
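For reference, pushing the filter into the query might look roughly like the sketch below; the function name, container handle, and JSON paths (c.key.resourceType.type) are assumptions inferred from the Go field access in the diff above.

```go
package database

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// listDocsByResourceType sketches filtering by resource type with a WHERE
// clause in Cosmos DB instead of filtering the returned items in Go.
func listDocsByResourceType(ctx context.Context, container *azcosmos.ContainerClient, pk azcosmos.PartitionKey, resourceTypeName string) ([][]byte, error) {
	query := "SELECT * FROM c WHERE ENDSWITH(LOWER(c.key.resourceType.type), @typeName)"
	opts := azcosmos.QueryOptions{
		QueryParameters: []azcosmos.QueryParameter{
			{Name: "@typeName", Value: resourceTypeName},
		},
	}

	var items [][]byte
	pager := container.NewQueryItemsPager(query, pk, &opts)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return nil, err
		}
		items = append(items, page.Items...)
	}
	return items, nil
}
```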
type operationCacheIterator struct {
	operation map[string]*OperationDocument
	err       error

type cacheIterator struct {
For my understanding, why is there an in-memory cache for CosmosDB? Was there a particular REST p95 that couldn't be reached without caching?
It's there for historical reasons, from early in development before we had access to Cosmos DB. Both the Cosmos DB client and the in-memory cache are hidden behind a generic database interface. Once we got Cosmos DB access, it was decided to keep the interface and cache around as a unit-test aid, and for a time that was helpful while our database use cases were simple. But it's technical debt now; it's become an obstacle to leveraging more advanced features of Cosmos DB. I'm honestly ready to ditch it and just use gomock or some such for unit tests, but I'll need to find time to pay down that debt.
I was surprised to discover the collection endpoint for node pools is a valid resource ID (according to ParseResourceID), whereas the collection endpoints for clusters (by subscription and by resource group) are not. This required tweaking the static validation middleware, which was blocking node pool collection requests because the parsed resource ID had no resource name.
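As an illustration of that middleware tweak, assuming the azcore arm.ParseResourceID helper is the one in play; the function and error names here are hypothetical:

```go
package middleware

import (
	"errors"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
)

var errMissingResourceName = errors.New("resource name is required")

// validateResourceID sketches the adjusted static validation: a node pool
// collection URL parses as a valid resource ID whose Name is empty, so an
// empty name is now tolerated on GET (collection) requests instead of being
// rejected outright.
func validateResourceID(r *http.Request) error {
	resourceID, err := arm.ParseResourceID(r.URL.Path)
	if err != nil {
		return err
	}

	// An empty resource name on a GET indicates a collection (list) request.
	if resourceID.Name == "" && r.Method != http.MethodGet {
		return errMissingResourceName
	}
	return nil
}
```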
Force-pushed from 807f48a to f8b56ec
What this PR does
Per the Resource Provider Contract rules for Get Resource (specifically rule RPC-Get-V1-05), the only collection GET endpoint required for nested tracked resources is on the parent resource:
I managed to generalize the Frontend.ArmResourceList handler function for both clusters and node pools. The logic there was already fairly complicated and I wanted to avoid having two copies of it.

There's also a bunch of enhancements here to my "failable push iterator" pattern, such as extending it to encapsulate pagination of Cluster Service list responses. Unfortunately I can't really add unit tests for these since they depend on actual Cluster Service responses.
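For readers unfamiliar with the pattern, here is a minimal sketch of a failable push iterator that encapsulates paging (assuming Go 1.23 range-over-func iterators; every name below is illustrative rather than the PR's actual types):

```go
package frontend

import "iter"

// pagedLister stands in for a client that returns results one page at a
// time, e.g. the Cluster Service list calls mentioned above.
type pagedLister[T any] interface {
	NextPage() (items []T, more bool, err error)
}

// failableIterator captures the "failable push iterator" idea: callers range
// over Items and check GetError once the loop ends, so pagination and error
// handling stay encapsulated in one place.
type failableIterator[T any] struct {
	lister pagedLister[T]
	err    error
}

func (it *failableIterator[T]) Items() iter.Seq[T] {
	return func(yield func(T) bool) {
		for {
			items, more, err := it.lister.NextPage()
			if err != nil {
				it.err = err
				return
			}
			for _, item := range items {
				if !yield(item) {
					return
				}
			}
			if !more {
				return
			}
		}
	}
}

// GetError reports the first error encountered while paging, if any.
func (it *failableIterator[T]) GetError() error { return it.err }
```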
Jira: I couldn't find a specific user story for this, but
Link to demo recording:
Special notes for your reviewer