Ensure current OOP calls for the same solution-checksum can share the same OOP solution computation #60575
Conversation
Force-pushed from 06974e1 to 1c43709.
Force-pushed from 1c43709 to 4323e78.
Resolved review thread (outdated): src/Workspaces/Remote/ServiceHub/Services/DocumentHighlights/RemoteDocumentHighlightsService.cs
Resolved review thread (outdated): ...ces/Remote/ServiceHub/Services/SemanticClassification/RemoteSemanticClassificationService.cs
/// classification). As long as we're holding onto it, concurrent feature requests for the same checksum can
/// share the computation of that particular solution and avoid duplicated concurrent work.
/// </summary>
private readonly Dictionary<Checksum, (int refCount, AsyncLazy<Solution> lazySolution)> _checksumToRefCountAndLazySolution = new();
this is the meat of the PR. as long as an operation is in flight that is using a particular PinnedSolutionInfo checksum, we will just hand anyone that wants the solution for it the corresponding AsyncLazy in this dictionary. The refcount is used to keep track of concurrent callers and to release the solution from this map once nothing is asking about it.
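To make the ref-counting concrete, here is a minimal sketch of how the acquire/release paths around that dictionary could look. The member names AcquireLazySolutionAsync, ReleaseLazySolutionAsync and ComputeSolutionAsync are illustrative only, not the PR's actual API:

```csharp
private readonly SemaphoreSlim _gate = new(initialCount: 1);
private readonly Dictionary<Checksum, (int refCount, AsyncLazy<Solution> lazySolution)> _checksumToRefCountAndLazySolution = new();

private async Task<AsyncLazy<Solution>> AcquireLazySolutionAsync(Checksum solutionChecksum, CancellationToken cancellationToken)
{
    using (await _gate.DisposableWaitAsync(cancellationToken).ConfigureAwait(false))
    {
        if (_checksumToRefCountAndLazySolution.TryGetValue(solutionChecksum, out var tuple))
        {
            // An operation for this checksum is already in flight; share its computation.
            _checksumToRefCountAndLazySolution[solutionChecksum] = (tuple.refCount + 1, tuple.lazySolution);
            return tuple.lazySolution;
        }

        // First caller for this checksum kicks off the (lazy) computation.
        var lazySolution = new AsyncLazy<Solution>(c => ComputeSolutionAsync(solutionChecksum, c), cacheResult: true);
        _checksumToRefCountAndLazySolution.Add(solutionChecksum, (refCount: 1, lazySolution));
        return lazySolution;
    }
}

private async Task ReleaseLazySolutionAsync(Checksum solutionChecksum)
{
    using (await _gate.DisposableWaitAsync(CancellationToken.None).ConfigureAwait(false))
    {
        var (refCount, lazySolution) = _checksumToRefCountAndLazySolution[solutionChecksum];
        if (refCount == 1)
        {
            // Nothing is asking about this solution anymore; drop it from the map.
            _checksumToRefCountAndLazySolution.Remove(solutionChecksum);
        }
        else
        {
            _checksumToRefCountAndLazySolution[solutionChecksum] = (refCount - 1, lazySolution);
        }
    }
}
```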
Resolved review thread (outdated): src/Workspaces/Remote/ServiceHub/Services/CodeLensReferences/RemoteCodeLensReferencesService.cs
{
    var solution = await GetSolutionAsync(solutionInfo, cancellationToken).ConfigureAwait(false);
    var document = solution.GetDocument(documentId);
the new pattern for OOP features. Features call RunServiceWithSolutionAsync, passing in the PinnedSolutionInfo they are associated with. They also pass in a callback which is given the solution once it is computed. This allows the computation to be shared among multiple concurrent callers into OOP with the same solution checksum.
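As a concrete illustration of that pattern (the method below is hypothetical, and the exact shape of the RunServiceWithSolutionAsync overload may differ from what the PR adds):

```csharp
// Hypothetical service method; the interface/constructor plumbing of the containing
// BrokeredServiceBase-derived type is omitted, and GetDeclarationCountAsync is made up.
public async ValueTask<int> GetDeclarationCountAsync(
    PinnedSolutionInfo solutionInfo, DocumentId documentId, CancellationToken cancellationToken)
{
    return await RunServiceWithSolutionAsync(solutionInfo, async solution =>
    {
        // 'solution' is the shared instance: concurrent OOP calls carrying the same
        // solution checksum all await the same underlying AsyncLazy<Solution>.
        var document = solution.GetDocument(documentId);
        var root = await document!.GetSyntaxRootAsync(cancellationToken).ConfigureAwait(false);
        return root!.DescendantNodes().Count();
    }, cancellationToken).ConfigureAwait(false);
}
```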
        // solution instance).
        return this.CurrentSolution;
    }
}
these diffs are terrible. I'm not sure why indenting threw it off so much. working on this.
Test failure showing gold bars:
// We're not doing an update, we're moving to a new solution entirely. Clear out the old one. This
// is necessary so that we clear out any open document information this workspace is tracking. Note:
// this seems suspect as the remote workspace should not be tracking any open document state.
this.ClearSolutionData();
@tmat for thoughts. this preserves what we had before... but what we had before seems very fishy/suspect.
Yeah, let's keep it for now and just remove the workspace updating completely once it's possible.
👍
/// classification). As long as we're holding onto it, concurrent feature requests for the same checksum can
/// share the computation of that particular solution and avoid duplicated concurrent work.
/// </summary>
private readonly Dictionary<Checksum, (int refCount, AsyncLazy<Solution> lazySolution)> _checksumToRefCountAndLazySolution = new();
Hrmm... I'm ok with the name as is. _availableSolutions is a bit too vague for me.
{
    return null;
}

return await RunWithSolutionAsync(solutionInfo, async solution =>
Can't we use RunServiceWithSolutionAsync?
the issue here is preserving the meaning of the using-block for telemetry.
could LogBlock be around RunServiceAsync?
let me do that in a followup.
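One possible shape for that follow-up, as a sketch only (the feature method, the FunctionId, and the exact overload shapes are placeholders, not what the follow-up will necessarily do):

```csharp
// Hypothetical feature method: the telemetry using-block wraps the shared-solution call.
public async ValueTask<int> GetReferenceCountAsync(
    PinnedSolutionInfo solutionInfo, DocumentId documentId, CancellationToken cancellationToken)
{
    // Placeholder FunctionId; a real feature would pass its own telemetry id here.
    using (Logger.LogBlock(FunctionId.CodeAnalysisService_GetReferenceCountAsync, cancellationToken))
    {
        return await RunServiceWithSolutionAsync(solutionInfo, solution =>
        {
            // ... real feature work against the shared solution would go here ...
            return Task.FromResult(solution.Projects.Count());
        }, cancellationToken).ConfigureAwait(false);
    }
}
```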
Three resolved review threads (outdated) on src/Workspaces/Remote/ServiceHub/Services/BrokeredServiceBase.cs
{
    using (await _gate.DisposableWaitAsync(cancellationToken).ConfigureAwait(false))
    {
        if (_checksumToRefCountAndLazySolution.TryGetValue(solutionChecksum, out var tuple))
not a thing the language supports (yet).
Great idea. Just a few suggestions to improve readability.
Sending out review for initial thoughts/feedback.
The general idea here is to set up our host/OOP communication to avoid the following problem.
To avoid this, this PR introduces a slightly different model for host-OOP calls. Now, when the host calls into OOP for a particular feature on a particular snapshot checksum, the OOP side creates a mapping from that checksum to the async computation for that solution (an AsyncLazy), and it keeps that mapping alive as long as the call is in flight. If any other calls come in during that time and need the same solution, they are handed the same solution-computation object, so everything is shared across those calls. This is done in a ref-counting fashion to ensure that as long as something calling into OOP is working on that snapshot, the lazy for it is kept alive and any concurrent calls just reuse it and do no extra work.
We still also keep track of the last primary-workspace solution, and the last solution queried for, so that if a call gets either and then returns, the next call can see those and potentially benefit from them.
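A rough sketch of that extra caching, with field and member names assumed rather than taken from the PR:

```csharp
// Outside the ref-counted map of in-flight computations, remember the last couple of
// solutions we produced so a later call for the same checksum can still reuse them.
private (Checksum checksum, Solution solution)? _lastPrimaryBranchSolution;
private (Checksum checksum, Solution solution)? _lastRequestedSolution;

// Called while holding _gate. Returns a remembered solution if either entry matches.
private Solution? TryGetCachedSolution(Checksum solutionChecksum)
{
    if (_lastPrimaryBranchSolution is { } primary && primary.checksum == solutionChecksum)
        return primary.solution;

    if (_lastRequestedSolution is { } requested && requested.checksum == solutionChecksum)
        return requested.solution;

    return null;
}
```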