How to get document (in separate transaction) after QueryIndex #538
I have an index, and I want a background job to go through the documents one by one, processing each in a separate transaction. `Query` is therefore not useful, since it would not use the transaction I want, but I can't see how to get the `DocumentId` from the `QueryIndex` result either. The `Id` field in the `MapIndex` class seems to be unrelated to the document id. Is this possible? I'm trying to do something like the sketch below.
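A sketch of the intended pattern (`Book`, `BookIndex`, and the processing step are illustrative placeholders, not part of the original report):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using YesSql;
using YesSql.Indexes;

public class Book { public string Title { get; set; } }
public class BookIndex : MapIndex { public string Title { get; set; } }

public class BookProcessor
{
    private readonly IStore _store;

    public BookProcessor(IStore store) => _store = store;

    public async Task ProcessAllAsync()
    {
        // List the index once, outside the per-item transactions.
        IEnumerable<BookIndex> entries;
        using (var listSession = _store.CreateSession())
        {
            entries = await listSession.QueryIndex<BookIndex>().ListAsync();
        }

        foreach (var entry in entries)
        {
            // Each document gets its own session/transaction...
            using var session = _store.CreateSession();

            // ...but nothing on 'entry' exposes the document id,
            // so it is unclear what to pass to GetAsync here.
            var book = await session.GetAsync<Book>(/* entry.??? */ 0);

            // process 'book', then commit
            await session.SaveChangesAsync();
        }
    }
}
```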
We could probably add … In the meantime you can create your own SQL query to list this index.
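Such a query might look like the following Dapper-based sketch. It assumes YesSql's default schema, in which a map-index table is named after the index type and carries a `DocumentId` column pointing at the owning document; any configured table prefix or collection suffix would need to be added to the table name.

```csharp
using System.Collections.Generic;
using System.Data.Common;
using System.Threading.Tasks;
using Dapper;

public static class BookIndexSql
{
    // Reads each index row together with the id of the document it points to.
    public static Task<IEnumerable<(int DocumentId, string Title)>> ListAsync(DbConnection connection)
    {
        return connection.QueryAsync<(int DocumentId, string Title)>(
            "SELECT DocumentId, Title FROM BookIndex");
    }
}
```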
Indeed, having the DocumentId available right away would be great. In the meantime I've come up with a relatively simple workaround: it costs a redundant join, but no custom machinery is needed, so it works for me until a proper solution exists.
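A sketch of such a workaround, reusing the `Book`/`BookIndex` types from above and assuming the document class exposes a public `int Id` property, which YesSql populates with the document id: the documents are queried through the index (that is the redundant join) purely to collect their ids, then each is re-loaded in its own session.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using YesSql;

public class Book
{
    // YesSql fills a public int Id property with the document id.
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BookProcessor
{
    private readonly IStore _store;

    public BookProcessor(IStore store) => _store = store;

    public async Task ProcessAllAsync()
    {
        // Query the documents *through* the index: the join to the Document
        // table is redundant, paid only to learn each document's id.
        List<int> ids;
        using (var listSession = _store.CreateSession())
        {
            var books = await listSession.Query<Book, BookIndex>().ListAsync();
            ids = books.Select(b => b.Id).ToList();
        }

        foreach (var id in ids)
        {
            // Re-load and process each document in its own transaction.
            using var session = _store.CreateSession();
            var book = await session.GetAsync<Book>(id);
            // ... process ...
            await session.SaveChangesAsync();
        }
    }
}
```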
Another problem I discovered along the way: if you ever abort the inner transaction with `CancelAsync`, it sets a flag that is never cleared, not even by `BeginTransactionAsync`, so the session object is simply unusable afterwards. This essentially means you have to create a new session object for every transaction (and since I have to use DI, a new DI scope for every transaction). That's probably not an unreasonable requirement, I just wish it were documented. Thanks for the help so far!
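The per-item loop might then look like this (a sketch, assuming `ISession` is registered as a scoped service, as it is in Orchard, and reusing `Book` from above):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using YesSql;

public class ScopedProcessor
{
    private readonly IServiceProvider _services;

    public ScopedProcessor(IServiceProvider services) => _services = services;

    public async Task ProcessOneAsync(int documentId)
    {
        // A fresh scope yields a fresh ISession; a cancelled session can
        // never be reused, so nothing here outlives one unit of work.
        using var scope = _services.CreateScope();
        var session = scope.ServiceProvider.GetRequiredService<ISession>();
        try
        {
            var book = await session.GetAsync<Book>(documentId);
            // ... process ...
            await session.SaveChangesAsync();
        }
        catch
        {
            await session.CancelAsync();
            throw; // the session and its scope are discarded either way
        }
    }
}
```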
Another option would be for you to add the document id to the concrete index, like any other value. In Orchard we do that to point to the document using a logical id, a unique identifier that is constant for the lifetime of the document (the content item id), since the …
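A sketch of that pattern, assuming the document itself carries such a logical id (here `BookId`; all names are illustrative):

```csharp
using YesSql.Indexes;

public class Book
{
    public string BookId { get; set; }  // logical id, stable for the document's lifetime
    public string Title { get; set; }
}

public class BookIndex : MapIndex
{
    // Stored on the index like any other mapped value, so a plain
    // QueryIndex result is enough to locate the document later.
    public string BookId { get; set; }
    public string Title { get; set; }
}

public class BookIndexProvider : IndexProvider<Book>
{
    public override void Describe(DescribeContext<Book> context)
    {
        context.For<BookIndex>().Map(book => new BookIndex
        {
            BookId = book.BookId,
            Title = book.Title
        });
    }
}
```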