Consider historical recalculations as a user-optional setting (default off, or with a specified period span).
The challenging part here is a late historical change outside of the current period (where aggregations will handle it), since such a change has an increasingly larger impact the further back it goes. This can happen with an upstream correction, or with splits / dividend adjustments, which have an even more insidious impact.
For example, say you're streaming all day long with a single quote stream, 100 indicators, and 30 more indicators with 2 layers of chaining. If an old or updated quote comes in for 500 periods ago, you now have a rather big situation with a potentially large and complex chain of cleanup work: $500(100+30\times2)=80{,}000$ recalculations in this case. It's possible to do, but if this happens frequently in large datasets, you can imagine the spiking loads that can occur.
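The rough cost arithmetic above can be sketched as a simple upper-bound estimate. The function name, parameters, and the linear-cost assumption (one recalculation per indicator layer per affected period) are illustrative, not part of any library API:

```python
def rebuild_cost(periods_back, simple_indicators, chained_indicators, chain_depth=2):
    """Rough upper bound on recalculation work for a late-arriving quote.

    Every affected period must be recomputed for each simple indicator,
    plus each chained indicator at every layer of its chain.
    """
    return periods_back * (simple_indicators + chained_indicators * chain_depth)

# The scenario above: a quote revised 500 periods back, 100 simple
# indicators, and 30 indicators with 2 layers of chaining.
print(rebuild_cost(500, 100, 30))  # 500 * (100 + 30*2) = 80000
```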
I say "potentially" because at some point it leans more toward moot than useful: it's ancient history that a user may not want to address at all. The option to do this may fall into the same settings category as @elpht recommends.
Current thinking here is that deleting forward cached values and notifying observers is still a net better load outcome than hierarchically rebuilding each increment thereafter. The latter approach triggers rebuilds that trigger further rebuilds, which can become an accelerating, out-of-control load spike. By contrast, deleting and rebuilding from the base with notifications can be done in an orderly, time-sequenced manner, which is more efficient.
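The delete-forward-and-rebuild approach described above can be sketched as follows. The class, its members, and the single-series shape are hypothetical placeholders, not the library's actual streaming API:

```python
class IndicatorSeries:
    """Sketch: on a late arrival, truncate the forward cache once, then
    rebuild in time order, notifying observers as each value lands."""

    def __init__(self, compute_one):
        self.compute_one = compute_one  # (quotes, index) -> indicator value
        self.quotes = []                # time-ordered source quotes
        self.cache = []                 # cached results, parallel to quotes
        self.observers = []             # chained consumers, called as (i, value)

    def on_quote(self, index, quote):
        if index == len(self.quotes):   # normal streaming append
            self.quotes.append(quote)
        else:                           # late arrival: correct the history,
            self.quotes[index] = quote  # delete forward cached values once...
            del self.cache[index:]
        self._rebuild_from(index)       # ...then rebuild forward in sequence

    def _rebuild_from(self, start):
        for i in range(start, len(self.quotes)):
            value = self.compute_one(self.quotes, i)
            self.cache.append(value)
            for notify in self.observers:  # one orderly notification per period
                notify(i, value)

# Example with a cumulative-sum "indicator":
series = IndicatorSeries(lambda quotes, i: sum(quotes[: i + 1]))
for i, q in enumerate([1, 2, 3]):
    series.on_quote(i, q)
print(series.cache)      # [1, 3, 6]
series.on_quote(0, 10)   # late correction 2 periods back
print(series.cache)      # [10, 12, 15]
```

Chained indicators would subscribe as observers and apply the same truncate-and-rebuild step to their own caches, so the cascade happens once per layer in time order rather than recursively per increment.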
Handling late arrivals in streaming use cases
Originally posted by @DaveSkender in #1018 (reply in thread) discussion with @elAndyG