Currently, the static/inactive tree of the broad phase uses the exact same level of aggressiveness as the active tree. That's extremely wasteful, given that static and inactive objects generally do not move at all.
Further, the static tree's jobs are not handled in parallel with the active tree's. That pointlessly creates sync overhead.
To improve this:
Make the multithreaded refinement context create jobs for use in an external scheduler. Do not internally dispatch them.
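A minimal sketch of that separation, assuming hypothetical names (`RefinementContext`, `refine_subtree` are illustrative, not the engine's API): the context only builds job closures, and the caller's scheduler decides when and where they run.

```python
# Hypothetical sketch: a refinement context that *creates* jobs for an
# external scheduler instead of dispatching them internally.
from dataclasses import dataclass, field
from typing import Callable, List

def refine_subtree(root):
    ...  # placeholder: refit + local restructure for one subtree

@dataclass
class RefinementContext:
    jobs: List[Callable[[], None]] = field(default_factory=list)

    def create_jobs(self, subtree_roots):
        # One job per subtree root; nothing executes yet.
        for root in subtree_roots:
            self.jobs.append(lambda r=root: refine_subtree(r))
        return self.jobs

# The caller hands context.jobs to its own scheduler, so the active and
# static trees' jobs can share a single dispatch instead of running in
# two sequential dispatches with a sync point between them.
```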
Build a dedicated static/inactive refinement that exploits those assumptions. That most likely means explicit refits on add/remove/move, followed by low-effort periodic refinements. The fact that we probably won't have more than one static refinement per frame is totally fine, since it should run in the same dispatch as the active tree's refinement.
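The policy above can be sketched as follows. This is an illustrative skeleton under assumed names (`StaticTree`, `refinement_period`), not the engine's actual types: changes trigger an immediate refit, while refinement runs only rarely and cheaply.

```python
# Hypothetical sketch of the static/inactive policy: refit eagerly on
# add/remove/move, and run only a tiny periodic refinement pass.
class StaticTree:
    def __init__(self):
        self.leaves = {}   # body id -> bounds (min, max)
        self.frame = 0

    def add(self, body, bounds):
        self.leaves[body] = bounds
        self.refit(body)

    def move(self, body, bounds):
        self.leaves[body] = bounds
        self.refit(body)

    def refit(self, body):
        ...  # placeholder: walk the changed leaf's ancestors, expanding bounds

    def update(self, refinement_period=64):
        # Low-effort periodic refinement: one small pass every N frames
        # is plenty, since static bodies rarely degrade tree quality.
        self.frame += 1
        if self.frame % refinement_period == 0:
            self.refine_one_subtree()

    def refine_one_subtree(self):
        ...  # placeholder: restructure a single cheap subtree
```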
There may be value in some form of batched refit. The API isn't immediately obvious, but if you changed a thousand bodies, they'll each do a lot of duplicate work near the top of the tree. On the other hand, the worst case might cost a handful of microseconds, so further optimization effort could be silly.
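One plausible shape for the deduplication, sketched with a hypothetical parent map rather than any real tree layout: collect every dirty ancestor once across all changed leaves, then refit each marked node a single time instead of re-walking shared ancestor chains per leaf.

```python
# Hypothetical batched refit: instead of refitting each changed leaf's
# ancestor chain independently (duplicating work near the root), mark
# every dirty ancestor exactly once, then refit each marked node once.
def batched_refit(parents, dirty_leaves):
    # parents: node -> parent node (the root maps to None).
    dirty = set()
    for leaf in dirty_leaves:
        node = leaf
        # Stop climbing as soon as we hit an already-marked ancestor;
        # some earlier leaf's walk covered the rest of the path.
        while node is not None and node not in dirty:
            dirty.add(node)
            node = parents.get(node)
    # A real implementation would now refit `dirty` bottom-up; here we
    # just report how many nodes need touching.
    return len(dirty)

# With a thousand changed leaves under one root, the naive per-leaf walk
# touches the root a thousand times; the batch touches each node once.
```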