
DirectComposition FAQ


This page is where we gather questions and answers about DirectComposition.

The guiding question is: What's the most efficient way to get pixels on the screen? This question probably has different answers for WebRender's regular rendering, hardware-decoded video, and software-decoded video.

What are the different ways to get pixels on the screen with DirectComposition?

You create a visual, and then you call SetContent to specify what the visual displays. There are three choices for the content: a swap chain, a composition surface, or a window handle. The window handle probably isn't useful to us, so it comes down to swap chain vs. composition surface.
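For concreteness, here is a minimal sketch of that setup in C++, assuming an existing D3D11 device, its DXGI device, a DXGI factory, and a window (the names d3dDevice, dxgiDevice, dxgiFactory, and hwnd are placeholders), with all error handling omitted:

```cpp
#include <d3d11.h>
#include <dxgi1_2.h>
#include <dcomp.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create the DirectComposition device and a target for the window.
ComPtr<IDCompositionDevice> dcompDevice;
DCompositionCreateDevice(dxgiDevice.Get(), IID_PPV_ARGS(&dcompDevice));
ComPtr<IDCompositionTarget> target;
dcompDevice->CreateTargetForHwnd(hwnd, TRUE, &target);

// Create a visual and give it a swap chain as its content.
ComPtr<IDCompositionVisual> visual;
dcompDevice->CreateVisual(&visual);

DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Width = 800;
desc.Height = 600;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;  // composition requires a flip model
desc.AlphaMode = DXGI_ALPHA_MODE_IGNORE;
ComPtr<IDXGISwapChain1> swapChain;
dxgiFactory->CreateSwapChainForComposition(d3dDevice.Get(), &desc, nullptr, &swapChain);

visual->SetContent(swapChain.Get());  // a composition surface would go here instead
target->SetRoot(visual.Get());
dcompDevice->Commit();
```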

How do you put pixels on the screen with a swap chain?

You render to the swap chain's back buffer as usual, then you call Present1() on the swap chain, then you call Commit() on the composition device.
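Continuing the sketch above (same hypothetical swapChain and dcompDevice), the per-frame flow might look like this:

```cpp
// Render to the swap chain's back buffer with D3D11 as usual, then:
DXGI_PRESENT_PARAMETERS params = {};  // zeroed: no dirty rects, present everything
swapChain->Present1(1, 0, &params);   // queue the new buffer for the DWM
dcompDevice->Commit();                // the frame becomes visible with the commit
```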

How do you put pixels on the screen with a composition surface?

You call BeginDraw on the composition surface, then you draw, then you call EndDraw, then you call Commit(). BeginDraw gives you a DXGI surface that you draw to.

You can have a regular composition surface or a "virtual" composition surface. The virtual composition surface uses tiles internally. EdgeHTML uses it for scrolled layers.
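A sketch of that flow, reusing the hypothetical dcompDevice and visual from above (size and format are arbitrary):

```cpp
ComPtr<IDCompositionSurface> surface;
dcompDevice->CreateSurface(800, 600, DXGI_FORMAT_B8G8R8A8_UNORM,
                           DXGI_ALPHA_MODE_IGNORE, &surface);
// CreateVirtualSurface takes the same arguments and returns an
// IDCompositionVirtualSurface, which adds Resize and Trim.
visual->SetContent(surface.Get());

// Update (part of) the surface. BeginDraw hands back a DXGI surface plus an
// offset; drawing must happen at that offset, not at (0, 0).
RECT update = { 0, 0, 800, 600 };
POINT offset = {};
ComPtr<IDXGISurface> drawTarget;
surface->BeginDraw(&update, IID_PPV_ARGS(&drawTarget), &offset);
// ... draw into drawTarget at `offset` with D3D11 or Direct2D ...
surface->EndDraw();
dcompDevice->Commit();
```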

Does drawing to a composition surface involve a copy?

(Not an answer:) It's unclear to me whether the DXGI update surface can ever be the composition surface's "true" backing store, i.e. whether it lets you draw directly into the final surface. It seems to me that it might always allocate a new update surface and then copy it into the actual surface. Something like that seems necessary for virtual composition surfaces anyway.

Do composition surfaces allow partial present?

(Not an answer:) Judging from update flashing, it seems like the entire composition surface is composited by the DWM whenever anything on it changes. The update rectangle passed to BeginDraw does not seem to matter. This requires more investigation.

Do swap chains in DC visuals allow partial present?

It seems they do: Present1() can be called as usual, and WebRender's current swap chain is already presented via a DC visual, so everything should work as described in the Present1 documentation.
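As a hedged example, a partial present through a DC visual would pass dirty rects in DXGI_PRESENT_PARAMETERS, exactly as for a regular window swap chain:

```cpp
RECT dirty = { 100, 100, 228, 228 };  // only this region changed this frame
DXGI_PRESENT_PARAMETERS params = {};
params.DirtyRectsCount = 1;
params.pDirtyRects = &dirty;
swapChain->Present1(1, 0, &params);
dcompDevice->Commit();
```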

So is a swap chain cheaper?

(Not an answer:) It seems like drawing to a swap chain should be more efficient: there's no copy, because the swap chain just flips between buffers; Present1 lets us specify an update rect; and updates to multiple visuals in the same window stay in sync, because Present1 has no effect until Commit is called. So what's the advantage of using a composition surface? Does it save memory?

If I have an ID3D11Texture2D object, or any other IDXGISurface, can I set it on the visual directly?

(Not an answer:) When I draw video, I often already have a texture that contains the video frame. The data in that texture might be in a YUV color format. Can I use that texture in a visual directly, without a copy? The answer is likely no. I haven't found an API for setting an IDXGISurface on a visual, and there would need to be a mechanism that prevents me from touching the texture while the DWM is using it. It seems like I have to copy the texture into a swap chain's back buffer and present it that way.
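A sketch of that copy path, assuming a hypothetical videoTexture (the decoded frame) and context (the D3D11 immediate context) alongside the swap chain from above; a YUV-format frame would need a conversion draw or an ID3D11VideoContext::VideoProcessorBlt instead of the plain copy shown here:

```cpp
ComPtr<ID3D11Texture2D> backBuffer;
swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
// CopyResource requires matching dimensions and compatible formats, so this
// only works directly if the frame is already BGRA of the swap chain's size.
context->CopyResource(backBuffer.Get(), videoTexture.Get());
DXGI_PRESENT_PARAMETERS params = {};
swapChain->Present1(1, 0, &params);
dcompDevice->Commit();
```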

Can I upload main memory data directly into a composition surface or visual?

(Not an answer:) I haven't seen a way to do that. I can get the data into an IDXGISurface easily, but then I still need to copy that surface into the swap chain. Alternatively, I could use a composition surface, call BeginDraw, and upload my data into the update surface that BeginDraw returns, by mapping that update surface. This likely also involves a copy.
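A sketch of the composition-surface variant, assuming pixels and stride describe a system-memory BGRA buffer of width × height (hypothetical names); note that UpdateSubresource is itself a CPU-to-GPU copy, so at least one copy remains either way:

```cpp
RECT update = { 0, 0, (LONG)width, (LONG)height };
POINT offset = {};
ComPtr<ID3D11Texture2D> updateTex;
surface->BeginDraw(&update, IID_PPV_ARGS(&updateTex), &offset);
// Copy the system-memory pixels into the update surface at the offset that
// BeginDraw handed back.
D3D11_BOX dest = { (UINT)offset.x, (UINT)offset.y, 0,
                   (UINT)offset.x + width, (UINT)offset.y + height, 1 };
context->UpdateSubresource(updateTex.Get(), 0, &dest, pixels, stride, 0);
surface->EndDraw();
dcompDevice->Commit();
```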

What's a good way to get hardware-decoded video into a visual?

Decode swap chains (IDXGIDecodeSwapChain) exist for this, but I am not sure whether they eliminate copies. According to this forum post, decode swap chains mostly exist to allow the YUV->RGB conversion to happen at hardware scan-out time.
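For reference, a hedged sketch of that API surface (IDXGIFactoryMedia comes from dxgi1_3.h and can be queried from the DXGI factory; yuvBuffers is assumed to be an IDXGIResource wrapping the decoder's output buffers, and error handling is omitted):

```cpp
#include <dxgi1_3.h>

// Create a shareable composition surface handle and a decode swap chain on it.
HANDLE surfaceHandle = nullptr;
DCompositionCreateSurfaceHandle(COMPOSITIONOBJECT_ALL_ACCESS, nullptr, &surfaceHandle);

ComPtr<IDXGIFactoryMedia> factoryMedia;
dxgiFactory.As(&factoryMedia);

DXGI_DECODE_SWAP_CHAIN_DESC desc = {};
ComPtr<IDXGIDecodeSwapChain> decodeSwapChain;
factoryMedia->CreateDecodeSwapChainForCompositionSurfaceHandle(
    d3dDevice.Get(), surfaceHandle, &desc, yuvBuffers.Get(), nullptr,
    &decodeSwapChain);

// Attach the surface to a visual and present decoded buffers.
ComPtr<IUnknown> surface;
dcompDevice->CreateSurfaceFromHandle(surfaceHandle, &surface);
visual->SetContent(surface.Get());
dcompDevice->Commit();

decodeSwapChain->PresentBuffer(0, 1, 0);  // present decode buffer 0
```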
