We present GS-ProCams, the first Gaussian Splatting-based framework for projector-camera systems (ProCams). GS-ProCams significantly improves the efficiency of projection mapping (PM), which requires establishing geometric and radiometric mappings between the projector and the camera. Previous CNN-based ProCams are constrained to a specific viewpoint, limiting their applicability to novel perspectives. NeRF-based ProCams, in contrast, support view-agnostic projection mapping; however, they require an additional co-located light source and demand substantial computational and memory resources. To address these issues, GS-ProCams employs 2D Gaussian splatting for scene representation, enabling efficient view-agnostic ProCams applications. In particular, we explicitly model the complex geometric and photometric mappings of ProCams in terms of the projector's response, the target surface's geometry and materials represented by Gaussians, and a global illumination component. We then employ differentiable physically-based rendering to jointly estimate these components from captured multi-view projections. Compared to state-of-the-art NeRF-based methods, GS-ProCams eliminates the need for additional devices while achieving superior ProCams simulation quality; it is also 600 times faster and uses only one-tenth of the GPU memory.
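To make the image-formation model described above concrete, a minimal sketch of a typical ProCams forward model is given below; the notation ($\pi$, $g$, $\rho$, $E_g$) is illustrative and not taken from the paper. The camera-captured color at pixel $\mathbf{x}$ is approximated as the projector's radiometric response to the input image, warped by the camera-to-projector geometric mapping induced by the Gaussian surface geometry, modulated by the surface reflectance, and offset by a global illumination term:
\[
\hat{I}_c(\mathbf{x}) \;\approx\; \rho(\mathbf{x}) \odot g\!\big(I_p(\pi(\mathbf{x}))\big) \;+\; E_g(\mathbf{x}),
\]
where $\pi$ denotes the geometric mapping from camera pixels to projector pixels, $g$ the projector's radiometric response, $I_p$ the projector input image, $\rho$ the per-channel reflectance rendered from the Gaussians, $E_g$ the global illumination component, and $\odot$ element-wise multiplication. Under this sketch, all terms would be estimated jointly by minimizing a photometric loss between $\hat{I}_c$ and the captured multi-view projections.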