We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos with accompanying parametric body fits. Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.

These speedups are obtained by using a lightweight deformation model based solely on linear blend skinning, and an efficient factorized volumetric representation for modeling the shape and color of the person in canonical pose. Moreover, we propose a novel local ray marching renderer which, by exploiting standard GPU hardware and without any baking or conversion of the radiance field, allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
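The deformation model mentioned above warps points from the observed pose back to a canonical pose using linear blend skinning: each point is moved by a weighted combination of per-bone rigid transforms. A minimal NumPy sketch of this standard operation (the function name and array layout are our own, not the paper's code):

```python
import numpy as np

def linear_blend_skinning(points, weights, transforms):
    """Warp canonical 3D points by a weighted sum of per-bone rigid transforms.

    points:     (N, 3) canonical point positions
    weights:    (N, B) per-point skinning weights over B bones (rows sum to 1)
    transforms: (B, 4, 4) homogeneous bone transforms
    returns:    (N, 3) warped positions
    """
    # Lift points to homogeneous coordinates: (N, 4)
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Blend the 4x4 bone transforms per point: (N, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, transforms)
    # Apply each point's blended transform and drop the homogeneous coordinate
    warped = np.einsum("nij,nj->ni", blended, homog)
    return warped[:, :3]
```

Because the blend is linear in the skinning weights, the inverse mapping used for reconstruction can reuse the same weights, which keeps the deformation model lightweight compared to learned deformation fields.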

Our experimental evaluation shows superior or competitive results with state-of-the-art methods while obtaining a large training speedup, using a simple model, and achieving real-time rendering.
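The real-time rendering rests on standard volumetric ray marching: densities and colors sampled along each ray are composited with the emission-absorption model. A minimal sketch of this compositing step, assuming precomputed per-sample densities and colors (the function and argument names are illustrative, not from the paper's code):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Emission-absorption compositing of samples along a single ray.

    sigmas: (S,) volume densities at the samples
    colors: (S, 3) RGB colors at the samples
    deltas: (S,) distances between consecutive samples
    returns: (3,) final pixel color
    """
    # Per-sample opacity from density and step size
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving up to each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes its color weighted by opacity * transmittance
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)
```

On-device, the same accumulation is performed per pixel in a fragment shader; restricting the marching to a local region around the body is what keeps the sample count low enough for mobile frame rates.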

Method overview

On-device rendering

These videos were recorded from a Virtual/Mixed Reality headset with color passthrough capabilities. We render humans reconstructed from the ZJU-MoCap dataset with our custom shaders using a TensoRF representation. Animation is done with linear blend skinning.
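As a rough illustration of how a TensoRF-style factorized volume is queried, the density grid can be stored as a sum of vector-matrix outer products (one plane factor times one line factor per axis pair), so a lookup touches only small 1D and 2D arrays instead of a full 3D grid. A hedged sketch at integer voxel coordinates, without the trilinear interpolation a real renderer would use; all names are our own:

```python
import numpy as np

def query_density(coords, vecs, mats):
    """Query a vector-matrix factorized density grid at integer voxel coords.

    coords: (N, 3) integer indices in [0, R)^3
    vecs:   per-axis line factors, e.g. vecs["z"] of shape (C, R)
    mats:   per-plane factors,     e.g. mats["xy"] of shape (C, R, R)
    Density is the sum over C components of plane(x, y) * line(z),
    cycled over the three axis pairings.
    """
    x, y, z = coords.T
    sigma = (mats["xy"][:, x, y] * vecs["z"][:, z]
             + mats["xz"][:, x, z] * vecs["y"][:, y]
             + mats["yz"][:, y, z] * vecs["x"][:, x])  # (C, N)
    return sigma.sum(axis=0)  # (N,)
```

Because each lookup reduces to a handful of texture fetches and multiply-adds, the same factorization maps directly onto GPU shaders without baking the field into a dense grid.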

Sequence 377

Sequence 393

Sequence 394

WebGL demos

We also ported our code to WebGL to demonstrate the real-time performance and robustness of our rendering approach: it runs across most popular platforms.


Citation

@article{rocco2023realtime,
      author    = {Rocco, Ignacio and Makarov, Iurii and Kokkinos, Filippos and Novotny, David and Graham, Benjamin and Neverova, Natalia and Vedaldi, Andrea},
      title     = {Real-time volumetric rendering of dynamic humans},
      journal   = {arXiv},
      year      = {2023},
}