EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields (WACV 2024)

We present EvDNeRF, a pipeline for generating event data and training an event-based dynamic NeRF, for the purpose of faithfully reconstructing eventstreams of scenes with rigid and non-rigid deformations that may be too fast to capture with a standard camera. By combining the perception advantages of event cameras with the strong geometric priors and differentiable rendering of NeRFs, EvDNeRF can predict eventstreams of dynamic scenes from a static or moving viewpoint between any desired timestamps, allowing it to serve as an event-based simulator for a given scene.




Event predictions

Simulated scenes


Data generation

For simulated event data generation, we use Kubric to simulate and render scenes from multiple viewpoints, Vid2E to generate events from the rendered images, and then slice the resulting eventstreams into batches of varying size.
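The slicing step above can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes events are stored as a time-ordered NumPy array of (t, x, y, polarity) rows, and the function name `slice_events` and the candidate batch sizes are ours.

```python
import numpy as np

def slice_events(events, batch_sizes, rng=None):
    """Slice a time-ordered eventstream, an (N, 4) array of
    (t, x, y, polarity) rows, into consecutive batches whose
    sizes are drawn at random from batch_sizes."""
    rng = rng or np.random.default_rng(0)
    batches, i = [], 0
    while i < len(events):
        n = int(rng.choice(batch_sizes))   # pick a batch size for this slice
        batches.append(events[i:i + n])    # last batch may be shorter
        i += n
    return batches

# Toy eventstream: 10 events with increasing timestamps.
events = np.stack([np.linspace(0.0, 1.0, 10),            # t
                   np.arange(10) % 4,                    # x
                   np.arange(10) % 3,                    # y
                   np.where(np.arange(10) % 2, 1, -1)],  # polarity
                  axis=1)
batches = slice_events(events, batch_sizes=[2, 4])
```

Varying the batch size changes how much motion each batch spans, which is what exposes the model to both coarse and fine temporal detail during training.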

For real event data generation, we collect multiview event data from a single event camera by rotating the scene and triggering a repeatable object motion. We then manually time-synchronize the eventstreams by looking for peaks in the timestamp-event histogram.
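A minimal sketch of this histogram-peak synchronization, under our own assumptions (not the paper's code): each eventstream contains a burst of events from the repeatable object motion, so the busiest bin of each stream's event-count histogram marks the same physical moment, and differencing the peak times gives the offset. The function name and bin width are illustrative.

```python
import numpy as np

def align_by_histogram_peak(timestamps_a, timestamps_b, bin_width=1e-3):
    """Estimate the time offset between two eventstreams by locating
    the peak bin of each stream's event-count histogram (the burst of
    events from the repeatable motion) and differencing the peak times."""
    def peak_time(ts):
        bins = np.arange(ts.min(), ts.max() + bin_width, bin_width)
        counts, edges = np.histogram(ts, bins=bins)
        k = np.argmax(counts)
        return 0.5 * (edges[k] + edges[k + 1])  # center of busiest bin
    return peak_time(timestamps_b) - peak_time(timestamps_a)

# Synthetic example: a burst of events near t=0.5 over sparse background
# noise, with the second stream delayed by exactly 0.2 s.
rng = np.random.default_rng(0)
burst = rng.normal(0.5, 0.001, 1000)
noise = rng.uniform(0.0, 1.0, 100)
ts_a = np.sort(np.concatenate([noise, burst]))
ts_b = ts_a + 0.2
offset = align_by_histogram_peak(ts_a, ts_b)
```

In practice one would shift the second stream's timestamps by the recovered offset before merging the multiview data.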


We find that training EvDNeRF on event batches of varied sizes yields better reconstruction of fine event details under novel viewpoints, timestamps, and camera motion.



Citation

@article{bhattacharya2023evdnerf,
  title={EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields},
  author={Bhattacharya, Anish and Madaan, Ratnesh and Cladera, Fernando and Vemprala, Sai and Bonatti, Rogerio and Daniilidis, Kostas and Kapoor, Ashish and Kumar, Vijay and Matni, Nikolai and Gupta, Jayesh K},
  journal={arXiv preprint arXiv:2310.02437},
  year={2023}
}




Acknowledgments

We would like to thank all the members of the Autonomous Systems Research group at Microsoft Research for their support and discussions; Anthony Bisulco for guidance on collecting real eventstream data; Bernd Pfrommer for his work on drivers and software supporting event cameras; Jiahui Lei for NeRF-related suggestions and discussions; and Kenneth Chaney for his support on using event cameras and related software. This work was supported by the National Science Foundation, grant no. DGE1745016.