For simulated event data generation, we use Kubric to simulate and render scenes from multiple viewpoints, Vid2E to generate events from the rendered images, and then slice the event streams into batches of varying sizes.
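The slicing step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic `(t, x, y, polarity)` event array, the `slice_into_batches` helper, and the specific batch sizes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical event stream: each row is (t, x, y, polarity), sorted by time.
n_events = 10_000
events = np.stack([
    np.sort(rng.uniform(0.0, 1.0, n_events)),      # timestamps (s)
    rng.integers(0, 640, n_events).astype(float),  # pixel x
    rng.integers(0, 480, n_events).astype(float),  # pixel y
    rng.choice([-1.0, 1.0], n_events),             # polarity
], axis=1)

def slice_into_batches(events, batch_size):
    """Split a time-sorted event stream into consecutive fixed-count batches."""
    n_full = len(events) // batch_size
    return [events[i * batch_size:(i + 1) * batch_size] for i in range(n_full)]

# Slicing the same stream at several batch sizes gives supervision at
# multiple temporal scales; the sizes here are illustrative.
batches_by_size = {b: slice_into_batches(events, b) for b in (500, 1000, 2500)}
```

Because the stream is time-sorted, each batch covers a contiguous time window, and smaller batch sizes correspond to finer temporal slices.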
For real event data generation, we collect multiview event data from a single event camera by rotating the scene and triggering a repeatable object motion. We then manually time-synchronize the event streams by locating peaks in the timestamp-event histogram of each stream.
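The histogram-peak alignment can be sketched as below. This is a simplified, hypothetical version of the manual procedure: the `peak_offset` function, the bin width, and the synthetic timestamp arrays are all assumptions for illustration, and real data would use the actual recorded streams.

```python
import numpy as np

def peak_offset(t_ref, t_other, bin_width=1e-3):
    """Estimate the time offset between two event streams by aligning the
    peaks of their event-count-vs-timestamp histograms. A repeatable object
    motion produces a burst of events (a histogram peak) in every stream."""
    def peak_time(t):
        edges = np.arange(t.min(), t.max() + bin_width, bin_width)
        counts, edges = np.histogram(t, bins=edges)
        i = np.argmax(counts)
        return 0.5 * (edges[i] + edges[i + 1])  # center of the busiest bin
    return peak_time(t_other) - peak_time(t_ref)

# Hypothetical streams: background noise plus the same motion burst,
# occurring at 0.200 s in one stream and 0.235 s in the other.
rng = np.random.default_rng(1)
t_ref = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.200, 0.002, 5000)])
t_other = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.235, 0.002, 5000)])

offset = peak_offset(t_ref, t_other)     # ~0.035 s
t_other_aligned = t_other - offset       # shift second stream onto the first
```

Subtracting the estimated offset places the bursts at the same timestamp in both streams, after which the recordings share a common time axis.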
We find that training EvDNeRF with varied batch sizes of events yields better event reconstructions of fine details at novel viewpoints and timestamps and under camera motion.
We would like to thank all the members of the Autonomous Systems Research group at Microsoft Research for their support and discussions; Anthony Bisulco for guidance on collecting real event stream data; Bernd Pfrommer for his work on drivers and software supporting event cameras; Jiahui Lei for NeRF-related suggestions and discussions; and Kenneth Chaney for his support on using event cameras and related software. This work was supported by the National Science Foundation under grant no. DGE1745016.