
About

If you have ever been to a rock concert, or any other event that attracts crowds, you have probably noticed how many people film the scene with their mobile phone cameras. Unfortunately, after the event each user is left with only his or her own video, i.e. a video taken from a single angle and location, usually of moderate or poor quality. While it is possible to publish this video and relate it to other videos on existing video portals, the experience remains disappointing, as the feeling of sharing the event is not conveyed. Tools for combining or improving such videos are missing entirely.

The main goal of SceneNet is to aggregate audio-visual recordings of public events, captured by large numbers of people on mobile devices, into a high-quality multi-view video sequence. In addition, we will design the tools users need to form small communities around specific events and to upload and truly share videos, and we will integrate these modules, including editing capabilities, into social platforms.

Our aim is to aggregate the video feeds into a live and interactive 3D scene application where the user can select arbitrary points of view, move freely through the scene, and use in-video applications.

The consortium comprises five partners from three countries: SagivTech (coordinator) from Israel, EPFL from Switzerland, and, from Germany, the University of Bremen, Steinbeis Innovation gGmbH, and European Research Services GmbH.

This project is funded by the European Union under the 7th Framework Programme, FET-Open SME scheme, grant agreement no. 309169.

“Unlike all the other art forms, film is able to seize and render the passage of time, to stop it, almost to possess it in infinity. I’d say that film is the sculpting of time.”

– Andrei Tarkovsky
