In this music video for Kazu Makino, a combination of photogrammetry, a technique Rhizomatiks has cultivated in various past projects, and cutting-edge software and hardware was used to collect abundant data and realize a seamless mixed-reality work on a large scale. The 3D data of the dancers was created with photogrammetry, a technique that builds 3D data from still images taken from multiple angles; in other words, it reconstructs 3D geometry from 2D data.
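At its core, photogrammetry triangulates each surface point from its observed positions in two or more calibrated views. The sketch below shows that one step using OpenCV; the projection matrices and matched points are hypothetical placeholders, and a full pipeline would add feature matching and bundle adjustment across all images.

```python
# Minimal sketch of the core photogrammetry step: recovering 3D points
# from matched 2D observations in two calibrated views. All values here
# are placeholders, not the production setup.
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices for two calibrated cameras,
# written in normalized image coordinates (intrinsics folded in).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Matched 2D observations of the same surface points, shape (2, N).
pts1 = np.array([[0.10, 0.15],    # u coordinates in camera 1
                 [0.22, 0.24]])   # v coordinates in camera 1
pts2 = np.array([[0.02, 0.07],    # u coordinates in camera 2
                 [0.22, 0.24]])   # v coordinates in camera 2

# Triangulate to homogeneous coordinates (4, N), then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T          # (N, 3) reconstructed 3D points
print(X)
```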
The first time Rhizomatiks used photogrammetry for dancers was a project with Perfume in 2012, when the algorithm was based on silhouettes. It was not sophisticated enough to capture the detailed expression of facial surfaces, and the resulting textures were not smooth. The quality has since improved so much that Rhizomatiks now uses the technique in many projects. In addition to photogrammetry, static objects such as the landform are scanned with a laser scanner to obtain 3D data.
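A silhouette-based algorithm of that kind is usually a shape-from-silhouette (visual hull) method: a voxel survives only if it projects inside the subject's silhouette in every view, which is why fine surface detail stays out of reach. A minimal carving sketch, with placeholder cameras and masks, might look like this:

```python
# Rough sketch of silhouette-based (visual hull) reconstruction: keep a
# voxel only if every camera sees it inside the silhouette mask. The
# cameras and masks are stand-ins; a real system calibrates each camera.
import numpy as np

def carve(voxels, cameras, masks):
    """voxels: (N, 3) candidate points; cameras: list of 3x4 projection
    matrices; masks: list of binary silhouette images (H, W)."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, mask in zip(cameras, masks):
        uvw = hom @ P.T                                   # project into the image
        uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]] > 0
        keep &= hit                    # carve away anything outside a silhouette
    return voxels[keep]
```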
Though Rhizomatiks has used this method of synthesizing various 3D data into the same space since 2012, as seen in their work with Perfume at SXSW in 2015 and in music videos for Nosaj Thing and Chance The Rapper, this was the first time they had worked with so many dancers and handled this much environmental data.
This music video is a one-shot video in which the audience can enjoy a seamless transition between the real world and the 3D world, realized with CG processing techniques that combine the drones' flight data, digital image analysis of the 2D footage, photogrammetry data turned into a 3D CG model, and 3D laser scanner data that uses laser emission to provide high-quality 3D mapping of the landform. First, the dancers were shot with 32 cameras using photogrammetry techniques. The footage from the 32 cameras was aligned into a matrix to reconstruct the dancers in 3D, and the dancers were fully reproduced in CG.
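One way to feed such a rig into a per-frame 3D solve is to pull the same synchronized frame from each of the 32 recordings. The sketch below assumes hypothetical file names and frame-accurate sync; real rigs synchronize cameras with genlock or a shared marker such as a clap or flash.

```python
# Sketch of extracting one synchronized frame from each of 32 camera
# recordings so the set can be handed to a photogrammetry solve.
import cv2

NUM_CAMERAS = 32

def extract_frame_set(frame_index, pattern="cam{:02d}.mp4"):
    frames = []
    for cam in range(NUM_CAMERAS):
        cap = cv2.VideoCapture(pattern.format(cam))
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to the shared timecode
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise IOError(f"camera {cam}: could not read frame {frame_index}")
        frames.append(frame)
    return frames  # one image per camera, ready for a per-frame 3D solve
```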
Also, using a FARO laser scanner, which enables creation of high-quality 3D mapping data of the landform of the site, Ponderosa Music & Art converted the location into 3D data by capturing a wide range of 3D point cloud data and textured material data from the environment. Using these technologies, they took 3D data of both the dancers and the landform and synthesized them so that they exist in the same 3D space. Beyond this, Ponderosa also used drones and CG effects to make the video more dynamic.
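Conceptually, the synthesis step amounts to expressing both datasets in one world coordinate frame. Here is a minimal sketch using Open3D, with hypothetical file names and a placeholder placement transform; in practice the transform would come from surveyed reference points on set.

```python
# Sketch of merging the two kinds of 3D data into one scene: the
# laser-scanned terrain and a photogrammetry frame of a dancer.
import numpy as np
import open3d as o3d

landform = o3d.io.read_point_cloud("landform_scan.ply")    # terrain scan (placeholder name)
dancer = o3d.io.read_point_cloud("dancer_frame0120.ply")   # photogrammetry capture (placeholder)

# Rigid transform carrying the dancer's capture-rig coordinates into the
# terrain's world coordinates (hypothetical values).
T = np.eye(4)
T[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
    np.array([0.0, np.pi / 4, 0.0]))   # face the camera path
T[:3, 3] = [12.0, 1.3, -3.5]           # position on the slope
dancer.transform(T)

combined = landform + dancer           # both now exist in the same 3D space
o3d.io.write_point_cloud("combined_scene.ply", combined)
```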
For shooting the real-space scenes, Ponderosa succeeded in the challenge of expressing different times of day, morning, noon, and night, using drones. They improved the accuracy of the flight path, altitude, camera direction, and turn timing of the drones through repeated tests, using GPS and programming in Litchi, an autonomous flight app.
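The waypoint math behind a repeatable flight path is simple; for example, the heading toward the next GPS waypoint follows from the standard great-circle bearing formula. The waypoints below are hypothetical, and the mission itself would still be authored in Litchi's own tools.

```python
# Sketch of computing the heading between successive GPS waypoints so
# the camera faces along the flight path. Waypoint values are made up.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

# Hypothetical waypoints: (latitude, longitude, altitude in meters).
waypoints = [(46.490, 10.420, 30.0), (46.491, 10.421, 35.0), (46.492, 10.420, 40.0)]
for a, b in zip(waypoints, waypoints[1:]):
    print(f"fly at {b[2]:.0f} m, heading {bearing_deg(a[0], a[1], b[0], b[1]):.1f} deg")
```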
Point cloud effects of Kazu are generated in Houdini using the color information of the points and outline data from the 4D Views data. The effect in the last scene, a trail effect, is generated with lines of particles: point clouds inside the model that follow the motion based on the 4D Views data. For the wave effect, the 3D model of the landform is used with displacement so that the wave flows smoothly, with the UVs and polygons arranged according to the layers of the landform.
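Independent of Houdini, the trail idea can be sketched in a few lines: each particle keeps a short history of its past positions, and that history is emitted as a polyline. The motion below is synthetic; in the video it is driven by the 4D Views point data.

```python
# Houdini-independent sketch of a particle trail effect: roll a fixed-
# length position history per particle and treat each history as a line.
import numpy as np

N_POINTS, TRAIL_LEN, N_FRAMES = 500, 16, 100
rng = np.random.default_rng(0)
pos = rng.standard_normal((N_POINTS, 3))
history = np.repeat(pos[None], TRAIL_LEN, axis=0)   # (TRAIL_LEN, N, 3)

for frame in range(N_FRAMES):
    # Stand-in motion; a real setup would sample the captured point data.
    pos = pos + 0.02 * np.sin(0.1 * frame + pos[:, ::-1])
    history = np.roll(history, 1, axis=0)
    history[0] = pos
    # history[:, i] is now the polyline for particle i's trail,
    # ready to be emitted as line geometry.
```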