Tuesday, November 13, 2018

Walt Disney's office in VR

For D23 2017 in Anaheim and D23 2018 in Tokyo, I photo-scanned Walt's belongings using a custom cross-polarized lighting rig. Because the data is built from photography, the quality of the imagery in virtual reality will improve as rendering technology does.

The scene features interactive props, books, and even a pop-up museum that tells the story of the objects Walt owned and their connections.


In progress: a "working" virtual Kem Weber animation desk


Friday, December 30, 2016

Building a 1950's Cinerama experience in VR



"This is Cinerama." What was Cinerama? Cribbing from Wikipedia: "Cinerama is a widescreen process that originally projected images simultaneously from three synchronized 35 mm projectors onto a huge, deeply curved screen, subtending 146° of arc." I had to look up subtending - basically a line joining the endpoints of the screen's arc. We'll keep that in mind as we develop our virtual reality recreation of a Cinerama screen. I recommend reading the wiki article, though in summary the process involved filming with three overlapping cinema cameras to project a panoramic image, an effect that is very similar to Imagineering's Soarin' projection technology and very like the multiple camera 360 pano capture devices that are flooding into public hands.



When we ask "can Hollywood make a VR movie," the simple answer is that they sorta' already did in 1962, and it worked quite well. As the documentary Cinerama Adventure demonstrates, some directors were frustrated by the limitations the format imposed on their work, while others embraced the creative possibilities within those boundaries. Both facets are on display in the epic MGM movie How The West Was Won. As one subject put it, the scene is often front lit, side lit, and back lit - which sounds very like a 360° VR shoot. Turner has released a Blu-ray version for which their technicians did a brilliant computer-stitched merge of the three cameras' frames, even including border material that most theaters lost in projection, so it is a perfect opportunity to attempt to recreate the Cinerama experience in VR.


The first step involved creating a screen geometry for projection. Since the Blu-ray material extends the HTWWW screen dimensions to nearly 180°, I used Autodesk Maya to build a partial sphere covering that angle. Interviews with Cinerama filmmakers make reference to a "Cinerama zone," which I imagine is the sweet spot for viewing, much as in any current theater. I found that this was not at the center of the sphere, as that gave a feeling of being too close to the screen. I also found that a cylindrical screen - which is what Cinerama theaters actually had - stretched the top and bottom of the image, probably because those areas sit farther from the viewer than the center does. The cylinder was likely a necessary concession to audience members seated outside the "zone," and would also have been much less costly to build than a spherical dome. We don't have that limitation in VR, where the scene can be displayed on any surface form.

After some testing, the screen was scaled up 15 times. One reason viewers are uncomfortable sitting close to a screen might be that their eyes are focused at twenty feet while their mind is seeing objects that are supposed to be at or near infinity. It doesn't seem to work the other way - it is not uncomfortable to focus far away on a subject that seems close to the camera - so I scaled the screen geometry to let the VR viewer focus at a distance, making the massive screen feel more comfortable.
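
Putting those two decisions together - the near-180° arc and the 15x scale - here's a minimal sketch of how the screen could be generated procedurally in Unity instead of imported from Maya. The angles, radius, and tessellation counts are my guesses, not the production values:

```csharp
using UnityEngine;

// Sketch: procedurally build a partial-sphere "Cinerama" screen.
// The production screen was modeled in Maya; this generates comparable
// geometry in Unity. Place the camera a bit back from the sphere's
// center to sit in the "Cinerama zone."
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class CineramaScreen : MonoBehaviour
{
    public float horizontalArcDeg = 178f; // near-180°, per the Blu-ray framing
    public float verticalArcDeg = 50f;    // guess at vertical coverage
    public float radius = 15f;            // scaled up to push eye focus toward infinity
    public int columns = 96;              // horizontal tessellation
    public int rows = 24;                 // vertical tessellation

    void Start()
    {
        var mesh = new Mesh();
        var verts = new Vector3[(columns + 1) * (rows + 1)];
        var uvs = new Vector2[verts.Length];
        var tris = new int[columns * rows * 6];

        for (int r = 0; r <= rows; r++)
        {
            for (int c = 0; c <= columns; c++)
            {
                float u = (float)c / columns; // 0..1 across the screen
                float v = (float)r / rows;    // 0..1 bottom to top
                float yaw = Mathf.Deg2Rad * horizontalArcDeg * (u - 0.5f);
                float pitch = Mathf.Deg2Rad * verticalArcDeg * (v - 0.5f);

                int i = r * (columns + 1) + c;
                verts[i] = new Vector3(
                    Mathf.Sin(yaw) * Mathf.Cos(pitch),
                    Mathf.Sin(pitch),
                    Mathf.Cos(yaw) * Mathf.Cos(pitch)) * radius;
                uvs[i] = new Vector2(u, v); // the movie frame maps straight on
            }
        }

        int t = 0;
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < columns; c++)
            {
                int i = r * (columns + 1) + c;
                // wind triangles to face inward, toward the viewer
                tris[t++] = i; tris[t++] = i + columns + 1; tris[t++] = i + 1;
                tris[t++] = i + 1; tris[t++] = i + columns + 1; tris[t++] = i + columns + 2;
            }
        }

        mesh.vertices = verts;
        mesh.uv = uvs;
        mesh.triangles = tris;
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```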

Because we're using the Unity engine to display the scene on an HTC Vive, there is no projector; instead, the pixels are mapped on the GPU by UV position to points on our screen geometry. It's just texture-mapping with a movie instead of a still image, and Unity is up to the task on my MSI laptop, even after changing the texture import settings to 100% quality. That setting makes imports take a long time, which was frustrating whenever tweaks were needed.

Unity won't play a Blu-ray directly, so I needed to convert the movie to MP4 files while keeping each file under Unity's two-gigabyte texture limitation. To keep the quality pegged, I exported the film into several 1 GB chunks and wrote a script to swap the movie textures as needed. This also made it easier to jump around within the film for testing. (And yes, I bought the movie.)



The controls and movie-swapping code are attached to the screen0 model object in Unity.
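
In outline, the swap logic looks something like this - a minimal sketch assuming the MovieTexture API Unity shipped at the time; the chunk ordering and key bindings are placeholders, not the original script:

```csharp
using UnityEngine;

// Sketch of the chunk-swapping playback script on the screen0 object.
// Assumes the movie was exported as sequential MovieTexture assets.
public class MovieSwapper : MonoBehaviour
{
    public MovieTexture[] chunks;   // ~1 GB MP4 chunks, imported in order
    public AudioSource audioSource; // plays each chunk's embedded audio
    int current = -1;

    void Start() { PlayChunk(0); }

    void Update()
    {
        // When a non-looping chunk finishes, advance to the next one.
        if (current >= 0 && !chunks[current].isPlaying &&
            current + 1 < chunks.Length)
            PlayChunk(current + 1);

        // Number keys jump between chunks for testing.
        for (int i = 0; i < chunks.Length && i < 9; i++)
            if (Input.GetKeyDown(KeyCode.Alpha1 + i))
                PlayChunk(i);
    }

    void PlayChunk(int index)
    {
        if (current >= 0) { chunks[current].Stop(); audioSource.Stop(); }
        current = index;

        var movie = chunks[current];
        GetComponent<Renderer>().material.mainTexture = movie;
        audioSource.clip = movie.audioClip;
        movie.Play();
        audioSource.Play();
    }
}
```

MovieTexture offers no seek control, which is another reason the chunking helps: jumping to a chunk boundary is the closest thing to scrubbing.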

Tuesday, October 18, 2016

Virtual steality - take your own Disney treasures - me-gifting - AKA I'm out of wordplay

It's not that I don't appreciate the fine merchandise in the souvenir store at Disneyland Park. It's just that Duffy, Itchy and the Gang are not for me. So if I am to leave Disneyland Paris with a tchotchke that I can appreciate, I am going to have to be creative. Enter Agisoft.


Ignoring the mournful gaze of the Phantom Manor cast member (or was it mere pity?), I began photographing the plaque outside the queue entrance. By taking a few dozen shots with a Sony A7S2, I was able to rebuild my would-be souvenir as a 3D model in Metashape using the magic of photogrammetry. Unfortunately, this was not good enough for my pointlessly high standards: the spiky teeth of the freaky head guy were missing from the reconstruction.

According to the Agisoft manual, taking around one hundred pictures of a checkerboard target would enable the software to compute a more precise model of my camera's lens distortion, yielding a better reconstruction.

Fortunately a kindly Imagineer took pity upon me and pointed out that the checkerboard must not look the same when rotated 180°, so I had to 'shop out the last row of squares from each photo. OpenCV's Brown lens distortion model is at play here.
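
For the curious, Brown's model maps ideal (undistorted) normalized image coordinates to the distorted positions the lens actually records. A minimal sketch of the math, with placeholder coefficients rather than values fitted from my Sony lens:

```csharp
using System;

// Sketch of the Brown-Conrady distortion model that OpenCV and Metashape
// fit during calibration: ideal normalized coordinates (x, y) map to the
// distorted coordinates the lens actually records. Coefficient values
// below are placeholders, not measurements.
static class BrownDistortion
{
    // Radial (k1, k2) and tangential (p1, p2) coefficients.
    const double k1 = -0.12, k2 = 0.03, p1 = 0.0005, p2 = -0.0002;

    public static (double xd, double yd) Distort(double x, double y)
    {
        double r2 = x * x + y * y;
        double radial = 1 + k1 * r2 + k2 * r2 * r2;

        double xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
        double yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
        return (xd, yd);
    }

    static void Main()
    {
        // A point near the corner of the frame shows the barrel pull-in.
        var (xd, yd) = Distort(0.8, 0.6);
        Console.WriteLine($"(0.8, 0.6) -> ({xd:F4}, {yd:F4})");
    }
}
```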

Upon feeding the calibration into Metashape...
He's still gumming at me! Next I tried bumping up the Clarity in Photoshop in an attempt to create more localized contrast for Agisoft to key on where the focus is not perfect. This sometimes also helps photos that don't align properly. Then I tried super-sampling the images before doing any tweaks.

I was quickly coming to the conclusion that my efforts to make my own souvenir had failed. Perhaps the Sony did not have gift-shop-quality resolution, or perhaps the photos were taken at too wide an angle (35mm) to be useful.

 

With only one option left, I bit the bullet and purchased a Sony 90mm macro lens, a linear polarizer to block reflections, and airplane tickets back to Paris. This was now the most expensive souvenir ever. *A kindly mouse may have helped with the ticket cost.



155 photos later, we have pointy teeth! Now it is just a simple - er, hopelessly complicated - matter of placing the data in a 3D printer and pushing the "Should have just bought the Duffy" button. Would that it were so easy. The Agisoft export has holes and no thickness, so some further processing is needed. I turned to Geomagic Design X for more robust manipulations than Metashape currently provides.

Note to self: don't be afraid to brush away spider-webs, they had a visible impact on the quality of the Crypt Keeper's right side.

The penultimate step is to bring the design into a program called Geomagic Sculpt, which is the cleanest method I've found of voxelizing 3D printer data to ensure water-tight meshes and manifold geometry. It is also a good place to smooth out rough patches and refine the model in an intuitive, sculptural manner, because you can literally feel the 3D model through Sculpt's haptic-feedback stylus. Here's another heisted souvenir in Sculpt:



Ready to print using Photoshop CC, our go-to support generation software for extruded filament printers.



NEXT HEIST: WE STEAL THE ENTIRE PHANTOM MANOR

Wednesday, January 15, 2014

Mars needs models

Custom translator: NASA Mars Orbiter Laser Altimeter (MOLA) data to 3D printing data.
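
In rough outline, the translator reads a tile of gridded MOLA elevations (16-bit big-endian integers, as in the MEGDR products) and writes a height-field surface as ASCII STL. A minimal sketch; the file name, tile size, and scale factors are placeholders:

```csharp
using System;
using System.IO;
using System.Text;

// Sketch of the MOLA-to-3D-print translator: read a tile of gridded
// MOLA topography and emit a height-field surface as ASCII STL.
class MolaToStl
{
    const int Width = 256, Height = 256;       // tile size (placeholder)
    const float XYScale = 1f, ZScale = 0.002f; // exaggerate or flatten relief

    static void Main()
    {
        var raw = File.ReadAllBytes("mola_tile.img");
        var z = new float[Height, Width];
        for (int r = 0; r < Height; r++)
            for (int c = 0; c < Width; c++)
            {
                int i = 2 * (r * Width + c);
                short elev = (short)((raw[i] << 8) | raw[i + 1]); // big-endian
                z[r, c] = elev * ZScale;
            }

        var stl = new StringBuilder("solid mars\n");
        for (int r = 0; r < Height - 1; r++)
            for (int c = 0; c < Width - 1; c++)
            {
                // two triangles per grid cell
                Facet(stl, P(r, c, z), P(r + 1, c, z), P(r, c + 1, z));
                Facet(stl, P(r, c + 1, z), P(r + 1, c, z), P(r + 1, c + 1, z));
            }
        stl.Append("endsolid mars\n");
        File.WriteAllText("mars_tile.stl", stl.ToString());
    }

    static float[] P(int r, int c, float[,] z) =>
        new[] { c * XYScale, r * XYScale, z[r, c] };

    static void Facet(StringBuilder s, float[] a, float[] b, float[] c)
    {
        // Slicers recompute normals, so a zero normal is acceptable here.
        s.Append("facet normal 0 0 0\n outer loop\n");
        foreach (var v in new[] { a, b, c })
            s.Append($"  vertex {v[0]} {v[1]} {v[2]}\n");
        s.Append(" endloop\nendfacet\n");
    }
}
```

A printable solid still needs side walls and a base; the sketch stops at the terrain surface itself.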




3D Prints 



Friday, January 10, 2014

Software-controlled DMX portrait lighting with Kinect-based automatic positioning and a 12-camera Nikon D800E satellite

The goal: to create a portable lighting system able to capture diffuse, specular, and normal texture maps under even, shadowless lighting, and to capture a 3D model of the actor or subject at the same time, with or without augmented structured light from a satellite of five projectors. Add in enough software control of the lighting to create a mesoscopic normal map.

I spec'ed and designed the rig, and wrote the software that drives the cameras, projectors, and lighting, as well as an automated Kinect-based trigger that detects the moment when the subject is in position and holding still.
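
The trigger reduces to a simple test. Here's a sketch of that logic, decoupled from the Kinect SDK itself - feed it a tracked joint position (head or spine) each frame, and it fires once the subject has held near the mark long enough. The thresholds and target position are placeholders:

```csharp
using System.Numerics;

// Sketch of the hold-still trigger logic: the Kinect supplies a joint
// position per frame; this fires once the subject has stayed on the
// mark, without moving, for enough consecutive frames.
class StillnessTrigger
{
    readonly Vector3 target;          // where the subject should stand
    readonly float positionTolerance; // meters from the mark
    readonly float motionTolerance;   // meters of frame-to-frame drift
    readonly int framesRequired;      // consecutive still frames to fire

    Vector3 lastPos;
    int stillFrames;

    public StillnessTrigger(Vector3 target, float posTol = 0.15f,
                            float moveTol = 0.01f, int frames = 30)
    {
        this.target = target;
        positionTolerance = posTol;
        motionTolerance = moveTol;
        framesRequired = frames;
    }

    // Call once per Kinect frame; returns true when the cameras should fire.
    public bool Update(Vector3 jointPos)
    {
        bool inPosition = Vector3.Distance(jointPos, target) < positionTolerance;
        bool still = Vector3.Distance(jointPos, lastPos) < motionTolerance;
        lastPos = jointPos;

        stillFrames = (inPosition && still) ? stillFrames + 1 : 0;
        if (stillFrames < framesRequired) return false;

        stillFrames = 0; // re-arm after firing
        return true;
    }
}
```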


Mastering the Nikon SDK was required to control the cameras, but USB timing limitations demanded a separate microcontroller-based hardware trigger to half- and full-press the shutters with precise timing.
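
On the PC side, the firing path could look something like this - a minimal sketch, assuming a serial link to the microcontroller and a one-byte command protocol of my own invention:

```csharp
using System.IO.Ports;
using System.Threading;

// Sketch of the PC side of the hardware trigger: the Nikon SDK handles
// settings and image download, but the actual half/full press goes out
// over serial to a microcontroller that closes the remote-release lines.
// Port name, baud rate, and the command bytes are placeholders.
class HardwareTrigger
{
    readonly SerialPort port = new SerialPort("COM3", 115200);

    public HardwareTrigger() { port.Open(); }

    public void Fire(int halfPressMs = 300)
    {
        port.Write(new byte[] { (byte)'H' }, 0, 1); // half press: focus/meter
        Thread.Sleep(halfPressMs);
        port.Write(new byte[] { (byte)'F' }, 0, 1); // full press: all 12 fire
    }
}
```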



Thursday, January 9, 2014

Software dev - automated digital asset management (DAM)


The problem: even the fastest 3D data viewer is brought to its knees by dense scan data, making DAM difficult without indirection.
The solution: an automated preview generator with a "live" manipulable window that allows rotation, scaling, and lighting changes before committing to a 2D bitmap version for use in the cataloging software.
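
The commit step might look like this in Unity - an assumption on my part, since any engine with offscreen rendering would do; the path and size are placeholders:

```csharp
using System.IO;
using UnityEngine;

// Sketch of the commit step: once the user has rotated, scaled, and lit
// the preview to taste, render the live view offscreen and save a 2D
// bitmap for the catalog.
public class PreviewSnapshot : MonoBehaviour
{
    public Camera previewCamera;
    public int width = 1024, height = 1024;

    public void Commit(string assetName)
    {
        var rt = new RenderTexture(width, height, 24);
        previewCamera.targetTexture = rt;
        previewCamera.Render();

        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();

        File.WriteAllBytes(Path.Combine("catalog", assetName + ".png"),
                           tex.EncodeToPNG());

        previewCamera.targetTexture = null;
        RenderTexture.active = null;
    }
}
```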


Avengers 2 cross-polarization camera + filter control

A fast but stable (vibration-free) polarization-angle switcher. The embedded microcontroller also fires the camera(s). By taking a continuous sequence of several shots, a closely matching pair can be culled from the set (see the sketch below).


Truth-be-told, I had about five hours to design and build this contraption before leaving LA for the Manhattan photo studio where the actors would converge the next day. 


This allows capturing diffuse and specular skin reference images without a beam splitter (and the halving of light intensity that comes with it).
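
The separation itself is simple arithmetic once you have a matched pair: the cross-polarized frame is (approximately) diffuse only, and parallel minus cross leaves the specular layer. A sketch on raw float pixel buffers - the helpers and names are mine, not production code:

```csharp
using System;

// Sketch of the diffuse/specular split from a matched cross/parallel pair,
// plus the culling step that picks the best pair from a burst.
static class PolarizationSplit
{
    // Returns (diffuse, specular) from a matched pair of frames.
    public static (float[] diffuse, float[] specular) Split(
        float[] cross, float[] parallel)
    {
        var specular = new float[cross.Length];
        for (int i = 0; i < cross.Length; i++)
            specular[i] = Math.Max(0f, parallel[i] - cross[i]);
        return (cross, specular); // the cross-polarized frame IS the diffuse pass
    }

    // From a burst alternating cross/parallel, estimate motion by comparing
    // each frame to the next frame of the SAME polarization (i and i+2),
    // and keep the pair bracketed by the least movement.
    public static (int crossIdx, int parallelIdx) BestPair(float[][] frames)
    {
        int best = 0; double bestDiff = double.MaxValue;
        for (int i = 0; i + 2 < frames.Length; i++)
        {
            double d = 0;
            for (int p = 0; p < frames[i].Length; p++)
                d += Math.Abs(frames[i][p] - frames[i + 2][p]);
            if (d < bestDiff) { bestDiff = d; best = i; }
        }
        return (best, best + 1);
    }
}
```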