DHRI@A-State

3D Presentation and Preservation

This session introduces the tools involved in both creating and displaying photogrammetry-based 3D content.

What exactly is photogrammetry?

Photogrammetry, as a term, breaks down to mean making measurements from photographs.
Photogrammetry is used in many different applications to generate valuable data from photographs, but for our purposes, we are using it to generate detailed 3D models.  This means an object which previously could only be viewed "in the round" by visiting a specific location can now be seen and manipulated around the world!  Imagine having an accurate 3D representation of an object you can bring to a presentation or a classroom and allow viewers to freely interact with.
Below is one such model, created by the University of South Florida Libraries with the Smithsonian Institution.
 


Try playing around with the above 3D model viewer: click and drag, use the scroll wheel to zoom in and out, click on the various annotations, and check out some of the more advanced viewer options.
You'll notice the viewer contains a lot that may not make much sense right now, such as terms like Post-Processing, Albedo, and Matcap.  Many of these terms will come up in this session, so it's fine to feel a bit lost at this point.

In this session, we are going to start by diving deep into how we generate this type of content using free tools.  Then we will jump into some of the conceptual and ethical considerations surrounding photogrammetry and scanned objects.  Lastly, we'll take a look at some further resources and tools to use in advancing our research.

How do we do it?

The first step is to gather our necessary tools.  This session assumes you are using a macOS machine, but the vast majority of these tasks can be accomplished on Windows or Linux using the same tools.

List of tools

The essential workflow

Every model is generated in essentially the same way, with a few considerations based around our final output.
Objects can be generated for video, still image display, rotational 3D display, or real-time rendering applications such as video game engines for AR and VR.
Regardless of the output, the workflow is pretty consistent.  Let's visit these steps one by one.

Capture Content

The first step, capturing our content, is probably the most important phase of our photogrammetry experimentation: without effective data, we can't generate a proper model even with the fanciest supercomputers.
Generally, we should evaluate our final use case and select a camera based on that, though budget and access are usually the deciding constraints.  I typically create my models using a basic DSLR camera, shooting in JPEG format with a typical kit lens, meaning a fairly bare-bones setup.  For larger (funded) projects, I use specialized camera equipment.
However, there are many different capture devices you can use for photogrammetry, including any modern smartphone, although you may have to grab an additional camera application to give you access to the settings we will be modifying for ideal pictures.
Indeed, photogrammetry relies primarily on the type of pictures you take, not so much the equipment you use.

Probably the most defining factor in this capturing process is the end-object we desire.  Is it a simple, small object that can be placed on a desk or the floor?  Or is it a large-scale sculpture?  Or maybe even an entire room-sized space?  Or possibly a city block?
All of these cases are viable, but we just have to wrap our heads around our approach.

I'll be diving deeper into the specifics for small-object and room-scale photogrammetry during our session, but in this curriculum I'll briefly outline some of the considerations.
All the guidelines below are for small-object capturing.

Lighting
FLAT lighting is ideal, meaning no harsh lights, spotlights, etc.  Basically overcast and bland light.
WHITE light is also much better than warmer/cooler lights, like light from lamps.

Camera
Try to keep your aperture as CLOSED as possible (f/8, f/11, f/16); this will keep the majority of the scene in deep focus.
Shallow depth of field is your enemy; remember, we aren't taking beautiful photos, just useful photos.
DO NOT change your zoom/focal length.  Stay consistent.
NO FLASH!
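To see why a closed aperture matters, here's a small back-of-envelope sketch using the standard thin-lens depth-of-field approximation.  The 50 mm focal length, 1 m subject distance, and 0.03 mm circle of confusion below are illustrative assumptions, not recommendations.

```python
import math

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable focus (thin-lens model).

    coc_mm is the circle of confusion; 0.03 mm is a common full-frame value.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm - focal_mm)
    far = (hyperfocal * subject_mm / (hyperfocal - subject_mm - focal_mm)
           if subject_mm < hyperfocal else math.inf)
    return near, far

# A 50 mm kit lens focused on an object 1 m away:
for n in (2.8, 8, 16):
    near, far = depth_of_field(50, n, 1000)
    print(f"f/{n}: in focus from {near:.0f} mm to {far:.0f} mm")
```

Under these assumptions, stopping down from f/2.8 to f/8 roughly triples the in-focus zone, which is exactly what we want for sharp, usable photos of the whole object.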

Scene
Try to have an "average" setting, meaning not a stark white-space or dark-space.  Objects in the background give the software extra reference points, which can help isolate your object and create more accurate photogrammetry models.


The process itself:
Take pictures while walking around the object, one every 3-5 degrees of movement.  Shoot from at least 5 different heights and angles, and try to do at least 2 passes of photographs.  It's important to move around the object between shots, rather than taking multiple photographs from the same position at different angles, as this image illustrates.
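As a rough sanity check on how many shots this adds up to, a quick bit of arithmetic (the step size, heights, and passes below simply plug in the minimums suggested above) shows why you end up with hundreds of photographs per object:

```python
import math

def photo_count(degrees_per_shot=5, heights=5, passes=2):
    """Rough shot count: one full 360-degree ring per height, per pass."""
    per_ring = math.ceil(360 / degrees_per_shot)
    return per_ring * heights * passes

print(photo_count())          # 720 shots with the guideline's minimums
print(photo_count(10, 3, 1))  # 108 shots for a quicker, rougher capture
```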
 

 

Generate Data Points

At this stage, you'll have possibly hundreds of photographs of your object/space.  You'll need to transfer these to a computer.
You'll use Regard3D in this session, but there are plenty of other options, including the expensive yet professional ReCap from Autodesk.
You can visit Regard3D's site for more details on the specifics, but essentially you'll hand over your set of photographs to the software, and it will find "matches" of specific points in the imagery.
It will generate a series of points and measurements from these photographs.

 


These pieces of data then lead to a point cloud.
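To give a feel for what happens under the hood, here is a simplified sketch of how one matched feature becomes one point in the cloud: once the camera positions are estimated, each match defines a ray from each camera through the feature, and the 3D point lies where those rays (nearly) intersect.  The "midpoint" method below is an illustrative stand-in, not Regard3D's actual implementation, and the camera positions and ray directions are made up for the example.

```python
def triangulate(p1, d1, p2, d2):
    """Closest point between two rays p + t*d (the 'midpoint' method)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # near zero means the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * x for p, x in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + s * x for p, x in zip(p2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two cameras one metre apart, both seeing the same feature:
point = triangulate([0, 0, 0], [0.5, 0, 1], [1, 0, 0], [-0.5, 0, 1])
print(point)  # → [0.5, 0.0, 1.0]
```

Repeat this for thousands of matched features and you have a sparse point cloud.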

 

Triangulate/Densify

At this stage, we move to generating proper 3D geometry from this point cloud.
We need to create a denser point cloud, and then generate a surface from those points.  These surfaces can carry color data, so the model will be fully textured when we display it later.

Once the model has a proper surface, we can move on to exporting the object.

Export

The model is exported as an OBJ, an interoperable format we can send to all different kinds of applications.
We then take the model to Sketchfab for viewing.
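OBJ is a plain-text format, which is part of why it travels so well between applications.  The sketch below hand-writes a minimal single-triangle OBJ just to illustrate the structure; a real export from Regard3D will also include texture coordinates and a material reference (which is how the color data reaches Sketchfab), but this toy version omits them.

```python
def write_obj(vertices, faces):
    """Serialise a mesh as Wavefront OBJ text (face indices are 1-based)."""
    lines = ["# minimal OBJ export"]
    lines += [f"v {x} {y} {z}" for x, y, z in vertices]        # vertex positions
    lines += ["f " + " ".join(str(i + 1) for i in face)        # faces, 1-indexed
              for face in faces]
    return "\n".join(lines) + "\n"

# One triangle: three vertices, one face referencing them by index.
obj_text = write_obj(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    faces=[(0, 1, 2)],
)
print(obj_text)
```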