
A Primer on 3D Scanning & Photogrammetry

Published on January 18, 2019

We often use 3D scanned objects and environments in the VR applications we develop, so I thought I’d write a primer on how to get started with Photogrammetry.

What is Photogrammetry?

Photogrammetry is the science of making a 3D model from photographs. The input is a set of regular 2D photographs, and the output is a 3D model of some real-world object or scene. The process itself is very complex and I won’t explain it here; let’s just say it involves computer vision algorithms and a lot of math.

Photogrammetry is the process of “converting” a physical object into a mesh and texture.

Why use Photogrammetry (and also why not)

Here are two models of a TLR (Twin Lens Reflex) camera. The left one was made using traditional 3D modeling tools (Maya, Blender, etc.) and the right one was 3D scanned. The left model looks better, but it requires a massive amount of work, easily 20 hours for an experienced modeler. The right model took me about 20 minutes of work and requires almost no skill.

3D Modeling vs Scanning

| | **3D Modeling** | **3D Scanning** |
| --- | --- | --- |
| **Pros** | Accurate, Clean, Multiple materials, Optimized | Cheap, Fast, No skill required |
| **Cons** | Expensive, Slow, Requires skill | Higher polygon count, Need a physical copy, Not everything can be scanned |

Limitations

Some objects (or parts of objects) are very difficult to scan, or downright impossible. This includes:

  • Shiny/Reflective surfaces
  • Moving objects (people, pets)
  • Occlusions
  • Fuzzy bits (hair, fur, leaves)

There are ways around these; they work in some cases and not in others. Reflective surfaces can be dulled with matting/dulling spray or baby powder. Scanning moving objects requires lots and lots of cameras, all synced and shooting at the same instant (very expensive). There is nothing you can do about hair, though. When scanning people, a hairnet is usually used and the hair is then added manually in post-processing (a complex and expensive process).

The scanning process, Part 1 – Shooting pictures

This blog post will focus on scanning objects (as opposed to environments, which are harder). The key to a good scan is shooting good photos. You will need a good camera; a DSLR works best, but I’ve also had great results with a high-end smartphone.

When shooting pictures for scanning, your goals are a bit different than in “normal” photography. Let’s go over what makes a good set of pictures for photogrammetry:

  • Lots of pictures. At least 150 and up to 400.
  • High quality photos, low noise and as sharp as possible.
  • Shoot RAW if possible
  • Shoot with a manually set white-balance and exposure (if possible)
  • Deep depth of field (small aperture) so that everything is in focus
  • No motion blur
  • Flat lighting, no shadows or highlights

That last one is really important. You want your lighting to be as flat as possible, like the rock on the right. This makes for a visually unappealing photo, but remember that our goal is a good scan, not a good picture.

For a full scan you will need to walk around your object (or rotate it; more on that later) and shoot many pictures from different angles.

I usually shoot from three or four different heights: waist level, chest, eye level, and above the head. For each elevation I do a full rotation around the object and shoot 50-100 pictures.

You can walk around your object, fly around it with a drone, or rotate it on a turntable. I suggest starting with walking. If you want to know more about turntable scanning, I wrote a blog post about that, which can be found here:

3D scanning using Photogrammetry and an automated turn table

Part 2 – Post-processing the pictures

Go over your pictures and delete any that are blurry for any reason.
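
If you have hundreds of photos, it can help to pre-screen them automatically before the manual pass. Below is a minimal sketch using OpenCV’s variance-of-Laplacian trick to flag soft images; the folder name and threshold are placeholders I made up, and the threshold needs tuning for your camera and subject.

```python
# Rough blur check: low variance of the Laplacian usually means a soft image.
import cv2
from pathlib import Path

PHOTO_DIR = Path("scan_photos")   # hypothetical folder of exported photos
BLUR_THRESHOLD = 100.0            # starting point only; tune per camera/lens

for photo in sorted(PHOTO_DIR.glob("*.jpg")):
    gray = cv2.imread(str(photo), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"{photo.name}: sharpness {sharpness:.1f} -- candidate for deletion")
```

Treat the output as a shortlist to review by hand, not an automatic delete list.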

If you shot your pictures using RAW (which you really should, if you have that option), open them in your favorite RAW processing software and flatten the lighting. Bring up shadows and bring down highlights.

Also adjust the white balance and colors if you like, just make sure to apply those settings to all the pictures.

DO NOT apply any lens distortion correction. If your software does that by default, turn it off. It will mess up the reconstruction process later.

Export the images as highest-quality JPEGs. Some people are adamant about using TIFF and keeping the images uncompressed. I didn’t notice any difference when I did some testing. TIFFs are huge, they just slow down the process, and I don’t think it’s worth the hassle.
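
If you prefer to script the export instead of running it through a RAW editor, here is a minimal batch-conversion sketch using rawpy and Pillow. It only covers the consistent-settings conversion to JPEG (fixed white balance, no auto exposure, no lens correction applied); the shadow/highlight flattening is still easier in a dedicated RAW editor. Folder names are placeholders.

```python
# Batch RAW -> JPEG with consistent settings across the whole set.
import rawpy
from pathlib import Path
from PIL import Image

RAW_DIR = Path("raw_photos")      # hypothetical input folder of RAW files
OUT_DIR = Path("jpeg_for_scan")
OUT_DIR.mkdir(exist_ok=True)

for raw_path in sorted(RAW_DIR.glob("*")):
    with rawpy.imread(str(raw_path)) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,   # same white balance for every frame
            no_auto_bright=True,  # keep exposure consistent across the set
            output_bps=8,
        )
    Image.fromarray(rgb).save(OUT_DIR / (raw_path.stem + ".jpg"), quality=95)
```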

Part 3 – Photogrammetry software

Now you’ll need to feed those photos to the photogrammetry software of your choice. There are many options; here are a few well-known ones:

Agisoft used to be king of the hill and is still a solid option at $170 for a lifetime license. RealityCapture overtook it recently, with its lightning-fast GPU-powered reconstruction and superior photo-alignment algorithm. However, RealityCapture has a terrible UI, draconian DRM, and a pricing model that has you paying for each scan. Agisoft’s latest update added GPU-powered processing and made it considerably faster, but it’s still about 2-3x slower than RealityCapture in comparison tests I’ve done. I have not used Zephyr, but it has a free version limited to 50 pictures, which might be a good way to get your feet wet. Meshroom is open-source and free, but it is hard to use, slow, and sometimes fails or crashes.

Part 4 – Photogrammetry process

This part will vary depending on which software you use, but the process is very similar everywhere, so I will describe it in general terms. It consists of these steps:

Alignment – The software will process all the photos and figure out the 3D position and angle of each one. This can take anywhere between a few minutes and an hour, depending on the number of pictures and the software used. Once it’s done, go over the results and delete any pictures that are clearly aligned wrong.

This is easier to do in a turn-table setup, but sometimes you can also clearly see bad alignment when you shoot hand-held. If you deleted any pictures, run the alignment again.

Set Region – After alignment is done you will be presented with a point cloud. Use that point cloud to set the region of interest. You don’t want to process and create a mesh for things you don’t need; that’s just a waste of time.

Build Mesh – The software will create a mesh from the point cloud. This can take anywhere between 10 minutes and many hours, again depending on the number of images, their resolution, and the software. As a rule of thumb, the higher the resolution of the images, the more detail (and vertices) you’ll get in the mesh.

Post-process the mesh – Now you can decimate (or simplify) the mesh to bring down the polygon count, close holes, delete any bits and parts you don’t need, and generally clean it up. Both Agisoft and RealityCapture have built-in tools that do this quite well, but you can also export the mesh, use 3rd-party software, and then import it back (a minimal scripted example follows). For example, you can use ZBrush or Instant Meshes to rebuild the object with clean topology, MeshMixer to close holes, or Blender to do anything you want. Just don’t forget to import it back into your photogrammetry app 🙂
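
As one example of the export/clean-up/re-import route, here is a short decimation sketch using Open3D. This is just one of many tools that can do this; the file names and the triangle budget are arbitrary placeholders.

```python
# Decimate an exported scan outside the photogrammetry app, then re-import it.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_raw.obj")   # mesh exported from the photogrammetry software
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=100_000              # pick a budget that suits your target platform
)
simplified.remove_degenerate_triangles()
o3d.io.write_triangle_mesh("scan_decimated.obj", simplified)
# Import scan_decimated.obj back into the photogrammetry app before building the texture.
```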

Build Texture – Now the software will take all the photos, “project” them onto the mesh, and blend the results. This will generate a texture map and also something called a UV map (which describes how the texture is applied to the mesh). A texture map is just a regular image, and the UV map is baked into the mesh itself (UV maps are super complicated, so I won’t go into more detail here; they could be the subject of a whole series of posts). I usually generate 4K or 8K textures. This will take anywhere between 10 minutes and two hours.

Textures generated by Photogrammetry tend to be a bit… crazy.

Export – Hooray, you now have a 3D model of a physical object! Wasn’t that fun? After exporting I usually open the texture in Photoshop and fix the color balance, or patch up any weird artifacts. You can upload your model to Sketchfab and post it in the comments below.
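
If your software exposes a scripting API, the whole pipeline above can be automated. Below is a rough sketch against Agisoft Metashape’s Python API; exact function parameters vary between Metashape versions, so treat this as an outline rather than a drop-in script, and the file paths are placeholders.

```python
# Alignment -> mesh -> texture -> export, scripted end to end.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("jpeg_for_scan/*.jpg"))

chunk.matchPhotos()        # Alignment, step 1: find matching features across photos
chunk.alignCameras()       # Alignment, step 2: solve camera positions and angles
chunk.buildDepthMaps()     # Compute depth maps used for meshing
chunk.buildModel()         # Build Mesh
chunk.buildUV()            # Unwrap UVs
chunk.buildTexture()       # Build Texture by projecting and blending the photos

chunk.exportModel(path="scan_export.obj")
doc.save("scan_project.psx")
```

Other packages (RealityCapture, Meshroom) have their own command-line or scripting interfaces, but the sequence of steps is the same.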

Conclusion

Photogrammetry is fun and you should try it. This guide was written with scanning small(ish) objects in mind. The technique changes a bit when scanning a room or a building, but most of the points here remain valid. In the future I will write a post about scanning bigger things.

Stuff I scanned:

A book shelf

A Stone Garden Bench

Old Rusty Car

Inuit Stone Sculpture

Many more scans

Bonus

With the right lens, you can 3D scan really small objects. This asteroid is actually a tiny fossil.

Other Resources:

A Beginner’s Guide to Photogrammetry by Timothy Porter