Crime Scenes in The Wild

hero-image-The-Wild-view-sync-rev1.jpg

This week, we reached a milestone in interactive 3d visualization. Imagine stepping directly into a crime scene and working with your team to move people and objects around, testing witness accounts and developing a narrative for what actually occurred. We already knew how to do a version of this on screen, using simplified digital models built from scene measurements. Now we can collaborate in real time in virtual reality (VR) while looking at actual data from crime scene scans.

For the past several years, law enforcement teams and forensic experts in major cities across the world have been surveying the scenes of murders and fatal crashes with a laser scanner and/or drone. The resulting data set, a point cloud, captures an incredible amount of detail. Taken together, the millions of points form a 3d picture that can be viewed from any perspective and measured with great accuracy. This is a valuable resource for attorneys and investigators, who in theory can go back to visit an exact replica of the scene at any time. In practice, the complexity and sheer size of point cloud data present a challenge.


Due to the high cost of equipment and training, most agencies have a small team, sometimes a single person, able to collect, process, and view point cloud data. This work is time-consuming, and can only be done using specialized software, which can feel pretty “clunky” when the data files are large. Funneling all requests through this small group limits the usefulness of point cloud data in an investigation. A reasonable first step is to look for ways to share the data, which is typically handled in one of two ways. The first is to create still images with annotations, or a video “fly-through” of the scene. These can be opened on any computer and provide a snapshot of what was surveyed, but there is no way to see different views or measure anything beyond what is called out in the images.

point-cloud-hallway-measurement-recap.jpg

A snapshot of a point cloud data set, with the width of the hallway dimensioned.

Fly-through video of a point cloud data set

A second option provided by some scanner manufacturers is to create a package of files that can be viewed and measured inside a web browser. Leica calls this a TruView package, and FARO has branded their version WebShare2Go. It’s sometimes touted as a “free viewer” or “lite version” of the point cloud data. However, it’s not in fact point cloud data at all. Instead, it’s a group of panoramic images with location information for each pixel. The experience of using these data sets is like using Google Street View: you can look all around from a series of pre-determined locations, and because of the pixel location data, you can take some approximate measurements. However, these measurements will never be as accurate as those taken from the original point cloud data.
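To make that “approximate measurement” idea concrete: a panorama package stores a range value for each pixel, so two clicked pixels can be turned into 3d points and a distance. Here’s a minimal Python sketch of the geometry, assuming an equirectangular projection and made-up pixel coordinates (the vendors’ actual file formats differ):

```python
import numpy as np

def pixel_to_xyz(u, v, depth_m, width, height):
    """Convert an equirectangular panorama pixel plus its stored range
    (meters) into a 3d point relative to the scan position.
    Assumed convention: u spans 360 degrees of yaw, v spans 180 of pitch."""
    yaw = (u / width) * 2 * np.pi - np.pi        # -pi .. +pi
    pitch = np.pi / 2 - (v / height) * np.pi     # +pi/2 (up) .. -pi/2 (down)
    return depth_m * np.array([
        np.cos(pitch) * np.cos(yaw),
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
    ])

# Hypothetical example: two pixels a user clicked, with ranges read
# from the per-pixel location data baked into the panorama.
a = pixel_to_xyz(u=1210, v=640, depth_m=3.42, width=8192, height=4096)
b = pixel_to_xyz(u=2874, v=655, depth_m=3.51, width=8192, height=4096)
print(f"approximate distance: {np.linalg.norm(a - b):.2f} m")
```

Because every measurement is reconstructed from a single viewpoint’s pixel grid and its quantized ranges, accuracy degrades at edges and oblique surfaces in a way that the raw point cloud doesn’t.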

Neither of these options is a great match for Fat Pencil’s approach. We strive to create simple and responsive tools that can be used in real time as visual facilitation for team meetings, and deliver 3d visualization experiences that attorneys and investigators can use on their own.

The first time we had an opportunity to work with a point cloud was in 2015, on a case involving a blind spot for the driver of a left-turning bus. We used Autodesk ReCap to take measurements and inform our efforts to create an accurate but simplified SketchUp model of the scene. This method opened up a new world of possibilities for attorneys and investigators. In addition to viewing and studying an accurate 3d model of the scene, we provided the option of adding new objects to the scene, moving them around, and testing visibility or trajectories, all in real time. We have now used this approach on many cases, including the Hallway Shooting case featured below.
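That real-time “test visibility” step boils down to a ray-occlusion query against the scene model. As a rough sketch of the idea (not our SketchUp workflow; this uses the open-source Open3D library, and the file name and eye/target coordinates are hypothetical):

```python
import numpy as np
import open3d as o3d

# Load a simplified scene mesh (hypothetical file) and index it for ray casting.
mesh = o3d.io.read_triangle_mesh("scene_simplified.obj")
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

def is_visible(eye, target):
    """True if nothing in the scene mesh blocks the segment eye -> target."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    direction = target - eye
    dist = np.linalg.norm(direction)
    rays = o3d.core.Tensor([[*eye, *(direction / dist)]],
                           dtype=o3d.core.Dtype.Float32)
    first_hit = scene.cast_rays(rays)["t_hit"].numpy()[0]
    return first_hit >= dist  # first obstruction is at or beyond the target

# Hypothetical check: a bus driver's eye point vs. a pedestrian's head position.
print(is_visible(eye=[0.5, 2.1, 2.6], target=[6.0, -3.0, 1.6]))
```

Moving an object is just updating its position and re-running queries like this, which is why the checks can happen live in a meeting.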

SketchUp is a great tool for testing ideas about what happened and explaining those conclusions to others. But viewing a 3d model on screen doesn’t really convey what it feels like to look at the original data collected at the scene. That’s why we’ve been working so hard to convert point cloud data into a form that can be experienced without any advanced training, complex software, or high-end computers. After many months of research and testing, we finally have a process that accomplishes this goal:

  1. Get the original structured scan data (not the TruView/WebShare version), ideally in .e57 format.
  2. Convert the point cloud to a geometric mesh using RealityCapture software.
  3. Upload the resulting 3d model to The Wild for viewing and collaboration using a VR headset.
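RealityCapture handles step 2 for us; as a rough open-source stand-in that illustrates the shape of the conversion, here’s a Python sketch using Open3D. It assumes the scan has already been exported from .e57 to .ply (e.g., with CloudCompare), since Open3D has no native .e57 reader, and the file names and parameters are placeholders to tune per scene:

```python
import open3d as o3d

# Step 1 stand-in: structured scan data, pre-converted from .e57 to .ply.
pcd = o3d.io.read_point_cloud("scene_scan.ply")

# Poisson surface reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Step 2 stand-in: fit a triangle mesh to the points.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Decimate so the model stays fluid in a VR headset; the target count
# is a guess to adjust per scene and per headset.
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)

# Step 3: export in a format The Wild can ingest.
o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)
```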

Here’s an example of what that collaboration can be like, using a model from the aforementioned Hallway Shooting case:

Fun fact: The first real object ever “scanned” and rendered by computer was a VW Beetle, in 1972. It was measured by hand using yardsticks.

https://jalopnik.com/the-first-real-object-ever-3d-scanned-and-rendered-was-494241353

Like any new process, ours has some room for improvement. To keep the VR experience fluid, we need to simplify the geometric mesh, which can make it harder to see small details. Also, the visualization is only as good as the original point cloud data, which may have areas with no data (“holes”) or irregularities caused by moving or reflective objects (“noise”). Even with these limitations, the overall viewing experience feels a lot like being inside the actual scene, which is a new and exciting tool to have when working on a case.
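The “noise” can often be knocked down before meshing with a statistical outlier filter; “holes” are harder, since missing data can only be filled by re-scanning or by letting the reconstruction interpolate. A minimal cleanup sketch, again using Open3D with placeholder values:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene_scan.ply")  # placeholder file name

# Drop points whose mean distance to their neighbors is unusually large --
# typical of scanner noise from glass, mirrors, or people passing through.
clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"removed {len(pcd.points) - len(kept_idx)} suspect points")

o3d.io.write_point_cloud("scene_scan_clean.ply", clean)
```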

It’s hard to explain in words or pictures how much of a game-changer this is… you just need to put on a headset and try it. We are currently planning a demonstration event for early 2020. If you’d like to be invited, please let us know!

Jannine Hanczarek was a Designer at Fat Pencil Studio from 2017-2020.