Joshua Cohen is a principal at Fat Pencil Studio.
I just returned from the Human Factors & Ergonomics Society (HFES) conference in San Diego, where I was invited to speak at a session titled "Animated Computer Graphics: Working with Visual Media Professionals to Create Compelling Presentations." I shared the podium with Dr. Gary Sloan, a human factors forensic expert, and Jay Syverson of Seattle-based OnPoint Productions. I offered an in-depth presentation of Fat Pencil Studio's work on the recently settled TriMet case. I didn't get quite the cross-examination I would have faced in court, but there was a lively Q&A session afterwards.
One question came up repeatedly: Can we accurately simulate lighting conditions, color, depth of field, and other factors that affect visual perception? My answer: this is difficult, if not impossible, to do well using current technology. Why? Because images displayed on paper or on a flat screen cannot accurately reproduce the way our brains understand 3d space. Imagine the following scenario:
You are driving a car on the freeway, looking straight ahead as another car moves up to pass on your left. Can you tell what kind of car it is? No, not while you are looking straight ahead, but you perceive that it's there using peripheral vision, which spans nearly 180 degrees in a healthy human eye. You look to your left and see that the car speeding past is actually a red pickup truck with tricked-out hubcaps. You glance in the rearview mirror to see if any more vehicles are coming up behind, then look back to the road in front of you. All this happens in a matter of seconds and gets processed by your brain to build up a mental image of the 3d world you inhabit… or in this case, the world you are speeding through at a mile a minute.
One could attempt to simulate this by strapping a video camera to the driver's head (in fact, I've done this). However, the resulting flat image provides no depth perception, and there's no information about where the subject is looking within the field of view captured by the camera. The best we can do today is focus on some aspect of the experience (such as timing or sequence) and leave it to our brains to fill in the rest. But this may not be the case for long. During a quick stroll through the HFES conference exhibit hall, I came across many vendors showing off impressive advances in 3d displays and eye tracking technology. In the courtroom of the future, will jurors find themselves donning 3d glasses for closing arguments? Only time will tell.