Throughout my research in the field of 3-D scanning, I have noticed a struggle between companies pushing large-scale commercial scanning hardware and consumer-driven attempts to eliminate the need for this seemingly redundant equipment. Software such as 123D Catch claims that, with the right technique and software interconnectivity, the average person could produce an accurate 3-D depiction of a real-life object from just a few images. Through my mishaps and further research, though, I have a growing belief that this assemblage of software conversions could be a little trickier than advertised. In order to tame this beast, one must first get to know it and all of its components. As I learned through trial and error, the scanning platform I initially attempted to saddle up with was vastly inadequate for the task I set out to accomplish. My original idea didn't even consider depth of background, which is critical to mapping an object accurately, but this only intrigued me more. I want to know what to apply to my scanner so that a person with no previous knowledge of the matter can make use of this amazing technology without having to worry about any process besides scanning.

Fabio Remondino has been at the forefront of research into methods for accurately recovering 3-D meshes from photographs, known as photogrammetry, as opposed to relying on expensive scanner hardware. In his 2003 journal article From Point Cloud to Surface: The Modelling and Visualization Problem, we can see that his disposition also leans toward the cheaper, more accessible route to a 3-D model. He even states that the photographic mode has a higher measurement reliability in terms of photogrammetry, though it lacks detail because fewer images are used. It intrigues me to know that the method I am pursuing is not only more novel, but can even improve on the mapped details of a more complex and costly scan. Another interesting culmination of cell-phone hardware and existing topographical data is described in the work titled Mobile Photogrammetry by Armin Gruen and Devrim Akca, in which they blend aerial data, GPS coordinates from a cell phone, and images taken by the phone to construct a 3-D map of an area. I am really excited to discuss this with my classmate, Forbes, as I hope it could lead to a more convenient way to map Olympia. In the paper "Shape and the Stereo Correspondence Problem" by Abhijit S. Ogale and Yiannis Aloimonos, the battle between detail and accuracy is revealed. The introduction and the list of previous works it incorporates proclaim that there is an "energy output" threshold for compiled images, meaning there is a give and take between detail and accuracy. The article goes on to list the sub-types of calculations that cause this threshold when images are compiled through this particular type of process. This set of congruent issues reminded me of problems I encountered during my first phase of testing the capabilities of the 123D Catch software, when I had no idea how the compilation process for multiple images worked.
Because of this week's research, I am glad to say that I can better understand the nature and history of research done on 2-D to 3-D digital object assemblies.

Based on the short history of 3-D scanning as described by myself and my cited experts, my thoughts and concepts have expanded to the point where my simple scanning platform has to take on new challenges. What shapes work best as topographical reference points for the compiling software to accurately estimate depth relative to the target object? How can I best utilize lighting to gather the intricate details of an object? At what angle do I fix these lights? Of all the marketed scanning platforms currently available, why do none of them utilize the simple, powerful tools within a cell phone? Why do they push this costly idea involving lasers and expensive cameras? I realize that while these questions make their way to the forefront, they are only reiterations of the questions I had coming into this project, now asked from a more educated point of view. Through reading my chosen articles and journals, I learned the inner workings of how a scan is actually produced: methods such as range imaging (with and without laser-supported systems), photogrammetrical assemblies, and LiDAR-assisted 3-D topographical constructions of large-scale maps. Another very pleasing bit of information I discovered is that all this scanning (of landscapes, people, and objects) has a wide and constantly growing variety of applications, as we discover new uses for old technologies. The experts I have cited have also piqued my interest in another way: although expensive new hardware exists to perform the tasks we pursue in this day and age, our older, common hardware has not yet been fully utilized and re-evaluated, perhaps prematurely, considering how long it has been around. This thought echoes through my brain with the mantra of our class: what new thing do we need to create in a world full of stuff we haven't used to its full potential?
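The depth recovery these questions circle around ultimately comes down to simple triangulation: once the compiling software matches the same point in two photographs, the point's apparent shift between the images (its disparity) determines its depth. A minimal sketch of that relationship, using made-up focal-length and camera-spacing values purely for illustration (these are not measurements from my scanner or from any cited work):

```python
# Minimal stereo-triangulation sketch: depth from pixel disparity.
# The focal length, baseline, and disparity numbers below are
# illustrative assumptions, not real measurements.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (in meters) of a point seen in two side-by-side images.

    focal_px     -- camera focal length, expressed in pixels
    baseline_m   -- distance between the two camera positions, in meters
    disparity_px -- horizontal shift of the matched point between images
    """
    if disparity_px <= 0:
        raise ValueError("a point must shift between views to recover its depth")
    return focal_px * baseline_m / disparity_px

# A point that shifts 35 px between photos taken 10 cm apart,
# with a 700 px focal length, sits about 2 m from the camera.
print(depth_from_disparity(700.0, 0.10, 35.0))  # 2.0
```

The give-and-take the experts describe shows up directly in this formula: distant points shift by only a pixel or two, so small matching errors swamp the depth estimate, while fine surface detail demands many well-matched points per image.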

In my first iteration, I asked what could be useful to scan. Not yet backed by data, I struggled to find a concise answer. After gathering scholarly insight, information, and experimental results, I can say that a scanner not only goes hand in hand with a printer, but can also be used for facial recognition, preservation of artifacts, and even building on our knowledge of trees and tree systems at both large and small scales. I have also learned refining techniques for capturing and smoothing 3-D objects, and gained insight into the mechanics and mathematics of assembling such an object from multiple 2-D images. I would say that this week's research has made me knowledgeable enough to build my scanning platform, with confidence that it can make a difference in the clarity and ease of capturing an item.