Camera Data
 
Camera Positions - Note that the camera positions start at C3. C1 and C2 are for the beginning sequence of the PGF, and as work is still underway to connect the beginning sequence to the main site area, they will be shown in a later segment of the report. C3 is therefore the first camera position showing the main log, the big tree in the background, and the other principal site features. It corresponds to Frame 205 in the PG Film sequence.
 
Camera/BG image code is as follows:
 
TMR - The Munns Report
PGF - Patterson Gimlin Film
"C" plus number - the camera position on my site map
"bg" - indicates the background image for comparison
"F" plus number - the actual film frame number for reference
 
Verification Documents
 
Each link below goes to an image with three panels validating that camera position and the alignment of the site model to the PG Film. In each image, the top panel is the PG Film frame, set up as a potential background image for other researchers to test. You will need to crop that top panel to 800 x 600 for use as a background image in an animation software application.
 
In each image, the three panels are: A) the background image, a PGF frame; B) my render of the site model with the background image in place, to compare the alignment of model and film; and C) the model objects alone, with no background image, so the objects can be seen clearly. The Site Model Data Form has scale, position, and rotation coordinates for every object, as well as position and rotation coordinates for each camera position. A chart grading all the object matches is HERE.
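If it is helpful, here is a minimal sketch of cropping the top panel to 800 x 600, as noted above, using Python with the Pillow library. The file names are placeholders for whatever you save the extracted panel as.

from PIL import Image, ImageOps

# Placeholder file name - substitute the top panel you saved from the verification image.
panel = Image.open("TMR_PGF_C3_bg_F205_top_panel.png")

# Resize as needed and center-crop so the result is exactly 800 x 600.
background = ImageOps.fit(panel, (800, 600), centering=(0.5, 0.5))
background.save("TMR_PGF_C3_bg_F205_800x600.png")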
 
 
TMR_PGF_C3_bg_F205

TMR_PGF_C4A_bg_F352

TMR_PGF_C4B_bg_F462

TMR_PGF_C5_bg_F634 

TMR_PGF_C6_bg_F715

TMR_PGF_C7_bg_F727

TMR_PGF_C8_bg_F875
Site Model Data Form (Bryce Coordinates)
 
The above link will give you the actual Site Model Data Sheet, which contains all the coordinate information for every site model object (dimension, position, and rotation) as well as the coordinates (position and rotation) for all seven camera positions, so you may test this model in a 3D visualization software application. This document will be considered my proof that the model, the camera positions, and the lens specification (a 15mm lens) are correct.
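For researchers who would rather script the test than key the numbers in by hand, the Python sketch below shows one way to hold the data sheet values before loading them into a 3D application. The class and field names are mine, chosen only to mirror the columns of the data sheet, and the numbers are placeholders, not values from the sheet.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class SiteObject:
    name: str
    dimension: Tuple[float, float, float]  # (x, y, z) size from the data sheet
    position: Tuple[float, float, float]   # (x, y, z) location
    rotation: Tuple[float, float, float]   # (x, y, z) rotation, in degrees

@dataclass
class CameraPosition:
    name: str                              # e.g. "C3"
    position: Tuple[float, float, float]   # (x, y, z) location
    rotation: Tuple[float, float, float]   # (x, y, z) rotation, in degrees
    lens_mm: float = 15.0                  # the 15mm lens specification

# Placeholder entries - take the real values from the Site Model Data Sheet.
big_tree = SiteObject("Big tree", (1.0, 12.0, 1.0), (0.0, 6.0, 20.0), (0.0, 0.0, 0.0))
camera_c3 = CameraPosition("C3", (0.0, 1.5, 0.0), (0.0, 10.0, 0.0))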
 
Please note that there are two versions of this form, because my 3D application uses objects called symmetrical lattices for the five center trees, while many other 3D applications will use simple image planes for those trees. Bryce loads a symmetrical lattice into the workspace lying flat (one mesh up, its mirror mesh down), so when it is used to make a tree, I must rotate the object -90 degrees to turn it upright. If you use an image plane created already in an X and Y orientation, upright, then that -90 degree rotation is not necessary. So in the second form (noted "For X, Y Image Plane Trees"), the rotations on C1 to C5 are 90 degrees different from my Bryce coordinates. An image plane also has only two dimensions, so you will see only X and Y size coordinates.
 
Site Model Data Form (For X, Y Image Plane Coordinates)
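If you need to move between the two forms yourself, the small Python sketch below converts a Bryce tree rotation to the equivalent image plane rotation. It assumes the 90 degree difference sits on the first (X) rotation value, which is the axis the flat lattice is tilted around; adjust if your application orders its rotations differently.

def bryce_to_image_plane_rotation(bryce_rotation_degrees):
    """Undo the -90 degree tilt Bryce needs to stand a flat symmetrical lattice upright.

    Assumes the 90 degree difference is on the first (X) rotation value; the other
    two values carry over unchanged.
    """
    x, y, z = bryce_rotation_degrees
    return (x + 90.0, y, z)

# Example: a tree stored in Bryce as (-90.0, 15.0, 0.0) becomes (0.0, 15.0, 0.0) for an image plane.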
 

I invite other researchers to test or check my data independently. If you feel that another arrangement of objects, another set of camera positions, and/or different lens angles will produce a result as good or better, and thus rebut my contention that this is the correct solution, please use the following blank form to note your data for object and camera coordinates, and either post it in a forum or e-mail it to me. I will be pleased to test the data you provide and publish the results to compare with my results herein.

Blank Site Model Data Form
Comparing Other Site Photos

Aside from the actual site images contained in the PG Film itself, there were other photos of the Bluff Creek site taken in subsequent years by other investigators. Byrne and Dahinden visited the site and took pictures which are available to researchers.
 
Comparing my digital site model to these photos is a different procedure, because we do not know the camera, the lens, the original film image format, or whether the image we have has been cropped. But if the location of the camera can be found, and a digital picture (a render) of the model is made with a wider field of view than the trees and objects seen in the research photo, we can test whether the digital trees and objects align with the photographed trees and objects in both position and proportional size. If there is an alignment, it lends further credibility to the accuracy of the digital model.
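One simple way to make that position-and-proportion check is to lay a semi-transparent copy of the render over the research photo and look for coinciding trees and objects. The Python sketch below, using the Pillow library, shows the idea; the file names are placeholders.

from PIL import Image

# Placeholder file names - substitute the research photo and the wider-view render.
photo = Image.open("research_photo.jpg").convert("RGBA")
render = Image.open("site_model_render.png").convert("RGBA")

# Bring the render to the photo's pixel size so the two can be blended directly.
render = render.resize(photo.size)

# Blend at 50% opacity; trees and objects that align will visibly coincide.
overlay = Image.blend(photo, render, alpha=0.5)
overlay.save("alignment_check.png")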
 
For the two photos compared, I have included a separate data sheet listing the camera position and rotation coordinates used to produce the digital render. These coordinates give us a close approximation of where the people who took these photos stood on the real site.
 
In the digital renders, there are additional gray cylinders for trees in the north rim section of the site. These trees have not yet been identified or fixed in a final position. They are simply additional trees whose locations the continuing effort will try to confirm for the final site model and diagram, to be released in the coming months with the final report.
 
Comparison with Byrne Photo
 
Comparison with Dahinden Photo
About the Photogrammetry Process
 
 
Introduction - A sophisticated technique called stereophotogrammetry makes it possible to estimate the three-dimensional coordinates of points on an object seen in photographs. These coordinates are determined by measurements made in two or more photographic images taken from different positions. Common points are identified in each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. It is the intersection of these rays, by triangulation, that determines the three-dimensional location of the point.
 
Photogrammetry is used in different fields, such as topographic mapping, architecture, engineering, manufacturing, police investigation, and geology, as well as by archaeologists to quickly produce plans of large or complex sites. It is also used to combine live action film footage with computer generated effects in movies.
 
Algorithms for photogrammetry typically express the problem as that of minimizing the sum of the squares of a set of errors.
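To make that concrete, here is a minimal sketch in Python with NumPy of recovering a 3D point from two camera rays by finding the point that minimizes the sum of squared distances to both rays. The camera positions and sight-line directions are invented purely for illustration.

import numpy as np

def triangulate(origins, directions):
    """Least-squares ray intersection: the 3D point minimizing the sum of squared
    perpendicular distances to every ray (camera position plus sight-line direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for origin, direction in zip(origins, directions):
        d = direction / np.linalg.norm(direction)  # unit sight-line direction
        P = np.eye(3) - np.outer(d, d)             # projects onto the plane perpendicular to the ray
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)

# Invented example: two cameras a few units apart, both sighting the same point.
origins = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])]
directions = [np.array([1.0, 1.0, 10.0]), np.array([-1.0, 1.0, 10.0])]
print(triangulate(origins, directions))  # approximately (2.5, 2.5, 25.0)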
 
For more explanation of the process, go HERE