Programming Assignment 2 from 15-869: VIEW TRANSFORMATION


The original image:


This is my face scanned by the Minolta Vivid 700 scanner in Martial Hebert's lab.


Some new views generated from the original image:


A new view from the top left corner of the object. Even the depth of the papers on the wall can be seen in this new view.


The first image is a miniature version of the new view from the top of the object. The second image is a crop of the first, showing only my head. It makes the result easier to check, since it is at the same size as the output image rather than a miniature version.


A new view from the top right corner of the object.


A new view from the left side of the object (a miniature version).


A new view from the front of the object.


A new view from the right side of the object (a miniature version).


A new view from the lower left corner of the object. It contains many long, bogus polygons, created by the code I had to use to avoid the much more troublesome holes.


The first image is a new view from a point below the object (a miniature version). The second image is a crop of the first, showing only my head (at the original output size). Something is wrong at the bottom of the face: a brown bar covers part of it, and the same artifact appears in the other new views above. I suspected it came from the original data, so I examined the range data and found several points in that region with much larger range values than the surrounding points. I do not know why (perhaps something passed in front of my face while the scanner was scanning). With the hole-avoidance code, these points generate the brown bar. Based on this analysis, I changed those values to magnitudes similar to the surrounding points and obtained a much better result, shown as the third image above (again only my head, at the original output size): the brown bar has disappeared. This confirms the analysis, and the same fix should work in the other new views.
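The fix amounts to a small pre-processing pass over the range image. The C sketch below illustrates the idea; the names (fix_range_outliers, range, threshold) and the exact neighborhood rule are illustrative assumptions, not the code I actually used. It replaces any sample whose range value exceeds the average of its eight neighbors by more than a threshold with that average.

    /* Sketch of the range-data cleanup: clamp samples that are much
     * farther than their neighbors to the neighborhood average.
     * The layout (row-major width*height floats) and the threshold
     * are assumptions for illustration. */
    void fix_range_outliers(float *range, int width, int height, float threshold)
    {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                float sum = 0.0f;
                int n = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0)
                            continue;
                        sum += range[(y + dy) * width + (x + dx)];
                        n++;
                    }
                }
                float avg = sum / n;
                /* Only spikes that are much farther than the surrounding
                 * points are clamped, matching the artifact described above. */
                if (range[y * width + x] - avg > threshold)
                    range[y * width + x] = avg;
            }
        }
    }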


A new view from the lower right corner of the object.



The first two images are profile views of the two sides of the object (miniature versions). They are not very good; I think this is because the viewpoints are poorly chosen. The last two images are crops of the first two, showing only my head (at the original output size).



I used a C program to generate these final results. It provides only a text-based interface, not a visual one, since I did not have enough time. The user enters the amounts of X/Y/Z rotation, translation, and zoom, and the program produces the result from the new viewpoint. In general it takes several seconds to generate one new view, but for some viewpoints, where the output images are very large (more than 2000x3000 pixels), it can take up to several minutes.

To avoid holes, I tried two methods, one using triangles and one using quadrilaterals. Their results were almost the same, and both created long, bogus polygons at silhouettes, but the triangle method took about 50 percent more time than the quadrilateral one, so the latter is clearly better.

The images above are new views generated from the original image, covering each of the 9 cases of visibility order as well as the profile views of both sides of the object. In these new views, even the depth of the papers on the wall is visible, although the profile views are not very good. I implemented McMillan's image warping and visibility-ordering technique to obtain these results. With more time, I would provide a friendlier interface and do some further work.
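To give a sense of the visibility part, the following C sketch shows an occlusion-compatible traversal in the spirit of McMillan's ordering: the epipole (the projection of the new center of projection into the reference image) splits the image into at most four sheets, which is where the 9 cases come from, and each sheet is scanned toward the epipole when it is positive and away from it when it is negative. The names (warp_pixel, epipole_positive) and the sign convention are illustrative assumptions, not my actual code.

    /* Sketch of a McMillan-style occlusion-compatible traversal.  The
     * epipole (ex, ey) is assumed to be the projection of the new
     * center of projection into the reference image; epipole_positive
     * tells whether that center lies in front of the reference view.
     * warp_pixel() stands in for the per-pixel 3D warp. */
    void warp_occlusion_compatible(int width, int height,
                                   double ex, double ey,
                                   int epipole_positive,
                                   void (*warp_pixel)(int x, int y))
    {
        /* Clamp the epipole into the image; it splits the image into at
         * most four rectangular sheets.  Depending on whether it falls
         * inside the image, beyond an edge, or beyond a corner, 4, 2,
         * or 1 sheets are non-empty -- the 9 cases mentioned above. */
        int sx = ex < 0 ? 0 : (ex > width  ? width  : (int)ex);
        int sy = ey < 0 ? 0 : (ey > height ? height : (int)ey);

        for (int sheet = 0; sheet < 4; sheet++) {
            int left = sheet & 1;                    /* sheet left of the epipole column */
            int top  = sheet & 2;                    /* sheet above the epipole row */
            int x0 = left ? 0 : sx, x1 = left ? sx : width;   /* x range [x0, x1) */
            int y0 = top  ? 0 : sy, y1 = top  ? sy : height;  /* y range [y0, y1) */
            if (x0 >= x1 || y0 >= y1)
                continue;                            /* empty sheet */

            /* Scan toward the epipole when it is positive and away from it
             * when it is negative, so nearer surfaces are warped last and
             * overwrite farther ones without any z-buffer. */
            int x_inc = (left ? 1 : -1) * (epipole_positive ? 1 : -1);
            int y_inc = (top  ? 1 : -1) * (epipole_positive ? 1 : -1);
            int x_start = x_inc > 0 ? x0 : x1 - 1;
            int y_start = y_inc > 0 ? y0 : y1 - 1;

            for (int y = y_start; y >= y0 && y < y1; y += y_inc)
                for (int x = x_start; x >= x0 && x < x1; x += x_inc)
                    warp_pixel(x, y);
        }
    }

In the actual program, the per-pixel step also fills the small quadrilateral spanned by the warped positions of adjacent source pixels, which is what avoids the holes and also produces the long, bogus polygons at silhouettes.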


Jing XIAO

Oct. 4, 1999