What have I done and why have I done it?
What: Examine the capabilities of the off-the-shelf library OpenCV for developing markerless augmented reality applications.
How: By implementing the structure-from-motion (SfM) pipeline with algorithms provided by OpenCV.
What is Augmented Reality?
Tracking provides information about the user's viewpoint, i.e. the camera position and orientation in 6 DoF. There are different tracking approaches:
Pros:
Cons:
Pros:
Cons:
Simultaneously estimating the camera motion and the structure of the scene using computer vision algorithms.
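The core of recovering structure from two views can be sketched with a minimal triangulation example. This is an illustration in plain numpy, not OpenCV's implementation: given two known camera projection matrices and a point's image coordinates in both views, the 3D point is recovered by the direct linear transform (DLT). All matrices and values below are synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two views via the DLT.

    Each image observation gives two linear constraints of the form
    (x * P[2] - P[0]) . X = 0; stacking them and taking the SVD
    null-space yields the homogeneous 3D point.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector of A
    return X[:3] / X[3]             # dehomogenize

# Synthetic check: one point seen by two known cameras
K = np.eye(3)                                              # identity intrinsics for simplicity
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # first camera at the origin
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])    # second camera translated along x
X_true = np.array([0.5, 0.2, 4.0])

x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]      # project into view one
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]      # project into view two
print(triangulate(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

In a full SfM pipeline the projection matrices are themselves unknown and come from the estimated camera motion; this sketch only shows the structure-recovery step once the motion is known.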
What experimental studies have I done?
Camera calibration gives the intrinsic parameters and the distortion coefficients.
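What the intrinsic parameters and distortion coefficients mean can be shown with a small pinhole-projection sketch. The focal lengths, principal point, and radial coefficients below are illustrative values, not results from the calibration described here:

```python
import numpy as np

# Intrinsic matrix K: focal lengths fx, fy (pixels) and principal point (cx, cy)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.2, 0.05])  # radial distortion coefficients k1, k2 (hypothetical)

def project(X, K, dist):
    """Project a 3D point (camera frame) to pixels with radial distortion."""
    x, y = X[0] / X[2], X[1] / X[2]               # normalized image coordinates
    r2 = x * x + y * y
    d = 1 + dist[0] * r2 + dist[1] * r2 ** 2      # radial distortion factor
    u = K[0, 0] * d * x + K[0, 2]                 # apply focal length and principal point
    v = K[1, 1] * d * y + K[1, 2]
    return np.array([u, v])

print(project(np.array([0.1, -0.05, 2.0]), K, dist))
```

The distortion coefficients are needed to undistort image points before feeding them to the geometric estimation steps, since those steps assume an ideal pinhole camera.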
How have things turned out?
Matches from view one to view two
Matches from view two to view one
Ratio test from view one to view two
Ratio test from view two to view one
Matches after symmetry test
Matches after epipolar constraint test
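The match-filtering stages listed above (ratio test in both directions, then a symmetry test) can be sketched in plain Python. This is a toy illustration with made-up descriptor values, not the OpenCV matcher itself, and the final epipolar constraint test (typically RANSAC estimation of the fundamental matrix) is omitted:

```python
import numpy as np

def knn2(desc_a, desc_b):
    """For each row of desc_a, the indices and distances of its 2 nearest rows in desc_b."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :2]
    return idx, np.take_along_axis(d, idx, axis=1)

def ratio_test(idx, dist, ratio=0.8):
    """Lowe's ratio test: keep a match only if the best distance is clearly
    below the second best, i.e. the match is unambiguous."""
    keep = dist[:, 0] < ratio * dist[:, 1]
    return {(i, idx[i, 0]) for i in np.where(keep)[0]}

def symmetry_test(ab, ba):
    """Keep only matches confirmed in both directions."""
    return {(i, j) for (i, j) in ab if (j, i) in ba}

# Toy descriptors: each row is one feature vector (hypothetical values)
A = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
B = np.array([[0.1, 0.0], [1.1, 0.1], [9.0, 9.0]])

ab = ratio_test(*knn2(A, B))   # ratio test from view one to view two
ba = ratio_test(*knn2(B, A))   # ratio test from view two to view one
print(sorted(symmetry_test(ab, ba)))
```

Each stage shrinks the match set: ambiguous matches fall to the ratio test, and one-sided matches fall to the symmetry test, leaving fewer but more reliable correspondences for the epipolar stage.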
A small benchmark test has been performed to see how the different algorithms provided by OpenCV perform. The main finding is that feature description is the slowest step.
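A benchmark of this kind can be built from a simple timing harness. The sketch below times a stand-in workload; in the actual test the timed call would be the detector's and descriptor's compute step, which is an assumption here, not the benchmark code used:

```python
import time

def benchmark(fn, *args, runs=10):
    """Average wall-clock time of fn(*args) over several runs."""
    t0 = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - t0) / runs

# Stand-in workload; a real run would time feature detection/description calls
avg = benchmark(sorted, list(range(10000)))
print(f"{avg * 1e3:.3f} ms per run")
```

Averaging over several runs smooths out scheduling noise, which matters when comparing algorithms whose per-frame cost differs by only a few milliseconds.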
What are the conclusions? Is OpenCV mature enough for markerless augmented reality applications and SfM?