Q & A: Recovering pose of a calibrated camera - Algebraic vs. Geometric method?

March 29, 2015

This week I received an email with a question about recovering camera pose:

Q: I have images with a known intrinsic matrix, and corresponding points in world and image coordinates. What's the best technique to recover the extrinsic matrix? Hartley and Zisserman cover both geometric and algebraic approaches. What are the tradeoffs between the geometric and algebraic approaches? Under what applications would we choose one or the other?
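For concreteness, here is a minimal numpy sketch of the algebraic route (this is my own illustration, not code from the post): estimate the full projection matrix by DLT, then factor out the known intrinsic matrix K. It assumes noiseless, non-coplanar correspondences; the geometric approach would instead take this as an initial guess and iteratively minimize reprojection error.

```python
import numpy as np

def pose_dlt(K, world_pts, image_pts):
    """Algebraic pose: solve for P = K[R|t] by DLT, then factor out K.

    world_pts: (n, 3) 3D points; image_pts: (n, 2) pixel coordinates.
    Needs n >= 6 points, not all coplanar.
    """
    rows = []
    for X, (u, v) in zip(world_pts, image_pts):
        Xh = np.append(X, 1.0)  # homogeneous world point
        # Two rows of the DLT system from x cross (P X) = 0.
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    # Null vector of A minimizes the *algebraic* error ||A p||.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)
    M = np.linalg.solve(K, P)          # M ~ lambda * [R | t]
    if np.linalg.det(M[:, :3]) < 0:    # resolve the sign ambiguity
        M = -M
    U, S, Vt2 = np.linalg.svd(M[:, :3])
    R = U @ Vt2                        # nearest true rotation matrix
    t = M[:, 3] / S.mean()             # divide out the scale lambda
    return R, t
```

The geometric refinement step would then minimize the sum of squared pixel distances between `image_pts` and the reprojections of `world_pts` under (R, t), e.g. with Levenberg-Marquardt.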


Compiling ELSD (Ellipse and Line Segment Detector) on OS X

April 28, 2014
[Figures: input image and ELSD detection results]

ELSD is a new program for detecting line segments and elliptical curves in images. It gives very impressive results by using a novel model selection criterion to distinguish noise curves from foreground, as detailed in the authors' ECCV 2012 paper. Most impressively, it works out of the box with no parameter tuning.

The authors have generously released their code under the Affero GPL, but it requires a few tweaks to compile on OS X.


Dissecting the Camera Matrix, Part 3: The Intrinsic Matrix

August 13, 2013

Today we'll study the intrinsic camera matrix in our third and final chapter in the trilogy "Dissecting the Camera Matrix." In the first article, we learned how to split the full camera matrix into the intrinsic and extrinsic matrices and how to properly handle ambiguities that arise in that process. The second article examined the extrinsic matrix in greater detail, looking into several different interpretations of its 3D rotations and translations. Today we'll give the same treatment to the intrinsic matrix, examining two equivalent interpretations: as a description of the virtual camera's geometry and as a sequence of simple 2D transformations. Afterward, you'll see an interactive demo illustrating both interpretations.

If you're not interested in delving into the theory and just want to use your intrinsic matrix with OpenGL, check out the articles Calibrated Cameras in OpenGL without glFrustum and Calibrated Cameras and gluPerspective.

All of these articles are part of the series "The Perspective Camera, an Interactive Tour." To read the other entries in the series, head over to the table of contents.
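As a taste of the second interpretation, here is a small numpy sketch (notation is mine; the article works through the details): the intrinsic matrix factors into a sequence of simple 2D transformations, applied right-to-left as a shear, then a scaling, then a translation.

```python
import numpy as np

fx, fy, s, x0, y0 = 800.0, 850.0, 2.0, 320.0, 240.0

# The intrinsic matrix as usually written:
# focal lengths, axis skew, principal point.
K = np.array([[fx,  s, x0],
              [0., fy, y0],
              [0., 0., 1.]])

# The same matrix as a product of 2D transformations.
translate = np.array([[1., 0., x0],     # move principal point
                      [0., 1., y0],
                      [0., 0., 1.]])
scale = np.array([[fx, 0., 0.],         # focal lengths in pixels
                  [0., fy, 0.],
                  [0., 0., 1.]])
shear = np.array([[1., s / fx, 0.],     # axis skew
                  [0., 1.,     0.],
                  [0., 0.,     1.]])

K_decomposed = translate @ scale @ shear   # equals K
```

Reading the product right-to-left mirrors what happens to a projected point on its way from the image plane to pixel coordinates.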


Calibrated Cameras and gluPerspective

June 18, 2013

After posting my last article relating glFrustum to the intrinsic camera matrix, I received some emails asking how the (now deprecated) gluPerspective function relates to the intrinsic matrix. We can show a similar result with gluPerspective as we did with glFrustum, namely that it is the product of a glOrtho matrix and a (modified) intrinsic camera matrix, but in this case the intrinsic matrix has different constraints. I'll be re-using notation and concepts from the previous article, so if you aren't familiar with them, I recommend reading it first.
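To preview the kind of identity involved, here is a numpy sketch (conventions follow standard OpenGL documentation; the article derives this properly): for the symmetric, skew-free frustum that gluPerspective defines, its matrix equals a glOrtho matrix times a perspective matrix whose focal length is pinned to the near distance by the field of view.

```python
import numpy as np

def glu_perspective(fovy, aspect, n, f):
    """The matrix gluPerspective builds (fovy in radians)."""
    c = 1.0 / np.tan(fovy / 2.0)
    return np.array([[c / aspect, 0., 0., 0.],
                     [0., c, 0., 0.],
                     [0., 0., (f + n) / (n - f), 2 * f * n / (n - f)],
                     [0., 0., -1., 0.]])

def gl_ortho(l, r, b, t, n, f):
    """The matrix glOrtho builds."""
    return np.array([[2 / (r - l), 0., 0., -(r + l) / (r - l)],
                     [0., 2 / (t - b), 0., -(t + b) / (t - b)],
                     [0., 0., -2 / (f - n), -(f + n) / (f - n)],
                     [0., 0., 0., 1.]])

fovy, aspect, n, f = np.radians(60.0), 4.0 / 3.0, 0.1, 100.0

# Near-plane window implied by fovy and aspect: symmetric, centered.
t = n * np.tan(fovy / 2.0)
r = t * aspect

# Perspective matrix from a *constrained* intrinsic matrix:
# equal focal lengths n, zero skew, principal point at the center.
persp = np.array([[n, 0., 0., 0.],
                  [0., n, 0., 0.],
                  [0., 0., n + f, n * f],
                  [0., 0., -1., 0.]])

product = gl_ortho(-r, r, -t, t, n, f) @ persp   # equals gluPerspective
```

The constraints (equal focal lengths, no skew, centered principal point) are exactly why gluPerspective alone can't reproduce a general calibrated camera.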


Calibrated Cameras in OpenGL without glFrustum

June 03, 2013
Simulating a calibrated camera for augmented reality.

You've calibrated your camera. You've decomposed it into intrinsic and extrinsic camera matrices. Now you need to use it to render a synthetic scene in OpenGL. You know the extrinsic matrix corresponds to the modelview matrix and the intrinsic matrix to the projection matrix, but beyond that you're stumped. You remember something about gluPerspective, but it only permits two degrees of freedom, and your intrinsic camera matrix has five. glFrustum looks promising, but the mapping between its parameters and the camera matrix isn't obvious, and it looks like you'll have to ignore your camera's axis skew. You may be asking yourself, "I have a matrix, why can't I just use it?"

You can. And you don't have to jettison your axis skew, either. In this article, I'll show how to use your intrinsic camera matrix in OpenGL with minimal modification. For illustration, I'll use OpenGL 2.1 API calls, but the same matrices can be sent to your shaders in modern OpenGL.
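To give a flavor of the approach (function names and conventions here are mine for illustration; the article derives the matrices and covers the alternate y-axis convention): build a 4x4 perspective matrix directly from K, then map the image window to normalized device coordinates with a glOrtho-style matrix.

```python
import numpy as np

def projection_from_K(K, width, height, near, far):
    """4x4 OpenGL-style projection from a 3x3 intrinsic matrix K.

    Assumes the camera looks down -z and pixel y grows upward.
    Keeps all five intrinsic parameters, including axis skew.
    """
    fx, s, x0 = K[0]
    fy, y0 = K[1, 1], K[1, 2]
    # Perspective matrix: applies K while preserving depth
    # information for the z-buffer.
    persp = np.array([[fx,  s, -x0, 0.],
                      [0., fy, -y0, 0.],
                      [0., 0., near + far, near * far],
                      [0., 0., -1., 0.]])
    # glOrtho(0, width, 0, height, near, far): pixel window -> NDC.
    ndc = np.array([[2 / width, 0., 0., -1.],
                    [0., 2 / height, 0., -1.],
                    [0., 0., -2 / (far - near), -(far + near) / (far - near)],
                    [0., 0., 0., 1.]])
    return ndc @ persp
```

In OpenGL 2.1 you'd load the result with `glLoadMatrix` in `GL_PROJECTION` mode (remembering that OpenGL expects column-major storage); in modern OpenGL you'd pass it to your vertex shader as a uniform.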