Sightations
A computer vision blog by Kyle Simek
http://ksimek.github.io
Q & A: Recovering pose of a calibrated camera - Algebraic vs. Geometric method?
<p>This week I received an email with a question about recovering camera pose:</p>
<p><strong>Q: I have images with a known intrinsic matrix, and corresponding points in world and image coordinates. What's the best technique to resolve the extrinsic matrix? Hartley and Zisserman cover geometric and algebraic approaches. What are the tradeoffs between the geometric and algebraic approaches? Under what applications would we choose one or the other?</strong></p>
<!--more-->
<p>This topic is covered in Section 7.3 of <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Multiple View Geometry in Computer Vision</a>, "Restricted camera estimation." The authors describe a method for estimating a subset of camera parameters when the others are known beforehand. One common scenario is recovering pose (position and orientation) given intrinsic parameters.</p>
<p>Assume you have multiple 2D image points whose corresponding 3D position is known. The authors outline two different error functions for the camera: a geometric error function which measures the distance between the 3D point's projection and the 2D observation, and an algebraic error function, which is the residual of a homogeneous least-squares problem (constructed in section 7.1). The choice of error function can be seen as a trade-off between quality and speed. First I will describe why the geometric solution is better for quality and then why the algebraic solution is faster.</p>
<div class='context-img' style='width:317px'>
<div class='noexcerpt'>
<img src='/img/algebraic_geometric_error.png' width="317" />
<div class='caption'>Let \(X_i\) be a 3D point and \(x_i\) be its observation. The plane \(w\) contains \(X_i\) and is parallel to the image plane. The algebraic error is \(\Delta\), the distance between \(X_i\) and the backprojection ray in the plane \(w\). The geometric error \(d\) is the distance between \(x_i\) and projection of \(X_i\) onto the image plane, \(f\). Note that as the 3D point moves farther from the camera, the algebraic error increases, while the geometric error remains constant.
</div>
<br />
</div>
</div>
<p>The geometric solution is generally considered the "right" solution, in the sense that the assumptions about noise are the most sensible in the majority of cases. Penalizing the squared distance between the 2D observation and the projection of the 3D point amounts to assuming noise arises from the imaging process (e.g. due to camera/lens/sensor imperfections) and is i.i.d. Gaussian distributed in the image plane. In contrast, roughly speaking, the algebraic error measures the distance between the known 3D point and the observation’s backprojection ray. This implies errors arise from noise in 3D points as opposed to the camera itself, and tends to overemphasize distant points when finding a solution. For this reason, Hartley and Zisserman call the solution with minimal geometric error the "gold standard" solution.</p>
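<p>To make the geometric error concrete, here is a minimal numpy sketch (the function name and toy values are mine, not from the book): it projects a known 3D point through the camera and measures the squared pixel distance to the 2D observation.</p>

```python
import numpy as np

def reprojection_error(K, R, t, X_world, x_obs):
    """Geometric error: squared pixel distance between the observation
    and the projection of the known 3D point."""
    X_cam = R @ X_world + t          # world -> camera coordinates
    x_hom = K @ X_cam                # project with the intrinsic matrix
    x_proj = x_hom[:2] / x_hom[2]    # homogeneous -> pixel coordinates
    return np.sum((x_proj - x_obs) ** 2)

# Toy example: identity intrinsics and pose
K = np.eye(3)
R, t = np.eye(3), np.zeros(3)
X = np.array([1.0, 2.0, 4.0])
err = reprojection_error(K, R, t, X, np.array([0.25, 0.5]))
print(err)  # 0.0 -- the observation lies exactly on the projection
```

<p>Summing this quantity over all correspondences gives the cost that the "gold standard" solution minimizes.</p>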
<p>The geometric approach also has the advantage of letting you use different cost functions if necessary. For example, if your correspondences include outliers, they could wreak havoc on your calibration under a squared-error cost function. Using the geometric approach, you could swap in a robust cost function (e.g. the Huber function), which will minimize the influence of outliers.</p>
<p>The cost of doing the "right thing" is running time. Both solutions require costly iterative minimization, but evaluating the geometric cost function takes time linear in the number of observations, whereas the algebraic cost function can be evaluated in constant time (after a one-time SVD in preprocessing). In Hartley and Zisserman's example, the two approaches give very similar results.</p>
<p>If speed isn't a concern (e.g. if calibration is performed off-line), the geometric solution is the way to go. The geometric approach may also be easier to implement -- just take an existing bundle adjustment routine like the one provided by <a href="http://ceres-solver.org/">Ceres Solver</a>, and hold the 3D points and intrinsic parameters fixed. Also, if the number of observations is small, the algebraic approach loses its advantages, because the SVD required for preprocessing could eclipse the gains of its efficient cost function. So the geometric solution could be preferable, even in real-time scenarios.</p>
<p>If speed is a concern and you have many observations, a two-pass approach might work well. First solve using the algebraic technique, then use it to initialize a few iterations of the geometric approach. Your mileage may vary. Finally, if you are recovering multiple poses of a moving camera, you will likely want to run bundle adjustment as a final step anyway, which jointly minimizes the geometric error of all camera poses and the 3D point locations. In this case, the algebraic solution is almost certainly a "good enough" first pass.</p>
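<p>The algebraic first pass has the flavor of the DLT solution from section 7.1, which can be sketched as follows. This simplified version (all names and values are mine) estimates a full 3x4 camera matrix rather than solving the restricted pose-only problem, but it shows the homogeneous least-squares machinery that makes the algebraic error cheap to minimize:</p>

```python
import numpy as np

def dlt_camera(X, x):
    """Estimate a 3x4 camera matrix P by minimizing the algebraic error
    ||A p|| subject to ||p|| = 1, via SVD (the DLT of H&Z section 7.1).
    X: (n, 3) world points, x: (n, 2) image points, n >= 6."""
    A = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous world point
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    # Right singular vector of the smallest singular value, reshaped to 3x4
    return Vt[-1].reshape(3, 4)

# Synthetic check: project points through a known camera, then recover it.
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0., 0., 1., 1.]])
X = np.random.default_rng(0).uniform(-1., 1., (8, 3)) + [0., 0., 5.]
xh = (P_true @ np.c_[X, np.ones(8)].T).T
x = xh[:, :2] / xh[:, 2:]
P = dlt_camera(X, x)
P *= P_true[2, 3] / P[2, 3]  # remove the arbitrary scale (and sign)
print(np.allclose(P, P_true, atol=1e-4))  # True
```

<p>With noise-free correspondences the recovered matrix matches the ground truth up to scale; with real data, it serves as the initializer for geometric refinement.</p>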
<p>I hope that helps!</p>
Sun, 29 Mar 2015 00:00:00 -0700
http://ksimek.github.io/2015/03/29/QA-recovering-pose-of-calibrated-camera/
Compiling ELSD (Ellipse and Line Segment Detector) on OS X
<div class="clearer"></div>
<div class='context-img' style='width:317px'>
<div class='noexcerpt'>
<img src='/img/elsd_before_small.jpg' width="317" />
<div class='caption'>Input image
</div>
<br />
</div>
<img src='/img/elsd_after_small.png' width="317" />
<div class='caption'>ELSD results
</div>
</div>
<p><a href="http://ubee.enseeiht.fr/vision/ELSD/">ELSD is a new program</a> for detecting line segments and elliptical curves in images. It gives <a href="/misc/elsd_results.html">very impressive results</a> by using a novel model selection criterion to distinguish noise curves from foreground, as detailed in the authors' <a href="http://ubee.enseeiht.fr/vision/ELSD/eccv2012-ID576.pdf">ECCV 2012 paper</a>. Most impressively, it works out of the box <strong>with no parameter tuning.</strong></p>
<p>The authors have generously released their code under <a href="http://www.gnu.org/licenses/why-affero-gpl.html">Affero GPL</a>, but it requires a few tweaks to compile on OSX.</p>
<!--more-->
<p>First, in <code>process_curve.c</code>, replace this line:</p>
<pre><code>#include <clapack.h>
</code></pre>
<p>with this:</p>
<pre><code>#ifdef __APPLE__
#include <Accelerate/Accelerate.h>
#else
#include <clapack.h>
#endif
</code></pre>
<p>Second, in <code>makefile</code>, change this line</p>
<pre><code>cc -o elsd elsd.c valid_curve.c process_curve.c process_line.c write_svg.c -llapack_LINUX -lblas_LINUX -llibf2c -lm
</code></pre>
<p>to this:</p>
<pre><code>cc -o elsd -framework accelerate elsd.c valid_curve.c process_curve.c process_line.c write_svg.c -lf2c -lm
</code></pre>
<p>Thanks to authors Viorica Pătrăucean, Pierre Gurdjos, and Rafael Grompone von Gioi for sharing this valuable new tool!</p>
<p><strong>Update</strong>: I've written a python script to convert ELSD's output into polylines; check out the <a href="/code.html">code page</a>.</p>
Mon, 28 Apr 2014 00:00:00 -0700
http://ksimek.github.io/2014/04/28/compiling-elsd-on-osx/
Dissecting the Camera Matrix, Part 3: The Intrinsic Matrix
<div class="clearer"></div>
<div class='context-img' style='width:320px'>
<img src='/img/kodak-camera.jpg' />
<div class='caption'>
<div class='credit'><a href="http://www.flickr.com/photos/alhazen/8587124359/">Credit: Dave6163 (via Flickr)</a></div>
</div>
</div>
<p>Today we'll study the intrinsic camera matrix in our third and final chapter in the trilogy "Dissecting the Camera Matrix." In <a href="/2012/08/14/decompose/">the first article</a>, we learned how to split the full camera matrix into the intrinsic and extrinsic matrices and how to properly handle ambiguities that arise in that process. The <a href="/2012/08/22/extrinsic/">second article</a> examined the extrinsic matrix in greater detail, looking into several different interpretations of its 3D rotations and translations. Today we'll give the same treatment to the intrinsic matrix, examining two equivalent interpretations: as a description of the virtual camera's geometry and as a sequence of simple 2D transformations. Afterward, you'll see an interactive demo illustrating both interpretations.</p>
<p>If you're not interested in delving into the theory and just want to use your intrinsic matrix with OpenGL, check out the articles <a href="/2013/06/03/calibrated_cameras_in_opengl/">Calibrated Cameras in OpenGL without glFrustum</a> and <a href="/2013/06/18/calibrated-cameras-and-gluperspective/">Calibrated Cameras and gluPerspective</a>.</p>
<p>All of these articles are part of the series "<a href="/2012/08/13/introduction/">The Perspective Camera, an Interactive Tour</a>." To read the other entries in the series, <a href="/2012/08/13/introduction/#toc">head over to the table of contents</a>.</p>
<!--more-->
<h1>The Pinhole Camera</h1>
<p>The intrinsic matrix transforms 3D camera coordinates to 2D homogeneous image coordinates. This perspective projection is modeled by the ideal pinhole camera, illustrated below.</p>
<p><img src="/img/intrinsic-pinhole-camera.png" alt="pinhole camera" /></p>
<p>The intrinsic matrix is parameterized by <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Hartley and Zisserman</a> as</p>
<div>
\[
K = \left (
\begin{array}{ c c c}
f_x & s & x_0 \\
0 & f_y & y_0 \\
0 & 0 & 1 \\
\end{array}
\right )
\]
</div>
<p>Each intrinsic parameter describes a geometric property of the camera. Let's examine each of these properties in detail.</p>
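<p>Before examining the parameters individually, here is what \(K\) does numerically. A minimal sketch (the parameter values are hypothetical, chosen only for illustration) mapping a point in camera coordinates to pixels:</p>

```python
import numpy as np

# Hypothetical intrinsics: 800px focal length, principal point (320, 240), no skew
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])

X_cam = np.array([0.5, -0.25, 2.0])  # a 3D point in camera coordinates
x_hom = K @ X_cam                    # homogeneous 2D image coordinates
x_pix = x_hom[:2] / x_hom[2]         # divide by w to get pixel coordinates
print(x_pix)  # [520. 140.]
```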
<h2>Focal Length, \(f_x\), \(f_y\)</h2>
<p>The focal length is the distance between the pinhole and the film (a.k.a. image plane). For reasons we'll discuss later, the focal length is measured in pixels. In a true pinhole camera, both \(f_x\) and \(f_y\) have the same value, which is illustrated as \(f\) below.</p>
<p><img src="/img/intrinsic-focal-length.png" alt="focal length" /></p>
<p>In practice, \(f_x\) and \(f_y\) can differ for a number of reasons:</p>
<ul>
<li>Flaws in the digital camera sensor.</li>
<li>The image has been non-uniformly scaled in post-processing.</li>
<li>The camera's lens introduces unintentional distortion.</li>
<li>The camera uses an <a href="http://en.wikipedia.org/wiki/Anamorphic_format">anamorphic format</a>, where the lens compresses a widescreen scene into a standard-sized sensor.</li>
<li>Errors in camera calibration.</li>
</ul>
<p>In all of these cases, the resulting image has non-square pixels.</p>
<p>Having two different focal lengths isn't terribly intuitive, so some texts (e.g. <a href="http://luthuli.cs.uiuc.edu/~daf/book/book.html">Forsyth and Ponce</a>) use a single focal length and an "aspect ratio" that describes the amount of deviation from a perfectly square pixel. Such a parameterization nicely separates the camera geometry (i.e. focal length) from distortion (aspect ratio).</p>
<h2>Principal Point Offset, \(x_0\), \(y_0\)</h2>
<p>The camera's "principal axis" is the line perpendicular to the image plane that passes through the pinhole. Its intersection with the image plane is referred to as the "principal point," illustrated below.</p>
<p><img src="/img/intrinsic-pp.png" alt="Principal point and principal axis" /></p>
<p>The "principal point offset" is the location of the principal point relative to the film's origin. The exact definition depends on which convention is used for the location of the origin; the illustration below assumes it's at the bottom-left of the film.</p>
<p><img src="/img/intrinsic-pp-offset.png" alt="Principal point offset" /></p>
<p>Increasing \(x_0\) shifts the pinhole to the right:</p>
<p><img src="/img/intrinsic-pp-offset-delta-alt.png" alt="Principal point offset, pinhole shifted right" /></p>
<p>This is equivalent to shifting the film to the left and leaving the pinhole unchanged.</p>
<p><img src="/img/intrinsic-pp-offset-delta.png" alt="Principal point offset, film shifted left" /></p>
<p>Notice that the box surrounding the camera is irrelevant; only the pinhole's position relative to the film matters.</p>
<h2>Axis Skew, \(s\)</h2>
<p>Axis skew causes shear distortion in the projected image. As far as I know, there isn't any analogue to axis skew in a true pinhole camera, but <a href="http://www.epixea.com/research/multi-view-coding-thesisse8.html#x13-320002.2.1">apparently some digitization processes can cause nonzero skew</a>. We'll examine skew more later.</p>
<h2>Other Geometric Properties</h2>
<p>The focal length and principal point offset amount to simple translations of the film relative to the pinhole. There must be other ways to transform the camera, right? What about rotating or scaling the film?</p>
<p>Rotating the film around the pinhole is equivalent to rotating the camera itself, which is handled by the <a href="/2012/08/22/extrinsic/">extrinsic matrix</a>. Rotating the film around any other fixed point \(x\) is equivalent to rotating around the pinhole \(P\), then translating by \((x-P)\).</p>
<p>What about scaling? It should be obvious that doubling all camera dimensions (film size and focal length) has no effect on the captured scene. If instead, you double the film size and <em>not</em> the focal length, it is equivalent to doubling both (a no-op) and then halving the focal length. Thus, representing the film's scale explicitly would be redundant; it is captured by the focal length.</p>
<h2>Focal Length - From Pixels to World Units</h2>
<p>This discussion of camera-scaling shows that there are an infinite number of pinhole cameras that produce the same image. The intrinsic matrix is only concerned with the relationship between camera coordinates and image coordinates, so the absolute camera dimensions are irrelevant. Using pixel units for focal length and principal point offset allows us to represent the relative dimensions of the camera, namely, the film's position relative to its size in pixels.</p>
<p>Another way to say this is that the intrinsic camera transformation is invariant to uniform scaling of the camera geometry. By representing dimensions in pixel units, we naturally capture this invariance.</p>
<p>You can use similar triangles to convert pixel units to world units (e.g. mm) if you know at least one camera dimension in world units. For example, if you know the camera's film (or digital sensor) has a width \(W\) in millimeters, and the image width in pixels is \(w\), you can convert the focal length \(f_x\) to world units using:</p>
<div> \[ F_x = f_x \frac{W}{w} \] </div>
<p>Other parameters \(f_y\), \(x_0\), and \(y_0\) can be converted to their world-unit counterparts \(F_y\), \(X_0\), and \(Y_0\) using similar equations:</p>
<div> \[
\begin{array}{ccc}
F_y = f_y \frac{H}{h} \qquad
X_0 = x_0 \frac{W}{w} \qquad
Y_0 = y_0 \frac{H}{h}
\end{array}
\] </div>
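<p>As a quick sketch (the sensor and image dimensions below are hypothetical), converting \(f_x\) and \(x_0\) from pixels to millimeters:</p>

```python
import numpy as np

# Hypothetical camera: 1000px focal length, 3872px-wide image, 23.6mm-wide sensor
f_x, x_0 = 1000.0, 1936.0   # focal length and principal point x-offset (pixels)
W, w = 23.6, 3872           # sensor width (mm) and image width (px)

F_x = f_x * W / w           # focal length in mm
X_0 = x_0 * W / w           # principal point x-offset in mm
print(round(F_x, 3))  # 6.095
```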
<h1>The Camera Frustum - A Pinhole Camera Made Simple</h1>
<p>As we discussed earlier, only the arrangement of the pinhole and the film matters, so the physical box surrounding the camera is irrelevant. For this reason, many discussions of camera geometry use a simpler visual representation: the camera frustum.</p>
<p>The camera's viewable region is pyramid-shaped, and is sometimes called the "visibility cone." Let's add some 3D spheres to our scene and show how they fall within the visibility cone and create an image.</p>
<p><img src="/img/intrinsic-frustum.png" alt="frustum" /></p>
<p>Since the camera's "box" is irrelevant, let's remove it. Also, note that the film's image depicts a mirrored version of reality. To fix this, we'll use a "virtual image" instead of the film itself. The virtual image has the same properties as the film image, but unlike the true image, the virtual image appears in front of the camera, and the projected image is unflipped.</p>
<p><img src="/img/intrinsic-frustum-no-box.png" alt="frustum without camera box" /></p>
<p>Note that the position and size of the virtual image plane is arbitrary — we could have doubled its size as long as we also doubled its distance from the pinhole.</p>
<p>After removing the true image we're left with the "viewing frustum" representation of our pinhole camera.</p>
<p><img src="/img/intrinsic-frustum-final.png" alt="frustum representation, final " /></p>
<p>The pinhole has been replaced by the tip of the visibility cone, and the film is now represented by the virtual image plane. We'll use this representation for our demo later.</p>
<h1>Intrinsic parameters as 2D transformations</h1>
<p>In the previous sections, we interpreted our incoming 3-vectors as 3D camera coordinates, which are transformed to homogeneous 2D image coordinates. Alternatively, we can interpret these 3-vectors as 2D homogeneous coordinates which are transformed to a new set of 2D points. This gives us a new view of the intrinsic matrix: a sequence of 2D affine transformations.</p>
<p>We can decompose the intrinsic matrix into a sequence of shear, scaling, and translation transformations, corresponding to axis skew, focal length, and principal point offset, respectively:</p>
<div>
\[
\begin{align}
K &= \left (
\begin{array}{ c c c}
f_x & s & x_0 \\
0 & f_y & y_0 \\
0 & 0 & 1 \\
\end{array}
\right )
\\[0.5em]
&=
\underbrace{
\left (
\begin{array}{ c c c}
1 & 0 & x_0 \\
0 & 1 & y_0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Translation}
\times
\underbrace{
\left (
\begin{array}{ c c c}
f_x & 0 & 0 \\
0 & f_y & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Scaling}
\times
\underbrace{
\left (
\begin{array}{ c c c}
1 & s/f_x & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Shear}
\end{align}
\]
</div>
<p>An equivalent decomposition places shear <em>after</em> scaling:</p>
<div>
\[
\begin{align}
K &=
\underbrace{
\left (
\begin{array}{ c c c}
1 & 0 & x_0 \\
0 & 1 & y_0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Translation}
\times
\underbrace{
\left (
\begin{array}{ c c c}
1 & s/f_y & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Shear}
\times
\underbrace{
\left (
\begin{array}{ c c c}
f_x & 0 & 0 \\
0 & f_y & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Scaling}
\end{align}
\]
</div>
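<p>Both decompositions are easy to verify numerically. A short sketch with arbitrary parameter values (note how the shear factor changes from \(s/f_x\) to \(s/f_y\) depending on whether shear is applied before or after scaling):</p>

```python
import numpy as np

f_x, f_y, s, x_0, y_0 = 800., 760., 4., 320., 240.  # arbitrary example values
K = np.array([[f_x,   s, x_0],
              [ 0., f_y, y_0],
              [ 0.,  0.,  1.]])

T   = np.array([[1., 0., x_0], [0., 1., y_0], [0., 0., 1.]])      # 2D translation
S   = np.diag([f_x, f_y, 1.])                                     # 2D scaling
Sh1 = np.array([[1., s / f_x, 0.], [0., 1., 0.], [0., 0., 1.]])   # shear before scaling
Sh2 = np.array([[1., s / f_y, 0.], [0., 1., 0.], [0., 0., 1.]])   # shear after scaling

print(np.allclose(T @ S @ Sh1, K), np.allclose(T @ Sh2 @ S, K))  # True True
```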
<p>This interpretation nicely separates the extrinsic and intrinsic parameters into the realms of 3D and 2D, respectively. It also emphasizes that the intrinsic camera transformation occurs <em>post-projection</em>. One notable result of this is that <strong>intrinsic parameters cannot affect visibility</strong> — occluded objects cannot be revealed by simple 2D transformations in image space.</p>
<h1>Demo</h1>
<p>The demo below illustrates both interpretations of the intrinsic matrix. On the left is the "camera-geometry" interpretation. Notice how the pinhole moves relative to the image plane as \(x_0\) and \(y_0\) are adjusted.</p>
<p>On the right is the "2D transformation" interpretation. Notice how changing the focal length causes the projected image to be scaled, while changing the principal point results in pure translation.</p>
<script type="text/javascript" src="/js/geometry/FocalPlaneGeometry.js"></script>
<script type="text/javascript" src="/js/geometry/FrustumGeometry.js"></script>
<script type="text/javascript" src="/js/cam_demo.js"></script>
<div id="webgl_error"></div>
<div id="javascript_error">Javascript is required for this demo.</div>
<div class="demo_3d" style="display:none">
<table style="width: 100%"><tr style="text-align:center;"><td width="50%">Scene</td><td>Image</td></tr></table>
<div id="3d_container" >
</div>
<div class="caption">
<em>Left</em>: scene with camera and viewing volume. Virtual image plane is shown in yellow. <em>Right</em>: camera's image.</div>
<div id="demo_controls">
<ul>
<li><a href="#intrinsic-controls">Intrinsic</a></li>
</ul>
<div id="intrinsic-controls">
<div class="slider-control">
<div class="slider" id="focal_slider">
</div>
<div class="slider-label">
Focal Length
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="skew_slider">
</div>
<div class="slider-label">
Axis Skew
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="x0_slider">
</div>
<div class="slider-label">
\(x_0\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="y0_slider">
</div>
<div class="slider-label">
\(y_0\)
</div>
<div class="clearer"></div>
</div>
</div>
</div>
</div>
<p><br /></p>
<h1>Dissecting the Camera Matrix, A Summary</h1>
<p>Over the course of this series of articles we've seen how to decompose</p>
<ol>
<li><a href="/2012/08/14/decompose/">the full camera matrix into intrinsic and extrinsic matrices</a>,</li>
<li><a href="/2012/08/22/extrinsic/">the extrinsic matrix into 3D rotation followed by translation</a>, and</li>
<li>the intrinsic matrix into three basic 2D transformations.</li>
</ol>
<p>We summarize this full decomposition below.</p>
<div>
\[
\begin{align}
P &= \overbrace{K}^\text{Intrinsic Matrix} \times \overbrace{[R \mid \mathbf{t}]}^\text{Extrinsic Matrix} \\[0.5em]
&=
\overbrace{
\underbrace{
\left (
\begin{array}{ c c c}
1 & 0 & x_0 \\
0 & 1 & y_0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Translation}
\times
\underbrace{
\left (
\begin{array}{ c c c}
f_x & 0 & 0 \\
0 & f_y & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Scaling}
\times
\underbrace{
\left (
\begin{array}{ c c c}
1 & s/f_x & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right )
}_\text{2D Shear}
}^\text{Intrinsic Matrix}
\times
\overbrace{
\underbrace{
\left( \begin{array}{c | c}
I & \mathbf{t}
\end{array}\right)
}_\text{3D Translation}
\times
\underbrace{
\left( \begin{array}{c | c}
R & 0 \\ \hline
0 & 1
\end{array}\right)
}_\text{3D Rotation}
}^\text{Extrinsic Matrix}
\end{align}
\]
</div>
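<p>The extrinsic half of the summary can be checked the same way. A sketch (variable names and values are mine) with an arbitrary rotation and translation, confirming that \([R \mid \mathbf{t}]\) factors into a 3D rotation followed by a 3D translation:</p>

```python
import numpy as np

# Arbitrary extrinsics: rotation about z by 30 degrees, plus a translation
th = np.radians(30)
R = np.array([[np.cos(th), -np.sin(th), 0.],
              [np.sin(th),  np.cos(th), 0.],
              [0., 0., 1.]])
t = np.array([1., -2., 3.])

Rt = np.hstack([R, t[:, None]])          # [R | t], the 3x4 extrinsic matrix
T3 = np.hstack([np.eye(3), t[:, None]])  # [I | t], "3D translation"
R4 = np.block([[R, np.zeros((3, 1))],    # [[R, 0], [0, 1]], "3D rotation"
               [np.zeros((1, 3)), np.ones((1, 1))]])
print(np.allclose(T3 @ R4, Rt))  # True
```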
<p>To see all of these transformations in action, head over to my <a href="/perspective_camera_toy.html">Perspective Camera Toy</a> page for an interactive demo of the full perspective camera.</p>
<p>Do you have other ways of interpreting the intrinsic camera matrix? Leave a comment or <a href="/contact.html">drop me a line</a>!</p>
<p>Next time, we'll show how to prepare your calibrated camera to generate stereo image pairs. See you then!</p>
Tue, 13 Aug 2013 00:00:00 -0700
http://ksimek.github.io/2013/08/13/intrinsic/
http://ksimek.github.io/2013/08/13/intrinsic/Calibrated Cameras and gluPerspective<p>After posting my last article <a href="/2013/06/03/calibrated_cameras_in_opengl/">relating glFrustum to the intrinsic camera matrix</a>, I receieved some emails asking how the (now deprecated) <a href="http://pic.dhe.ibm.com/infocenter/aix/v6r1/index.jsp?topic=%2Fcom.ibm.aix.opengl%2Fdoc%2Fopenglrf%2FgluPerspective.htm">gluPerspective</a> function relates to the intrinsic matrix. We can show a similar result with <code>gluPerspective</code> as we did with <code>glFrustum</code>, namely that it is the product of a <code>glOrtho</code> matrix and a (modified) intrinsic camera matrix, but in this case the intrinsic matrix has different constraints. I'll be re-using notation and concepts from the previous article, so if you aren't familiar with them, I recommend reading it first.</p>
<!--more-->
<h2>Decomposing gluPerspective</h2>
<p>The matrix generated by <code>gluPerspective</code> is</p>
<div> \[
\begin{align}
\left (
\begin{array}{cccc}
\frac{f}{\text{aspect}} & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & C' & D' \\
0 & 0 & -1 & 0
\end{array}
\right )
\end{align}
\]
</div>
<p>where</p>
<div> \[
\begin{align}
f &= \cot(fovy/2) \\
C' &= -\frac{far + near}{far - near} \\
D' &= -\frac{2 \; far \; near}{far - near} \\
\end{align}
\]
</div>
<p>Like with <code>glFrustum</code>, <code>gluPerspective</code> permits no axis skew, but it also restricts the viewing volume to be centered around the camera's principal (viewing) axis. This means that the principal point offsets \(x_0\) and \(y_0\) must be zero, <em>and</em> the matrix generated by <code>glOrtho</code> must be centered, i.e. <code>bottom = -top</code> and <code>left = -right</code>. The <em>Persp</em> matrix corresponding to the intrinsic matrix is:</p>
<div>\[ Persp = \left( \begin{array}{cccc} \alpha & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & A & B \\ 0 & 0 & -1 & 0 \end{array} \right) \]</div>
<p>where</p>
<div> \[ \begin{align}
A &= near + far \\
B &= near * far
\end{align} \]
</div>
<p>and the <em>NDC</em> matrix is</p>
<div>\[ \begin{align}
NDC &= \left( \begin{array}{cccc}
\frac{2}{right - left} & 0 & 0 & t_x \\
0 & \frac{2}{top - bottom} & 0 & t_y \\
0 & 0 & -\frac{2}{far - near} & t_z \\
0 & 0 & 0 & 1
\end{array} \right) \\[1.5em]
&= \left( \begin{array}{cccc}
\frac{2}{width} & 0 & 0 & 0 \\
0 & \frac{2}{height} & 0 & 0 \\
0 & 0 & -\frac{2}{far - near} & t_z \\
0 & 0 & 0 & 1
\end{array} \right)
\end{align}
\]</div>
<p>where</p>
<div> \[ \begin{align}
t_x &= -\frac{right + left}{right - left} \\
t_y &= -\frac{top + bottom}{top - bottom} \\
t_z &= -\frac{far + near}{far - near}
\end{align} \]
</div>
<p>It is easy to show that the product \((NDC \times Persp)\) is equivalent to the matrix generated by <code>gluPerspective(fovy, aspect, near, far)</code> with</p>
<div>\[ \begin{align}
\text{fovy} &= 2 \text{arctan}\left (\frac{\text{height}}{2 \beta} \right ) \\
\text{aspect} &= \frac{\beta}{\alpha} \frac{\text{width}}{\text{height}}.
\end{align}
\]
</div>
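<p>A numeric sketch of this equivalence (the helper function and parameter values are mine): build \(NDC \times Persp\) for a centered, zero-skew camera and compare it to the <code>gluPerspective</code> matrix with the fovy and aspect derived above.</p>

```python
import numpy as np

def glu_perspective(fovy, aspect, near, far):
    """The matrix constructed by gluPerspective(fovy, aspect, near, far)."""
    f = 1.0 / np.tan(fovy / 2.0)  # cot(fovy / 2)
    C = -(far + near) / (far - near)
    D = -2.0 * far * near / (far - near)
    return np.array([[f / aspect, 0., 0., 0.],
                     [0., f, 0., 0.],
                     [0., 0., C, D],
                     [0., 0., -1., 0.]])

# Hypothetical calibrated camera with zero skew and a centered principal point
alpha, beta = 800., 760.
width, height = 640., 480.
near, far = 0.1, 100.0

Persp = np.array([[alpha, 0., 0., 0.],
                  [0., beta, 0., 0.],
                  [0., 0., near + far, near * far],
                  [0., 0., -1., 0.]])
NDC = np.array([[2. / width, 0., 0., 0.],
                [0., 2. / height, 0., 0.],
                [0., 0., -2. / (far - near), -(far + near) / (far - near)],
                [0., 0., 0., 1.]])

fovy = 2.0 * np.arctan(height / (2.0 * beta))
aspect = (beta / alpha) * (width / height)
print(np.allclose(NDC @ Persp, glu_perspective(fovy, aspect, near, far)))  # True
```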
<h2>glFrustum vs. gluPerspective</h2>
<p>In my experience, the zero-skew assumption is usually reasonable, so <code>glFrustum</code> can provide a decent approximation to the full intrinsic matrix. However, there is quite often a non-negligible principal point offset (~2% of the image size), even in high-quality cameras. For this reason, <code>gluPerspective</code> might be a good choice for quick-and-dirty demos, but for the most accurate simulation, you should use the full camera matrix <a href="/2013/06/03/calibrated_cameras_in_opengl/">like I described previously</a>.</p>
Tue, 18 Jun 2013 00:00:00 -0700
http://ksimek.github.io/2013/06/18/calibrated-cameras-and-gluperspective/
Calibrated Cameras in OpenGL without glFrustum
<div class="clearer"></div>
<div class='context-img' style='width:317px'>
<img src='/img/augmented_reality.jpg' />
<div class='caption'>Simulating a calibrated camera for augmented reality.
<div class='credit'><a href="http://www.flickr.com/photos/thp4/8060086636/">Credit: thp4</a></div>
</div>
</div>
<p>You've calibrated your camera. You've decomposed it into intrinsic and extrinsic camera matrices. Now you need to use it to render a synthetic scene in OpenGL. You know the extrinsic matrix corresponds to the modelview matrix and the intrinsic is the projection matrix, but beyond that you're stumped. You remember something about <code>gluPerspective</code>, but it only permits two degrees of freedom, and your intrinsic camera matrix has five. <code>glFrustum</code> looks promising, but the mapping between its parameters and the camera matrix isn't obvious, and it looks like you'll have to ignore your camera's axis skew. You may be asking yourself, "I have a matrix, why can't I just use it?"</p>
<p>You can. And you don't have to jettison your axis skew, either. In this article, I'll show how to use your intrinsic camera matrix in OpenGL with minimal modification. For illustration, I'll use OpenGL 2.1 API calls, but the same matrices can be sent to your shaders in modern OpenGL.</p>
<!--more-->
<h2>glFrustum: Two Transforms in One</h2>
<p>To better understand perspective projection in OpenGL, let's examine <code>glFrustum</code>. According to the OpenGL documentation,</p>
<blockquote><p>glFrustum describes a perspective matrix that produces a perspective projection.</p></blockquote>
<p>While this is true, it only tells half of the story.</p>
<p>In reality, <code>glFrustum</code> does two things: first it performs perspective projection, and then it converts to <a href="http://medialab.di.unipi.it/web/IUM/Waterloo/node15.html">normalized device coordinates (NDC)</a>. The former is a common operation in projective geometry, while the latter is OpenGL arcana, an implementation detail.</p>
<p>To give us finer-grained control over these operations, we'll separate projection matrix into two matrices <em>Persp</em> and <em>NDC</em>:</p>
<div>\[ Proj = NDC \times Persp \]</div>
<p>Our intrinsic camera matrix describes a perspective projection, so it will be the key to the <em>Persp</em> matrix. For the <em>NDC</em> matrix, we'll (ab)use OpenGL's <code>glOrtho</code> routine.</p>
<h2>Step 1: Projective Transform</h2>
<p>Our 3x3 intrinsic camera matrix <em>K</em> needs two modifications before it's ready to use in OpenGL. First, for proper clipping, the (3,3) element of <em>K</em> <em>must</em> be -1. OpenGL's camera looks down the <em>negative</em> z-axis, so if \(K_{33}\) is positive, vertices in front of the camera will have a negative <em>w</em> coordinate after projection. In principle, this is okay, but <a href="http://stackoverflow.com/questions/2286529/why-does-sign-matter-in-opengl-projection-matrix">because of how OpenGL performs clipping</a>, all of these points will be clipped.</p>
<p>If \(K_{33}\) isn't -1, your intrinsic and extrinsic matrices need some modifications. Getting the camera decomposition right isn't trivial, so I'll refer the reader to <a href="/2012/08/14/decompose/">my earlier article on camera decomposition</a>, which will walk you through the steps. Part of the result will be the negation of the third column of the intrinsic matrix, so you'll see those elements negated below.</p>
<div>\[ K = \left( \begin{array}{ccc} \alpha & s & -x_0 \\ 0 & \beta & -y_0 \\ 0 & 0 & -1 \end{array} \right) \]</div>
<p>For the second modification, we need to prevent losing Z-depth information, so we'll add an extra row and column to the intrinsic matrix.</p>
<div>\[ Persp = \left( \begin{array}{cccc} \alpha & s & -x_0 & 0 \\ 0 & \beta & -y_0 & 0 \\ 0 & 0 & A & B \\ 0 & 0 & -1 & 0 \end{array} \right) \]</div>
<p>where</p>
<div> \[ \begin{align}
A &= near + far \\
B &= near * far
\end{align} \]
</div>
<p>The new third row preserves the ordering of Z-values while mapping <em>-near</em> and <em>-far</em> onto themselves (after normalizing by <em>w</em>, proof left as an exercise). The result is that points between the clipping planes remain between the clipping planes after multiplication by <em>Persp</em>.</p>
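<p>The exercise is easy to check numerically. A sketch (the helper name is mine) applying the last two rows of <em>Persp</em> to points on the near and far planes, confirming they map onto themselves after the divide by <em>w</em>:</p>

```python
import numpy as np

near, far = 0.1, 100.0
A = near + far
B = near * far

def z_after_persp(z):
    """Apply the third and fourth rows of Persp to (0, 0, z, 1), then
    divide by w, yielding the post-projection depth."""
    z_clip = A * z + B  # third row:  [0, 0, A, B]
    w_clip = -z         # fourth row: [0, 0, -1, 0]
    return z_clip / w_clip

print(round(z_after_persp(-near), 6), round(z_after_persp(-far), 6))  # -0.1 -100.0
```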
<h2>Step 2: Transform to NDC</h2>
<p>The <em>NDC</em> matrix is (perhaps surprisingly) provided by <code>glOrtho</code>. The <em>Persp</em> matrix converts a frustum-shaped space into a cuboid-shaped shape, while <code>glOrtho</code> converts the cuboid space to normalized device coordinates. A call to <code>glOrtho(left, right, bottom, top, near, far)</code> constructs the matrix:</p>
<div>\[ \text{glOrtho} = \left( \begin{array}{cccc} \frac{2}{right - left} & 0 & 0 & t_x \\ 0 & \frac{2}{top - bottom} & 0 & t_y \\ 0 & 0 & -\frac{2}{far - near} & t_z \\ 0 & 0 & 0 & 1 \end{array} \right) \]</div>
<p>where</p>
<div> \[ \begin{align}
t_x &= -\frac{right + left}{right - left} \\
t_y &= -\frac{top + bottom}{top - bottom} \\
t_z &= -\frac{far + near}{far - near}
\end{align} \]
</div>
<p>When calling <code>glOrtho</code>, the <em>near</em> and <em>far</em> parameters should be the same as those used to compute <em>A</em> and <em>B</em> above. The choice of top, bottom, left, and right clipping planes correspond to the dimensions of the original image and the coordinate conventions used during calibration. For example, if your camera was calibrated from an image with dimensions \(W \times H\) and its origin at the top-left, your OpenGL 2.1 code would be</p>
<pre><code>glLoadIdentity();
glOrtho(0, W, H, 0, near, far);
glMultMatrix(persp);
</code></pre>
<p>Note that <em>H</em> is used as the "bottom" parameter and <em>0</em> is the "top," indicating a y-downward axis convention.</p>
<p>If you calibrated using a coordinate system with the y-axis pointing upward and the origin at the center of the image,</p>
<pre><code>glLoadIdentity();
glOrtho(-W/2, W/2, -H/2, H/2, near, far);
glMultMatrix(persp);
</code></pre>
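<p>As a sanity check, the y-down variant above can be written out numerically. This is a hypothetical NumPy sketch with made-up intrinsics; <code>gl_ortho</code> simply evaluates the <code>glOrtho</code> formula given earlier:</p>

```python
import numpy as np

def gl_ortho(left, right, bottom, top, near, far):
    """The matrix constructed by glOrtho (see the formula above)."""
    return np.array([
        [2.0/(right-left), 0.0, 0.0, -(right+left)/(right-left)],
        [0.0, 2.0/(top-bottom), 0.0, -(top+bottom)/(top-bottom)],
        [0.0, 0.0, -2.0/(far-near), -(far+near)/(far-near)],
        [0.0, 0.0, 0.0, 1.0]])

W, H, near, far = 640.0, 480.0, 0.1, 100.0
persp = np.array([[800.0,   0.0, -320.0,      0.0],   # alpha, s, -x0
                  [  0.0, 800.0, -240.0,      0.0],   # beta, -y0
                  [  0.0,   0.0, near+far, near*far],
                  [  0.0,   0.0,   -1.0,      0.0]])

# Top-left origin, y-down: glOrtho(0, W, H, 0, near, far), then persp.
proj = gl_ortho(0.0, W, H, 0.0, near, far) @ persp

# A point on the optical axis projects to the principal point (the image
# center here), which lands at NDC (0, 0).
clip = proj @ np.array([0.0, 0.0, -1.0, 1.0])
assert np.allclose(clip[:2] / clip[3], 0.0)
```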
<p>Note that there is a strong relationship between the <code>glOrtho</code> parameters and the perspective matrix. For example, shifting the viewing volume left by X is equivalent to shifting the principal point right by X. Doubling \(\alpha\) is equivalent to dividing <em>left</em> and <em>right</em> by two. This is the same relationship that exists in a pinhole camera between the camera's geometry and the geometry of its film--shifting the pinhole right is equivalent to shifting the film left; doubling the focal length is equivalent to halving the dimensions of the film. Clearly the two-matrix representation of projection is redundant, but keeping these matrices separate allows us to maintain the logical separation between the camera geometry and the image geometry.</p>
<h2>Equivalence to glFrustum</h2>
<p>We can show that the two-matrix approach above reduces to a single call to <code>glFrustum</code> when \(\alpha\) and \(\beta\) are set to <em>near</em>, and \(s\), \(x_0\), and \(y_0\) are zero. The resulting matrix is:</p>
<div>
\[ \begin{align}
Proj &= NDC * Persp \\[1.5em]
&=
\left( \begin{array}{cccc} \frac{2}{right - left} & 0 & 0 & t_x \\ 0 & \frac{2}{top - bottom} & 0 & t_y \\ 0 & 0 & -\frac{2}{far - near} & t_z \\ 0 & 0 & 0 & 1 \end{array} \right)
*
\left( \begin{array}{cccc} near & 0 & 0 & 0 \\ 0 & near & 0 & 0 \\ 0 & 0 & A & B \\ 0 & 0 & -1 & 0 \end{array} \right) \\[1.5em]
&= \left( \begin{array}{cccc} \frac{2 near}{right - left} & 0 & A' & 0 \\ 0 & \frac{2 near}{top - bottom} & B' & 0 \\ 0 & 0 & C' & D' \\ 0 & 0 & -1 & 0 \end{array} \right)
\end{align} \]
</div>
<p>where</p>
<div> \[ \begin{align}
A' &= \frac{right + left}{right - left} \\
B' &= \frac{top + bottom}{top - bottom} \\
C' &= -\frac{far + near}{far - near} \\
D' &= -\frac{2 \; far \; near}{far - near} \\
\end{align} \] </div>
<p>This is equivalent to <a href="http://www.glprogramming.com/blue/ch05.html#id5478066">the matrix produced by glFrustum</a>.</p>
<p>By tweaking the frame bounds we can relax the constraints imposed above. We can implement focal lengths other than <em>near</em> by scaling the frame:</p>
<div> \[ \begin{align}
left' &= \left( \frac{near}{\alpha} \right) left \\
right' &= \left( \frac{near}{\alpha} \right) right \\
top' &= \left( \frac{near}{\beta} \right) top \\
bottom' &= \left( \frac{near}{\beta} \right) bottom
\end{align} \] </div>
<p>Non-zero principal point offsets are achieved by shifting the frame window. Note that the pixel offsets \(x_0\) and \(y_0\) must be scaled onto the near plane just like the frame bounds:</p>
<div> \[ \begin{align}
left'' &= left' - \left( \frac{near}{\alpha} \right) x_0 \\
right'' &= right' - \left( \frac{near}{\alpha} \right) x_0 \\
top'' &= top' - \left( \frac{near}{\beta} \right) y_0 \\
bottom'' &= bottom' - \left( \frac{near}{\beta} \right) y_0
\end{align} \] </div>
<p>Thus, with a little massaging, <code>glFrustum</code> can simulate a general intrinsic camera matrix with zero axis skew.</p>
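<p>We can verify this equivalence numerically. In the hypothetical NumPy sketch below, the frame bounds are scaled by <em>near</em> over the focal length and shifted by the principal point (with the offsets scaled onto the near plane along with the bounds), and the single <code>glFrustum</code> matrix matches the two-matrix projection exactly:</p>

```python
import numpy as np

def gl_ortho(l, r, b, t, n, f):
    """The matrix constructed by glOrtho, per the OpenGL spec."""
    return np.array([[2/(r-l), 0, 0, -(r+l)/(r-l)],
                     [0, 2/(t-b), 0, -(t+b)/(t-b)],
                     [0, 0, -2/(f-n), -(f+n)/(f-n)],
                     [0, 0, 0, 1.0]])

def gl_frustum(l, r, b, t, n, f):
    """The matrix constructed by glFrustum, per the OpenGL spec."""
    return np.array([[2*n/(r-l), 0, (r+l)/(r-l), 0],
                     [0, 2*n/(t-b), (t+b)/(t-b), 0],
                     [0, 0, -(f+n)/(f-n), -2*f*n/(f-n)],
                     [0, 0, -1.0, 0]])

# Made-up zero-skew intrinsics and a centered, y-up image frame.
alpha, beta, x0, y0 = 800.0, 780.0, 10.0, -5.0
W, H, n, f = 640.0, 480.0, 0.1, 100.0
persp = np.array([[alpha, 0, -x0, 0],
                  [0, beta, -y0, 0],
                  [0, 0, n + f, n * f],
                  [0, 0, -1.0, 0]])
left, right, bottom, top = -W/2, W/2, -H/2, H/2
proj = gl_ortho(left, right, bottom, top, n, f) @ persp

# Scale the frame (and the principal point offsets) onto the near plane.
sx, sy = n / alpha, n / beta
frustum = gl_frustum(sx*(left - x0), sx*(right - x0),
                     sy*(bottom - y0), sy*(top - y0), n, f)
assert np.allclose(proj, frustum)
```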
<h2>The Extrinsic Matrix</h2>
<p>The extrinsic matrix can be used as the modelview matrix without modification: just convert it to a 4x4 matrix by adding an extra row of <em>(0,0,0,1)</em>, and pass it to <code>glLoadMatrix</code> or send it to your shader. If lighting or back-face culling is acting strangely, it's likely that your rotation matrix has a determinant of -1. This results in the geometry rendering in the right place, but with normal-vectors reversed, so your scene is inside-out. The <a href="/2012/08/14/decompose/">previous article on camera decomposition</a> should help you prevent this.</p>
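<p>For example, here is a small hypothetical NumPy helper (not from any library) that pads the extrinsic matrix to 4x4 and catches the determinant problem early:</p>

```python
import numpy as np

def modelview_from_extrinsic(R, t):
    """Build a 4x4 modelview matrix from a 3x3 rotation R and translation t.

    Raises early if R isn't a proper rotation (det +1), the usual cause
    of inside-out scenes and broken lighting."""
    if not np.isclose(np.linalg.det(R), 1.0):
        raise ValueError("det(R) != 1; fix your decomposition first")
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

M = modelview_from_extrinsic(np.eye(3), np.array([0.0, 0.0, -5.0]))
assert np.allclose(M[3], [0.0, 0.0, 0.0, 1.0])
```

<p>Keep in mind that OpenGL expects column-major storage, so when handing a row-major array to <code>glLoadMatrix</code> you would pass the transpose.</p>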
<p>Alternatively, you can convert your rotation matrix to axis-angle form and use <code>glRotate</code>. Remember that the fourth column of the extrinsic matrix is the translation <em>after</em> rotating, so your call to <code>glTranslate</code> should come <em>before</em> <code>glRotate</code>. Check out <a href="/2012/08/22/extrinsic/">this previous article</a> for a longer discussion of the extrinsic matrix, including how to use it with <code>glLookAt</code>.</p>
<h2>Conclusion</h2>
<p>We've seen two different ways to simulate a calibrated camera in OpenGL, one using <code>glFrustum</code> and one using the intrinsic camera matrix directly. If you need to implement radial distortion, it should be possible with a vertex shader, but you'll probably want a high poly count so the curved distortions appear smooth--does anyone have experience with this? In a future article, I'll cover how to accomplish stereo and head-tracked rendering using simple modifications to your intrinsic camera parameters.</p>
Mon, 03 Jun 2013 00:00:00 -0700
http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/
http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/Dissecting the Camera Matrix, Part 2: The Extrinsic Matrix<p>Welcome to the third post in the series "<a href="/2012/08/13/introduction/">The Perspective Camera - An Interactive Tour</a>." In the last post, <a href="/2012/08/14/decompose/">we learned how to decompose the camera matrix</a> into a product of intrinsic and extrinsic matrices. In the next two posts, we'll explore the extrinsic and intrinsic matrices in greater detail. First we'll explore various ways of looking at the extrinsic matrix, with an interactive demo at the end.</p>
<h2>The Extrinsic Camera Matrix</h2>
<p>The camera's extrinsic matrix describes the camera's location in the world, and what direction it's pointing. Those familiar with OpenGL know this as the "view matrix" (or rolled into the "modelview matrix"). It has two components: a rotation matrix, <em>R</em>, and a translation vector <strong><em>t</em></strong>, but as we'll soon see, these don't exactly correspond to the camera's rotation and translation. First we'll examine the parts of the extrinsic matrix, and later we'll look at alternative ways of describing the camera's pose that are more intuitive.</p>
<!--more-->
<p>The extrinsic matrix takes the form of a rigid transformation matrix: a 3x3 rotation matrix in the left-block, and 3x1 translation column-vector in the right:</p>
<div>
\[ [ R \, |\, \boldsymbol{t}] =
\left[ \begin{array}{ccc|c}
r_{1,1} & r_{1,2} & r_{1,3} & t_1 \\
r_{2,1} & r_{2,2} & r_{2,3} & t_2 \\
r_{3,1} & r_{3,2} & r_{3,3} & t_3 \\
\end{array} \right] \]
</div>
<p>It's common to see a version of this matrix with an extra row of (0,0,0,1) added to the bottom. This makes the matrix square, which allows us to further decompose it into a rotation <em>followed by</em> a translation:</p>
<div>
\[
\begin{align}
\left [
\begin{array}{c|c}
R & \boldsymbol{t} \\
\hline
\boldsymbol{0} & 1
\end{array}
\right ] &=
\left [
\begin{array}{c|c}
I & \boldsymbol{t} \\
\hline
\boldsymbol{0} & 1
\end{array}
\right ]
\times
\left [
\begin{array}{c|c}
R & \boldsymbol{0} \\
\hline
\boldsymbol{0} & 1
\end{array}
\right ] \\
&=
\left[ \begin{array}{ccc|c}
1 & 0 & 0 & t_1 \\
0 & 1 & 0 & t_2 \\
0 & 0 & 1 & t_3 \\
\hline
0 & 0 & 0 & 1
\end{array} \right] \times
\left[ \begin{array}{ccc|c}
r_{1,1} & r_{1,2} & r_{1,3} & 0 \\
r_{2,1} & r_{2,2} & r_{2,3} & 0 \\
r_{3,1} & r_{3,2} & r_{3,3} & 0 \\
\hline
0 & 0 & 0 & 1
\end{array} \right]
\end{align}
\]
</div>
<p>This matrix describes how to transform points in world coordinates to camera coordinates. The vector <strong><em>t</em></strong> can be interpreted as the position of the world origin in camera coordinates, and the columns of <em>R</em> represent the directions of the world-axes in camera coordinates.</p>
<p>The important thing to remember about the extrinsic matrix is that it describes how the <em>world</em> is transformed relative to the <em>camera</em>. This is often counter-intuitive, because we usually want to specify how the <em>camera</em> is transformed relative to the <em>world</em>. Next, we'll examine two alternative ways to describe the camera's extrinsic parameters that are more intuitive and how to convert them into the form of an extrinsic matrix.</p>
<h2>Building the Extrinsic Matrix from Camera Pose</h2>
<p>It's often more natural to specify the camera's pose directly rather than specifying how world points should transform to camera coordinates. Luckily, building an extrinsic camera matrix this way is easy: just build a rigid transformation matrix that describes the camera's pose, and then take its inverse.</p>
<p>Let <em>C</em> be a column vector describing the location of the camera-center in world coordinates, and let \(R_c\) be the rotation matrix describing the camera's orientation with respect to the world coordinate axes. The transformation matrix that describes the camera's pose is then \([R_c \,|\, C ]\). Like before, we make the matrix square by adding an extra row of (0,0,0,1). Then the extrinsic matrix is obtained by inverting the camera's pose matrix:</p>
<div>
\begin{align}
\left[
\begin{array}{c|c}
R & \boldsymbol{t} \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]
&=
\left[
\begin{array}{c|c}
R_c & C \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]^{-1} \\
&=
\left[
\left[
\begin{array}{c|c}
I & C \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]
\left[
\begin{array}{c|c}
R_c & 0 \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]
\right]^{-1} & \text{(decomposing rigid transform)} \\
&=
\left[
\begin{array}{c|c}
R_c & 0 \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]^{-1}
\left[
\begin{array}{c|c}
I & C \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]^{-1} & \text{(distributing the inverse)}\\
&=
\left[
\begin{array}{c|c}
R_c^T & 0 \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right]
\left[
\begin{array}{c|c}
I & -C \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right] & \text{(applying the inverse)}\\
&=
\left[
\begin{array}{c|c}
R_c^T & -R_c^TC \\
\hline
\boldsymbol{0} & 1 \\
\end{array}
\right] & \text{(matrix multiplication)}
\end{align}
</div>
<p>When applying the inverse, we use the fact that the inverse of a rotation matrix is its transpose, and inverting a translation matrix simply negates the translation vector. Thus, we see that the relationship between the extrinsic matrix parameters and the camera's pose is straightforward:</p>
<div>
\[
\begin{align}
R &= R_c^T \\
\boldsymbol{t} &= -RC
\end{align}
\]
</div>
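<p>A quick numerical check of this relationship, as a hypothetical NumPy sketch with a made-up pose:</p>

```python
import numpy as np

# Camera at C, rotated 90 degrees about the world y-axis (made-up pose).
C = np.array([1.0, 2.0, 3.0])
Rc = np.array([[ 0.0, 0.0, 1.0],
               [ 0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0]])

# Extrinsic parameters from the pose, as derived above.
R = Rc.T
t = -R @ C

# The 4x4 extrinsic is the inverse of the 4x4 pose matrix...
pose = np.eye(4); pose[:3, :3] = Rc; pose[:3, 3] = C
ext = np.eye(4); ext[:3, :3] = R; ext[:3, 3] = t
assert np.allclose(ext @ pose, np.eye(4))

# ...and it maps the camera center to the camera-frame origin.
assert np.allclose(R @ C + t, 0.0)
```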
<p>Some texts write the extrinsic matrix substituting <em>-RC</em> for <strong><em>t</em></strong>, which mixes world-transform (<em>R</em>) and camera-transform (<em>C</em>) notation.</p>
<h2>The "Look-At" Camera</h2>
<p>Readers familiar with OpenGL might prefer a third way of specifying the camera's pose using <em>(a)</em> the camera's position, <em>(b)</em> what it's looking at, and <em>(c)</em> the "up" direction. In legacy OpenGL, this is accomplished by the <code>gluLookAt()</code> function, so we'll call this the "look-at" camera. Let <em>C</em> be the camera center, <strong><em>p</em></strong> be the target point, and <strong><em>u</em></strong> be the up-direction. The algorithm for computing the rotation matrix is (paraphrased from the <a href="https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml">OpenGL documentation</a>):</p>
<ol>
<li>Compute L = p - C.</li>
<li>Normalize L.</li>
<li>Compute s = L x u. (cross product)</li>
<li>Normalize s.</li>
<li>Compute u' = s x L.</li>
</ol>
<p>The extrinsic rotation matrix is then given by:</p>
<div>
\[
R = \left[
\begin{array}{ccc}
s_1 & s_2 & s_3 \\
u_1' & u_2' & u_3' \\
-L_1 & -L_2 & -L_3
\end{array}
\right]
\]
</div>
<p><em>(Updated May 21, 2014 -- transposed matrix)</em></p>
<p>You can get the translation vector the same way as before, <em><strong>t</strong> = -RC</em>.</p>
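<p>Putting the whole recipe together, here is a hypothetical NumPy sketch (the function name is mine, not part of any API):</p>

```python
import numpy as np

def look_at_rotation(C, p, u):
    """Extrinsic rotation for a camera at C looking at p with up-hint u,
    following the gluLookAt recipe above."""
    L = p - C
    L = L / np.linalg.norm(L)
    s = np.cross(L, u)
    s = s / np.linalg.norm(s)
    u2 = np.cross(s, L)
    return np.vstack([s, u2, -L])   # rows: s, u', -L

R = look_at_rotation(C=np.array([0.0, 0.0, 5.0]),
                     p=np.zeros(3),
                     u=np.array([0.0, 1.0, 0.0]))

# R is a proper rotation, and the viewing direction maps to the
# camera's -z axis.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R @ np.array([0.0, 0.0, -1.0]), [0.0, 0.0, -1.0])
```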
<h2>Try it out!</h2>
<p>Below is an interactive demonstration of the three different ways of parameterizing a camera's extrinsic parameters. Note how the camera moves differently as you switch between the three parameterizations.</p>
<p>This requires a WebGL-enabled browser with Javascript enabled.</p>
<script type="text/javascript" src="/js/geometry/FocalPlaneGeometry.js"></script>
<script type="text/javascript" src="/js/geometry/FrustumGeometry.js"></script>
<script type="text/javascript" src="/js/cam_demo.js"></script>
<div id="webgl_error"></div>
<div id="javascript_error">Javascript is required for this demo.</div>
<div class="demo_3d" style="display:none">
<table style="width: 100%"><tr style="text-align:center;"><td width="50%">Scene</td><td>Image</td></tr></table>
<div id="3d_container" >
</div>
<div class="caption">
<em>Left</em>: scene with camera and viewing volume. Virtual image plane is shown in yellow. <em>Right</em>: camera's image.</div>
<div id="demo_controls">
<ul>
<li><a href="#extrinsic-world-controls">Extrinsic (World)</a></li>
<li><a href="#extrinsic-camera-controls">Extr. (Camera)</a></li>
<li><a href="#extrinsic-lookat-controls">Extr. ("Look-at")</a></li>
<li><a href="#intrinsic-controls">Intrinsic</a></li>
</ul>
<div id="extrinsic-world-controls">
<div class="slider-control">
<div class="slider" id="world_x_slider">
</div>
<div class="slider-label">
\(\boldsymbol{t}_x\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="world_y_slider">
</div>
<div class="slider-label">
\(\boldsymbol{t}_y\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="world_z_slider">
</div>
<div class="slider-label">
\(\boldsymbol{t}_z\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="world_rx_slider">
</div>
<div class="slider-label">
x-Rotation
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="world_ry_slider">
</div>
<div class="slider-label">
y-Rotation
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="world_rz_slider">
</div>
<div class="slider-label">
z-Rotation
</div>
<div class="clearer"></div>
</div>
<p>Adjust extrinsic parameters above.</p>
<p>This is a "world-centric" parameterization. These parameters describe how the <em>world</em> changes relative to the <em>camera</em>. These parameters correspond directly to entries in the extrinsic camera matrix.</p>
<p>As you adjust these parameters, note how the camera moves in the world (left pane) and contrast with the "camera-centric" parameterization:</p>
<ul>
<li>Rotating affects the camera's position (the blue box).</li>
<li>The direction of camera motion depends on the current rotation.</li>
<li>Positive rotations move the camera clockwise (or equivalently, rotate the world counter-clockwise).</li>
</ul>
<p>Also note how the image is affected (right pane):</p>
<ul>
<li>Rotating never moves the world origin (red ball).</li>
<li>Changing \(t_x\) always moves the spheres horizontally, regardless of rotation. </li>
<li>Increasing \(t_z\) always moves the camera closer to the world origin. </li>
</ul>
</div>
<div id="extrinsic-camera-controls">
<div class="slider-control">
<div class="slider" id="camera_x_slider">
</div>
<div class="slider-label">
\(C_x\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="camera_y_slider">
</div>
<div class="slider-label">
\(C_y\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="camera_z_slider">
</div>
<div class="slider-label">
\(C_z\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="camera_rx_slider">
</div>
<div class="slider-label">
x-Rotation
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="camera_ry_slider">
</div>
<div class="slider-label">
y-Rotation
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="camera_rz_slider">
</div>
<div class="slider-label">
z-Rotation
</div>
<div class="clearer"></div>
</div>
<p>Adjust extrinsic parameters above.</p>
<p>This is a "camera-centric" parameterization, which describes how the <em>camera</em> changes relative to the <em>world</em>. These parameters correspond to elements of the <em>inverse</em> extrinsic camera matrix.</p>
<p>As you adjust these parameters, note how the camera moves in the world (left pane) and contrast with the "world-centric" parameterization:</p>
<ul>
<li>Rotation occurs about the camera's position (the blue box).</li>
<li>The direction of camera motion is independent of the current rotation.</li>
<li>A positive rotation rotates the camera counter-clockwise (or equivalently, rotates the world clockwise).</li>
<li>Increasing \(C_y\) always moves the camera toward the sky, regardless of rotation. </li>
</ul>
<p>Also note how the image is affected (right pane):</p>
<ul>
<li>Rotating around y moves both spheres horizontally.</li>
<li>With different rotations, changing \(C_x\) moves the spheres in different directions. </li>
</ul>
</div>
<div id="extrinsic-lookat-controls">
<div class="slider-control">
<div class="slider" id="lookat_x_slider">
</div>
<div class="slider-label">
\(C_x\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="lookat_y_slider">
</div>
<div class="slider-label">
\(C_y\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="lookat_z_slider">
</div>
<div class="slider-label">
\(C_z\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="lookat_px_slider">
</div>
<div class="slider-label">
\(p_x\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="lookat_py_slider">
</div>
<div class="slider-label">
\(p_y\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="lookat_pz_slider">
</div>
<div class="slider-label">
\(p_z\)
</div>
<div class="clearer"></div>
</div>
<p>Adjust extrinsic parameters above.</p>
<p>This is a "look-at" parameterization, which describes the camera's orientation in terms of what it is looking at. Adjust \(p_x\), \(p_y\), and \(p_z\) to change where the camera is looking (orange dot). The up vector is fixed at (0,1,0)'. Notice that moving the camera center, <em>C</em>, causes the camera to rotate.</p>
<p>
</p>
</div>
<div id="intrinsic-controls">
<div class="slider-control">
<div class="slider" id="focal_slider">
</div>
<div class="slider-label">
Focal Length
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="skew_slider">
</div>
<div class="slider-label">
Axis Skew
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="x0_slider">
</div>
<div class="slider-label">
\(x_0\)
</div>
<div class="clearer"></div>
</div>
<div class="slider-control">
<div class="slider" id="y0_slider">
</div>
<div class="slider-label">
\(y_0\)
</div>
<div class="clearer"></div>
</div>
<p>Adjust intrinsic parameters above. As you adjust these parameters, observe how the viewing volume changes in the left pane:</p>
<ul>
<li> Changing the focal length moves the yellow focal plane, which changes the field-of-view angle of the viewing volume.</li>
<li> Changing the principal point affects where the green center-line intersects the focal plane.</li>
<li> Setting skew to non-zero causes the focal plane to be non-rectangular.</li>
</ul>
<p>Intrinsic parameters result in 2D transformations only; the depth of objects is ignored. To see this, observe how the image in the right pane is affected by changing intrinsic parameters:</p>
<ul>
<li>Changing the focal length scales the near sphere and the far sphere equally.</li>
<li>Changing the principal point has no effect on parallax.</li>
<li>No combination of intrinsic parameters will reveal occluded parts of an object.</li>
</ul>
</div>
</div>
</div>
<p><br /></p>
<h2>Conclusion</h2>
<p>We've just explored three different ways of parameterizing a camera's extrinsic state. Which parameterization you prefer to use will depend on your application. If you're writing a Wolfenstein-style FPS, you might like the world-centric parameterization, because moving along \(t_z\) always corresponds to walking forward. Or you might be interpolating a camera through waypoints in your scene, in which case the camera-centric parameterization is preferred, since you can specify the position of your camera directly. If you aren't sure which you prefer, play with the tool above and decide which approach feels the most natural.</p>
<p>Join us next time <a href="/2013/08/13/intrinsic/">when we explore the intrinsic matrix</a>, and we'll learn why hidden parts of your scene can never be revealed by zooming your camera. See you then!</p>
<p><br /></p>
Wed, 22 Aug 2012 00:00:00 -0700
http://ksimek.github.io/2012/08/22/extrinsic/
http://ksimek.github.io/2012/08/22/extrinsic/Dissecting the Camera Matrix, Part 1: Extrinsic/Intrinsic Decomposition<div class="clearer"></div>
<div class='context-img' style='width:320px'>
<img src='/img/decompose.jpg' />
<div class='caption'>Not this kind of decomposition.
<div class='credit'><a href="http://www.flickr.com/photos/dhollister/2596483147/">Credit: Daniel Hollister</a></div>
</div>
</div>
<p>So, you've been playing around with a new computer vision library, and you've managed to calibrate your camera... now what do you do with it? It would be a lot more useful if you could get the camera's position or find out its field-of-view. You crack open your trusty copy of <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Hartley and Zisserman</a>, which tells you how to decompose your camera into an intrinsic and extrinsic matrix --- great! But when you look at the results, something isn't quite right. Maybe your rotation matrix has a determinant of -1, causing your matrix-to-quaternion function to barf. Maybe your focal-length is negative, and you can't understand why. Maybe your translation vector mistakenly claims that the world origin is <em>behind</em> the camera. Or worst of all, everything looks fine, but when you plug it into OpenGL, you just don't see <em>anything</em>.</p>
<p>Today we'll cover the process of decomposing a camera matrix into intrinsic and extrinsic matrices, and we'll try to untangle the issues that can crop-up with different coordinate conventions. In later articles, we'll study the <a href="/2013/08/13/intrinsic/">intrinsic</a> and <a href="/2012/08/22/extrinsic/">extrinsic</a> matrices in more detail, and I'll cover <a href="/2013/06/03/calibrated_cameras_in_opengl/">how to convert them into a form usable by OpenGL</a>.</p>
<!--more-->
<p>This is the second article in the series, "<a href="/2012/08/13/introduction/">The Perspective Camera, an Interactive Tour</a>." To read other articles in this series, head over to the <a href="/2012/08/13/introduction/#toc">introduction page</a>.</p>
<h2>Prologue: Getting a Camera Matrix</h2>
<p>I'll assume you've already obtained your camera matrix beforehand, but if you're looking for help with camera calibration, I recommend looking into the <a href="http://www.vision.caltech.edu/bouguetj/calib_doc/">Camera Calibration Toolbox for Matlab</a>. OpenCV also seems to have <a href="http://opencv.willowgarage.com/documentation/python/camera_calibration_and_3d_reconstruction.html">some useful routines</a> for automatic camera calibration from a sequence of chessboard images, although I haven't personally used them. As usual, <a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">Hartley and Zisserman</a> has a nice treatment of the topic.</p>
<h2>Cut 'em Up: Camera Decomposition <a href="http://www.break.com/video/ugc/mitch-hedberg-on-pringles-169072" class="huh">[?]</a></h2>
<p>To start, we'll assume your camera matrix is 3x4, which transforms homogeneous 3D world coordinates to homogeneous 2D image coordinates. Following Hartley and Zisserman, we'll denote the matrix as <em>P</em>, and occasionally it will be useful to use the block-form:</p>
<div>
\[ P = [M \,| -MC] \]
</div>
<p>where <em>M</em> is an invertible 3x3 matrix, and <em>C</em> is a column-vector representing the camera's position in world coordinates. Some calibration software provides a 4x4 matrix, which adds an extra row to preserve the <em>z</em>-coordinate. In this case, just drop the third row to get a 3x4 matrix.</p>
<p>The camera matrix by itself is useful for projecting 3D points into 2D, but it has several drawbacks:</p>
<ul>
<li>It doesn't tell you the camera's pose.</li>
<li>It doesn't tell you about the camera's internal geometry.</li>
<li>Specular lighting isn't possible, since you can't get surface normals in camera coordinates.</li>
</ul>
<p>To address these drawbacks, a camera matrix can be decomposed into the product of two matrices: an intrinsic matrix, <em>K</em>, and an extrinsic matrix, \([R \, |\, -RC ]\):</p>
<div>\[P = K [R \,| -RC ] \]</div>
<p>The matrix <em>K</em> is a 3x3 upper-triangular matrix that describes the camera's internal parameters like focal length. <em>R</em> is a 3x3 rotation matrix whose columns are the directions of the world axes in the camera's reference frame. The vector <em>C</em> is the camera center in world coordinates; the vector <em><strong>t</strong> = -RC</em> gives the position of the world origin in camera coordinates. We'll study each of these matrices in more detail in later articles; today we'll just discuss how to get them from <em>P</em>.</p>
<p>Recovering the camera center, <em>C</em>, is straightforward. Note that the last column of <em>P</em> is <em>-MC</em>, so just left-multiply it by \(-M^{-1}\).</p>
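<p>In code, that one-liner looks like this (a hypothetical NumPy sketch with a made-up camera):</p>

```python
import numpy as np

# A hypothetical camera matrix P = [M | -MC] with made-up M and C.
M = np.array([[800.0,   2.0, 320.0],
              [  0.0, 795.0, 240.0],
              [  0.0,   0.0,   1.0]])
C_true = np.array([1.0, -2.0, 10.0])
P = np.hstack([M, (-M @ C_true)[:, np.newaxis]])

# Recover the camera center: left-multiply the last column by -M^{-1}.
C = -np.linalg.inv(P[:, :3]) @ P[:, 3]
assert np.allclose(C, C_true)
```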
<h2>Before You RQ-ze Me... <a href="http://www.youtube.com/watch?v=jQAvWte8w0c" class="huh">[?]</a></h2>
<p>To recover R and K, we note that R is orthogonal by virtue of being a rotation matrix, and K is upper-triangular. Any full-rank matrix can be decomposed into the product of an upper-triangular matrix and an orthogonal matrix by using <a href="http://en.wikipedia.org/wiki/QR_decomposition">RQ-decomposition</a>. Unfortunately, RQ-decomposition isn't available in many libraries, including Matlab, but luckily its friend QR-decomposition usually is. <a href="http://www.janeriksolem.net/2011/03/rq-factorization-of-camera-matrices.html">Solem's vision blog</a> has a nice article implementing the missing function using a few matrix flips; here's a Matlab version (thanks to Solem for letting me repost this!):</p>
<div class="highlight"><pre><code class="matlab"><span class="k">function</span><span class="w"> </span>[R Q] <span class="p">=</span><span class="w"> </span><span class="nf">rq</span><span class="p">(</span>M<span class="p">)</span><span class="w"></span>
<span class="w"> </span><span class="p">[</span><span class="n">Q</span><span class="p">,</span><span class="n">R</span><span class="p">]</span> <span class="p">=</span> <span class="n">qr</span><span class="p">(</span><span class="nb">flipud</span><span class="p">(</span><span class="n">M</span><span class="p">)</span><span class="o">'</span><span class="p">)</span>
<span class="n">R</span> <span class="p">=</span> <span class="nb">flipud</span><span class="p">(</span><span class="n">R</span><span class="o">'</span><span class="p">);</span>
<span class="n">R</span> <span class="p">=</span> <span class="nb">fliplr</span><span class="p">(</span><span class="n">R</span><span class="p">);</span>
<span class="n">Q</span> <span class="p">=</span> <span class="n">Q</span><span class="o">'</span><span class="p">;</span>
<span class="n">Q</span> <span class="p">=</span> <span class="nb">flipud</span><span class="p">(</span><span class="n">Q</span><span class="p">);</span>
</code></pre></div>
<p> Easy!</p>
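<p>If you're working in Python instead, the same flip trick works with NumPy's <code>qr</code> (a sketch of my own; note that SciPy also ships <code>scipy.linalg.rq</code> directly):</p>

```python
import numpy as np

def rq(M):
    """RQ-decomposition built from QR, mirroring the Matlab flips above."""
    Q, R = np.linalg.qr(np.flipud(M).T)
    R = np.fliplr(np.flipud(R.T))
    Q = np.flipud(Q.T)
    return R, Q

M = np.array([[720.0,   1.0, 300.0],
              [  2.0, 700.0, 250.0],
              [  0.5,   0.2,   1.0]])
R, Q = rq(M)
assert np.allclose(R @ Q, M)             # valid factorization
assert np.allclose(np.tril(R, -1), 0.0)  # R is upper-triangular
assert np.allclose(Q @ Q.T, np.eye(3))   # Q is orthogonal
```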
<h2> I'm seeing double... FOUR decompositions! <a href="http://imgur.com/1pAsu" class="huh">[?]</a></h2>
<p>There's only one problem: the result of RQ-decomposition isn't unique. To see this, try negating any column of <em>K</em> and the corresponding row of <em>R</em>: the resulting camera matrix is unchanged. Most people simply force the diagonal elements of <em>K</em> to be positive, which is the correct approach if two conditions are true:</p>
<ol>
<li>your image's X/Y axes point in the same direction as your camera's X/Y axes.</li>
<li>your camera looks in the positive-<em>z</em> direction.</li>
</ol>
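<p>You can verify this ambiguity in a couple of lines (hypothetical values):</p>

```python
import numpy as np

K = np.array([[800.0,   1.0, 320.0],
              [  0.0, 790.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)  # any rotation works here

for i in range(3):
    D = np.eye(3)
    D[i, i] = -1.0
    # Negating column i of K and row i of R leaves the product
    # unchanged, since (K D)(D R) = K R.
    assert np.allclose((K @ D) @ (D @ R), K @ R)
```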
<p>Solem's blog elegantly gives us positive diagonal entries in three lines of code:</p>
<div class="highlight"><pre><code class="matlab">% <span class="n">make</span> <span class="n">diagonal</span> <span class="n">of</span> <span class="n">K</span> <span class="n">positive</span>
<span class="n">T</span> <span class="p">=</span> <span class="nb">diag</span><span class="p">(</span><span class="nb">sign</span><span class="p">(</span><span class="nb">diag</span><span class="p">(</span><span class="n">K</span><span class="p">)));</span>
<span class="n">K</span> <span class="p">=</span> <span class="n">K</span> <span class="o">*</span> <span class="n">T</span><span class="p">;</span>
<span class="n">R</span> <span class="p">=</span> <span class="n">T</span> <span class="o">*</span> <span class="n">R</span><span class="p">;</span> % <span class="p">(</span><span class="n">T</span> <span class="n">is</span> <span class="n">its</span> <span class="n">own</span> <span class="n">inverse</span><span class="p">)</span>
</code></pre></div>
<p> In practice, the camera and image axes often won't agree, and the diagonal elements of <em>K</em> shouldn't all be positive. Forcing them to be positive can result in nasty side-effects, including:</p>
<ul>
<li> The objects appear on the wrong side of the camera.</li>
<li> The rotation matrix has a determinant of -1 instead of 1.</li>
<li> Incorrect specular lighting.</li>
<li> Visible geometry won't render <a href="http://stackoverflow.com/questions/2286529/why-does-sign-matter-in-opengl-projection-matrix">due to having a negative <em>w</em> coordinate</a>.</li>
</ul>
<div class='context-img' style='width:321px'>
<img src='/img/hz_camera.png' />
<div class='caption'>Hartley and Zisserman's coordinate conventions. Note that camera and image <em>x</em>-axes point left when viewed from the camera's POV.
<div class='credit'><a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">From "Multiple View Geometry in Computer Vision"</a></div>
</div>
</div>
<p> In this case, you've got some fixing to do. Start by making sure that your camera and world coordinates both have the same <a href="http://en.wikipedia.org/wiki/Right-hand_rule">handedness</a>. Then take note of the axis conventions you used when you calibrated your camera. What direction did the image <em>y</em>-axis point, up or down? The <em>x</em>-axis? Now consider your camera's coordinate axes. Does your camera look down the negative-<em>z</em> axis (OpenGL-style)? Positive-<em>z</em> (like Hartley and Zisserman)? Does the <em>x</em>-axis point left or right? The <em>y</em>-axis? Okay, okay, you get the idea.</p>
<p> Starting from an all-positive diagonal, follow these four steps:</p>
<ol>
<li>If the image <em>x</em>-axis and camera <em>x</em>-axis point in opposite directions, negate the first column of <em>K</em> and the first row of <em>R</em>.</li>
<li>If the image <em>y</em>-axis and camera <em>y</em>-axis point in opposite directions, negate the second column of <em>K</em> and the second row of <em>R</em>.</li>
<li>If the camera looks down the <strong>negative</strong>-<em>z</em> axis, negate the third column of <em>K</em>. <del><em>Leave R unchanged</em>.</del> <em>Edit: Also negate the third column of R</em>.</li>
<li>If the determinant of <em>R</em> is -1, negate it.</li>
</ol>
<p>Note that each of these steps leaves the combined camera matrix unchanged. The last step is equivalent to multiplying the entire camera matrix, <em>P</em>, by -1. Since <em>P</em> operates on homogeneous coordinates, multiplying it by any constant has no effect.</p>
<p>Regarding step 3, Hartley and Zisserman's camera looks down the positive-<em>z</em> direction, but in some real-world systems (e.g. OpenGL), the camera looks down the negative-<em>z</em> axis. This allows the <em>x</em> and <em>y</em> axes to point right and up, resulting in a coordinate system that feels natural while still being right-handed. Step 3 above corrects for this by causing <em>w</em> to be positive when <em>z</em> is negative. You may balk at the fact that \(K_{3,3}\) is negative, but OpenGL <em>requires</em> this for proper clipping. We'll discuss OpenGL more in a future article.</p>
<p>You can double-check the result by inspecting the vector \(\mathbf{t} = -RC\), which is the location of the world origin in camera coordinates. If everything is correct, the signs of \(t_x\), \(t_y\), and \(t_z\) should reflect where the world origin appears in the camera (left/right of center, above/below center, and in front of/behind the camera, respectively).</p>
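As a quick worked example of this sanity check, here is a NumPy snippet with hypothetical numbers (identity rotation, a made-up camera center):

```python
import numpy as np

# Hypothetical example: camera axes aligned with the world axes (R = I),
# camera center at C = (2, -1, 5) in world coordinates.
R = np.eye(3)
C = np.array([2.0, -1.0, 5.0])

t = -R @ C  # the world origin expressed in camera coordinates
print(t)    # [-2.  1. -5.]
```

Under Hartley and Zisserman's conventions (<em>x</em> right, <em>y</em> down, <em>z</em> forward), this says the world origin appears left of center (\(t_x < 0\)), below center (\(t_y > 0\)), and behind the camera (\(t_z < 0\)), as you would expect for a camera sitting 5 units down the +<em>z</em> axis from the origin.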
<h2><a id="flipaxis"></a> Who Flipped my Axes? </h2>
<p>Until now, our discussion of 2D coordinate conventions has referred to the coordinates used during calibration. If your application uses a different 2D coordinate convention, you'll need to transform K using 2D translation and reflection.</p>
<p>For example, consider a camera matrix that was calibrated with the origin in the top-left and the <em>y</em>-axis pointing downward, but you prefer a bottom-left origin with the <em>y</em>-axis pointing upward. To convert, you'll first negate the image <em>y</em>-coordinate and then translate upward by the image height, <em>h</em>. The resulting intrinsic matrix <em>K'</em> is given by:</p>
<div>
\[
K' = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & h \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix}1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \; K
\]
</div>
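The same conversion as a short NumPy sketch (the function name is my own; <code>h</code> is the image height in pixels):

```python
import numpy as np

def flip_image_y(K, h):
    """Convert K from a top-left origin with y pointing down to a
    bottom-left origin with y pointing up (h = image height in pixels)."""
    flip = np.array([[1.0, 0.0, 0.0],
                     [0.0, -1.0, 0.0],   # negate the image y-coordinate
                     [0.0, 0.0, 1.0]])
    shift = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, float(h)],  # translate up by the height
                      [0.0, 0.0, 1.0]])
    return shift @ flip @ K
```

A point that projected to \((x, y)\) under the old <em>K</em> projects to \((x, h - y)\) under <em>K'</em>.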
<h2>Summary</h2>
<p>The procedure above should give you a correct camera decomposition regardless of the coordinate conventions you use. I've tested it in a handful of scenarios in my own research, and it has worked so far. Of course, if you have any problems with this approach, I'm eager to hear about them, just leave a message in the comments, or <a href="/contact.html">email me</a>.</p>
<p>In the next article, we'll <a href="/2012/08/22/extrinsic/">investigate the extrinsic matrix</a> in more detail, with interactive demos.</p>
Tue, 14 Aug 2012 00:00:00 -0700
http://ksimek.github.io/2012/08/14/decompose/

<h1>The Perspective Camera - An Interactive Tour</h1>
<div class='context-img' style='width:350px'>
<img src='/img/1st_and_ten.jpg' />
<div class='caption'>The "1st and Ten" system, one of the first successful applications of augmented reality in sports.
<div class='credit'><a href="http://www.howstuffworks.com/first-down-line.htm">howstuffworks.com</a></div>
</div>
</div>
<p>On September 27, 1998, a yellow line appeared across the gridiron during an otherwise ordinary football game between the Cincinnati Bengals and the Baltimore Ravens. It had been added by a computer that analyzed the camera's position and the shape of the ground in real time in order to overlay a thin yellow strip onto the field. The line marked the position of the next first down, but it also marked the beginning of a new era of computer vision in live sports, from <a href="http://www.youtube.com/watch?v=p-y7N-giirQ">computerized pitch analysis</a> in baseball to <a href="http://www.youtube.com/watch?v=Cgeb61VIKvo">automatic line-refs</a> in tennis.</p>
<p>In 2006, researchers from Microsoft and the University of Washington <a href="http://www.youtube.com/watch?v=IgBQCoEfiMs">automatically constructed a 3D tour of the Trevi Fountain in Rome</a> using only images obtained by searching Flickr for "trevi AND rome."</p>
<p>In 2007, Carnegie Mellon PhD student Johnny Lee <a href="http://www.youtube.com/watch?v=Jd3-eiid-Uw">hacked a $40 Nintendo Wii-mote</a> into an impressive head-tracking virtual reality interface.</p>
<p>In 2010, <a href="http://en.wikipedia.org/wiki/Kinect">Microsoft released the Kinect</a>, a consumer depth camera that rivaled the functionality of competitors sold for ten times its price; it continues to disrupt the worlds of both gaming and computer vision.</p>
<p>What do all of these technologies have in common? They all require a precise understanding of how the pixels in a 2D image relate to the 3D world they represent. In other words, they all hinge on a strong camera model. This is the first in a series of articles that explores one of the most important camera models in computer vision: the pinhole perspective camera. We'll start by deconstructing the perspective camera to show how each of its parts affect the rendering of a 3D scene. Next, we'll describe how to import your calibrated camera into OpenGL to render virtual objects into a real image. Finally, we'll show how to use your perspective camera to implement rendering in a virtual-reality system, complete with stereo rendering and head-tracking.</p>
<div class='context-img' style='width:180px'>
<a href="http://www.robots.ox.ac.uk/~vgg/hzbook/">
<img src='/img/h_and_z.jpg' />
</a>
<div class='caption'>
These articles won't cover everything. This book does.
</div>
</div>
<p>This series of articles is intended as a supplement to a more rigorous treatment available in several excellent textbooks. I will focus on providing what textbooks generally don't: interactive demos, runnable code, and practical advice on implementation. I will assume the reader has a basic understanding of 3D graphics and OpenGL, as well as some background in computer vision. In other words, if you've never heard of homogeneous coordinates or a camera matrix, you might want to start with an introductory book on computer vision. I highly recommend <a href="http://www.amazon.com/Multiple-View-Geometry-Computer-Vision/dp/0521540518/ref=sr_1_fkmr1_1?ie=UTF8&qid=1343611611&sr=8-1-fkmr1&keywords=harley+and+zisserman">Multiple View Geometry in Computer Vision</a> by Hartley and Zisserman, from which I borrow mathematical notation and conventions (e.g. column vectors, right-handed coordinates).</p>
<!--more-->
<h2>Technical Requirements</h2>
<p>Equations in these articles are typeset using MathJax, which won't display if you've disabled JavaScript or <a href="http://www.mathjax.org/resources/browser-compatibility/">are using a browser that is woefully out of date</a> (sorry IE 5 users). If everything is working, you should see a matrix below:</p>
<div>
\[
\left (
\begin{array}{c c c}
a^2 & b^2 & c^2 \\
d^2 & e^2 & f^2 \\
g^2 & h^2 & i^2
\end{array}
\right )
\]
</div>
<p>3D interactive demos are provided by <a href="https://github.com/mrdoob/three.js/">three.js</a>, which also needs JavaScript and prefers a browser that supports WebGL ( <a href="https://www.google.com/intl/en/chrome/browser/">Google Chrome</a> works great, as does <a href="http://www.mozilla.org/en-US/firefox/fx/#desktop">the latest version of Firefox</a>). Older browsers will render using canvas, which will run slowly, look ugly, and hurl vicious insults at you. But it should work. If you see two spheres below, you're in business.</p>
<script>
requestAnimFrame = (function(){
return window.requestAnimationFrame ||
window.webkitRequestAnimationFrame ||
window.mozRequestAnimationFrame ||
window.oRequestAnimationFrame ||
window.msRequestAnimationFrame ||
function( callback ){
window.setTimeout(callback, 1000 / 60);
};
})();
var $container;
var mouseDX = 0, mouseDY = 0;
var mouseDownX, mouseDownY;
var x0, y0, s, fx, fy;
var rot_y, tx, ty, tz;
// set the scene size
var WIDTH = 400,
HEIGHT = 300;
// set some camera attributes
var VIEW_ANGLE = 45,
ASPECT = WIDTH / HEIGHT,
NEAR = 0.1,
FAR = 10000;
// get the DOM element to attach to
// - assume we've got jQuery to hand
// create a WebGL renderer, camera
// and a scene
var renderer = new THREE.WebGLRenderer();
// var renderer = new THREE.CanvasRenderer();
moveParameter = moveCameraCenter;
//moveParameter = moveCameraPP;
//moveParameter = zoomCamera;
var default_focal = HEIGHT / 2 / Math.tan(VIEW_ANGLE * Math.PI / 360);
var camera =
new THREE.CalibratedCamera(
default_focal, default_focal,
0, 0,
0,
WIDTH,
HEIGHT,
NEAR,
FAR);
var scene = new THREE.Scene();
// add the camera to the scene
scene.add(camera);
// the camera starts at 0,0,0
// so pull it back
camera.position.z = 300;
// start the renderer
renderer.setSize(WIDTH, HEIGHT);
// set up the sphere vars
var radius = 50,
segments = 16,
rings = 16;
// create the sphere's material
var sphereMaterial =
new THREE.MeshLambertMaterial(
{
color: 0xCC0000
});
var sphere2Material =
new THREE.MeshLambertMaterial(
{
color: 0x00CC00
});
var sphere = new THREE.Mesh(
new THREE.SphereGeometry(
radius,
segments,
rings),
sphereMaterial);
var sphere2 = new THREE.Mesh(
new THREE.SphereGeometry(
radius,
segments,
rings),
sphere2Material);
sphere2.position.z -= 100;
sphere2.position.x -= 100;
// add the sphere to the scene
scene.add(sphere);
scene.add(sphere2);
// create a point light
var pointLight =
new THREE.PointLight(0xFFFFFF);
// set its position
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
// add to the scene
scene.add(pointLight);
function onMouseDown(event)
{
$(document).mousemove(onMouseMove);
$(document).mouseup(onMouseUp);
$(document).mouseout(onMouseOut);
mouseDownX = event.screenX;
mouseDownY = event.screenY;
}
function onMouseMove(event)
{
var mouseX = event.screenX;
var mouseY = event.screenY;
var mouseDX = mouseX - mouseDownX;
var mouseDY = mouseY - mouseDownY;
moveParameter(mouseDX, mouseDY);
render();
}
function onMouseOut(event)
{
removeListeners();
}
function onMouseUp(event)
{
removeListeners();
}
function removeListeners()
{
$(document).unbind( 'mousemove');
$(document).unbind( 'mouseup');
$(document).unbind( 'mouseout');
}
function onTouchStart(event)
{
if ( event.touches.length == 1 ) {
event.preventDefault();
mouseDownX = event.touches[ 0 ].pageX;
mouseDownY = event.touches[ 0 ].pageY;
}
}
function onTouchMove(event)
{
if ( event.touches.length == 1 ) {
event.preventDefault();
var mouseX = event.touches[ 0 ].pageX;
var mouseY = event.touches[ 0 ].pageY;
var mouseDX = mouseX - mouseDownX;
var mouseDY = mouseY - mouseDownY;
moveParameter(mouseDX, mouseDY);
render();
}
}
function zoomCamera(param1, param2)
{
camera.fx = default_focal + 2*param2;
camera.fy = default_focal + 2*param2;
camera.s = -2*param1;
camera.updateProjectionMatrix();
}
// move camera's principal point
function moveCameraPP(param1, param2)
{
camera.x0 = param1;
camera.y0 = -param2;
camera.updateProjectionMatrix();
}
function moveCameraCenter(param1, param2)
{
camera.position.x = param1;
camera.position.y = -param2;
}
function animLoop()
{
requestAnimFrame(animLoop);
render();
}
function render()
{
renderer.render(scene, camera);
}
// attach the render-supplied DOM element
$(document).ready(function(){
$container = $('#3d_container');
$container.prepend(renderer.domElement);
$container.mousedown(onMouseDown);
$container.bind( 'touchstart', onTouchStart);
$container.bind( 'touchmove', onTouchMove);
render();
});
</script>
<div class="demo_3d">
<div id="3d_container" >
</div>
<div class="caption">3D demo. Drag to move camera. </div>
</div>
<p><a name="toc"></a></p>
<h2>Table of Contents </h2>
<p>Below is a list of all the articles in this series. New articles will be added to this list as I post them, so you can always return to this page for an up-to-date listing.</p>
<ul>
<li><a href="/2012/08/14/decompose/">Dissecting the Camera Matrix, Part 1: Intrinsic/Extrinsic Decomposition</a></li>
<li><a href="/2012/08/22/extrinsic/">Dissecting the Camera Matrix, Part 2: The Extrinsic Matrix</a></li>
<li>Simulating your Calibrated Camera in OpenGL - <a href="/2013/06/03/calibrated_cameras_in_opengl/">part 1</a>, <a href="/2013/06/18/calibrated-cameras-and-gluperspective/">part 2</a></li>
<li><a href="/2013/08/13/intrinsic/">Dissecting the Camera Matrix, Part 3: The Intrinsic Matrix</a></li>
<li>Stereo Rendering using a Calibrated Camera</li>
<li>Head-tracked Display using a Calibrated Camera</li>
</ul>
<p>Happy reading!</p>
Mon, 13 Aug 2012 00:00:00 -0700
http://ksimek.github.io/2012/08/13/introduction/