EUROGRAPHICS 2009 / P. Alliez and M. Magnor (Short Paper)

Depth of Field in Plenoptic Cameras

T. Georgiev¹ and A. Lumsdaine²
¹ Adobe Systems, Inc., San Jose, CA, USA
² Indiana University, Bloomington, IN, USA

Abstract
Certain new algorithms used by plenoptic cameras require focused microlens images. The range of applicability of these algorithms therefore depends on the depth of field of the relay system comprising the plenoptic camera. We analyze the relationships and tradeoffs between camera parameters and depth of field, and characterize conditions for optimal refocusing, stereo, and 3D imaging.

Categories and Subject Descriptors (according to ACM CCS): I.4.3 [Image Processing and Computer Vision]: Imaging Geometry

1. Introduction

Capture and display of 3D images is becoming increasingly popular with recent work on 3D displays, movies, and video. It is likely that high-quality 3D photography and image processing will eventually replace current 2D photography and image processing in applications like Adobe Photoshop.

Fully 3D or "integral" photography was first introduced by Lippmann [Lip08] and improved over the years by many researchers [Ive28, IMG00, LH96, Ng05]. The use of film as a medium for integral photography restricted its practicality. However, the approach found new life with digital photography. Initial work by Adelson [AW92], along with further improvements by Ng [Ng05] and Fife [FGW08], held out significant promise that the plenoptic camera could be the 3D camera of the future.

One impediment to realizing this promise has been the limited resolution of plenoptic cameras. However, recent results [LG08] suggest that a "Plenoptic 2.0" camera can capture much higher resolution based on appropriate focusing of the microlenses. In this modified plenoptic camera, the microlenses are focused on the image created "in air" by the main camera lens. In this way each microlens works together with the main lens as a relay system, forming on the sensor a true image of part of the photographed object.

In this short paper we analyze the parameters of the modified plenoptic camera for the purpose of achieving optimal focusing and depth of field for 3D imaging. We propose a setting in which the two possible modes of relay imaging can be realized at the same time. Our experimental results demonstrate that such parameters work in practice to generate a large depth of field.

2. The two modes of focusing of the plenoptic camera

We treat the plenoptic camera as a relay system: the main lens creates a main image in the air, and this main image is then remapped to the sensor by the microlens array. Depending on where the microlens array is located relative to the main image, we have two different modes of operation: Keplerian or Galilean.

2.1. Keplerian mode

In this mode the main image is formed in front of the microlens array. If the distance from the microlenses to the main image is a, and the distance from the microlenses to the sensor is b, a perfectly focused system satisfies the lens equation 1/a + 1/b = 1/f, where f is the focal length of the microlens. See Figure 1. We define

    M = a/b    (1)

as the inverse magnification. Observe that M must satisfy M > 2, because each point needs to be imaged by at least two different microlenses in order for stereo parallax information to be captured.

Substituting a from (1) into the lens equation produces

    b = \frac{M+1}{M} f .    (2)

We see that the distance b from the microlens to the sensor is required to be in the range

    f \le b < \frac{3}{2} f .    (3)

Figure 1: Microlens imaging in Keplerian mode. The main lens (above) forms a main image in front of the microlenses.
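To make the relay geometry of this section concrete, here is a minimal numeric sketch (ours, not from the paper): it traces an object point through the main lens and one microlens with the thin-lens equation, and checks the result against formulas (1) and (2). The main-lens parameters and the distance a are illustrative assumptions; f = 500µm matches the microlenses of the example in Section 3.

```python
# Minimal sketch of the plenoptic relay system (illustrative parameters).
# The main lens forms a "main image" in air; one microlens then relays a
# patch of that image onto the sensor, per the thin-lens equation.

def image_distance(f, d_object):
    """Thin-lens equation: 1/d_object + 1/d_image = 1/f (real image)."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

# Assumed main-lens parameters (not from the paper); lengths in meters.
f_main = 80e-3                                 # main lens focal length, 80 mm
d_main_image = image_distance(f_main, 2.0)     # object placed 2 m away

# Keplerian microlens relay: the main image lies a distance a in front of
# the microlens; a focused image forms at b behind it.
f_micro = 500e-6                               # microlens focal length
a = 7.5e-3                                     # assumed main-image distance
b = image_distance(f_micro, a)

M = a / b                                      # inverse magnification, Eq. (1)
assert M > 2                                   # needed for stereo parallax
assert abs(b - (M + 1) / M * f_micro) < 1e-12  # consistent with Eq. (2)
print(f"main image: {d_main_image * 1e3:.1f} mm behind the main lens")
print(f"b = {b * 1e6:.1f} um, M = {M:.1f}")    # b = 535.7 um, M = 14.0
```

With these assumed numbers the relay lands at M = 14, the same regime as the example analyzed in Section 3.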
2.2. Galilean mode

When the main lens is focused to form an image behind the sensor, the image can be treated as virtual, and it can still be focused onto the sensor. In this case the lens equation becomes −1/a + 1/b = 1/f. Definition (1) and the requirement M > 2 remain the same. The imaging geometry is represented in Figure 2.

Figure 2: Microlens imaging in a Galilean camera. Only rays through one microlens are shown.

In the place of (2) and (3) we derive

    b = \frac{M-1}{M} f ,    (4)

    \frac{f}{2} < b \le f .    (5)

Both cases are represented in Figure 3. Horizontal lines represent integer values of M starting from 2 and increasing to infinity when approaching the focal plane from both sides. These are the locations behind the lens where perfectly in-focus images of inverse magnification M are formed, according to formulas (2) and (4). In the traditional plenoptic camera the sensor is placed at the focal plane. Figure 3 shows where it would have to be placed in the two types of modified plenoptic cameras for different values of M, if the image is perfectly focused.

Figure 3: Locations behind a microlens where in-focus images are formed for different magnifications.

3. Depth of Field

It is well known that a camera image is in focus within a range of distances from the lens, called depth of field. At any distance beyond that range the image is defocused, and can be described by its blur radius. We consider a digital image to be in focus if this blur radius is smaller than the pixel size p. The depth of field x is related to the aperture diameter D, which is often expressed in terms of the F-number F = b/D, by the following relation:

    x = pF .    (6)

It can be derived considering similar triangles in Figure 4.

Figure 4: The depth of field within which a camera is in focus (i.e. blur smaller than a pixel) is related to pixel size.

Using formulas (2) and (4) we can verify that in both cases, if the sensor is placed at distance f from the lens, the depth of field x = |b − f| would satisfy

    M = \frac{f}{x} .    (7)

Example: As an example of how (6) and (7) can be used, consider the camera of Ng [Ng05], in which the sensor is placed at the focal plane of the microlenses. The parameters are p = 9µm, F = 4, f = 500µm. Using (6) we compute x = 36µm. In other words, the image is in focus within 36µm of the sensor, on both sides. Also, from (7) we compute the inverse magnification M = 14.

Using M, together with formulas (1), (2), and (4), we compute two distances a_K and a_G:

    a_K = (M + 1) f ,    (8)

    a_G = (M − 1) f .    (9)

If the main image is at either of these locations (7 mm from the microlenses), the image on the sensor will be in focus. We see that within about 7 mm of the microlens array there is a zone of poor focusing. Anything beyond that zone, and all the way to infinity, is perfectly in focus. This explains the observation of [LG08] that the same camera can be used for Galilean and Keplerian imaging.
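As a quick numeric check of this example (a small script of ours; the parameters are exactly those quoted above from [Ng05]):

```python
# Numeric check of the Section 3 example for the camera of [Ng05].
# All lengths are in microns.
p = 9.0              # pixel size
F = 4.0              # microlens F-number
f = 500.0            # microlens focal length

x = p * F            # Eq. (6): depth of field at the sensor
M = f / x            # Eq. (7): inverse magnification
a_K = (M + 1) * f    # Eq. (8): Keplerian in-focus main-image distance
a_G = (M - 1) * f    # Eq. (9): Galilean in-focus main-image distance

print(f"x = {x:.0f} um, M = {M:.1f}")                           # 36 um, 13.9
print(f"a_K = {a_K / 1000:.1f} mm, a_G = {a_G / 1000:.1f} mm")  # 7.4 mm, 6.4 mm
```

Both distances come out near 7 mm, confirming that the zone of poor focusing is confined to a roughly 7 mm band around the microlens array.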
4. Effects of Wave Optics

Due to diffraction, the image blur p depends on the F-number [Goo04]. For simplicity we consider 1D cameras (equivalently, square apertures), in which case the blur and F-number are related according to

    p = \lambda F .    (10)

This is well known in photography. Using (6) we have

    x = \lambda F^2 .    (11)

Substituting F from (10) into (6) gives x = p²/λ, and using (7) we derive a new formula for the lowest M at which the microlenses are still in focus:

    M = \frac{\lambda f}{p^2} .    (12)

Example: The camera described by Ng [Ng05] can be improved by adding apertures to the microlenses so that depth of field is increased. What are the optimal magnification and aperture diameter? Assume λ = 0.5µm. From (12) we get M = 3 (instead of 14), and from (10) we get F = 18 (instead of 4). This is a significant improvement: by (8), the in-focus distance is now a_K = (M + 1) f = 2 mm, so everything not within about 2 mm of the microlenses is in focus. The camera works at the same time in Keplerian and Galilean mode! If the goal was "refocusability" of the lightfield images, now everything is fully refocusable except for a small 2 mm region around the microlenses. Our camera prototype, which works at F = 10, is described in Section 5.

Our last theoretical result is probably the most interesting one. We find the following relationship between the maximum possible number of pixels N in a microimage and the size of a pixel, p, assuming in-focus imaging in Galilean and Keplerian mode at the same time:

    N = \frac{p}{\lambda} .    (13)

To derive it, assume that the size of the microimage is half the focal length. This corresponds to a camera lens aperture of F = 2, which is realistic for most cameras. Then the number of pixels is N = f/(2p). Substituting f = 2Np into (12) gives M = 2Nλ/p. Since the minimal (and best!) value of M is 2, we obtain (13).

We can define the multiplicity M as the number of times a world point is seen in different microimages. If our goal is to achieve the lowest multiplicity together with many pixels in each individual microimage, we need big pixels.

Example: The sensor of Keith Fife [FGW08] uses very small pixels, under 1µm. According to formula (13), if we want low multiplicity, the microimage would be only 2 pixels! We argue that this is not optimal for a combined Galilean and Keplerian camera that is everywhere in focus.

Contrary to common intuition, small pixels are not the solution that would produce a large depth of field! This might be a nontrivial result. The simple explanation is that small pixels require a big aperture in order to minimize diffraction, and a big aperture causes shallow depth of field.

Looking at formula (13), the optimal way to achieve microimages of many pixels is to make those pixels big compared to the wavelength. For example, if the pixel size is p = 10µm and λ = 0.5µm, then N = 20. Formula (12) gives the focal length for such a camera, f = 400µm (at M = 2). With the sensor at the focal plane, the apertures on the microlenses must then be D = f/F = 20µm.
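The design rules of this section are easy to exercise numerically. The sketch below (ours; it simply evaluates formulas (10)–(13), assuming the sensor at the microlens focal plane) reproduces both worked examples:

```python
# Diffraction-limited design rules of Section 4; lengths in microns.
def design(p, f, wavelength=0.5):
    """Given pixel size p and microlens focal length f, return the
    diffraction-limited F-number, depth of field, lowest in-focus
    inverse magnification, and maximum microimage pixel count."""
    F = p / wavelength           # Eq. (10): p = lambda * F
    x = p * p / wavelength       # depth of field, x = p^2 / lambda
    M = wavelength * f / p**2    # Eq. (12): lowest in-focus M
    N = p / wavelength           # Eq. (13): pixels per microimage at M = 2
    return F, x, M, N

# Ng's camera [Ng05] with apertures added to the microlenses:
F, x, M, N = design(p=9.0, f=500.0)
print(f"F = {F:.0f}, M = {M:.1f}")     # F = 18, M = 3.1 (instead of 4 and 14)

# The big-pixel design from the last example:
F, x, M, N = design(p=10.0, f=400.0)
print(f"N = {N:.0f} pixels, D = {400.0 / F:.0f} um")   # N = 20, D = 20 um
```

Running it prints F = 18 and M ≈ 3 for the improved Ng camera, and N = 20 with D = 20µm for the big-pixel design, matching the numbers above.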
5. Experimental Results

We have implemented a camera with the main goal of achieving a large depth of field. For practical reasons, besides depth of field we also needed reasonable sensitivity to light (speed). As a good tradeoff, we chose for our microlens apertures an F-number about two times lower ("faster") than the theoretical value: F = 10 instead of F = 18.

Two stereo views of the captured scene have been generated with our system (Figure 5). The virtual camera can be synthetically focused on any object.

Figure 5: Crossed-eyes stereo rendered from our lightfield. The synthetic camera is focused on the Fluid Dynamics book.

The stereo views have been generated from the main image inside our camera, observed through the microlens array. In this particular case, the "Optical System" text is mapped behind the microlenses as a virtual image and observed in Galilean mode. In Figure 6 we can observe the sharp imaging of our system due to the large depth of field.

Figure 6: Galilean imaging. We see part of the text "Optical" repeated, and not inverted, in each microimage.

The main lens of the camera was focused exactly on the text at the top of the "EG" book. Consequently, the main image of this area falls in the region of bad focusing, within 2 mm of the microlenses. Some microimages from this region are shown in Figure 7.

Figure 7: Microimages at the top of the "EG" book.

In the main image inside our camera, the Fluid Dynamics book is mapped in front of the microlenses, in Keplerian mode (see Figure 8). Note the sharp imaging of our system due to the large depth of field.

Figure 8: Keplerian imaging. We see part of the text "Third Edition" inverted in each microimage.

References

[AW92] Adelson T., Wang J.: Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence (1992), 99–106.

[FGW08] Fife K., El Gamal A., Wong H.-S. P.: A 3MPixel multi-aperture image sensor with 0.7µm pixels in 0.11µm CMOS. In IEEE ISSCC Digest of Technical Papers (February 2008), pp. 48–49.

[Goo04] Goodman J. W.: Introduction to Fourier Optics, 3rd ed. Roberts and Company, 2004.

[IMG00] Isaksen A., McMillan L., Gortler S. J.: Dynamically reparameterized light fields. ACM Trans. Graph. (2000), 297–306.

[Ive28] Ives H. E.: A camera for making parallax panoramagrams. Journal of the Optical Society of America 17, 4 (Dec. 1928), 435–439.

[LG08] Lumsdaine A., Georgiev T.: Full Resolution Lightfield Rendering. Tech. rep., Adobe Systems, January 2008.

[LH96] Levoy M., Hanrahan P.: Light field rendering. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (1996).

[Lip08] Lippmann G.: Épreuves réversibles. Photographies intégrales. Académie des Sciences (March 1908), 446–451.

[Ng05] Ng R.: Fourier slice photography. Proceedings of ACM SIGGRAPH 2005 (2005).

© The Eurographics Association 2009.