## Abstract

Planar velocity fields in flows are determined simultaneously on parallel measurement planes by means of an in-house manufactured light-field camera. The planes are defined by illuminating light sheets with constant spacing. Particle positions are reconstructed from a single 2D recording taken by a CMOS-camera equipped with a high-quality doublet lens array. The fast refocusing algorithm is based on synthetic-aperture particle image velocimetry (SAPIV). The reconstruction quality is tested via ray-tracing of synthetically generated particle fields. The introduced single-camera SAPIV is applied to a convective flow within a measurement volume of 30 x 30 x 50 mm^{3}.

©2013 Optical Society of America

## 1. Introduction

Particle image velocimetry (PIV) is one of the most common tools used in current experimental flow studies, e.g., [1]. PIV provides two components of the flow velocity in two spatial dimensions in the measurement plane, which is generated by a light sheet. Comprehensive understanding of unsteady flow structures often requires velocity data not only in a single plane, but in multiple planes or in the complete 3D volume. Consequently, manifold techniques have been developed that extend the classical single-plane PIV to 3D space. After two decades of 3D-PIV developments, two branches emerged: I) PIV using multiple light-sheet illumination and II) PIV using volumetric illumination. Another classification criterion is the number of detectable spatial dimensions (short: D) and velocity components (short: C) [2], which is used as a supplementary classification in this work.

One advantage of light-sheet illumination is the concentration of available light energy into one or more planes within the measurement volume. Locally, this enables a much higher intensity compared to volumetric illumination. Furthermore, the seeding concentration of tracer particles in the flow can be much higher. The signals from multiple, parallel light sheets are analyzed quasi-simultaneously by multiplexing techniques. Here, multiplexing means that light from multiple light sheets is collected by common optics. However, signals from different light sheets are encoded differently, e.g. by wavelength, time, phase or defocus blur. Consequently, data from each light sheet is recovered separately at the receiving end.

At least six physical quantities allow the separation of particle stray light from different light sheets: *polarization* [3], *wavelength* [4,5], *time* [6], *phase* (holography) [7], *size of defocus blur* [8] and *parallax* (presented in this paper). Techniques using volumetric illumination are: holographic PIV [2], tomographic PIV [9], defocusing PIV [10], astigmatism PIV [11] and synthetic-aperture PIV [12]. The mentioned methods of separating particle stray light are not confined to the PIV technique. 3D particle tracking velocimetry (3D-PTV) may be superior especially for low particle image densities. The particle image density, denoted by the variable N, is defined as the number of particle images per pixel of the sensor image.

Light-field imaging was introduced to fluid mechanics as synthetic-aperture particle image velocimetry (SAPIV) [12]. This paper retains this terminology. Belden et al. use eight cameras mounted in an aluminum frame. All cameras view the same particle-laden flow. Algorithms known from computer vision recombine the recorded images in order to obtain depth information, e.g., [13].

Especially for SAPIV, high depth resolution is obtained at the expense of sensor size. As the parallax between adjacent views is essential for depth estimation, reducing the sensor size inevitably reduces the depth resolution. Combining SAPIV with multiple light-sheet illumination enables downsizing to a single-camera setup, which is the approach applied in this article. Two components of the velocity field are measured by the sensor in multiple planes (3D2C), which already provides important spatial information about the flow in many applications. Separating the signals of different light sheets is straightforward because no inverse multiplexing units are required (see, e.g., [3–7]). Such a single-camera 3D technique enables the observation of flows through narrow viewports and minimizes hardware complexity. Section 2 explains the working principle of SAPIV in multiple measurement planes. The generation of synthetic particle images for testing the SAPIV refocusing algorithm is presented in Subsection 2.4. The supporting experiment is illustrated in Section 3. Section 4 offers the results and a discussion, followed by conclusions in Section 5.

## 2. Method

Light-field imaging aims at capturing not only the spatial coordinates of the intensity distribution on the sensor plane, but also the angular information of the light that forms the image [14]. This allows the reconstruction of 3D spatial information by refocusing on different planes. The first technologically relevant work on light-field imaging is by Adelson et al. (1992), who named their device a plenoptic camera [15]. Commercial plenoptic cameras using the Adelson approach are discussed in recent works, see, e.g., [16]. There is another approach besides this light-field technique, the focused plenoptic camera [17]. Here, the lens array images the focal plane of the photo lens instead of being focused at infinity as in a traditional plenoptic camera. The first publication on light-field imaging in fluid mechanics is related to the focused plenoptic principle. The method was named synthetic-aperture particle image velocimetry (SAPIV) [12]. The technique presented in this paper is based on the focused plenoptic approach as well. In the following, we focus on a single-camera solution with a planar sensor field, which is depicted in Fig. 1. Different viewing angles are realized by a lens array in front of the sensor plane. The imaging process is discussed in this section.

#### 2.1 Refocusing

The basic principle of SAPIV with a lens array and a single camera is illustrated in Fig. 2. Two particles P_{1} and P_{2} at axial positions Z_{1} and Z_{2} are imaged through the planar lens array. The numerical aperture of the lenses in our application is about NA = 0.008. Therefore, the depth of field is large. Hence, the images of both particles are sharp. For illustration purposes, the particle P_{1} is colored green and P_{2} is colored black. In the following, we discuss how the two different particles are reconstructed out of the 2D image on the sensor plane.

When imaging P_{1} and P_{2} through the optics shown in Fig. 1, the magnification is different for each particle due to their different Z-positions. Each lens in the array images the particle pair at a different angle; hence, the magnification also depends on particle and lens position. This leads to the diverging dot pattern in Fig. 2. As the lens array is static, the variation of the dot pattern only depends on the particle position Z. The “image” in Fig. 2 consists of twenty-one sub-images corresponding to the twenty-one lenses in the array. Image processing is used to refocus particle images on the different planes Z_{1} or Z_{2}. The *first step* is the decomposition of the image plane into sub-images, each of which corresponds to one lens of the array (Fig. 3). In order to refocus on position Z_{1}, all sub-images are superposed so that the green dots finally fall on top of each other (see *step two* in Fig. 4). To this end, all sub-images are shifted appropriately against the central sub-image. Shifts are defined in a shift-map, Fig. 3. This procedure is repeated with another shift-map to refocus on plane Z_{2}. Hence, different shift-maps refocus on different planes.

Calibration is required to determine the corresponding shift-maps. This basically corresponds to the application of a central projection mapping between two sub-images, which is called a homography. In general, each sub-image needs to be decomposed by non-linear grids to account for off-axis aberrations including distortion, e.g., [18]. In this work, decomposition is not necessary; rather, a constant shift-vector **h** can be applied for all pixels in the sub-image. This approach is sufficient here for refocusing at sub-pixel accuracy, since we use high-quality doublet lenses in the array (see Subsection 2.4). However, homographies with non-linear grids should be applied for imaging at large field angles.

In *step three*, all shifted sub-images are summed up. Due to the constructive summation of images of particles in the refocused plane, large peaks appear as in Fig. 4, where the peak belongs to particle P_{1} at position Z_{1}. The resulting image is cropped to the dimensions of the central sub-image and its intensity is normalized. Finally, in *step four*, a threshold is applied in order to eliminate destructively summed gray-scale values, which belong to out-of-focus particles. The required threshold level depends on the desired reconstruction quality; see Subsection 2.4.1 for details. Steps two to four may be repeated with different shift-maps to refocus on different Z-positions. All steps of the refocusing algorithm are sketched in Fig. 4.
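The shift-and-sum refocusing steps can be sketched in a few lines, assuming the sub-images have already been extracted and the shift-map for the target plane is known; function and variable names are illustrative, and integer-pixel shifts stand in for the sub-pixel shifting described above:

```python
import numpy as np

def refocus(sub_images, shift_map, threshold):
    """Steps two to four: shift every sub-image by its calibrated shift
    vector, sum constructively, normalize, and apply the threshold."""
    acc = np.zeros_like(sub_images[0], dtype=float)
    for img, (dy, dx) in zip(sub_images, shift_map):
        # integer-pixel shift for brevity; the paper shifts at 0.1px steps
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    acc /= acc.max()                              # normalized sum (step three)
    return np.where(acc >= threshold, acc, 0.0)   # threshold (step four)
```

In-focus particles add up constructively to a strong peak, while out-of-focus particles smear out and are removed by the threshold.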

#### 2.2 Uncertainty in depth

To determine the uncertainty in depth, we consider two particles P_{1} and P_{2} at different depths. This time they represent particles within a light sheet; one in the center, another one at the edge of the light sheet. Only if the images of P_{1} and P_{2} can be separated clearly can their depth positions be determined unambiguously. In Fig. 5, ray propagation is illustrated through lens b) of a lens array for an object height that equals the lens pitch p. The present case only considers a cross-section through the lens array, i.e. image heights h_{1}, h_{2} and pitch p are scalars.

Both particles P_{1} and P_{2} are imaged with varying magnification which leads to different image point positions, spaced by Δh = h_{1} – h_{2}. As a necessary condition, both particle images need to be separated by at least one pixel edge length l_{px}, which requires$\left|\Delta h\right|\ge {l}_{px}$. The lower bound |Δh| = l_{px} determines the measurement uncertainty in depth. With a given spacing between both particles of δZ, the distance between both image points calculates to

$\Delta h=p\left[\beta ({Z}_{i}+\delta Z)-\beta ({Z}_{i})\right]$ (1)

with Z_{i} the central light-sheet location. The magnification at light-sheet position Z_{i} is${\beta}_{i}=-b/{Z}_{i}$. With the condition$\Delta h=\pm {l}_{px}$, distance δZ becomes the uncertainty$\delta Z=\pm b\left[1/(\gamma +{\beta}_{i})-1/{\beta}_{i}\right]$, with the geometrical factor γ = l_{px}/p. The final equation for δZ reads as

$\delta Z=\mp b\gamma /\left[{\beta}_{i}({\beta}_{i}+\gamma )\right]$ (2)

As known from stereo imaging, the uncertainty δZ is inversely proportional to the detectable parallax [19], which is determined by the lens pitch p. With Eq. (2) and$\gamma \ll \beta $, the following proportionality holds:$\delta Z\sim (FOV\cdot {l}_{px})/p$. Note that the magnification β ~ 1/FOV. Reducing the pixel size l_{px} of the used camera could decrease δZ significantly. If the FOV has to be enlarged without exchanging the optics, the light-sheet spacing ΔZ should be increased as well in order to retain high reconstruction quality.

As the real optics include further lenses along the optical axis (abbreviated by o.a.) like the field lens, the object distance Z_{i} or image distance b cannot be measured directly. The uncertainty δZ is determined experimentally by measuring the scale of magnification on adjacent measurement planes at Z_{i} and Z_{i+1}. Then, the image distance is calculated using $b=-\Delta Z/[1/{\beta}_{i}-1/{\beta}_{i+1}]$, with ΔZ as the spacing between adjacent light-sheets.

Knowledge of δZ is important, as it determines the maximum light-sheet thickness d and the minimum required light-sheet spacing ΔZ for unique depth measurements. The light-sheet thickness d is determined by the smallest feasible δZ, which is calculated from the largest β_{i} and the largest pitch p_{max}, equal to the distance between the central and the outermost lens in the lens array. d should then be chosen as$d\le 2\left|\delta {Z}_{\mathrm{min}}\right|$, which is typically on the order of a millimeter for flow applications. Otherwise, the refocusing quality might be deteriorated by too many out-of-focus particles. The uncertainty δZ is maximal for the smallest β_{i} and the smallest pitch p_{min}, which equals the distance between the central lens and its nearest neighbor. For ideal conditions, the light-sheet spacing should be$\Delta Z\ge 2\left|\delta {Z}_{\mathrm{max}}\right|$.
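As a numerical sketch of the depth-uncertainty relation $\delta Z=\pm b\left[1/(\gamma +{\beta}_{i})-1/{\beta}_{i}\right]$: only the pitch p = 3.13mm and the pixel edge length l_px = 17.4µm below are taken from the experimental setup; the image distance b and the magnification β are placeholder values, not quantities quoted in the paper.

```python
def delta_z(b, beta, pitch, l_px=0.0174):
    """Depth uncertainty (one branch of the +/-); all lengths in mm."""
    gamma = l_px / pitch                 # geometrical factor gamma = l_px / p
    return b * (1.0 / (gamma + beta) - 1.0 / beta)

b = 30.0                     # assumed image distance [mm] (placeholder)
beta = -0.2                  # assumed magnification (placeholder, inverted image)
p_min = 3.13                 # nearest-neighbor pitch [mm]
p_max = 5 ** 0.5 * 3.13      # central-to-outermost pitch [mm]

# A larger pitch (more parallax) yields a smaller depth uncertainty,
# consistent with the discussion of p_min and p_max above.
print(delta_z(b, beta, p_min), delta_z(b, beta, p_max))
```

With these placeholder values, the magnitude of δZ for the largest pitch is less than half of that for the nearest-neighbor pitch, mirroring the δZ_min/δZ_max distinction used to choose d and ΔZ.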

#### 2.3 Calibration

In Fig. 5, the image heights h_{1} and h_{2} equal the x-dimension of shift-vectors **h** for different depths Z between the central lens and its nearest neighbor. Typical shift-vector maps are shown exemplarily in Fig. 3. The magnification β can be calculated analytically by the laws of geometric optics. With effective focal length f, the conditional equation, obtained from the thin-lens equation together with${\beta}_{i}=-b/{Z}_{i}$, reads

${\beta}_{i}=f/(f-{Z}_{i})$ (3)

The difference β_{i+1} – β_{i} is a measure of the capability to separate particles on adjacent light sheets. The key task of sensor adjustment is to maximize this difference. Equation (3) is valid for perfect alignment of the lenses in the array, especially in the absence of lens tilt. According to Fig. 5, shift-vectors can be calculated by **h** = β**p**. Here, **p** is the pitch vector. In practice, measured shifts will differ from theoretically obtained values. Hence, calibrating the sensor is inevitable. Measured magnitudes of **h** for all present pitches |**p**| are given in Fig. 7.
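Using **h** = β**p**, a shift map for the full array can be sketched as follows. The 5x5 grid with the four corner positions left free is an assumption consistent with twenty-one lenses; the β value is again a placeholder, and signs are given up to the chosen convention:

```python
pitch = 3.13     # lens pitch [mm]
l_px = 0.0174    # pixel edge length [mm]

# lens indices relative to the central lens; assumed layout: 5x5 grid
# with the four corners omitted, giving twenty-one lenses
lenses = [(i, j) for i in range(-2, 3) for j in range(-2, 3)
          if abs(i) != 2 or abs(j) != 2]

def shift_map(beta):
    """Shift vector h = beta * p, converted to pixels, for every lens."""
    return {(i, j): (beta * i * pitch / l_px, beta * j * pitch / l_px)
            for (i, j) in lenses}
```

Evaluating `shift_map` at two adjacent light-sheet magnifications and subtracting the results directly exposes the difference β_{i+1} – β_{i} that the sensor adjustment should maximize.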

#### 2.4 Optical simulation

The reconstruction of particle positions in 3D is simulated by synthetic particle images and ray tracing through the simulation code ZEMAX. Reconstruction quality is analyzed as a function of the applied refocusing threshold and the particle image density.

In ZEMAX, the photo lens is modeled as a paraxial lens to account for its high-quality imaging characteristics. The field lens and the lens array are modeled using the original glass, radius and thickness data. The implemented setup is identical to the experimental one in Subsection 3.1. The layout is given in Fig. 1. Tracer particles are modeled by plane, circular surfaces, 100µm in diameter, which emit Gaussian-shaped light. Ray tracing is conducted using ZEMAX’s non-sequential mode. The number of rays used for the coherent ray-tracing analysis is 3x10^{5}. Rays originate at the particle positions and hit the 1024x1024px detector (sensor chip).

A real measurement situation is simulated by particles distributed randomly on five equally spaced XY-planes. These planes correspond to the light sheets used in practice. Intensity fields are calculated and exported for each Z-position. The five exported intensity fields are later compared directly to refocused images.

The detector image is thus the sum of the intensity fields of all illuminated particles on all planes. This image is refocused using the SAPIV algorithm described in Subsection 2.1. Refocusing on the Z-positions that were preset in ZEMAX should result in images similar to the previously exported intensity fields. Any deviations between exact and refocused images are due to reconstruction errors. Naturally, these errors deteriorate the reconstruction quality. Figure 8 shows simulated intensity fields generated at three different particle image densities. The images are generated by 10, 50 and 100 randomly distributed particles on each of the five Z-planes. The resulting particle image densities on the detector are 0.002, 0.008 and 0.015. The e^{−2}-diameter of a particle image is 2px.
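As a simplified stand-in for the ray-traced fields, synthetic particle images of this kind can be emulated as a sum of Gaussian spots with a 2px e^{−2}-diameter; image size and particle positions below are arbitrary:

```python
import numpy as np

def particle_image(size, positions, d_e2=2.0):
    """Render Gaussian particle images on a size x size frame.
    d_e2 is the e^-2 intensity diameter in pixels: I(r) = exp(-8 r^2 / d_e2^2)."""
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for (py, px) in positions:
        img += np.exp(-8.0 * ((y - py) ** 2 + (x - px) ** 2) / d_e2 ** 2)
    return img

# ten randomly placed particles on one plane, as in the lowest-density case
rng = np.random.default_rng(42)
img = particle_image(64, rng.uniform(2, 62, size=(10, 2)))
```

Summing such frames for five Z-planes, each rendered with its plane-specific shift pattern, reproduces the structure of the simulated detector image.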

Refocusing of the images given in Fig. 8 is carried out by means of the shift maps, like the one shown in Fig. 3. The shift map for each depth Z is determined by calibration. Therefore, a single particle is imaged at X = 0, Y = 0. A cross-correlation procedure between all sub-images and the central sub-image then provides the shift maps, see Subsection 2.3. Sub-images in the recordings are interpolated using a grid of 0.1px step size, which allows shifting with a precision of 0.1px. Figure 9, left, depicts a magnified version of the central sub-image of the central image in Fig. 8. Refocused images of five different measurement planes are merged into one color-coded illustration, Fig. 9, right.
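Determining a shift-map entry from a single-particle calibration image may be sketched with an FFT-based cross-correlation; this is an integer-pixel sketch only, as the 0.1px interpolation step described above is omitted:

```python
import numpy as np

def image_shift(sub, ref):
    """Return the integer-pixel shift that aligns `sub` with the central
    reference sub-image `ref`, found via FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(sub))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular peak positions into the symmetric range [-N/2, N/2)
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Applying the returned shift to the sub-image (e.g. with `np.roll`) maps it back onto the central sub-image, which is exactly the role of one shift-map entry.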

Particles at the rim of the raw image are obviously not reconstructed because they are much darker than the brightest particles and thus fall below the threshold. Decreasing the threshold value increases the size of the reconstructed volume. However, this deteriorates the reconstruction quality.

### 2.4.1 Reconstruction quality

For correlation-based velocimetry the quality factor Q was introduced in [9], see Eq. (4). Q^{2} is an estimate of the correlation coefficient under the assumption of perfect cross-correlation between subsequent particle images:

$Q=\sum {I}_{r}{I}_{0}/{\left(\sum {I}_{r}^{2}\cdot \sum {I}_{0}^{2}\right)}^{1/2}$ (4)

It was found that correlation-based methods work reliably for Q > 0.75 [9]. The same argument is used here to estimate the maximum particle image density up to which correlation-based velocimetry will work with the presented light-field camera.

The displayed set of curves is Gaussian-shaped, as the histograms of the refocused and original intensity fields, I_{r} and I_{0}, are Gaussian as well. Apart from the maxima of the curves, the average change in reconstruction quality is $\Delta Q/\Delta threshold\approx -0.004$. In a first approximation, the maximum reconstruction quality Q_{max} decreases linearly with increasing seeding concentration, which is illustrated in Fig. 11, left. A linear fit is applied and plotted as a solid line. If Q = 0.75 is assumed as the minimum acceptable reconstruction quality [9], the maximum particle image density is about 0.03. For comparison: in tomographic PIV, particle image densities of up to N = 0.1 are feasible, while in defocusing PIV, for instance, the upper limit for correct particle reconstruction is about N = 0.01 [20].
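A minimal implementation of the quality factor, assuming the standard normalized-correlation definition of Q between the refocused and the exact intensity field from [9]:

```python
import numpy as np

def quality_factor(I_r, I_0):
    """Normalized cross-correlation between refocused (I_r) and exact (I_0)
    intensity fields; Q > 0.75 is the quoted limit for reliable
    correlation-based velocimetry."""
    return np.sum(I_r * I_0) / np.sqrt(np.sum(I_r ** 2) * np.sum(I_0 ** 2))
```

Q = 1 for a perfect reconstruction and drops toward 0 as ghost intensity from destructively summed out-of-focus particles grows.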

The required threshold to achieve a maximum of Q is a function of the particle image density N. The relation is given in Fig. 11, right. The linear trend helps to estimate optimum thresholds for refocusing. Required thresholds increase for increasing particle image density N. It might be noted that the threshold depends on the illumination intensity. For low light imaging, noise of the camera chip should be considered in order to obtain realistic threshold values from the simulation. For the presented simulation and the subsequent experiment, high light levels are realized.

The proposed light-field camera can be used for 3D particle tracking velocimetry (3D-PTV) as well. In order to estimate the reconstruction quality of the particle position, the rms deviation between exact and refocused particle positions is determined. For this purpose, particle positions are represented by their weighted centroids, which are determined geometrically. Root mean square values of the deviations between exact and refocused positions, rms_{X} and rms_{Y}, are calculated in X and Y on all measurement planes. The overall uncertainty is${\sigma}_{r}={(rm{s}_{X}^{2}+rm{s}_{Y}^{2})}^{1/2}$. Values of σ_{r} at the largest reconstruction quality Q_{max} are plotted in Fig. 12 over the particle image density. Beyond a particle image density of N = 0.005, particles overlap increasingly and the calculation of centroid positions fails. This characteristic value is known from conventional PTV (see, e.g., [20]). It is also found in the present analysis, as particles are identified by simple centroid estimation. Advanced PTV algorithms using the cascade cross-correlation method (CCM), PIV guidance and the “principle of proximity or exclusion” would raise the maximum N to 0.03–0.04 [21].
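The centroid-based uncertainty estimate can be sketched as follows; image sizes and the particle position lists are illustrative:

```python
import numpy as np

def weighted_centroid(img):
    """Intensity-weighted centroid (row, column) of a single-particle region."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (y * img).sum() / s, (x * img).sum() / s

def sigma_r(exact, refocused):
    """Overall position uncertainty sigma_r = sqrt(rms_X^2 + rms_Y^2),
    with rms taken over all particle-position deviations."""
    d = np.asarray(refocused, dtype=float) - np.asarray(exact, dtype=float)
    return np.sqrt((d[:, 0] ** 2).mean() + (d[:, 1] ** 2).mean())
```

Comparing `weighted_centroid` outputs of the refocused planes against the preset particle positions yields the σ_r values plotted over N.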

## 3. Experiment

#### 3.1 Sensor Setup

The complete optics is sketched in Fig. 13. Illuminating optics and flow basin are mounted on an aluminum frame. Figure 13 illustrates their orientation. The laser used for the current experiment was a Minilite (Continuum, USA) with 5ns pulse width and 5mJ pulse energy. The repetition rate of the laser is 1Hz, set by an external trigger. The 2mm laser beam is expanded to a sheet by a negative cylindrical lens (focal length f = −25mm). Subsequently, the beam propagates through a phase grating. At a distance of about 300mm behind the grating, five diffraction orders are parallelized by a cylindrical lens, f = 300mm. After parallelization, the light sheets are equally spaced by ΔZ = 12.5mm. The grating exhibits nearly the same intensity distribution in the +1st, 0th and –1st diffraction orders. In the +2nd and –2nd orders, the diffracted intensity is lower. Applying neutral density filters to the inner three orders equalizes the light intensity in all light sheets. This is important as heterogeneous sheet intensities could disturb the reconstruction quality. The homogeneity of the illumination should be checked by a power meter.

The receiving optics and the camera are mounted on a rail allowing on-axis shifting relative to the flow basin. The front lens is a photo lens, focal length f = 50mm with a free aperture diameter of D = 30mm, located at a distance of 100mm to the nearest light sheet. The photo lens can be zoomed in and out, which allows controlling the working distance without changing the alignment of the optics. The field lens is a singlet lens with f = 50mm, D = 28mm. The lens array consists of twenty-one doublet lenses (Edmund Optics) with f = 9mm, D = 2mm. The doublets are aligned in a quadratic grid and glued into a frame consisting of two identical silicon plates with polydimethylsiloxane (PDMS). The outer edges of the square array are left free, as imaging quality would be insufficient for lenses located there. 2mm holes are etched into both silicon plates as apertures. The lens pitch between adjacent lenses is 3.13mm. The distance between photo and field lens is 90mm; between field lens and lens array, it is 30mm. The intermediate image is a demagnified version of the flow. An aperture plate with variable diameter enables fine adjustment of the final size of the measurement volume. It prevents sub-images from overlapping while also blocking large field angles, which minimizes distortion aberration. The aperture plate, field lens and lens array are mounted in a LINOS micro bench. As we apply a simple homography for refocusing (see Subsection 2.1), the lens array must provide superior imaging characteristics. Doublet lenses exhibit considerably less wave-front aberration than singlet lenses [22]. The camera used is a Photron APX RS 1024x1024px with a CMOS chip, 17.8x17.8mm^{2} in size and 17.4µm pixel edge length. The camera is triggered by the same electronics determining the repetition rate of the laser.

Applying Eq. (2) with minimum pitch, p = 3.13mm, and minimum magnification (see Fig. 6) yields$\delta {Z}_{\mathrm{max}}=\pm 6mm$. Therefore, the optimum light-sheet spacing should be larger than 12mm. The optical simulation in Subsection 2.4.1 reveals sufficiently high Q for ΔZ = 12.5mm. Applying the maximum occurring pitch in the used lens array,$p=\sqrt{5}\cdot 3.13mm$(see Fig. 7), and maximum magnification yields $\delta {Z}_{\mathrm{min}}=\pm 2mm$. Consequently, the light-sheet thickness d should be smaller than 4mm for high reconstruction quality. In practice, it is d = 1.5mm, which is advantageous for highly seeded flows, as only a fraction of the seeding is imaged onto the camera chip.

The field of view (FOV) is imaged at different angles onto the CMOS chip by the lens array. In the present case, the effective chip area used to capture the FOV is 204x204px for each sub-image; the total active chip area is thus determined by the number of lenses in the array. The magnification β differs significantly for each light sheet. Knowledge of the exact value of β is required for the determination of lateral particle positions on the individual light sheets.

#### 3.2 Calibration

2D-PIV is conducted on each of the five light sheets, which are located at different depths; see Fig. 13. Light signals from different depths need to be distinguished unambiguously, which is guaranteed by the calibration procedure described in Subsection 2.3. The calibration does not deliver 3D information within the single light sheets, but allows assigning a particle image to one of the realized light-sheet positions. In the experiment, the calibration is carried out in situ at zero-flow condition using tracer particles as targets. In single-camera SAPIV, the camera prescribes the coordinate system. For calibration, all light sheets except for one are blocked. For each light-sheet position a particle image is recorded. Two-dimensional cross-correlation is used to determine the shifts between sub-images, which are a measure of the particle position in depth. In the cross-correlation function, the highest peak is fitted by a 2D Gaussian using a Levenberg-Marquardt least-squares minimization. Thereby, image shifts are determined at sub-pixel accuracy, which here is 0.1px. Consequently, this allows precise depth assignments even for partially overlapping particle images. The magnitudes of the calibrated shift-vectors are sketched in Fig. 7. Shifts are given in pixels as a function of the Z-position within the flow basin. The set of curves is parameterized by the occurring pitches |**p**| in the used lens array. Shift magnitudes |**h**| represent the measured correction shift of an image of an individual lens relative to the central sub-image. Shifts are determined at the positions of the five light sheets. The maximum measured shift is 72px, the smallest one is 17px. The sensor’s depth sensitivity is given by the gradient of the shift function. In a measurement, all twenty sub-images are shifted by the corresponding calibrated values. There is an individual set of sub-image shifts for each of the five discrete depth positions.
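Sub-pixel localization of the correlation peak can be sketched with a three-point Gaussian fit per axis; this is a lightweight stand-in for the full 2D Levenberg-Marquardt fit used in the calibration, valid when the peak is approximately Gaussian:

```python
import numpy as np

def gauss3_subpixel(corr):
    """Sub-pixel peak location of a correlation map via a three-point
    Gaussian fit along each axis around the integer-pixel maximum."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    def offset(m1, m0, p1):
        # m1, m0, p1: values one pixel before, at, and after the peak
        l1, l0, lp = np.log(m1), np.log(m0), np.log(p1)
        return 0.5 * (l1 - lp) / (l1 - 2.0 * l0 + lp)
    return (i + offset(corr[i - 1, j], corr[i, j], corr[i + 1, j]),
            j + offset(corr[i, j - 1], corr[i, j], corr[i, j + 1]))
```

For an exactly Gaussian peak the three-point fit is exact, so accuracies well below the 0.1px interpolation grid are attainable in the noise-free case.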

#### 3.3 Convective flow

For validation, single-camera SAPIV is applied to a three-dimensional convective flow. The fluid is heated laterally, which is a relevant configuration for manufacturing bulk semiconductor crystals; see, e.g., [23]. The flow is generated in a basin with D = 100mm edge length. One sidewall of the basin is connected to the heating circuit, the opposite wall to the cooling circuit; see Fig. 16 below. Both circuits are realized by independently working water thermostats (Julabo HC, Germany). The remaining four walls of the flow basin are nearly adiabatic, which was confirmed by temperature measurements. The adiabatic walls are made from PMMA and are therefore transparent. The temperature-controlled walls are made from aluminum. Thermocouples log the wall temperatures during the measurements. In thermodynamic equilibrium, the temperature-controlled walls exhibit mean temperatures of 76.88°C and 5.86°C, respectively.

The basin is filled completely with a 0.73:0.27 water-glycerin solution resulting in a fluid density of 1.061g/cm^{3} at 30°C (Glycerine Producers’ Association, 1963). Glycerin is added in order to adapt the fluid’s density to the used tracer particles. Tracers are made of polyamide 12 base polymer (Vestosint 1141, Evonik, Germany). A fraction of 55% of the particles ranges from 100µm to 250µm in diameter. The realized particle image density is 0.007, which equals 0.06 particles per mm^{3} in the illuminated measurement volume.

Five static light sheets, spaced by 12.5mm, illuminate the convective flow at the same time; see Fig. 13. Each light sheet forms an individual measurement plane. After reaching thermodynamic equilibrium, the recording is started at 1Hz repetition rate of laser firing and camera exposure. 2048 images are recorded. Figure 14 displays a 1024x1024px snapshot of particles, which is the sum of particle intensities from all five light sheets.

All twenty-one sub-images are interpolated at 0.1px steps in order to allow shifting the recorded sub-images with the same accuracy as the calibrated shift map. Subsequently, all sub-images are shifted two-dimensionally by the distances determined through the preceding calibration. Summing and thresholding all images enables identifying particles on individual light sheets. Figure 15 (left) shows the raw central sub-image. It might be noted that preprocessing of this image is limited to simple threshold filtering, as light-sheet illumination allows low-noise imaging. The central part of Fig. 15 is the summed image (compare Fig. 4, step 3) for Z/D = 0.46. Destructive summation of the sub-images leads to smeared patterns and blurred particles in the refocused plane. Undesired patterns and blur are blocked by applying$threshold=65.8+2400N$, Fig. 11 (right). In the present case, the threshold is 83.

Particle positions are not resolved within the light sheets with the current light-field camera, as the depth resolution is insufficient. Particles are eventually reconstructed at five discrete measurement positions in depth that equal the light sheet positions. After refocusing all recordings, a 2D-PIV is conducted separately on each measurement plane.

## 4. Results and Discussion

The process chain indicated in Fig. 15 is repeated for each light-sheet Z-position. Due to the low seeding concentration, a complete velocity field is computed by averaging 2048 recordings. 2D-PIV is applied to the measurement planes sequenced in time using the open source code *fluere* by K. P. Lynch. The magnification scale is adapted for each plane in accordance with Fig. 6. A three-pass algorithm is applied starting at 64x64px, continuing with 32x32px and ending at 16x16px interrogation window size. Windows are weighted uniformly. With 50% overlap, the resulting vector spacing is 8x8px, which determines the spatial resolution of the vector field to a minimum of 1.3mm edge length. The depth resolution is given by the light-sheet thickness of 1.5mm. Velocity vectors are validated in MATLAB using the ratio of the two largest correlation peaks. 2D cross-correlation is applied for each Z-position separately. All validated two-component vectors are averaged over time. Figure 16 shows the orientation of the measurement volume within the flow basin. Dimensions are related to the edge length D = 100mm of the flow basin. As mentioned above, the measurement volume is composed of five XY-planes, with the first one at Z/D = 0.08 and the last one at Z/D = 0.58. All slices are spaced by Z/D = 0.125. The lateral extension of the measurement planes increases for larger values of Z/D, as the image magnification decreases.
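Vector validation by the ratio of the two largest correlation peaks might look like the following sketch; the size of the exclusion zone around the primary peak is an assumption, not a value from the paper:

```python
import numpy as np

def peak_ratio(corr, exclusion=1):
    """Ratio of the two largest peaks of a correlation map. The second peak
    is searched outside a small exclusion zone around the first peak."""
    c = corr.astype(float).copy()
    i, j = np.unravel_index(np.argmax(c), c.shape)
    p1 = c[i, j]
    c[max(i - exclusion, 0):i + exclusion + 1,
      max(j - exclusion, 0):j + exclusion + 1] = -np.inf   # mask first peak
    return p1 / c.max()
```

Vectors whose peak ratio falls below a chosen limit are rejected before the temporal averaging.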

The mean velocity field ${\overline{v}}_{abs}$ is displayed in Fig. 17. It is composed of five XY-slices at five different Z-positions. The edges of the slices form a cuboid widening conically in Z-direction. The size of the displayed vector-cones is a measure for the absolute velocity. Cone peaks indicate the direction of the flow. The magnitude of the time-averaged velocity is color-coded for the smallest and largest values of Z/D.

Figure 18 displays single XY-slices. Mean velocity fields for all measurement positions Z/D are given for increasing values of Z/D from left to right. For Z/D = 0.08, no predominant flow direction is visible. For Z/D > 0.08, the flow is clearly directed to the top right corner of the FOV. The magnitudes of the displayed velocity vectors increase for increasing Z/D and are maximal for Z/D = 0.46. As expected, the flow accelerates along X because of buoyant forces acting on the fluid. The flow seems to exhibit a clockwise rotating, asymmetric single roll. The center of rotation lies underneath the center of the basin, located at X/D = 0.5, Y/D = 0.5, Z/D = 0.5. The flow pattern is captured only partly because of the FOV limitation imposed by the optics.

## 5. Conclusions

Correlation-based velocimetry requires high in-plane spatial resolution. As the presented light-field camera utilizes the focused plenoptic principle, the full image resolution of the applied CMOS-chip can be used. In commercial plenoptic cameras image resolution may be reduced due to the formation of macro-pixels; see [15]. Especially in applications using cameras with large pixels (e.g. high-speed imaging), the proposed optics is superior. The spatial resolution of the velocity field obtained by PIV is 1.3mm.

In fluid mechanics, light-field imaging can be realized with bulky setups consisting of multiple cameras; see synthetic-aperture particle image velocimetry (SAPIV) [12]. Instead of using multiple cameras and volumetric illumination, this paper uses a single camera with light-sheet illumination. The single camera comes at the expense of a more complex illumination unit compared to the illumination setups known from, for instance, tomographic PIV. The realized light-field imaging technique allows two-component velocity measurements on multiple planes quasi-simultaneously (3D2C velocimetry). A lens array consisting of doublet lenses provides twenty-one views of a particle-laden flow at different angles. Simulations reveal that a reconstruction quality [9] of Q > 0.75 is realistic for the present optics at particle image densities below 0.03. Reconstructed particle fields can be correlated at high quality.
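The reconstruction quality Q quoted above is, following Elsinga et al. [9], the normalized correlation coefficient between the reconstructed intensity field and the true (synthetic) intensity field. A minimal sketch, with hypothetical synthetic fields for illustration:

```python
import numpy as np

def reconstruction_quality(E_rec, E_ref):
    """Normalized correlation between a reconstructed intensity field
    E_rec and the reference (true) field E_ref, as in [9]."""
    num = np.sum(E_rec * E_ref)
    den = np.sqrt(np.sum(E_rec**2) * np.sum(E_ref**2))
    return num / den

# Reference field with two synthetic particles.
ref = np.zeros((32, 32, 16))
ref[10, 10, 5] = 1.0
ref[20, 7, 9] = 1.0

# Reconstruction recovers both particles but adds one ghost particle,
# which lowers Q below unity.
rec = ref.copy()
rec[3, 25, 2] = 1.0

print(round(reconstruction_quality(rec, ref), 3))  # prints 0.816
```

Identical fields yield Q = 1; ghost particles and missed particles both reduce Q, which is why Q drops as the particle image density grows.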

3D3C velocimetry would be feasible with the optical methods explained here (see [12]). Due to the reduction to a single camera, only a small range of solid angles is covered. For volumetric illumination as known from tomographic PIV, this leads to a depth resolution of the presented sensor of $\delta Z = \pm 6\,\mathrm{mm}$. This is almost one quarter of the realized FOV diameter. 2D slices extracted from the 3D velocity raw data might exhibit a reduced signal-to-noise ratio (SNR). Therefore, we decided to illuminate five light sheets simultaneously with a spacing larger than 2|δZ|. Consequently, the uncertainty in depth is given by the light-sheet thickness of 1.5 mm. Furthermore, the SNR on the realized measurement planes is high. 3D3C-PIV with volumetric illumination at a depth resolution of about 1 mm would be feasible with the presented optics and a 20-megapixel consumer camera (however, at the expense of temporal resolution).

The measurement volume is 30 x 30 x 50 mm^{3} in size. Larger FOVs are feasible with a larger light-sheet spacing or by using a camera with a smaller pixel size. As SAPIV is a threshold technique, knowledge of how the threshold scales with the particle image density is helpful. Optical simulations enable the determination of an optimal threshold.

In a validation experiment, a particle image density of 0.007 is realized. This density, low compared to existing 3D-PIV techniques [20], results from the small maximum parallax of 1°. For instance, in tomographic PIV the angle of choice is typically 30° [9]. Due to the low seeding concentration, the recordings were time-averaged, which excludes time-resolved measurements. Averaging allows applying the PIV algorithm at low seeding concentrations, but it is not mandatory. Using particle-tracking algorithms instead of PIV is promising, as PTV copes with particle image densities of 0.005 and below. This in turn allows the highest theoretical reconstruction quality of about Q = 0.92 and time-resolved velocimetry. The working distance of the sensor can be varied easily using a zoom photo lens. No fine tuning between the lens array and the camera chip is required. Therefore, the technique is robust and easy to implement in the lab.

Furthermore, single-camera, multiple-plane SAPIV is especially suited to environments not accessible for multiple-camera setups, like reactors or engines.

## Acknowledgments

This work was performed within the Cluster of Excellence “Structure Design of Novel High-Performance Materials via Atomic Design and Defect Engineering (ADDE)”, which is financially supported by the European Union (European regional development fund) and by the Saxon State Ministry of Science and Art (SMWK).

## References and links

**1. **C. E. Willert and M. Gharib, “Digital particle image velocimetry,” Exp. Fluids **10**(4), 181–193 (1991). [CrossRef]

**2. **K. D. Hinsch, “Three-dimensional particle velocimetry,” Meas. Sci. Technol. **6**(6), 742–753 (1995). [CrossRef]

**3. **C. J. Kähler and J. Kompenhans, “Fundamentals of multiple plane stereo particle image velocimetry,” Exp. Fluids **29**(7), S070–S077 (2000). [CrossRef]

**4. **C. Brücker, “3-D PIV via spatial correlation in a color-coded light-sheet,” Exp. Fluids **21**, 312–314 (1996). [CrossRef]

**5. **J. A. Mullin and W. J. A. Dahm, “Dual-plane stereo particle image velocimetry (DSPIV) for measuring velocity gradient fields at intermediate and small scales of turbulent flows,” Exp. Fluids **38**(2), 185–196 (2005). [CrossRef]

**6. **C. Brücker, “Digital-particle-image-velocimetry (DPIV) in a scanning light-sheet: 3D starting flow around a short cylinder,” Exp. Fluids **19**, 255–263 (1995). [CrossRef]

**7. **V. Palero, J. Lobera, and M. P. Arroyo, “Digital image plane holography (DIPH) for two-phase flow diagnostics in multiple planes,” Exp. Fluids **39**(2), 397–406 (2005). [CrossRef]

**8. **A. Liberzon, R. Gurka, and G. Hetsroni, “XPIV-Multi-plane stereoscopic particle image velocimetry,” Exp. Fluids **36**(2), 355–362 (2004). [CrossRef]

**9. **G. E. Elsinga, F. Scarano, B. Wieneke, and B. W. van Oudheusden, “Tomographic particle image velocimetry,” Exp. Fluids **41**(6), 933–947 (2006). [CrossRef]

**10. **F. Pereira and M. Gharib, “Defocusing digital particle image velocimetry and the three-dimensional characterization of two-phase flows,” Meas. Sci. Technol. **13**(5), 683–694 (2002). [CrossRef]

**11. **C. Cierpka, R. Segura, R. Hain, and C. J. Kähler, “A simple single camera 3C3D velocity measurement technique without errors due to depth of correlation and spatial averaging for microfluidics,” Meas. Sci. Technol. **21**(4), 045401 (2010). [CrossRef]

**12. **J. Belden, T. T. Truscott, M. C. Axiak, and A. H. Techet, “Three-dimensional synthetic aperture particle image velocimetry,” Meas. Sci. Technol. **21**(12), 125403 (2010). [CrossRef]

**13. **B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. **24**(3), 765–766 (2005). [CrossRef]

**14. **M. Levoy, “Light fields and computational imaging,” Computer **39**(8), 46–55 (2006). [CrossRef]

**15. **E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. **14**(2), 99–106 (1992). [CrossRef]

**16. **T. Nonn, J. Kitzhofer, D. Hess, and C. Brücker, “Measurements in an IC-engine flow using light-field volumetric velocimetry,” presented at 16th International Symposium on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal (9–12 July 2012).

**17. **A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in *2009 IEEE International Conference on Computational Photography (ICCP)* (IEEE, 2009), pp. 1–8.

**18. **R. I. Hartley and A. Zisserman, *Multiple View Geometry in Computer Vision* (Cambridge University Press, 2004).

**19. **M. P. Arroyo and C. A. Greated, “Stereoscopic particle image velocimetry,” Meas. Sci. Technol. **2**(12), 1181–1186 (1991). [CrossRef]

**20. **M. P. Arroyo and K. Hinsch, “Recent developments of PIV towards 3D measurements,” in *Particle Image Velocimetry*, Vol. 112 of Topics in Applied Physics (Springer, Berlin, 2008), pp. 127–154.

**21. **Y. C. Lei, W. H. Tien, J. Duncan, M. Paul, N. Ponchaut, C. Mouton, D. Dabiri, T. Rösgen, and J. Hove, “A vision-based hybrid particle tracking velocimetry (PTV) technique using a modified cascade correlation peak-finding method,” Exp. Fluids **53**(5), 1251–1268 (2012). [CrossRef]

**22. **C. Skupsch, T. Klotz, H. Chaves, and C. Brücker, “Channelling optics for high quality imaging of sensory hair,” Rev. Sci. Instrum. **83**(4), 045001 (2012). [CrossRef] [PubMed]

**23. **M. Lappa, “Review: thermal convection and related instabilities in models of crystal growth from the melt on earth and in microgravity: Past history and current status,” Cryst. Res. Technol. **40**(6), 531–549 (2005). [CrossRef]