Modeling a Flash Lidar System - Part 1

In the consumer electronics space, engineers leverage lidar for functions such as facial recognition and 3D mapping. While many embodiments of lidar systems exist, a “flash lidar” solution generates an array of detectable points across a target scene using solid-state optical elements. The ability to obtain three-dimensional spatial data from a small-form-factor package has made this type of solid-state lidar increasingly common in consumer electronics products such as smartphones and tablets. In this series of articles, we explore how OpticStudio can be used to model these kinds of systems, from sequential starting points through to incorporating the mechanical housing.

Authored By Angel Morales


Introduction

Lidar (Light Detection and Ranging) systems are used across many industries. While there are different types of lidar systems, such as those with scanning elements that sweep out the field of view, this example explores the use of diffractive optics to replicate the projection of an array of sources across a target scene. An imaging lens system then observes this projected array of sources to obtain time-of-flight information from the incoming rays, thus generating depth information from the projected dots.

In Part 1, we cover the background and characterization of the sequential models for the transmitting and receiving modules for the flash lidar system.

Application of Flash Lidar System

The working principle of this lidar system relies on a set of collimating optics in front of an array of sources (for example, a VCSEL array), which projects the source array onto the scene where we want to track geometry and movement. A diffractive optical element placed after the collimating lens then creates multiple projections of this VCSEL array along the X, Y, and diagonal directions.

[Figure: illumination module concept, showing the source array, collimating lens, and diffractive element projecting the dot array onto the scene]

With the illumination module generating a point array to project light onto our area of interest, an imaging system subsequently observes the illuminated area to detect the projected array and obtain depth information of the scene.

The lidar system we explore is intended to track real-world geometry, as well as its movement, so that computer-generated imagery (CGI) can be overlaid on the scene. We can also imagine the lidar acting as part of an AR headset module, with the user interacting with the CGI through gesture recognition observed by the lidar module.

For the illuminated area, we target a 480mm x 480mm (roughly 19” x 19”) region at 1 meter away (slightly more than an arm’s length). This is a reasonable coverage area if we were aiming this lidar system at a table or desk and intended to track the geometry of the surface as well as any items upon it. We also assume the user will interact with virtual elements close to their direct line of sight.

Illumination System

To begin, we define the requirements for the illumination module. Since the illuminated area is a projection of the source array’s active area, it is critical to ensure that our collimating optics are specified in tandem with the source being used. Because the diffractive element (introduced below) tiles the scene with a 3 x 3 grid of projections, the central order only needs to cover one third of the 480mm target, or 160mm at 1 meter. If we assume that our source array has an active area of 1.6mm x 1.6mm, we can then determine the necessary focal length of the lens as:

$$ m = \frac{h_{\mathrm{scene}}}{h_{\mathrm{source}}} = \frac{160\ \mathrm{mm}}{1.6\ \mathrm{mm}} = 100 $$

$$ f_c = \frac{L}{m} = \frac{1000\ \mathrm{mm}}{100} = 10\ \mathrm{mm} $$
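As a quick sanity check, this first-order calculation can be reproduced in a few lines of Python (a sketch; the one-third tiling factor assumes the 3 x 3 grid of diffraction orders described below):

```python
import math

# Sanity check of the collimator focal length. The 480 mm target is tiled
# by the 0 and +/-1 diffraction orders in X and Y (a 3 x 3 grid), so the
# central order only needs to cover one third of the full width.
scene_width = 480.0        # mm, full illuminated width at the scene
working_distance = 1000.0  # mm
source_width = 1.6         # mm, active width of the VCSEL array

central_order_width = scene_width / 3.0             # 160 mm
magnification = central_order_width / source_width  # 100x
focal_length = working_distance / magnification     # collimator EFL, mm

half_fov = math.degrees(math.atan((source_width / 2.0) / focal_length))

print(f"Collimator focal length: {focal_length:.1f} mm")  # 10.0 mm
print(f"Zeroth-order half FOV:   {half_fov:.2f} deg")     # ~4.57 deg
```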

To define the model in OpticStudio, the source was assumed to emit at 0.94 microns with an NA of 0.2. The lens was optimized to yield a collimated output across the field of view to ensure that dots across the source array area were of reasonable size on the scene of interest. Given the use case of the flash lidar system, it was also important to select plastic elements for a compact, mass-producible design (the file is provided as “FlashLidar_Emitter.ZAR”):

[Figure: sequential layout of the collimating lens (FlashLidar_Emitter.ZAR)]

For now, we can treat the emission of each field point as the emission of a single diode that will be projected into the observed scene. At this stage, since the beam is observed in the far field and the system is dominated by geometric aberrations, geometric ray-based results are good indicators of the spot performance on our observed scene. The Geometric Image Analysis tool can then be used to visualize the spots at roughly one meter from the illumination module:

[Figures: Geometric Image Analysis windows showing the spread of individual point sources on the scene Image plane]

Each of the above Geometric Image Analysis windows displays the spread of a point source from our Object plane onto our “scene” Image plane across a 55mm x 55mm area. The non-zero diffraction orders will generate additional spot patterns around the central order in the X and Y directions, thus extending the field of view of the lidar system.

In this model, we will use a pair of crossed diffraction gratings to create the additional projections. Therefore, we’ll need to calculate the required spatial frequency of the linear grating pattern to ensure that the first diffraction order is projected onto an area that does not overlap with the zeroth order:

[Figure: geometry of the zeroth- and first-order projections used to set the minimum diffraction angle]

The minimum allowable diffraction angle, θd, is therefore two times the horizontal half field of view. With fc = 10mm and an object height of 0.8mm, the half field of view of the zeroth order, θhoriz, is 4.57°, which allows us to find the required distance between grating lines in microns, d:

$$ d = \frac{\lambda}{\sin\theta_d} = \frac{0.94\ \mu\mathrm{m}}{\sin(9.14^\circ)} \approx 5.92\ \mu\mathrm{m} $$
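The same grating calculation can be scripted as a short Python sketch; the only inputs are the 0.94 µm wavelength and the 4.57° zeroth-order half field of view from above:

```python
import math

wavelength = 0.94  # microns
half_fov = 4.57    # deg, zeroth-order half field of view

# The first order must be deflected by at least the full zeroth-order
# field of view so the two projected tiles do not overlap.
theta_d = 2.0 * half_fov  # minimum diffraction angle, deg

# First-order grating equation: sin(theta_d) = lambda / d
d = wavelength / math.sin(math.radians(theta_d))  # line spacing, microns
frequency = 1.0 / d                               # lines per micron

print(f"Grating line spacing: {d:.2f} um")               # ~5.92 um
print(f"Spatial frequency:    {frequency:.3f} lines/um")  # ~0.169
```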

Since the native Diffraction Grating surface in OpticStudio takes the spatial frequency of the grating as a parameter, this corresponds to a spatial frequency of 0.17 lines/µm. We can validate that this provides enough separation between the different orders by adding Diffraction Grating surfaces to the sequential model:

[Figure: sequential layout with the crossed Diffraction Grating surfaces added]

To check for any overlap between the projected areas (which would risk superimposing points from different orders), we can use the Geometric Image Analysis tool along with the Multi-Configuration Editor. Two configurations are defined: one showing the central order and another showing the first order along the X-axis. A modified, “filled-in” version of “SQUARE.IMA” (provided in all OpticStudio installations) is used to reveal any potential overlap between projections of the source’s active area in the far field. With the current spatial frequency definition, we can see there is some overlap:

[Figure: Geometric Image Analysis of the zeroth and first orders at 0.17 lines/µm, showing slight overlap]

To remedy this, we can slightly increase the spatial frequency of the Diffraction Grating surfaces to increase the diffraction angle. A quick edit to 0.2 lines/µm yields a clearer separation:

[Figure: Geometric Image Analysis of the zeroth and first orders at 0.2 lines/µm, showing clear separation]
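A minimal sketch of why this small change helps: evaluating the grating equation at both spatial frequencies shows that 0.17 lines/µm only barely meets the 9.14° requirement (so the finite extent of the projected area still overlaps), while 0.2 lines/µm adds about 1.7° of margin:

```python
import math

wavelength = 0.94      # microns
required_angle = 9.14  # deg, minimum first-order deflection (2 x 4.57 deg)

# First-order deflection from the grating equation: sin(theta) = lambda * nu
for freq in (0.17, 0.2):  # spatial frequency, lines/um
    theta = math.degrees(math.asin(wavelength * freq))
    print(f"{freq:.2f} lines/um -> first order at {theta:.2f} deg "
          f"(margin {theta - required_angle:+.2f} deg)")
```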

At this stage, the file is saved as “FlashLidar_Emitter_DiffGrat.ZAR”. While this check used the overall area that the diode array will cover, the actual illumination module uses a series of discrete diodes acting as point sources, so the illumination pattern will be a series of spots. The check in GIA ensures that when we define our sources more directly in the non-sequential model, the dots from different orders will not overlap.

Imaging System

To obtain depth information from the illumination projection, an imaging system is needed to observe the scene and convert the image data into depth data by accounting for the round-trip time of flight of each point. From our prior calculations, we know that the horizontal and vertical half fields of view for the central order are about 4.57°. Because the diffractive element projects additional orders around the central order, the first orders add roughly 9.14° (two times the central-order half FOV) to the horizontal and vertical half fields of view. The required half field of view for the imaging system is therefore 13.71° in the horizontal and vertical directions, or about 19.39° diagonally:

$$ \theta_{\mathrm{half}} = \theta_0 + 2\theta_0 = 4.57^\circ + 9.14^\circ = 13.71^\circ, \qquad \theta_{\mathrm{diag}} = \sqrt{2}\,\theta_{\mathrm{half}} \approx 19.39^\circ $$
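These field-of-view requirements follow directly from the numbers already derived for the emitter, so they can be scripted as a quick check (a sketch; note that, as in the calculation above, the diagonal value scales the half-angle by √2 rather than combining tangents):

```python
import math

central_half_fov = 4.57  # deg, zeroth-order half FOV

# The +/-1 orders are centered a full zeroth-order width (2 x 4.57 deg)
# away, so their outer edges extend the required half FOV by that amount.
horiz_half_fov = central_half_fov + 2.0 * central_half_fov  # 13.71 deg

# Scale the half-angle by sqrt(2) for the diagonal (a small-angle
# shortcut rather than combining tangents).
diag_half_fov = math.sqrt(2.0) * horiz_half_fov  # ~19.39 deg

print(f"Horizontal/vertical half FOV: {horiz_half_fov:.2f} deg")
print(f"Diagonal half FOV:            {diag_half_fov:.2f} deg")
```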

Therefore, the imaging module requires a minimum half field of view of about 20°. Given the use case of this lidar system, it was again critical to have a compact, small-form design using plastic elements. The lens is illustrated below and is attached to this article as “FlashLidar_Receiver.ZAR”:

[Figure: sequential layout of the receiver imaging lens (FlashLidar_Receiver.ZAR)]

The lens was nominally designed for a field of view larger than 20° (roughly 30°-36°) to ensure that the optimization yielded physically realizable elements. For instance, this helped control the edge thicknesses of the aspheric elements and ensured clearance between the elements for mounting. The lens was also designed at an infinite object distance, since it may be used over a range of working distances.

Since this design also aims for a compact form factor, the imaging system has to balance that constraint against its impact on field-dependent aberrations, such as distortion and field curvature. The design follows a structure similar to a Cooke triplet, with a high-index negative lens between two low-index positive lenses. Aspheric coefficients are used on all elements, allowing spherical aberration to be corrected by the first lens, while the third lens acts as a field lens to improve distortion and field curvature. A cover glass window over the receiver’s image sensor is also included in the model.

To ensure that the imaging system will perform to our requirements, we can take a look at the FFT MTF plot out to 100 lp/mm:

[Figure: FFT MTF of the receiver lens, plotted to 100 lp/mm]

We can observe close to diffraction-limited performance in the MTF. As a check, we can calculate the size of a spot as imaged onto the detector to verify the image quality. We return to the sequential emitter module and look at the spot size on the “scene” Image plane, as determined by the Spot Diagram:

[Figure: Spot Diagram at the scene Image plane of the emitter module]

The smallest observed spot can be assumed to come from the central point source of the array. We can therefore take the RMS radius of the central field point, 2.089 mm, and find the resulting size of this spot as imaged onto the detector:

[Equation: imaged spot size, scaled from the scene by the receiver magnification]

[Equation: corresponding spatial frequency of the imaged spot, ≈ 72 lp/mm]

The spatial frequency of the spot as imaged by this lens is therefore about 72 lp/mm, at which the on-axis MTF is 72.2%; we take this as sufficient contrast to detect the spot.
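Since the article does not state the receiver’s focal length, the sketch below only parameterizes the conversion; the helper spot_frequency() and its parameter f_rx_mm are hypothetical stand-ins, and the final lines instead work backwards from the ~72 lp/mm figure to the implied spot diameter on the detector:

```python
# Sketch: relating the scene-side RMS spot size to a spatial frequency on
# the detector. f_rx_mm is a hypothetical parameter; the receiver focal
# length is not stated here.
spot_radius_scene = 2.089  # mm, RMS spot radius on the scene (Spot Diagram)
working_distance = 1000.0  # mm, distance from module to scene

def spot_frequency(f_rx_mm: float) -> float:
    """Spatial frequency (lp/mm) of the imaged spot, treating one spot
    diameter as half of a line pair (one bright bar plus one dark gap)."""
    magnification = f_rx_mm / working_distance
    spot_diameter_image = 2.0 * spot_radius_scene * magnification  # mm
    return 1.0 / (2.0 * spot_diameter_image)

# Working backwards from the ~72 lp/mm figure: the implied spot diameter
# on the detector is 1 / (2 * 72 lp/mm) ~= 6.9 um.
implied_diameter_um = 1000.0 / (2.0 * 72.0)
print(f"Implied imaged spot diameter: {implied_diameter_um:.1f} um")
```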

Conclusion

In this article, we covered the background of how a flash lidar system operates and represented the two components of the system as sequential models. We took a first-order approach to modeling a diffractive element for the lidar emitter, generating the various orders of projection and assessing and avoiding potential overlap between the source projections. We also verified that the imaging module meets our performance requirements.

This is the first article of the Modeling a Flash Lidar System series.

Next article: Modeling a Flash Lidar System - Part 2 – Knowledgebase (zemax.com)
